Sample records for pseudo-random array standards

  1. Crosstalk Reduction for High-Frequency Linear-Array Ultrasound Transducers Using 1–3 Piezocomposites With Pseudo-Random Pillars

    PubMed Central

    Yang, Hao-Chung; Cannata, Jonathan; Williams, Jay; Shung, K. Kirk

    2013-01-01

    The goal of this research was to develop a novel diced 1–3 piezocomposite geometry to reduce pulse–echo ring down and acoustic crosstalk between high-frequency ultrasonic array elements. Two PZT-5H-based 1–3 composites (10 and 15 MHz) of different pillar geometries [square (SQ), 45° triangle (TR), and pseudo-random (PR)] were fabricated and then made into single-element ultrasound transducers. The measured pulse–echo waveforms and their envelopes indicate that the PR composites had the shortest −20-dB pulse length and highest sensitivity among the composites evaluated. Using these composites, 15-MHz array subapertures with a 0.95λ pitch were fabricated to assess the acoustic crosstalk between array elements. The combined electrical and acoustical crosstalk between the nearest array elements of the PR array subapertures (−31.8 dB at 15 MHz) was 6.5 and 2.2 dB lower than that of the SQ and the TR array subapertures, respectively. These results demonstrate that the 1–3 piezocomposite with pseudo-random pillars may be a better choice for fabricating enhanced high-frequency linear-array ultrasound transducers, especially when mechanical dicing is used. PMID:23143580

  2. Calibration of the modulation transfer function of surface profilometers with binary pseudo-random test standards: expanding the application range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.

    2011-03-14

    A modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays [Proc. SPIE 7077-7 (2007), Opt. Eng. 47, 073602 (2008)] has been proven to be an effective MTF calibration method for a number of interferometric microscopes and a scatterometer [Nucl. Instr. and Meth. A 616, 172 (2010)]. Here we report on a further expansion of the application range of the method. We describe the MTF calibration of a 6 inch phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to a BPR sequence. The investigations confirm the universal character of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  3. Calibration of the modulation transfer function of surface profilometers with binary pseudo-random test standards: Expanding the application range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V; Anderson, Erik H.; Barber, Samuel K.

    2010-07-26

    A modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] has been proven to be an effective MTF calibration method for a number of interferometric microscopes and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we report on a significant expansion of the application range of the method. We describe the MTF calibration of a 6 inch phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to a BPR sequence. The investigations confirm the universal character of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  4. A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro

    2006-04-01

    A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in a 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multi-phase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. This chip operates at 33 MHz in 8-LUT cascades at 122 mW. Benchmark results show that it achieves performance comparable to field programmable gate arrays (FPGAs).

  5. Generating random numbers by means of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi

    2018-07-01

    To introduce the randomness of a physical process to students, a chaotic pendulum experiment was offered at the undergraduate level in the physics department of East China University of Science and Technology (ECUST). It was shown that chaotic motion could be initiated by adjusting the operation of the pendulum. Using the angular displacement data of the chaotic motion, random binary numerical arrays can be generated. To check the randomness of the generated numerical arrays, the NIST Special Publication 800-20 method was adopted. As a result, it was found that all the random arrays generated by the chaotic motion could pass the validity criteria, and some were even better in quality than pseudo-random numbers generated by a computer. The experiments demonstrate that a chaotic pendulum can be used as an efficient mechanical facility for generating random numbers and can be applied in teaching random motion to students.
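
The randomness screening described above can be illustrated with the simplest test in the NIST statistical suite, the frequency (monobit) test. This is a minimal sketch of that one test, not the full suite; the significance level alpha = 0.01 is the suite's customary default.

```python
import math

def monobit_pass(bits, alpha=0.01):
    """NIST-style frequency (monobit) test: in a truly random sequence the
    numbers of ones and zeros should be nearly equal."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))   # +1 per one, -1 per zero
    p_value = math.erfc(s / math.sqrt(2.0 * n))  # two-sided tail probability
    return p_value >= alpha                      # pass if not too unbalanced
```

A heavily biased sequence (e.g. all ones) fails, while a balanced one passes; real screening, as in the record above, applies the whole battery of tests, not just this one.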

  6. Improved diagonal queue medical image steganography using Chaos theory, LFSR, and Rabin cryptosystem.

    PubMed

    Jain, Mamta; Kumar, Anil; Choudhary, Rishabh Charan

    2017-06-01

    In this article, we have proposed an improved diagonal queue medical image steganography for patient secret medical data transmission using a chaotic standard map, a linear feedback shift register, and the Rabin cryptosystem, improving on a previous technique (Jain and Lenka in Springer Brain Inform 3:39-51, 2016). The proposed algorithm comprises four stages: generation of pseudo-random sequences (by the linear feedback shift register and the standard chaotic map), permutation and XORing using the pseudo-random sequences, encryption using the Rabin cryptosystem, and steganography using the improved diagonal queues. Security analysis has been carried out. Performance is analyzed using MSE, PSNR, maximum embedding capacity, and histogram analysis between various brain-disease stego and cover images.

  7. Test surfaces useful for calibration of surface profilometers

    DOEpatents

    Yashchuk, Valeriy V; McKinney, Wayne R; Takacs, Peter Z

    2013-12-31

    The present invention provides test surfaces and methods for calibration of surface profilometers, including interferometric and atomic force microscopes. Calibration is performed using a specially designed test surface, the binary pseudo-random (BPR) grating (array). Using the BPR grating (array) to measure the power spectral density (PSD) spectrum, the profilometer is calibrated by determining the instrumental modulation transfer function.

  8. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    4-10 4.7.5 Summary of False BIT Alarms (FBA) ............................. 4-10 4.7.6 Smart BIT Technique...Circuit Board PGA Pin Grid Array PLA Programmable Logic Array PLD Programmable Logic Device PN Pseudo-Random Number PREDICT Probabilistic Estimation of...11 4.7.6 Smart BIT (reference: RADC-TR-85-198). "Smart" BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory

  9. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operation of the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken with a standard BER tester for verification. We found that the results of the two methods agreed to within the same order of magnitude and 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture of a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
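
The Gaussian-noise BER estimate described above is commonly expressed through the Q-factor, BER = ½·erfc(Q/√2). This sketch uses that standard approximation with illustrative rail statistics (the mu/sigma values are made up, not measured eye data from the paper):

```python
import math

def ber_from_eye(mu1, mu0, sigma1, sigma0):
    """Estimate bit-error rate from eye-diagram statistics, assuming the
    one and zero rails carry Gaussian noise (Q-factor approximation)."""
    q = (mu1 - mu0) / (sigma1 + sigma0)          # eye opening over total noise
    return 0.5 * math.erfc(q / math.sqrt(2.0))   # optimal-threshold error rate

# Q of about 6 corresponds to the classic 1e-9 BER link target
ber = ber_from_eye(mu1=1.0, mu0=0.0, sigma1=1 / 12.0, sigma0=1 / 12.0)
```

Because the formula needs only the rail means and standard deviations, the BER can be computed "instantaneously" from a sampled eye diagram, which is the attraction of the method over direct error counting at very low error rates.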

  10. Identification of phenolic compounds from the leaf part of Teucrium pseudo-Scorodonia Desf. collected from Algeria.

    PubMed

    Belarbi, Karima; Atik-Bekkara, Fawzia; El Haci, Imad Abdelhamid; Bensaid, Ilhem; Bekhechi, Chahrazed

    2018-02-01

    In the present paper, we report for the first time the identification of the phenolic compounds in the butanolic fraction obtained from the leaf part of Teucrium pseudo-Scorodonia Desf. collected from Algeria using the RP-HPLC-PDA (Reversed Phase High Performance Liquid Chromatography/Photo Diode Array) technique. Several standards were used for this purpose. The analysis led to the identification of six phenolic acids (ferulic, sinapic, rosmarinic, syringic, caffeic, and p-coumaric acids) and one flavonoid (rutin); the latter has interesting pharmacological properties.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    V Yashchuk; R Conley; E Anderson

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1], [2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  12. Standard, Random, and Optimum Array conversions from Two-Pole resistance data

    DOE PAGES

    Rucker, D. F.; Glaser, Danney R.

    2014-09-01

    We present an array evaluation of standard and nonstandard arrays over a hydrogeological target. We develop the arrays by linearly combining data from the pole-pole (or 2-pole) array. The first test shows that reconstructed resistances for the standard Schlumberger and dipole-dipole arrays are equivalent or superior to the measured arrays in terms of noise, especially at large geometric factors. The inverse models for the standard arrays also confirm what others have presented in terms of target resolvability, namely that the dipole-dipole array has the highest resolution. In the second test, we reconstruct random electrode combinations from the 2-pole data segregated into inner, outer, and overlapping dipoles. The resistance data and inverse models from these randomized arrays show those with inner dipoles to be superior in terms of noise and resolution and that overlapping dipoles can cause model instability and low resolution. Finally, we use the 2-pole data to create an optimized array that maximizes the model resolution matrix for a given electrode geometry. The optimized array produces the highest resolution and target detail. Thus, the tests demonstrate that high quality data and high model resolution can be achieved by acquiring field data from the pole-pole array.
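
The linear combination behind these array conversions follows from superposition: a four-electrode transfer resistance decomposes into pole-pole readings as R(AB,MN) = R(A,M) − R(A,N) − R(B,M) + R(B,N). A minimal sketch, with synthetic pole-pole values from a homogeneous half-space (the resistivity value and electrode positions below are illustrative, not from the survey):

```python
import math

RHO = 100.0  # assumed half-space resistivity, ohm-m

def pole_pole(c, p):
    """Two-pole transfer resistance V/I for a current pole at x=c and a
    potential pole at x=p on a homogeneous half-space."""
    return RHO / (2.0 * math.pi * abs(c - p))

def four_pole(a, b, m, n):
    """Reconstruct the four-electrode resistance R(AB,MN) by linear
    superposition of two-pole readings."""
    return pole_pole(a, m) - pole_pole(a, n) - pole_pole(b, m) + pole_pole(b, n)

# Schlumberger-like geometry: current electrodes at +/-10 m, potential at +/-1 m
r_s = four_pole(-10.0, 10.0, -1.0, 1.0)
```

Since any four-electrode configuration is such a linear combination, a complete pole-pole data set can be re-assembled into Schlumberger, dipole-dipole, random, or optimized arrays after acquisition, which is the premise of the paper.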

  13. A high-speed on-chip pseudo-random binary sequence generator for multi-tone phase calibration

    NASA Astrophysics Data System (ADS)

    Gommé, Liesbeth; Vandersteen, Gerd; Rolain, Yves

    2011-07-01

    An on-chip reference generator is conceived by adopting the technique of decimating a pseudo-random binary sequence (PRBS) signal into parallel sequences. This is of great benefit when high-speed generation of PRBS and PRBS-derived signals is the objective. The design is implemented in standard CMOS logic available in commercial libraries, which provides the logic functions for the generator. The design allows the user to select the periodicity of the PRBS and the PRBS-derived signals. Characterization of the on-chip generator confirms its performance and reveals promising specifications.

  14. Lensless digital holography with diffuse illumination through a pseudo-random phase mask.

    PubMed

    Bernet, Stefan; Harm, Walter; Jesacher, Alexander; Ritsch-Marte, Monika

    2011-12-05

    Microscopic imaging with a setup consisting of a pseudo-random phase mask and an open CMOS camera, without an imaging objective, is demonstrated. The pseudo-random phase mask acts as a diffuser for an incoming laser beam, scattering a speckle pattern to a CMOS chip, which is recorded once as a reference. A sample which is afterwards inserted somewhere in the optical beam path changes the speckle pattern. A single (non-iterative) image processing step, comparing the modified speckle pattern with the previously recorded one, generates a sharp image of the sample. After a first calibration the method works in real-time and allows quantitative imaging of complex (amplitude and phase) samples in an extended three-dimensional volume. Since no lenses are used, the method is free from lens aberrations. Compared to standard inline holography, the diffuse sample illumination improves the axial sectioning capability by increasing the effective numerical aperture in the illumination path, and it suppresses the undesired so-called twin images. For demonstration, a high resolution spatial light modulator (SLM) is programmed to act as the pseudo-random phase mask. We show experimental results, imaging microscopic biological samples, e.g., insects, within an extended volume at a distance of 15 cm with a transverse and longitudinal resolution of about 60 μm and 400 μm, respectively.

  15. Binary pseudo-random patterned structures for modulation transfer function calibration and resolution characterization of a full-field transmission soft x-ray microscope

    DOE PAGES

    Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.; ...

    2015-12-09

    We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and now x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine-tune the instrument's focal distance. Finally, our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H

    Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.

  17. Design and performance of single photon APD focal plane arrays for 3-D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph

    2010-08-01

    We describe the design, fabrication, and performance of focal plane arrays (FPAs) for use in 3-D LADAR imaging applications requiring single photon sensitivity. These 32 × 32 FPAs provide high-efficiency single photon sensitivity for three-dimensional LADAR imaging applications at 1064 nm. Our GmAPD arrays are designed using a planar-passivated avalanche photodiode device platform with buried p-n junctions that has demonstrated excellent performance uniformity, operational stability, and long-term reliability. The core of the FPA is a chip stack formed by hybridizing the GmAPD photodiode array to a custom CMOS read-out integrated circuit (ROIC) and attaching a precision-aligned GaP microlens array (MLA) to the back-illuminated detector array. Each ROIC pixel includes an active quenching circuit governing Geiger-mode operation of the corresponding avalanche photodiode pixel as well as a pseudo-random counter to capture per-pixel time-of-flight timestamps in each frame. The FPA has been designed to operate at frame rates as high as 186 kHz for 2 μs range gates. Effective single photon detection efficiencies as high as 40% (including all optical transmission and MLA losses) are achieved for dark count rates below 20 kHz. For these planar-geometry diffused-junction GmAPDs, isolation trenches are used to reduce crosstalk due to hot carrier luminescence effects during avalanche events, and we present details of the crosstalk performance for different operating conditions. Direct measurement of temporal probability distribution functions due to cumulative timing uncertainties of the GmAPDs and ROIC circuitry has demonstrated a FWHM timing jitter as low as 265 ps (standard deviation is ~100 ps).

  18. Freely Drifting Swallow Float Array: August 1988 Trip Report

    DTIC Science & Technology

    1989-01-01

    situ measurements of the floats’ clock drifts were obtained; the absolute drifts were on the order of one part in 10^5 and the relative clock...(FSK mode). That is, the pseudo-random noise generator (PRNG) created a string of ones and zeros; a zero caused a 12 kHz tone to be broadcast from

  19. Oxide-confined 2D VCSEL arrays for high-density inter/intra-chip interconnects

    NASA Astrophysics Data System (ADS)

    King, Roger; Michalzik, Rainer; Jung, Christian; Grabherr, Martin; Eberhard, Franz; Jaeger, Roland; Schnitzer, Peter; Ebeling, Karl J.

    1998-04-01

    We have designed and fabricated 4 X 8 vertical-cavity surface-emitting laser (VCSEL) arrays intended to be used as transmitters in short-distance parallel optical interconnects. In order to meet the requirements of 2D, high-speed optical links, each of the 32 laser diodes is supplied with two individual top contacts. The metallization scheme allows flip-chip mounting of the array modules junction-side down on silicon complementary metal oxide semiconductor (CMOS) chips. The optical and electrical characteristics across the arrays with a device pitch of 250 micrometers are quite homogeneous. Arrays with 3 micrometer, 6 micrometer and 10 micrometer active diameter lasers have been investigated. The small devices show threshold currents of 600 μA, single-mode output powers as high as 3 mW and maximum wavelength deviations of only 3 nm. The driving characteristics of all arrays are fully compatible with advanced 3.3 V CMOS technology. Using these arrays, we have measured small-signal modulation bandwidths exceeding 10 GHz and transmitted pseudo-random data at 8 Gbit/s per channel over 500 m of graded-index multimode fiber. This corresponds to a data transmission rate of 256 Gbit/s per array with a 1 X 2 mm2 footprint area.

  20. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance in case PRPs are used were estimated.

  1. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast random number generation. This pseudo-random number generator can be used in ideal synchronous and self-synchronizing stream cipher systems for secure communications.
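
A minimal sketch of a one-way coupled map lattice PRNG in the spirit of this record. The logistic local map f(x) = 4x(1−x), the ring topology, the coupling strength, and the thresholding bit-extraction rule are all illustrative assumptions, not the authors' exact system.

```python
def cml_bits(n_bits, sites=8, eps=0.9, seed=0.123456789):
    """Pseudo-random bits from a one-way coupled logistic-map ring:
    x'[i] = (1 - eps) * f(x[i]) + eps * f(x[i-1]),  f(x) = 4x(1-x).
    Bits are read from the last site by thresholding at 0.5."""
    f = lambda v: 4.0 * v * (1.0 - v)
    x = [(seed + 0.017 * i) % 1.0 for i in range(sites)]

    def step(state):
        return [(1.0 - eps) * f(state[i]) + eps * f(state[i - 1])
                for i in range(sites)]

    for _ in range(100):          # discard the transient
        x = step(x)
    bits = []
    while len(bits) < n_bits:
        x = step(x)
        bits.append(1 if x[-1] > 0.5 else 0)
    return bits
```

The one-way coupling drives information around the ring in a single direction, which is what gives such lattices their long effective periods in finite-precision arithmetic compared with a single iterated map.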

  2. Method and apparatus for determining position using global positioning satellites

    NASA Technical Reports Server (NTRS)

    Ward, John (Inventor); Ward, William S. (Inventor)

    1998-01-01

    A global positioning satellite receiver having an antenna for receiving an L1 signal from a satellite. The L1 signal is processed by a preamplifier stage including a band pass filter and a low noise amplifier and output as a radio frequency (RF) signal. A mixer receives and de-spreads the RF signal in response to a pseudo-random noise code, i.e., Gold code, generated by an internal pseudo-random noise code generator. A microprocessor enters a code tracking loop, such that during the code tracking loop, it addresses the pseudo-random code generator to cause the pseudo-random code generator to sequentially output pseudo-random codes corresponding to satellite codes used to spread the L1 signal, until correlation occurs. When an output of the mixer is indicative of the occurrence of correlation between the RF signal and the generated pseudo-random codes, the microprocessor enters an operational state which slews the receiver code sequence to stay locked with the satellite code sequence. The output of the mixer is provided to a detector which, in turn, controls certain routines of the microprocessor. The microprocessor will output pseudo range information according to an interrupt routine in response to detection of correlation. The pseudo range information is to be telemetered to a ground station which determines the position of the global positioning satellite receiver.
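
The Gold codes mentioned here are, for GPS C/A, produced by two 10-stage LFSRs (G1: x^10 + x^3 + 1; G2: x^10 + x^9 + x^8 + x^6 + x^3 + x^2 + 1) whose outputs are combined through satellite-specific phase-selector taps. This sketch follows that standard arrangement and is not the patent's own circuit:

```python
def ca_code(tap1, tap2):
    """One 1023-chip GPS C/A Gold code.  Each chip is G1's output XORed
    with two phase-selector stages of G2; `tap1`/`tap2` are the
    satellite-specific selector stages (e.g. 2 and 6 for PRN 1)."""
    g1 = [1] * 10                 # both registers start all-ones
    g2 = [1] * 10
    chips = []
    for _ in range(1023):
        chips.append(g1[9] ^ g2[tap1 - 1] ^ g2[tap2 - 1])
        fb1 = g1[2] ^ g1[9]                                   # G1 taps 3, 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]   # G2 taps 2,3,6,8,9,10
        g1 = [fb1] + g1[:9]       # shift feedback into stage 1
        g2 = [fb2] + g2[:9]
    return chips

prn1 = ca_code(2, 6)   # PRN 1 uses selector stages 2 and 6
```

Each satellite's selector-tap pair yields a different member of the Gold family with low cross-correlation, which is what lets the receiver's tracking loop identify which satellite code it has correlated with.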

  3. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we use a delay in the generation of the time series. When these new series are mapped, x_n against x_n+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
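
A hedged sketch of the lag idea: iterate the logistic map, then emit each bit by comparing a sample with its lag-τ predecessor, so that plotting successive outputs no longer traces the map's parabola. The parameter values and the comparison rule below are illustrative; the paper's generator combines two lag series with positive and negative bifurcation parameters.

```python
def lag_prbg(n_bits, mu=3.99, lag=5, x0=0.41):
    """Sketch of a lag-series PRBG: iterate the logistic map
    x_{n+1} = mu * x_n * (1 - x_n) and emit 1 when the current sample
    exceeds its lag-`lag` predecessor, hiding the underlying map."""
    xs = [x0]
    for _ in range(200 + n_bits + lag):   # burn-in plus enough samples
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    xs = xs[200:]                         # drop the transient
    return [1 if xs[i + lag] > xs[i] else 0 for i in range(n_bits)]
```

As in the paper, a real generator would still need to pass the NIST suite before use in a stream cipher; the sketch only shows how the lag decorrelates the output from the map.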

  4. Three-dimensional vectorial multifocal arrays created by pseudo-period encoding

    NASA Astrophysics Data System (ADS)

    Zeng, Tingting; Chang, Chenliang; Chen, Zhaozhong; Wang, Hui-Tian; Ding, Jianping

    2018-06-01

    Multifocal arrays have been attracting considerable attention recently owing to their potential applications in parallel optical tweezers, parallel single-molecule orientation determination, parallel recording and multifocal multiphoton microscopy. However, the generation of vectorial multifocal arrays with a tailorable structure and polarization state remains a great challenge, and reports on multifocal arrays have hitherto been restricted either to scalar focal spots without polarization versatility or to regular arrays with fixed spacing. In this work, we propose a specific pseudo-period encoding technique to create three-dimensional (3D) vectorial multifocal arrays with the ability to manipulate the position, polarization state and intensity of each focal spot. We experimentally validated the flexibility of our approach in the generation of 3D vectorial multiple spots with polarization multiplicity and position tunability.

  5. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    Recently, MR simulators have gained popularity because they avoid the radiation exposure associated with the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
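
The PSNR figure of merit used above to compare pseudo and original CT follows directly from the mean squared error. This sketch takes flat intensity lists and an assumed 8-bit peak value; CT data would use its own dynamic range.

```python
import math

def psnr(orig, pred, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, pred)) / len(orig)
    if mse == 0.0:
        return float("inf")           # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

A uniform error of one gray level out of 255 gives 20·log10(255) ≈ 48.1 dB, which sets an intuitive scale for the values such papers report.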

  6. Noise-Induced Synchronization among Sub-RF CMOS Analog Oscillators for Skew-Free Clock Distribution

    NASA Astrophysics Data System (ADS)

    Utagawa, Akira; Asai, Tetsuya; Hirose, Tetsuya; Amemiya, Yoshihito

    We present on-chip oscillator arrays synchronized by random noise, aiming at skew-free clock distribution on synchronous digital systems. Nakao et al. recently reported that independent neural oscillators can be synchronized by applying temporal random impulses to the oscillators [1], [2]. We regard neural oscillators as independent clock sources on LSIs; i.e., clock sources are distributed on LSIs, and they are forced to synchronize through the use of random noise. We designed neuron-based clock generators operating in the sub-RF region (<1 GHz) by modifying the original neuron model into a new model suitable for CMOS implementation with 0.25-μm CMOS parameters. Through circuit simulations, we demonstrate that i) the clock generators are certainly synchronized by pseudo-random noise and ii) the clock generators exhibit phase-locked oscillations even with small device mismatches.

  7. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
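
    The pseudo-count smoothing in (b) can be sketched as simple additive smoothing over all 4^m possible m-tuples. The function name and the toy training set below are hypothetical illustrations, not the authors' implementation:

```python
from collections import Counter
from itertools import product

def wam_probabilities(sequences, pos, m=2, alpha=1.0):
    """Estimate smoothed m-tuple probabilities at position `pos` of aligned
    splice-site sequences via additive (pseudo-count) smoothing.
    alpha=0 gives the raw, non-smoothed estimates of variant (a)."""
    counts = Counter(seq[pos:pos + m] for seq in sequences)
    alphabet = [''.join(t) for t in product('ACGT', repeat=m)]
    total = sum(counts.values()) + alpha * len(alphabet)
    return {t: (counts[t] + alpha) / total for t in alphabet}

# Toy aligned training set (hypothetical):
train = ["AGGT", "AGGC", "CGGT"]
p = wam_probabilities(train, pos=1, m=2, alpha=1.0)

# Tuples never seen in training still get a small non-zero probability,
# so downstream log-likelihood scores never hit log(0):
assert p["TT"] > 0
assert abs(sum(p.values()) - 1.0) < 1e-9
```

The same pattern extends to the Gaussian smoothing of variant (c) by replacing the uniform pseudo-count `alpha` with position-dependent weights.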

  8. Optical analogue of relativistic Dirac solitons in binary waveguide arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tran, Truong X., E-mail: truong.tran@mpl.mpg.de; Max Planck Institute for the Science of Light, Günther-Scharowsky str. 1, 91058 Erlangen; Longhi, Stefano

    2014-01-15

    We study analytically and numerically an optical analogue of Dirac solitons in binary waveguide arrays in the presence of Kerr nonlinearity. Pseudo-relativistic soliton solutions of the coupled-mode equations describing dynamics in the array are analytically derived. We demonstrate that with the found soliton solutions, the coupled mode equations can be converted into the nonlinear relativistic 1D Dirac equation. This paves the way for using binary waveguide arrays as a classical simulator of quantum nonlinear effects arising from the Dirac equation, something that is thought to be impossible to achieve in conventional (i.e. linear) quantum field theory. Highlights: • An optical analogue of Dirac solitons in nonlinear binary waveguide arrays is suggested. • Analytical solutions to pseudo-relativistic solitons are presented. • A correspondence of optical coupled-mode equations with the nonlinear relativistic Dirac equation is established.

  9. Large-pitch steerable synthetic transmit aperture imaging (LPSSTA)

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kolios, Michael C.; Xu, Yuan

    2016-04-01

    A linear ultrasound array system usually has a larger pitch and is less costly than a phased array system, but loses the ability to fully steer the ultrasound beam. In this paper, we propose a system whose hardware is similar to a large-pitch linear array system, but whose ability to steer the beam is similar to a phased array system. The motivation is to reduce the total number of measurement channels M (the product of the number of transmissions, nT, and the number of receive channels in each transmission, nR), while maintaining reasonable image quality. We combined adjacent elements (with proper delays introduced) into groups that were used in both the transmit and receive processes of synthetic transmit aperture (STA) imaging. After the M channels of RF data were acquired, a pseudo-inverse was applied to estimate the equivalent signals of traditional STA and reconstruct an STA image. Even with similar M, different choices of nT and nR produce different image quality. The images produced with M = N^2/15 in the selected regions of interest (ROIs) were demonstrated to be comparable with those of a full phased array, where N is the number of array elements. The disadvantage of the proposed system is that its field of view in one delay configuration is smaller than that of a standard full phased array. However, by adjusting the delay for each element within each group, the beam can be steered to cover the same field of view as a standard fully-filled phased array. The LPSSTA system might be useful for 3D ultrasound imaging.
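
    The pseudo-inverse step can be illustrated with a toy linear model. The grouping matrix below is a made-up example (two adjacent elements per group, delays ignored), not the paper's actual system matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model: each grouped-element measurement is a sum of
# the equivalent single-element STA signals (delays omitted for brevity).
n_elements, n_groups = 16, 8
x_true = rng.standard_normal(n_elements)      # equivalent STA data
A = np.zeros((n_groups, n_elements))
for g in range(n_groups):                     # 2 adjacent elements per group
    A[g, 2 * g:2 * g + 2] = 1.0

y = A @ x_true                                # acquired grouped RF data
x_est = np.linalg.pinv(A) @ y                 # least-squares estimate

# The pseudo-inverse yields the minimum-norm solution consistent with y;
# recovering x_true itself requires more measurements than unknowns or
# extra structure, which is where the choice of nT and nR matters.
assert np.allclose(A @ x_est, y)
```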

  10. Estimation of perfusion properties with MR Fingerprinting Arterial Spin Labeling.

    PubMed

    Wright, Katherine L; Jiang, Yun; Ma, Dan; Noll, Douglas C; Griswold, Mark A; Gulani, Vikas; Hernandez-Garcia, Luis

    2018-03-12

    In this study, the acquisition of ASL data and quantification of multiple hemodynamic parameters was explored using a Magnetic Resonance Fingerprinting (MRF) approach. A pseudo-continuous ASL labeling scheme was used with pseudo-randomized timings to acquire the MRF ASL data in a 2.5 min acquisition. A large dictionary of MRF ASL signals was generated by combining a wide range of physical and hemodynamic properties with the pseudo-random MRF ASL sequence and a two-compartment model. The acquired signals were matched to the dictionary to provide simultaneous quantification of cerebral blood flow, tissue time-to-peak, cerebral blood volume, arterial time-to-peak, B1, and T1. A study in seven healthy volunteers resulted in the following values across the population in grey matter (mean ± standard deviation): cerebral blood flow of 69.1 ± 6.1 ml/min/100 g, arterial time-to-peak of 1.5 ± 0.1 s, tissue time-to-peak of 1.5 ± 0.1 s, T1 of 1634 ms, cerebral blood volume of 0.0048 ± 0.0005. The CBF measurements were compared to standard pCASL CBF estimates using a one-compartment model, and a Bland-Altman analysis showed good agreement with a minor bias. Repeatability was tested in five volunteers in the same exam session, and no statistical difference was seen. In addition to this validation, the MRF ASL acquisition's sensitivity to the physical and physiological parameters of interest was studied numerically. Copyright © 2018 Elsevier Inc. All rights reserved.
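
    The dictionary-matching step common to MRF methods can be sketched as a maximum normalized inner product between the acquired signal and precomputed signal evolutions. The dictionary and the "CBF" labels below are synthetic stand-ins, not the study's two-compartment simulations:

```python
import numpy as np

def mrf_match(signal, dictionary, params):
    """Match an acquired MRF signal to a precomputed dictionary by maximum
    normalized inner product; return the parameters of the best entry.
    `dictionary` is (n_entries, n_timepoints); `params` is (n_entries, ...)."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return params[np.argmax(d @ s)]

# Toy dictionary of 3 signal evolutions with made-up CBF labels:
rng = np.random.default_rng(1)
D = rng.standard_normal((3, 50))
labels = np.array([40.0, 60.0, 80.0])           # hypothetical CBF values
noisy = D[1] + 0.05 * rng.standard_normal(50)   # acquired = entry 1 + noise
assert mrf_match(noisy, D, labels) == 60.0
```

In the actual method each dictionary entry carries the full parameter tuple (CBF, time-to-peak values, T1, B1), so one match quantifies all of them at once.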

  11. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo-random integer sequence. A system for accomplishing the invention is also provided.
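
    The general idea, chaotic iterates driving a keystream, can be sketched with a single logistic map. This toy uses one map rather than the patent's m independent maps, has no cryptographic hardening, and is for illustration only:

```python
def logistic_keystream(x0, r, n, skip=100):
    """Generate n pseudo-random bytes from logistic-map iterates
    x_{k+1} = r*x_k*(1-x_k); the first `skip` iterates are discarded
    to let transients die out."""
    x, out = x0, []
    for i in range(skip + n):
        x = r * x * (1.0 - x)
        if i >= skip:
            out.append(int(x * 256) % 256)   # map the iterate to a byte
    return bytes(out)

def xor_crypt(message, x0=0.3141592653, r=3.99):
    """XOR the message with the chaotic keystream; the map parameters
    (x0, r) play the role of the secret key."""
    ks = logistic_keystream(x0, r, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

ct = xor_crypt(b"attack at dawn")
assert xor_crypt(ct) == b"attack at dawn"    # XOR cipher is its own inverse
```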

  12. GPS-Like Phasing Control of the Space Solar Power System Transmission Array

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    2003-01-01

    The problem of phasing of the Space Solar Power System's transmission array has been addressed by developing a GPS-like radio navigation system. The goal of this system is to provide power transmission phasing control for each node of the array that causes the power signals to add constructively at the ground reception station. The phasing control system operates in a distributed manner, which makes it practical to implement. A leader node and two radio navigation beacons are used to control the power transmission phasing of multiple follower nodes. The necessary one-way communications to the follower nodes are implemented using the RF beacon signals. The phasing control system uses differential carrier phase relative navigation/timing techniques. A special feature of the system is an integer ambiguity resolution procedure that periodically resolves carrier phase cycle count ambiguities via encoding of pseudo-random number codes on the power transmission signals. The system is capable of achieving phasing accuracies on the order of 3 mm down to 0.4 mm depending on whether the radio navigation beacons operate in the L or C bands.

  13. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576
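
    As an example of the "standard statistical tests" such generators must pass, a minimal version of the NIST SP 800-22 frequency (monobit) test can be written as:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the null
    hypothesis that ones and zeros are equally likely. A p-value below
    the significance level (commonly 0.01) rejects the sequence."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

# A perfectly balanced sequence gives p = 1.0 for this particular test
# (it would fail other tests in the suite, e.g. the runs test):
assert monobit_p_value([0, 1] * 5000) == 1.0
# A constant sequence is decisively rejected:
assert monobit_p_value([1] * 100) < 1e-6
```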

  14. Radar Attitude Sensing System (RASS)

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The initial design and fabrication efforts for a radar attitude sensing system (RASS) are covered. The design and fabrication of the RASS system is being undertaken in two phases, 1B1 and 1B2. The RASS system as configured under phase 1B1 contains the solid state transmitter and local oscillator, the antenna system, the receiving system, and the altitude electronics. RASS employs a pseudo-random coded cw signal and receiver correlation techniques to measure range. The antenna is a planar, phased array, monopulse type, whose beam is electronically steerable using diode phase shifters. The beam steering computer and attitude sensing circuitry are to be included in Phase 1B2 of the program.

  15. A focal plane metrology system and PSF centroiding experiment

    NASA Astrophysics Data System (ADS)

    Li, Haitao; Li, Baoquan; Cao, Yang; Li, Ligang

    2016-10-01

    In this paper, we present an overview of a detector array equipment metrology testbed and a micro-pixel centroiding experiment currently under development at the National Space Science Center, Chinese Academy of Sciences. We discuss on-going development efforts aimed at calibrating the intra-/inter-pixel quantum efficiency and pixel positions for scientific grade CMOS detector, and review significant progress in achieving higher precision differential centroiding for pseudo star images in large area back-illuminated CMOS detector. Without calibration of pixel positions and intrapixel response, we have demonstrated that the standard deviation of differential centroiding is below 2.0e-3 pixels.

  16. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
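
    Generating pseudo-random numbers with a given distribution can be illustrated with inverse-transform sampling, a standard textbook method (the exponential target and the seed below are arbitrary choices for the sketch):

```python
import math
import random

def sample_exponential(lam, n, seed=42):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    X = -ln(1-U)/lam follows an Exponential(lam) distribution,
    because the CDF F(x) = 1 - exp(-lam*x) inverts to this form."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(lam=2.0, n=100_000)
mean = sum(samples) / len(samples)
assert abs(mean - 0.5) < 0.01   # E[X] = 1/lam = 0.5
```

The same recipe works for any distribution whose cumulative distribution function can be inverted; when it cannot, rejection sampling or the Metropolis algorithm mentioned above takes over.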

  17. Wanted: A Positive Control for Anomalous Subdiffusion

    PubMed Central

    Saxton, Michael J.

    2012-01-01

    Anomalous subdiffusion in cells and model systems is an active area of research. The main questions are whether diffusion is anomalous or normal, and if it is anomalous, its mechanism. The subject is controversial, especially the hypothesis that crowding causes anomalous subdiffusion. Anomalous subdiffusion measurements would be strengthened by an experimental standard, particularly one able to cross-calibrate the different types of measurements. Criteria for a calibration standard are proposed. First, diffusion must be anomalous over the length and timescales of the different measurements. The length-scale is fundamental; the time scale can be adjusted through the viscosity of the medium. Second, the standard must be theoretically well understood, with a known anomalous subdiffusion exponent, ideally readily tunable. Third, the standard must be simple, reproducible, and independently characterizable (by, for example, electron microscopy for nanostructures). Candidate experimental standards are evaluated, including obstructed lipid bilayers; aqueous systems obstructed by nanopillars; a continuum percolation system in which a prescribed fraction of randomly chosen obstacles in a regular array is ablated; single-file diffusion in pores; transient anomalous subdiffusion due to binding of particles in arrays such as transcription factors in randomized DNA arrays; and computer-generated physical trajectories. PMID:23260043

  18. Comparison of tool feed influence in CNC polishing between a novel circular-random path and other pseudo-random paths.

    PubMed

    Takizawa, Ken; Beaucamp, Anthony

    2017-09-18

    A new category of circular pseudo-random paths is proposed in order to suppress repetitive patterns and improve surface waviness on ultra-precision polished surfaces. Random paths in prior research had many corners, therefore deceleration of the polishing tool affected the surface waviness. The new random path can suppress velocity changes of the polishing tool and thus restrict degradation of the surface waviness, making it suitable for applications with stringent mid-spatial-frequency requirements such as photomask blanks for EUV lithography.

  19. 32 x 16 CMOS smart pixel array for optical interconnects

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Choquette, Kent D.; Kiamilev, Fouad E.

    2000-05-01

    Free space optical interconnects can increase throughput capacities and eliminate much of the energy consumption required for `all electronic' systems. High speed optical interconnects can be achieved by integrating optoelectronic devices with conventional electronics. Smart pixel arrays have been developed which use optical interconnects. An individual smart pixel cell is composed of a vertical cavity surface emitting laser (VCSEL), a photodetector, an optical receiver, a laser driver, and digital logic circuitry. Oxide-confined VCSELs are being developed to operate at 850 nm with a threshold current of approximately 1 mA. Multiple quantum well photodetectors are being fabricated from AlGaAs for use with the 850 nm VCSELs. The VCSELs and photodetectors are being integrated with complementary metal oxide semiconductor (CMOS) circuitry using flip-chip bonding. CMOS circuitry is being integrated with a 32 x 16 smart pixel array. The 512 smart pixels are serially linked; thus, an entire data stream may be clocked through the chip and output electrically by the last pixel. Electrical testing is being performed on the CMOS smart pixel array. Using an on-chip pseudo-random number generator, a digital data sequence was cycled through the chip, verifying operation of the digital circuitry. Although the prototype chip was fabricated in 1.2 μm technology, simulations have demonstrated that the array can operate at 1 Gb/s per pixel using 0.5 μm technology.

  20. Flip-chip bonded optoelectronic integration based on ultrathin silicon (UTSi) CMOS

    NASA Astrophysics Data System (ADS)

    Hong, Sunkwang; Ho, Tawei; Zhang, Liping; Sawchuk, Alexander A.

    2003-06-01

    We describe the design and test of flip-chip bonded optoelectronic CMOS devices based on Peregrine Semiconductor's 0.5 micron Ultra-Thin Silicon on sapphire (UTSi) technology. The UTSi process eliminates the substrate leakage that typically results in crosstalk and reduces parasitic capacitance to the substrate, providing many benefits compared to bulk silicon CMOS. The low-loss synthetic sapphire substrate is optically transparent and has a coefficient of thermal expansion suitable for flip-chip bonding of vertical cavity surface emitting lasers (VCSELs) and detectors. We have designed two different UTSi CMOS chips. One contains a flip-chip bonded 1 x 4 photodiode array, a receiver array, a double edge triggered D-flip flop-based 2047-pattern pseudo random bit stream (PRBS) generator and a quadrature-phase LC-voltage controlled oscillator (VCO). The other chip contains a flip-chip bonded 1 x 4 VCSEL array, a driver array based on high-speed low-voltage differential signals (LVDS) and a full-balanced differential LC-VCO. Each VCSEL driver and receiver has individual input and bias voltage adjustments. Each UTSi chip is mounted on different printed circuit boards (PCBs) which have holes with about 1 mm radius for optical output and input paths through the sapphire substrate. We discuss preliminary testing of these chips.

  1. Pseudo-random tool paths for CNC sub-aperture polishing and other applications.

    PubMed

    Dunn, Christina R; Walker, David D

    2008-11-10

    In this paper we first contrast classical and CNC polishing techniques in regard to the repetitiveness of the machine motions. We then present a pseudo-random tool path for use with CNC sub-aperture polishing techniques and report polishing results from equivalent random and raster tool-paths. The random tool-path used - the unicursal random tool-path - employs a random seed to generate a pattern which never crosses itself. Because of this property, this tool-path is directly compatible with dwell time maps for corrective polishing. The tool-path can be used to polish any continuous area of any boundary shape, including surfaces with interior perforations.

  2. An inverter-based capacitive trans-impedance amplifier readout with offset cancellation and temporal noise reduction for IR focal plane array

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Han; Hsieh, Chih-Cheng

    2013-09-01

    This paper presents a readout integrated circuit (ROIC) with an inverter-based capacitive trans-impedance amplifier (CTIA) and a pseudo-multiple sampling technique for infrared focal plane arrays (IRFPA). The proposed inverter-based CTIA with a coupling capacitor [1], which executes an auto-zeroing technique to cancel out offset-voltage variation from the fabrication process, substitutes for the differential amplifier in a conventional CTIA. The tunable detector bias is applied from a global external bias before exposure. This scheme not only retains a stable detector bias voltage and signal injection efficiency, but also reduces the pixel area. The pseudo-multiple sampling technique [2] is adopted to reduce the temporal noise of the readout circuit. Its noise reduction performance is comparable to conventional multiple sampling without the longer readout time proportional to the number of samples. A CMOS image sensor chip with a 55×65 pixel array has been fabricated in 0.18-μm CMOS technology. It achieves a 12 μm × 12 μm pixel size, a frame rate of 72 fps, a power per pixel of 0.66 μW/pixel, and a readout temporal noise of 1.06 mVrms (with 16-fold pseudo-multiple sampling).

  3. High density submicron magnetoresistive random access memory (invited)

    NASA Astrophysics Data System (ADS)

    Tehrani, S.; Chen, E.; Durlam, M.; DeHerrera, M.; Slaughter, J. M.; Shi, J.; Kerszykowski, G.

    1999-04-01

    Various giant magnetoresistance material structures were patterned and studied for their potential as memory elements. The preferred memory element, based on pseudo-spin valve structures, was designed with two magnetic stacks (NiFeCo/CoFe) of different thickness with Cu as an interlayer. The difference in thickness results in dissimilar switching fields due to the shape anisotropy at deep submicron dimensions. It was found that a lower switching current can be achieved when the bits have a word line that wraps around the bit 1.5 times. Submicron memory elements integrated with complementary metal-oxide-semiconductor (CMOS) transistors maintained their characteristics and no degradation to the CMOS devices was observed. Selectivity between memory elements in high-density arrays was demonstrated.

  4. The Development of a Stochastic Model of the Atmosphere Between 30 and 90 Km to Be Used in Determining the Effect of Atmospheric Variability on Space Shuttle Entry Parameters. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.

    1973-01-01

    A stochastic model of the atmosphere between 30 and 90 km was developed for use in Monte Carlo space shuttle entry studies. The model is actually a family of models, one for each latitude-season category as defined in the 1966 U.S. Standard Atmosphere Supplements. Each latitude-season model generates a pseudo-random temperature profile whose mean is the appropriate temperature profile from the Standard Atmosphere Supplements. The standard deviation of temperature at each altitude for a given latitude-season model was estimated from sounding-rocket data. Departures from the mean temperature at each altitude were produced by assuming a linear regression of temperature on the solar heating rate of ozone. A profile of random ozone concentrations was first generated using an auxiliary stochastic ozone model, also developed as part of this study, and then solar heating rates were computed for the random ozone concentrations.

  5. Non-Hermitian engineering of single mode two dimensional laser arrays

    PubMed Central

    Teimourpour, Mohammad H.; Ge, Li; Christodoulides, Demetrios N.; El-Ganainy, Ramy

    2016-01-01

    A new scheme for building two dimensional laser arrays that operate in the single supermode regime is proposed. This is done by introducing an optical coupling between the laser array and lossy pseudo-isospectral chains of photonic resonators. The spectrum of this discrete reservoir is tailored to suppress all the supermodes of the main array except the fundamental one. This spectral engineering is facilitated by employing the Householder transformation in conjunction with discrete supersymmetry. The proposed scheme is general and can in principle be used in different platforms such as VCSEL arrays and photonic crystal laser arrays. PMID:27698355

  6. On the predictive control of foveal eye tracking and slow phases of optokinetic and vestibular nystagmus.

    PubMed Central

    Yasui, S; Young, L R

    1984-01-01

    Smooth pursuit and saccadic components of foveal visual tracking as well as more involuntary ocular movements of optokinetic (o.k.n.) and vestibular nystagmus slow phase components were investigated in man, with particular attention given to their possible input-adaptive or predictive behaviour. Each component in question was isolated from the eye movement records through a computer-aided procedure. The frequency response method was used with sinusoidal (predictable) and pseudo-random (unpredictable) stimuli. When the target motion was pseudo-random, the frequency response of pursuit eye movements revealed a large phase lead (up to about 90 degrees) at low stimulus frequencies. It is possible to interpret this result as a predictive effect, even though the stimulation was pseudo-random and thus 'unpredictable'. The pseudo-random-input frequency response intrinsic to the saccadic system was estimated in an indirect way from the pursuit and composite (pursuit + saccade) frequency response data. The result was fitted well by a servo-mechanism model, which has a simple anticipatory mechanism to compensate for the inherent neuromuscular saccadic delay by utilizing the retinal slip velocity signal. The o.k.n. slow phase also exhibited a predictive effect with sinusoidal inputs; however, pseudo-random stimuli did not produce such phase lead as found in the pursuit case. The vestibular nystagmus slow phase showed no noticeable sign of prediction in the frequency range examined (0 to approximately 0.7 Hz), in contrast to the results of the visually driven eye movements (i.e. saccade, pursuit and o.k.n. slow phase) at comparable stimulus frequencies. PMID:6707954

  7. Pseudo-Random Number Generation in Children with High-Functioning Autism and Asperger's Disorder: Further Evidence for a Dissociation in Executive Functioning?

    ERIC Educational Resources Information Center

    Rinehart, Nicole J.; Bradshaw, John L.; Moss, Simon A.; Brereton, Avril V.; Tonge, Bruce J.

    2006-01-01

    The repetitive, stereotyped and obsessive behaviours, which are core diagnostic features of autism, are thought to be underpinned by executive dysfunction. This study examined executive impairment in individuals with autism and Asperger's disorder using a verbal equivalent of an established pseudo-random number generating task. Different patterns…

  8. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  9. Encryption method based on pseudo random spatial light modulation for single-fibre data transmission

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Zyczkowski, Marek

    2017-11-01

    Optical cryptosystems can provide encryption and sometimes compression simultaneously. They are increasingly attractive for securing information, especially for image encryption. Our studies have shown that optical cryptosystems can be used to encrypt optical data transmission. We propose and study a new method for securing fibre data communication. The paper presents a method for optical encryption of data transmitted over a single optical fibre. The encryption process relies on pseudo-random spatial light modulation, a combination of two encryption keys, and the compressed sensing framework. A linear combination of light pulses with pseudo-random patterns provides the required encryption performance. We propose an architecture to transmit the encrypted data through the optical fibre. The paper describes the method, presents the theoretical analysis, the design of a physical model, and experimental results.
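
    The pseudo-random modulation and compressive measurement step can be sketched as follows. The reconstruction side, which needs a sparsity prior, is omitted, and the sizes and seed are arbitrary illustrative choices:

```python
import numpy as np

key_seed = 7                                    # shared secret: the pattern seed
n_pixels, n_measurements = 64, 32               # compressive: fewer measurements
image = np.random.default_rng(0).random(n_pixels)

# Sender: pseudo-random binary modulation patterns derived from the key,
# playing the role of the spatial light modulator states.
phi_tx = np.random.default_rng(key_seed).integers(0, 2, (n_measurements, n_pixels))
measurements = phi_tx.astype(float) @ image     # linear combinations sent over the fibre

# Receiver: regenerates the identical patterns from the shared seed, which
# is what makes compressed-sensing reconstruction (not shown) possible.
phi_rx = np.random.default_rng(key_seed).integers(0, 2, (n_measurements, n_pixels))
assert (phi_tx == phi_rx).all()
```

Without the seed an eavesdropper sees only the underdetermined linear mixtures, which is the basis of the claimed encryption performance.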

  10. Eating in the absence of hunger in adolescents: intake after a large-array meal compared with that after a standardized meal.

    PubMed

    Shomaker, Lauren B; Tanofsky-Kraff, Marian; Zocca, Jaclyn M; Courville, Amber; Kozlosky, Merel; Columbo, Kelli M; Wolkoff, Laura E; Brady, Sheila M; Crocker, Melissa K; Ali, Asem H; Yanovski, Susan Z; Yanovski, Jack A

    2010-10-01

    Eating in the absence of hunger (EAH) is typically assessed by measuring youths' intake of palatable snack foods after a standard meal designed to reduce hunger. Because energy intake required to reach satiety varies among individuals, a standard meal may not ensure the absence of hunger among participants of all weight strata. The objective of this study was to compare adolescents' EAH observed after access to a very large food array with EAH observed after a standardized meal. Seventy-eight adolescents participated in a randomized crossover study during which EAH was measured as intake of palatable snacks after ad libitum access to a very large array of lunch-type foods (>10,000 kcal) and after a lunch meal standardized to provide 50% of the daily estimated energy requirements. The adolescents consumed more energy and reported less hunger after the large-array meal than after the standardized meal (P values < 0.001). They consumed ≈70 kcal less EAH after the large-array meal than after the standardized meal (295 ± 18 compared with 365 ± 20 kcal; P < 0.001), but EAH intakes after the large-array meal and after the standardized meal were positively correlated (P values < 0.001). The body mass index z score and overweight were positively associated with EAH in both paradigms after age, sex, race, pubertal stage, and meal intake were controlled for (P values ≤ 0.05). EAH is observable and positively related to body weight regardless of whether youth eat in the absence of hunger from a very large-array meal or from a standardized meal. This trial was registered at clinicaltrials.gov as NCT00631644.

  11. Laser positioning of four-quadrant detector based on pseudo-random sequence

    NASA Astrophysics Data System (ADS)

    Tang, Yanqin; Cao, Ercong; Hu, Xiaobo; Gu, Guohua; Qian, Weixian

    2016-10-01

    Laser positioning based on a four-quadrant detector is widely studied and applied. Its principle is to capture the projection of the laser spot on the photosensitive surface of the detector and to compute, from the detector's output signals, the coordinates of the spot on that surface; these in turn yield the spatial coordinates of the laser spot relative to the detector system, and hence the position of the target object. Because a pseudo-random sequence has correlation properties similar to white noise, interference and noise during the measurement process have little effect on the correlation peak, and the wide availability of FPGA technology makes such processing practical. To improve the anti-jamming capability of a guided missile during tracking, the laser pulse period is pseudo-randomly encoded within the range 40-65 ms at emission, so that a jammer cannot identify the true laser pulses. Because the receiver knows how to decode the pseudo-random sequence, it can recover the pulse period after receiving two consecutive laser pulses. In the FPGA implementation, the receiver opens a range gate around each expected pulse arrival time to extract the position information carried by the true signal. Since the first two received pulses may themselves be disturbed, after receiving the first laser pulse the receiver accepts all pulses arriving within the next 40-65 ms in order to recover the corresponding pseudo-random code.

  12. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. We find that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation owing to its inherent chaotic dynamics. We further discuss its application to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption, in terms of various hash tests and randomness tests. These results extend the scope of application of quantum computation and quantum information.

  15. Polarimetry With Phased Array Antennas: Theoretical Framework and Definitions

    NASA Astrophysics Data System (ADS)

    Warnick, Karl F.; Ivashina, Marianna V.; Wijnholds, Stefan J.; Maaskant, Rob

    2012-01-01

    For phased array receivers, the accuracy with which the polarization state of a received signal can be measured depends on the antenna configuration, array calibration process, and beamforming algorithms. A signal and noise model for a dual-polarized array is developed and related to standard polarimetric antenna figures of merit, and the ideal polarimetrically calibrated, maximum-sensitivity beamforming solution for a dual-polarized phased array feed is derived. A practical polarimetric beamformer solution that does not require exact knowledge of the array polarimetric response is shown to be equivalent to the optimal solution in the sense that when the practical beamformers are calibrated, the optimal solution is obtained. To provide a rough initial polarimetric calibration for the practical beamformer solution, an approximate single-source polarimetric calibration method is developed. The modeled instrumental polarization error for a dipole phased array feed with the practical beamformer solution and single-source polarimetric calibration was -10 dB or lower over the array field of view for elements with alignments perturbed by random rotations with 5 degree standard deviation.

  16. Design and implementation of Gm-APD array readout integrated circuit for infrared 3D imaging

    NASA Astrophysics Data System (ADS)

    Zheng, Li-xia; Yang, Jun-hao; Liu, Zhao; Dong, Huai-peng; Wu, Jin; Sun, Wei-feng

    2013-09-01

    A single-photon-detecting readout integrated circuit (ROIC) array capable of infrared 3D imaging by photon detection and time-of-flight measurement is presented in this paper. InGaAs avalanche photodiodes (APDs), dynamically biased in Geiger mode by gate-controlled active quenching circuits (AQCs), are used. The time of flight is measured accurately by a high-accuracy time-to-digital converter (TDC) integrated in the ROIC. For 3D imaging, a frame-rate control technique is applied to pixel detection: the APD in each pixel is controlled by an individual AQC, which senses and quenches the avalanche current and provides a digital CMOS-compatible voltage pulse. After each detection, the detector is reset to await the next frame. The counters use a two-segment coarse-fine architecture, in which coarse conversion is performed by a 10-bit pseudo-random linear feedback shift register (LFSR) in each pixel and 3-bit fine conversion is realized by a ring delay line shared by all pixels. The reference clock driving the LFSR counter can be generated by the ring-delay-line oscillator or provided by an external clock source. The circuit is designed and implemented in a CSMC 0.5 μm standard CMOS technology, and the total chip area is about 2 mm × 2 mm for the 8×8-pixel ROIC with 150 μm pixel pitch. Simulation results indicate that the time resolution of the proposed ROIC is better than 1 ns, and preliminary test results show that the circuit functions correctly.
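
The in-pixel coarse counter is a pseudo-random counter: a maximal-length LFSR cycles through all 2^10 - 1 states in a fixed pseudo-random order, which is cheaper in silicon than a binary ripple counter. A behavioral sketch follows; the tap pair (10, 7) is a standard maximal-length choice, not necessarily the chip's actual polynomial.

```python
def lfsr10(state=1):
    """10-bit maximal-length Fibonacci LFSR.  Taps at bits 10 and 7
    (polynomial x^10 + x^7 + 1) are a standard maximal pair, so the
    counter steps through all 2**10 - 1 = 1023 non-zero states."""
    while True:
        bit = ((state >> 9) ^ (state >> 6)) & 1  # XOR of bits 10 and 7
        state = ((state << 1) | bit) & 0x3FF
        yield state

seen = set()
for s in lfsr10():
    if s in seen:                # stop at the first repeated state
        break
    seen.add(s)
print(len(seen))  # 1023: a full period before any state repeats
```

A time stamp can then be recovered off-chip from the stored state with a small lookup table mapping each LFSR state back to its step index.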

  17. High precision computing with charge domain devices and a pseudo-spectral method therefor

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)

    1997-01-01

    The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton-equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.

  18. Observations of the R Reflector and Sediment Interface Reflection at the Shallow Water '06 Central Site

    DTIC Science & Technology

    2008-08-28

    line array position of Woods Hole Oceanographic Institution (WHOI) during the SWARM experiment by 26 km, and southeast of the AMCOR borehole No. 6010...guided by the stratigraphic constraints provided by closely spaced 50 m chirp seismic reflection profiles that provide pseudo three-dimensional... array is at the center of set of stations at location M. c Geometry showing source position R/V KNORR with respect to the receiving array and the

  19. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
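
The generator described is a Lehmer-type linear congruential generator. The Sigma 5 program's own constants are not given in the abstract, so this sketch uses the classic Park-Miller pair: modulus 2^31 - 1 (the largest prime in a 31-bit word) and multiplier 16807, a primitive root of that prime. The Marsaglia lattice test used to vet the root is not reproduced here.

```python
# Lehmer-style linear congruential generator of the kind the abstract
# describes: prime modulus with a primitive-root multiplier.
M = 2**31 - 1   # Mersenne prime 2147483647
A = 16807       # a primitive root modulo M (Park-Miller choice)

def lehmer(seed):
    x = seed
    while True:
        x = (A * x) % M
        yield x

g = lehmer(1)
x = 0
for _ in range(10000):
    x = next(g)
print(x)  # 1043618065, the published check value for this generator
```

Every output lies strictly between 0 and M, and the sequence has full period M - 1 because A is a primitive root.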

  20. Accelerating Pseudo-Random Number Generator for MCNP on GPU

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu

    2010-09-01

    Pseudo-random number generators (PRNGs) are used intensively in many stochastic algorithms in particle simulation, artificial neural networks and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) must have a long period and high quality, support flexible jumps, and be fast. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 graphics processing units (GPUs) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
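
The "flexible jump" requirement can be illustrated for a linear congruential generator, the family MCNP's stream generator is based on: composing the one-step affine map with itself by binary exponentiation reaches the state k steps ahead in O(log k) work, which is what lets each GPU thread start at its own offset in a shared stream. The constants below are illustrative, not MCNP's actual parameters.

```python
# Jump-ahead for an LCG x -> (A*x + C) mod M via binary exponentiation
# of the affine map; constants are illustrative only.
M = 2**63
A = 2806196910506780709
C = 1

def step(x):
    return (A * x + C) % M

def jump(x, k):
    a, c = 1, 0      # accumulated affine map y -> a*y + c (identity)
    ak, ck = A, C    # current power-of-two power of the one-step map
    while k:
        if k & 1:
            a, c = (a * ak) % M, (c * ak + ck) % M
        ak, ck = (ak * ak) % M, (ck * ak + ck) % M  # square the map
        k >>= 1
    return (a * x + c) % M

x0, x = 12345, 12345
for _ in range(1000):
    x = step(x)
print(x == jump(x0, 1000))  # True
```

Powers of the same affine map commute, so the accumulation order in the loop is safe; this is the standard skip-ahead used to partition one LCG stream across many parallel workers.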

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V.V.; Conley, R.; Anderson, E.H.

    Verification of the reliability of metrology data from high-quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification of optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer. Here we describe the details of the development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with BPRML test samples fabricated from a WSi{sub 2}/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements reveal a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes; corresponding work with X-ray microscopes is in progress.

  2. Range-azimuth decouple beamforming for frequency diverse array with Costas-sequence modulated frequency offsets

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Wang, Wen-Qin; Shao, Huaizong

    2016-12-01

    Different from the phased-array using the same carrier frequency for each transmit element, the frequency diverse array (FDA) uses a small frequency offset across the array elements to produce range-angle-dependent transmit beampattern. FDA radar provides new application capabilities and potentials due to its range-dependent transmit array beampattern, but the FDA using linearly increasing frequency offsets will produce a range and angle coupled transmit beampattern. In order to decouple the range-azimuth beampattern for FDA radar, this paper proposes a uniform linear array (ULA) FDA using Costas-sequence modulated frequency offsets to produce random-like energy distribution in the transmit beampattern and thumbtack transmit-receive beampattern. In doing so, the range and angle of targets can be unambiguously estimated through matched filtering and subspace decomposition algorithms in the receiver signal processor. Moreover, random-like energy distributed beampattern can also be utilized for low probability of intercept (LPI) radar applications. Numerical results show that the proposed scheme outperforms the standard FDA in focusing the transmit energy, especially in the range dimension.
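
A Costas sequence of the sort used for the frequency offsets can be generated with the classical Welch construction (successive powers of a primitive root modulo a prime). This is a generic sketch, not the authors' specific sequence.

```python
def welch_costas(p, g):
    """Welch construction of a Costas sequence: for a prime p and a
    primitive root g modulo p, the sequence a_i = g**i mod p
    (i = 1..p-1) is a permutation with the Costas property."""
    return [pow(g, i, p) for i in range(1, p)]

def is_costas(seq):
    """Costas property: within each row d of the difference triangle,
    all differences seq[i+d] - seq[i] are distinct -- the feature that
    spreads the transmit energy in a thumbtack-like pattern."""
    n = len(seq)
    seen = set()
    for d in range(1, n):
        for i in range(n - d):
            v = (d, seq[i + d] - seq[i])
            if v in seen:
                return False
            seen.add(v)
    return True

seq = welch_costas(11, 2)  # 2 is a primitive root modulo 11
print(seq)                 # [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]
assert is_costas(seq)
```

Each element of the permutation would scale the frequency offset of one array element, replacing the linearly increasing offsets of the standard FDA.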

  3. [A magnetic therapy apparatus with an adaptable electromagnetic spectrum for the treatment of prostatitis and gynecopathies].

    PubMed

    Kuz'min, A A; Meshkovskiĭ, D V; Filist, S A

    2008-01-01

    Problems of engineering and algorithm development of magnetic therapy apparatuses with pseudo-random radiation spectrum within the audio range for treatment of prostatitis and gynecopathies are considered. A typical design based on a PIC 16F microcontroller is suggested. It includes a keyboard, LCD indicator, audio amplifier, inducer, and software units. The problem of pseudo-random signal generation within the audio range is considered. A series of rectangular pulses is generated on a random-length interval on the basis of a three-component random vector. This series provides the required spectral characteristics of the therapeutic magnetic field and their adaptation to the therapeutic conditions and individual features of the patient.

  4. CMOS gate array characterization procedures

    NASA Astrophysics Data System (ADS)

    Spratt, James P.

    1993-09-01

    Present procedures are inadequate for characterizing the radiation hardness of gate array product lines prior to personalization, because the selection of circuits to be used, from among all those available in the manufacturer's circuit library, is usually uncontrolled. (Some circuits are fundamentally more radiation resistant than others.) In such cases, differences in hardness can result between different designs of the same logic function. Hardness also varies because many gate arrays feature large custom-designed megacells, e.g., microprocessors (MicroP's) and random access memories (RAM's). As a result, different product lines cannot be compared equally. A characterization strategy is needed, along with standardized test vehicle(s), methodology, and conditions, so that users can make informed judgments on which gate arrays are best suited for their needs. The program described here developed preferred procedures for the radiation characterization of gate arrays, including a gate array evaluation test vehicle featuring a canary circuit designed to define the speed-versus-hardness envelope of the gate array. A multiplier was chosen for this role, and a baseline multiplier architecture is suggested that could be incorporated into an existing standard evaluation circuit chip.

  5. UBF-binding site arrays form pseudo-NORs and sequester the RNA polymerase I transcription machinery

    PubMed Central

    Mais, Christine; Wright, Jane E.; Prieto, José-Luis; Raggett, Samantha L.; McStay, Brian

    2005-01-01

    Human ribosomal genes (rDNA) are located in nucleolar organizer regions (NORs) on the short arms of acrocentric chromosomes. Metaphase NORs that were transcriptionally active in the previous cell cycle appear as prominent chromosomal features termed secondary constrictions that are achromatic in chromosome banding and positive in silver staining. The architectural RNA polymerase I (pol I) transcription factor UBF binds extensively across rDNA throughout the cell cycle. To determine if UBF binding underpins NOR structure, we integrated large arrays of heterologous UBF-binding sequences at ectopic sites on human chromosomes. These arrays efficiently recruit UBF even to sites outside the nucleolus and, during metaphase, form novel silver stainable secondary constrictions, termed pseudo-NORs, morphologically similar to NORs. We demonstrate for the first time that in addition to UBF the other components of the pol I machinery are found associated with sequences across the entire human rDNA repeat. Remarkably, a significant fraction of these same pol I factors are sequestered by pseudo-NORs independent of both transcription and nucleoli. Because of the heterologous nature of the sequence employed, we infer that sequestration is mediated primarily by protein–protein interactions with UBF. These results suggest that extensive binding of UBF is responsible for formation and maintenance of the secondary constriction at active NORs. Furthermore, we propose that UBF mediates recruitment of the pol I machinery to nucleoli independently of promoter elements. PMID:15598984

  6. Development of micro-mirror slicer integral field unit for space-borne solar spectrographs

    NASA Astrophysics Data System (ADS)

    Suematsu, Yoshinori; Saito, Kosuke; Koyama, Masatsugu; Enokida, Yukiya; Okura, Yukinobu; Nakayasu, Tomoyasu; Sukegawa, Takashi

    2017-12-01

    We present an innovative optical design for an image slicer integral field unit (IFU) and a manufacturing method that overcomes optical limitations of metallic mirrors. Our IFU consists of a micro-mirror slicer of 45 arrayed, highly narrow, flat metallic mirrors and a pseudo-pupil-mirror array of off-axis conic aspheres forming three pseudo slits of re-arranged slicer images. A prototype IFU demonstrates that the final optical quality is sufficiently high for a visible-light spectrograph. Each slicer micro-mirror is 1.58 mm long and 30 μm wide, with surface roughness ≤ 1 nm rms and edge sharpness ≤ 0.1 μm. The IFU is compact and can be implemented in a multi-slit spectrograph without any moving mechanism or fore optics, in which one slit is real and the others are pseudo slits from the IFU. The IFU mirrors were deposited with a space-qualified, protected silver coating for high reflectivity in the visible and near-IR wavelength regions. These properties are well suited to space-borne spectrographs such as the future Japanese solar space mission SOLAR-C. We present the optical design, the performance of the prototype IFU, and space qualification tests of the silver coating.

  7. Generation of pseudo-random numbers

    NASA Technical Reports Server (NTRS)

    Howell, L. W.; Rheinfurth, M. H.

    1982-01-01

    Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
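
One of the standard practical methods for this task is inverse-transform sampling. As a minimal sketch (the report's own methods are not reproduced here), an exponential variate can be drawn from a uniform one because the exponential CDF inverts in closed form.

```python
import math
import random

def sample_exponential(rng, lam):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1-U)/lam is exponentially distributed with rate lam."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
xs = [sample_exponential(rng, 2.0) for _ in range(100000)]
print(round(sum(xs) / len(xs), 3))  # sample mean, close to 1/lam = 0.5
```

The same pattern works for any distribution whose CDF has a computable inverse; for others, methods such as acceptance-rejection are used instead.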

  8. Hybrid spread spectrum radio system

    DOEpatents

    Smith, Stephen F.; Dress, William B.

    2010-02-02

    Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.
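
The hybrid idea in the claims can be sketched behaviorally: per data bit, one subset of pseudo-random bits selects the amplifier gain and another selects the hop frequency. Gain steps, channel plan, seed and bit widths below are all invented for illustration, not the patent's values.

```python
import random

GAINS = [0.5, 1.0, 2.0, 4.0]                      # 2 bits -> 4 gain steps
CHANNELS = [902.0 + 0.5 * i for i in range(16)]   # 4 bits -> 16 hop channels (MHz)

def modulate(bits, seed=1):
    """Split each 6-bit pseudo-random word: low bits drive the gain,
    the remaining bits drive the fast hopping synthesizer."""
    rng = random.Random(seed)
    out = []
    for b in bits:
        word = rng.getrandbits(6)
        gain = GAINS[word & 0b11]                 # low 2 bits control gain
        freq = CHANNELS[(word >> 2) & 0b1111]     # next 4 bits select the hop
        out.append((b * gain, freq))
    return out

def demodulate(symbols, seed=1):
    """A receiver with the same seed regenerates the schedule and undoes it."""
    rng = random.Random(seed)
    bits = []
    for amp, _freq in symbols:
        word = rng.getrandbits(6)
        bits.append(amp / GAINS[word & 0b11])
    return bits

data = [1, -1, 1, 1, -1]
assert demodulate(modulate(data)) == data
```

In the patent the hopping is described as occurring multiple times within a single data-bit time; here one hop per bit keeps the sketch short.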

  9. Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2007-09-01

    transmits a 32,767 bit pseudo-random “short” code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two “sample windows,” both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the...technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1). Figure

  10. One- and two-dimensional chemical exchange nuclear magnetic resonance studies of the creatine kinase catalyzed reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gober, J.R.

    1988-01-01

    The equilibrium chemical exchange dynamics of the creatine kinase enzyme system were studied by one- and two-dimensional {sup 31}P NMR techniques. Pseudo-first-order reaction rate constants were measured by the saturation transfer method under an array of experimental conditions of pH and temperature. Quantitative one-dimensional spectra were collected under the same conditions in order to calculate the forward and reverse reaction rates, the K{sub eq}, the hydrogen ion stoichiometry, and the standard thermodynamic functions. The pure-absorption-mode, four-quadrant two-dimensional chemical exchange experiment was employed so that the complete kinetic matrix showing all of the chemical exchange processes could be realized.

  11. Closing the evidence gap in infectious disease: point-of-care randomization and informed consent.

    PubMed

    Huttner, A; Leibovici, L; Theuretzbacher, U; Huttner, B; Paul, M

    2017-02-01

    The informed consent document is intended to provide basic rights to patients but often fails to do so. Patients' autonomy may be diminished by virtue of their illness; evidence shows that even patients who appear to be ideal candidates for understanding and granting informed consent rarely are, particularly those with acute infections. We argue that for low-risk trials whose purpose is to evaluate nonexperimental therapies or other measures towards which the medical community is in a state of equipoise, ethics committees should play a more active role in a more standardized fashion. Patients in the clinic are continually subject to spontaneous 'pseudo-randomizations' based on local dogma and the anecdotal experience of their physicians. Stronger ethics oversight would allow point-of-care trials to structure these spontaneous randomizations, using widely available informatics tools, in combination with opt-out informed consent where deemed appropriate. Copyright © 2016. Published by Elsevier Ltd.

  12. Experimental demonstration of the optical multi-mesh hypercube: scaleable interconnection network for multiprocessors and multicomputers.

    PubMed

    Louri, A; Furlonge, S; Neocleous, C

    1996-12-10

    A prototype of a novel topology for scaleable optical interconnection networks called the optical multi-mesh hypercube (OMMH) is experimentally demonstrated at data rates as high as 150 Mbit/s (2^7 - 1 nonreturn-to-zero pseudo-random data pattern) at a bit error rate of 10^-13 per link by the use of commercially available devices. OMMH is a scaleable network [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] architecture that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation method is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbit/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, misalignment sensitivity of the free-space links, low power (1 mW) of the individual VCSELs, and noise.
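
The quoted test pattern, a 2^7 - 1 NRZ pseudo-random sequence, is the standard 127-bit PRBS7 pattern produced by a maximal-length 7-bit LFSR. A sketch follows, assuming the common x^7 + x^6 + 1 polynomial, which the abstract does not specify.

```python
def prbs7(seed=0x7F):
    """One full period of the 2**7 - 1 = 127-bit pseudo-random binary
    sequence from a maximal-length 7-bit Fibonacci LFSR."""
    state, bits = seed & 0x7F, []
    for _ in range(127):
        bit = ((state >> 6) ^ (state >> 5)) & 1  # XOR of bits 7 and 6
        state = ((state << 1) | bit) & 0x7F
        bits.append(bit)
    return bits

pattern = prbs7()
print(len(pattern), sum(pattern))  # 127 64 -- 64 ones and 63 zeros
```

Because the LFSR is maximal, every non-zero 7-bit window appears exactly once per period, which is what makes the pattern a thorough stress test for a serial link.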

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.

    We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine tune the instrument's focal distance. Finally, our results confirm the universality of the method, which makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
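
The "deterministic white-noise-like" power spectral density follows from the two-valued circular autocorrelation of maximal-length binary sequences, from which BPR patterns are conventionally derived. A small sketch (an 8-bit example, not the authors' actual sequence):

```python
def m_sequence(n=8, taps=(8, 6, 5, 4)):
    """Maximal-length binary sequence from an n-bit LFSR; the taps used
    here are a standard primitive set for n = 8, giving period 255."""
    state, out = 1, []
    for _ in range(2**n - 1):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & (2**n - 1)
        out.append(1 if bit else -1)  # map {0,1} -> {-1,+1}
    return out

s = m_sequence()
N = len(s)
corr = [sum(s[i] * s[(i + k) % N] for i in range(N)) for k in range(N)]
# Two-valued autocorrelation: N at zero lag, exactly -1 at every other
# lag, i.e. an essentially flat (white-noise-like) power spectrum.
print(corr[0], set(corr[1:]))  # 255 {-1}
```

A flat input spectrum is what lets the measured output spectrum be read directly as the squared MTF of the instrument.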

  15. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
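
The pseudo-inverse approach can be sketched with a synthetic example: build the monopole propagation matrix from assumed source and probe positions, simulate array pressures, and recover the complex source strengths by least squares. The geometry, frequency and source values below are all invented for illustration; this is not the authors' code.

```python
import numpy as np

def monopole_transfer(src_xyz, mic_xyz, k):
    """G[m, s] = exp(-j*k*r)/(4*pi*r): free-field monopole Green's
    function from each assumed source point to each array probe."""
    r = np.linalg.norm(mic_xyz[:, None, :] - src_xyz[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

rng = np.random.default_rng(0)
sources = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0]])           # invented source grid
mics = rng.uniform(-1, 1, size=(32, 3)) + np.array([0, 0, 2.0])  # probes ~2 m away
k = 2 * np.pi * 1000.0 / 343.0                                   # 1 kHz in air

G = monopole_transfer(sources, mics, k)
q_true = np.array([1.0 + 0.5j, -0.7 + 0.2j])  # invented complex source strengths
p = G @ q_true                                # simulated array pressures
q_est = np.linalg.pinv(G) @ p                 # Moore-Penrose least-squares recovery
print(np.allclose(q_est, q_true))  # True in this noiseless, overdetermined case
```

With noisy data or more candidate sources than probes the plain pseudo-inverse becomes ill-conditioned, which is the regime where the deconvolution-based beamforming alternative in the paper is attractive.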

  16. SRAM As An Array Of Energetic-Ion Detectors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Blaes, Brent R.; Lieneweg, Udo; Nixon, Robert H.

    1993-01-01

    Static random-access memory (SRAM) designed for use as array of energetic-ion detectors. Exploits well-known tendency of incident energetic ions to cause bit flips in cells of electronic memories. Design of ion-detector SRAM involves modifications of standard SRAM design to increase sensitivity to ions. Device fabricated by use of conventional complementary metal oxide/semiconductor (CMOS) processes. Potential uses include gas densimetry, position sensing, and measurement of cosmic-ray spectrum.

  17. Effects of momentary self-monitoring on empowerment in a randomized controlled trial in patients with depression.

    PubMed

    Simons, C J P; Hartmann, J A; Kramer, I; Menne-Lothmann, C; Höhn, P; van Bemmel, A L; Myin-Germeys, I; Delespaul, P; van Os, J; Wichers, M

    2015-11-01

    Interventions based on the experience sampling method (ESM) are ideally suited to provide insight into personal, contextualized affective patterns in the flow of daily life. Recently, we showed that an ESM-intervention focusing on positive affect was associated with a decrease in symptoms in patients with depression. The aim of the present study was to examine whether the ESM-intervention increased patient empowerment. Participants were depressed out-patients (n=102) receiving psychopharmacological treatment who had participated in a randomized controlled trial with three arms: (i) an experimental group receiving six weeks of ESM self-monitoring combined with weekly feedback sessions, (ii) a pseudo-experimental group participating in six weeks of ESM self-monitoring without feedback, and (iii) a control group (treatment as usual only). Patients were recruited in the Netherlands between January 2010 and February 2012. Self-report empowerment scores were obtained pre- and post-intervention. There was an effect of group×assessment period, indicating that the experimental (B=7.26, P=0.061, d=0.44, statistically imprecise) and pseudo-experimental group (B=11.19, P=0.003, d=0.76) increased more in reported empowerment compared to the control group. In the pseudo-experimental group, 29% of the participants showed a statistically reliable increase in empowerment score and 0% a reliable decrease, compared to 17% reliable increase and 21% reliable decrease in the control group. The experimental group showed 19% reliable increase and 4% reliable decrease. These findings tentatively suggest that self-monitoring to complement standard antidepressant treatment may increase patients' feelings of empowerment. Further research is necessary to investigate long-term empowering effects of self-monitoring in combination with person-tailored feedback. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  18. Quantum Information Processing in the Wall of Cytoskeletal Microtubules

    PubMed Central

    Qiu, Xijun; Wu, Tongcheng; Li, Ruxin

    2006-01-01

    Microtubules (MT) are composed of 13 protofilaments, each of which is a series of two-state tubulin dimers. In the MT wall, these dimers can be pictured as “lattice” sites similar to crystal lattices. Based on the pseudo-spin model, two different location states of the mobile electron in each dimer are proposed. Accordingly, the MT wall is described as an anisotropic two-dimensional (2D) pseudo-spin system on a periodic triangular “lattice”. Because three different “spin-spin” interactions in each cell repeat periodically across the whole MT wall, the system may be shown to be an array of three types of two-pseudo-spin-state dimers. Under these conditions, the processing of quantum information is presented using the scheme developed by Lloyd. PMID:19669447

  19. Investigation of the performance characteristics of Doppler radar technique for aircraft collision hazard warning, phase 3

    NASA Technical Reports Server (NTRS)

    1972-01-01

    System studies, equipment simulation, hardware development and flight tests which were conducted during the development of an aircraft collision hazard warning system are discussed. The system uses a cooperative, continuous wave Doppler radar principle with pseudo-random frequency modulation. The report presents a description of the system operation and deals at length with the use of pseudo-random coding techniques. In addition, the use of mathematical modeling and computer simulation to determine the alarm statistics and system saturation characteristics in terminal area traffic of variable density is discussed.
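
    Pseudo-random codes of the kind discussed above are commonly generated with a maximal-length linear feedback shift register (LFSR). A minimal sketch, assuming a 7-stage register with the primitive polynomial x^7 + x + 1 (an illustrative choice, not the report's actual code):

```python
def lfsr_msequence(nbits, shifts, seed=1):
    """Fibonacci LFSR: XOR the tapped low bits, feed back into the top bit.
    With a primitive polynomial this yields an m-sequence of period 2^n - 1."""
    state = seed
    out = []
    for _ in range((1 << nbits) - 1):
        out.append(state & 1)
        fb = 0
        for s in shifts:
            fb ^= (state >> s) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# 7-stage register, feedback from shifts (0, 1) -> polynomial x^7 + x + 1
seq = lfsr_msequence(nbits=7, shifts=(0, 1))
print(len(seq), sum(seq))   # period 127, with 64 ones and 63 zeros
```

    The near-perfect balance and flat autocorrelation of such m-sequences are what make them attractive as modulation codes.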

  20. Measuring order in disordered systems and disorder in ordered systems: Random matrix theory for isotropic and nematic liquid crystals and its perspective on pseudo-nematic domains

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Stratt, Richard M.

    2018-05-01

    Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
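
    The order-parameter tensor whose finite-size fluctuations the paper analyzes can be sketched numerically. A minimal example for a fully isotropic (disordered) set of orientations, with the sample size chosen purely for illustration; the paper's random-matrix comparison itself is not reproduced here:

```python
import numpy as np

def order_parameter_tensor(u):
    """Nematic order-parameter tensor Q = (3/2) <u u^T> - (1/2) I
    for an (N x 3) array of unit orientation vectors u."""
    return 1.5 * (u.T @ u) / len(u) - 0.5 * np.eye(3)

rng = np.random.default_rng(0)
v = rng.normal(size=(20000, 3))
u = v / np.linalg.norm(v, axis=1, keepdims=True)   # isotropic unit vectors

Q = order_parameter_tensor(u)
S = np.linalg.eigvalsh(Q).max()   # scalar order parameter: largest eigenvalue
print(S)   # near 0 for a disordered system; fluctuations shrink as 1/sqrt(N)
```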

  1. Tailpulse signal generator

    DOEpatents

    Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA

    2009-06-23

    A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses, and the random Poisson-distributed pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while the digital values are output to a digital-to-analog converter (DAC), and the amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
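
    The decay-and-pileup scheme described can be sketched in software: exponentially distributed (Poisson-process) inter-arrival times, pseudo-random amplitudes, and each new pulse added onto the decaying running sum. All rates and constants below are hypothetical:

```python
import random

def simulate_tailpulses(n_samples, rate_per_sample, decay_per_sample, seed=1):
    """Digital tailpulse stream: new pulses pile up on the decaying output."""
    rng = random.Random(seed)
    out = []
    value = 0.0
    next_arrival = rng.expovariate(rate_per_sample)   # Poisson inter-arrival
    for i in range(n_samples):
        value *= decay_per_sample                     # exponential tail decay
        while next_arrival <= i:                      # all pulses due by now
            value += rng.uniform(0.1, 1.0)            # pseudo-random amplitude
            next_arrival += rng.expovariate(rate_per_sample)
        out.append(value)
    return out

trace = simulate_tailpulses(n_samples=5000, rate_per_sample=0.02,
                            decay_per_sample=0.99)
print(len(trace))
```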

  2. A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement

    NASA Astrophysics Data System (ADS)

    Vornicu, I.; Carmona-Galán, R.; Rodríguez-Vázquez, Á.

    2015-03-01

    The design and measurements of a CMOS 64 × 64 Single-Photon Avalanche-Diode (SPAD) array with in-pixel Time-to-Digital Converter (TDC) are presented. This paper thoroughly describes the imager at the architectural and circuit level, with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low-light environments. It has been fabricated in a standard 0.18 μm CMOS process, i.e., without high-voltage or low-noise features. In these circumstances, we are facing a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: i) a time-gated SPAD front-end with a fast active-quenching/recharge circuit featuring tunable dead-time, ii) a reverse start-stop scheme, iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage-controlled ring oscillator with fast start-up, and iv) a global calibration scheme against temperature and process variation. Measurement results for individual SPAD-TDC ensemble jitter, array uniformity, and time resolution programmability are also provided.

  3. Cardiorespiratory Kinetics Determined by Pseudo-Random Binary Sequences - Comparisons between Walking and Cycling.

    PubMed

    Koschate, J; Drescher, U; Thieschäfer, L; Heine, O; Baum, K; Hoffmann, U

    2016-12-01

    This study aims to compare cardiorespiratory kinetics, as a response to a standardised work rate protocol with pseudo-random binary sequences, between cycling and walking in young healthy subjects. Muscular and pulmonary oxygen uptake (V̇O2) kinetics as well as heart rate kinetics were expected to be similar for walking and cycling. Cardiac data and V̇O2 of 23 healthy young subjects were measured in response to pseudo-random binary sequences. Kinetics were assessed applying time series analysis. Higher maxima of the cross-correlation functions between work rate and the respective parameter indicate faster kinetics responses. Muscular V̇O2 kinetics were estimated from heart rate and pulmonary V̇O2 using a circulatory model. Muscular (walking vs. cycling [mean±SD in arbitrary units]: 0.40±0.08 vs. 0.41±0.08) and pulmonary V̇O2 kinetics (0.35±0.06 vs. 0.35±0.06) were not different, although the time courses of the cross-correlation functions of pulmonary V̇O2 showed unexpected biphasic responses. Heart rate kinetics were faster for walking (0.50±0.14 vs. 0.40±0.14; P=0.017). Regarding the biphasic cross-correlation functions of pulmonary V̇O2 during walking, the assessment of muscular V̇O2 kinetics via pseudo-random binary sequences requires a circulatory model to account for cardio-dynamic distortions. The faster heart rate kinetics for walking should be considered when comparing results from cycle and treadmill ergometry. © Georg Thieme Verlag KG Stuttgart · New York.
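
    The kinetics measure used above (the maximum of the cross-correlation function between a PRBS work rate and the physiological response) can be sketched with a simulated first-order lag; faster systems yield higher CCF peaks. The input, time constants, and lag window below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
prbs = rng.integers(0, 2, size=600) * 2.0 - 1.0   # stand-in for a PRBS work rate

def first_order_response(x, tau):
    """Discrete first-order lag, a crude model of V(dot)O2 / heart-rate kinetics."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + (x[i] - y[i - 1]) / tau
    return y

def ccf_max(x, y):
    """Peak of the normalized cross-correlation over non-negative lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = [np.mean(x[: len(x) - k] * y[k:]) for k in range(60)]
    return max(lags)

fast, slow = first_order_response(prbs, 3.0), first_order_response(prbs, 15.0)
print(ccf_max(prbs, fast) > ccf_max(prbs, slow))   # faster kinetics, higher peak
```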

  4. Measurement time and statistics for a noise thermometer with a synthetic-noise reference

    NASA Astrophysics Data System (ADS)

    White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.

    2008-08-01

    This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.

  5. Method of multiplexed analysis using ion mobility spectrometer

    DOEpatents

    Belov, Mikhail E [Richland, WA; Smith, Richard D [Richland, WA

    2009-06-02

    A method for analyzing analytes from a sample introduced into a spectrometer by generating a pseudo-random sequence of modulation bins, organizing each modulation bin as a series of submodulation bins, thereby forming an extended pseudo-random sequence of submodulation bins, releasing the analytes in a series of analyte packets into the spectrometer, thereby generating an unknown original ion signal vector, detecting the analytes at a detector, and characterizing the sample using the plurality of analyte signal subvectors. The method is advantageously applied to an ion mobility spectrometer, and to an ion mobility spectrometer interfaced with a time-of-flight mass spectrometer.
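
    The multiplexing gain in such methods comes from gating ion release with an invertible pseudo-random sequence, so the overlapped detector signal can be demultiplexed exactly. A toy sketch with a 7-chip m-sequence as the gate pattern and a hypothetical ion signal (not the patent's actual sequences):

```python
import numpy as np

seq = np.array([1, 1, 1, 0, 0, 1, 0])   # 7-chip m-sequence gate pattern (example)
n = len(seq)
# circulant modulation matrix: row i is the gate pattern delayed by i bins
S = np.array([[seq[(j - i) % n] for j in range(n)] for i in range(n)], dtype=float)

x_true = np.array([0.0, 5.0, 0.0, 2.0, 0.0, 0.0, 1.0])   # hypothetical ion signal
y = S @ x_true                        # detector sees overlapped analyte packets

x_rec = np.linalg.solve(S, y)         # demultiplex: S is invertible for m-sequences
print(np.allclose(x_rec, x_true))
```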

  6. A new computer program for mass screening of visual defects in preschool children.

    PubMed

    Briscoe, D; Lifshitz, T; Grotman, M; Kushelevsky, A; Vardi, H; Weizman, S; Biedner, B

    1998-04-01

    To test the effectiveness of a PC-based computer program for detecting vision disorders which could be used by non-trained personnel, and to determine the prevalence of visual impairment in a sample population of preschool children in the city of Beer-Sheba, Israel. 292 preschool children, aged 4-6 years, were examined in the kindergarten setting, using the computer system and "gold standard" tests. Visual acuity and stereopsis were tested and compared using Snellen-type symbol charts and random dot stereograms, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and kappa statistic were evaluated. A computer pseudo Worth four dot test was also performed but could not be compared with the standard Worth four dot test owing to the inability of many children to count. Agreement between computer and gold standard tests was 83% and 97.3% for visual acuity and stereopsis, respectively. The sensitivity of the computer stereogram was only 50%, but it had a specificity of 98.9%, whereas the sensitivity and specificity of the visual acuity test were 81.5% and 83%, respectively. The positive predictive value of both tests was about 63%. 27.7% of children tested had a visual acuity of 6/12 or less, and stereopsis was absent in 28% using standard tests. Impairment of fusion was found in 5% of children using the computer pseudo Worth four dot test. The computer program was found to be stimulating, rapid, and easy to perform. The wide availability of computers in schools and at home allows it to be used as an additional screening tool by non-trained personnel, such as teachers and parents, but it is not a replacement for standard testing.
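
    The agreement statistics reported above all follow from a 2×2 confusion matrix against the gold standard. A sketch with hypothetical counts (the study's raw cell counts are not given in the abstract):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from a 2x2 table."""
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / total                                    # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2  # chance
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical screening outcome for 292 children (illustration only).
sens, spec, ppv, npv, kappa = screening_metrics(tp=40, fp=20, fn=10, tn=222)
print(round(sens, 2), round(spec, 2), round(kappa, 2))
```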

  7. Ultra-thin silicon (UTSi) on insulator CMOS transceiver and time-division multiplexed switch chips for smart pixel integration

    NASA Astrophysics Data System (ADS)

    Zhang, Liping; Sawchuk, Alexander A.

    2001-12-01

    We describe the design, fabrication and functionality of two different 0.5 micron CMOS optoelectronic integrated circuit (OEIC) chips based on the Peregrine Semiconductor Ultra-Thin Silicon on insulator technology. The Peregrine UTSi silicon-on-sapphire (SOS) technology is a member of the silicon-on-insulator (SOI) family. The low-loss synthetic sapphire substrate is optically transparent and has good thermal conductivity and coefficient of thermal expansion properties, which meet the requirements for flip-chip bonding of VCSELs and other optoelectronic input-output components. One chip contains transceiver and network components, including four-channel high-speed CMOS transceiver modules, pseudo-random bit stream (PRBS) generators, a voltage controlled oscillator (VCO) and other test circuits. The transceiver chips can operate in both self-testing mode and networking mode. An on-chip clock and true-single-phase-clock (TSPC) D-flip-flop have been designed to generate a PRBS at over 2.5 Gb/s for the high-speed transceiver arrays to operate in self-testing mode. In the networking mode, an even number of transceiver chips forms a ring network through free-space or fiber ribbon interconnections. The second chip contains four-channel optical time-division multiplex (TDM) switches, optical transceiver arrays, an active pixel detector and additional test devices. The eventual applications of these chips will require monolithic OEICs with integrated optical input and output. After fabrication and testing, the CMOS transceiver array dies will be packaged with 850 nm vertical cavity surface emitting lasers (VCSELs), and metal-semiconductor-metal (MSM) or GaAs p-i-n detector die arrays to achieve high-speed optical interconnections. The hybrid technique could be either wire bonding or flip-chip bonding of the CMOS SOS smart-pixel arrays with arrays of VCSELs and photodetectors onto an optoelectronic chip carrier as a multi-chip module (MCM).

  8. Binaural Simulation Experiments in the NASA Langley Structural Acoustics Loads and Transmission Facility

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Silcox, Richard (Technical Monitor)

    2001-01-01

    A location and positioning system was developed and implemented in the anechoic chamber of the Structural Acoustics Loads and Transmission (SALT) facility to accurately determine the coordinates of points in three-dimensional space. Transfer functions were measured between a shaker source at two different panel locations and the vibrational response distributed over the panel surface using a scanning laser vibrometer. The binaural simulation test matrix included test runs for several locations of the measuring microphones, various attitudes of the mannequin, two locations of the shaker excitation and three different shaker inputs including pulse, broadband random, and pseudo-random. Transfer functions, auto spectra, and coherence functions were acquired for the pseudo-random excitation. Time histories were acquired for the pulse and broadband random input to the shaker. The tests were repeated with a reflective surface installed. Binary data files were converted to universal format and archived on compact disk.
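
    Transfer functions and coherence from pseudo-random excitation, as measured above, are commonly computed as the segment-averaged H1 estimate Sxy/Sxx. A minimal sketch with a simulated noise-free gain; plain Gaussian noise stands in for the pseudo-random shaker input, and all sizes are illustrative:

```python
import numpy as np

def h1_estimate(x, y, nseg=16):
    """Averaged H1 transfer function Sxy/Sxx and ordinary coherence."""
    xs = np.array_split(x, nseg)
    ys = np.array_split(y, nseg)
    Sxx = Syy = Sxy = 0
    for xi, yi in zip(xs, ys):
        X, Y = np.fft.rfft(xi), np.fft.rfft(yi)
        Sxx = Sxx + X * np.conj(X)       # averaged auto-spectrum of the input
        Syy = Syy + Y * np.conj(Y)       # averaged auto-spectrum of the output
        Sxy = Sxy + Y * np.conj(X)       # averaged cross-spectrum
    H = Sxy / Sxx
    coh = np.abs(Sxy) ** 2 / (Sxx.real * Syy.real)
    return H, coh

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)            # stand-in for pseudo-random shaker input
y = 2.0 * x                              # noise-free system with gain 2
H, coh = h1_estimate(x, y)
print(np.allclose(np.abs(H), 2.0), np.allclose(coh, 1.0))
```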

  9. On the Impact of Sea Level Fingerprints on the Estimation of the Meridional Geostrophic Transport in the Atlantic Basin

    NASA Astrophysics Data System (ADS)

    Hsu, C. W.; Velicogna, I.

    2017-12-01

    The mid-ocean geostrophic transport accounts for more than half of the seasonal and inter-annual variability in the Atlantic meridional overturning circulation (AMOC), based on in-situ measurements from the RAPID MOC/MOCHA array since 2004. Here, we demonstrate that mid-ocean geostrophic transport estimates derived from ocean bottom pressure (OBP) are affected by the sea level fingerprint (SLF), which is a variation of the equi-geopotential height (relative sea level) due to rapid mass unloading of the entire Earth system, in particular from glaciers and ice sheets. This potential height change, although it alters the OBP, should not be included in the derivation of the mid-ocean geostrophic transport. This "pseudo" geostrophic transport due to the SLF is in phase with the seasonal and interannual signal in the upper mid-ocean geostrophic transport. The east-west SLF gradient across the Atlantic basin could be mistaken for a north-south geostrophic transport that increases by 54% of its seasonal variability and by 20% of its inter-annual variability. This study demonstrates for the first time the importance of this pseudo transport in both the annual and interannual signals by comparing the SLF with in-situ observations from the RAPID MOC/MOCHA array. The pseudo transport needs to be taken into account if OBP measurements and remote sensing are used to derive the mid-ocean geostrophic transport.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, Erik; Trolinger, James D.; Lacey, Ian

    This work reports on the development of a binary pseudo-random test sample optimized to calibrate the MTF of optical microscopes. The sample consists of a number of 1-D and 2-D patterns, with different minimum sizes of spatial artifacts from 300 nm to 2 microns. We describe the mathematical background, fabrication process, and data acquisition and analysis procedure used to return a spatial-frequency-based instrument calibration. We show that the developed samples satisfy the characteristics of a test standard: functionality, ease of specification and fabrication, reproducibility, and low sensitivity to manufacturing error. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  11. Fast measurement of proton exchange membrane fuel cell impedance based on pseudo-random binary sequence perturbation signals and continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Debenjak, Andrej; Boškoski, Pavle; Musizza, Bojan; Petrovčič, Janko; Juričić, Đani

    2014-05-01

    This paper proposes an approach to the estimation of PEM fuel cell impedance by utilizing pseudo-random binary sequence as a perturbation signal and continuous wavelet transform with Morlet mother wavelet. With the approach, the impedance characteristic in the frequency band from 0.1 Hz to 500 Hz is identified in 60 seconds, approximately five times faster compared to the conventional single-sine approach. The proposed approach was experimentally evaluated on a single PEM fuel cell of a larger fuel cell stack. The quality of the results remains at the same level compared to the single-sine approach.
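
    A toy sketch of the coefficient-ratio idea behind the approach above: the current perturbation and voltage response are both transformed with a complex Morlet wavelet, and the ratio of their coefficients gives the impedance at the wavelet's centre frequency. This is not the authors' implementation; the cell is purely resistive and all constants are hypothetical:

```python
import numpy as np

def morlet_coeffs(signal, f, fs, n_cycles=6):
    """Convolve with a complex Morlet wavelet centred at frequency f (Hz)."""
    sigma = n_cycles / (2 * np.pi * f)
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    psi = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(signal, psi, mode="same")

fs, R = 1000.0, 0.25                    # sample rate (Hz), toy cell resistance (ohm)
rng = np.random.default_rng(4)
i_prbs = rng.integers(0, 2, 4000) * 2.0 - 1.0   # stand-in PRBS current perturbation
v = R * i_prbs                                  # purely resistive response

Wi = morlet_coeffs(i_prbs, f=50.0, fs=fs)
Wv = morlet_coeffs(v, f=50.0, fs=fs)
Z = Wv[1000:3000] / Wi[1000:3000]       # avoid the convolution's edge effects
print(np.allclose(Z, R))
```

    For a reactive cell the ratio becomes complex and frequency dependent, recovering the impedance spectrum one wavelet scale at a time.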

  12. Local Risk-Minimization for Defaultable Claims with Recovery Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ ∧ T], where T denotes the fixed time-horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies) and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.

  13. National implementation of standards of practice for non-prescription medicines in Australia.

    PubMed

    Benrimoj, Shalom I; Gilbert, Andrew L; de Almeida Neto, Abilio C; Kelly, Fiona

    2009-04-01

    In Australia, there are two categories of non-prescription medicines: pharmacy medicines and pharmacist only medicines. Standards were developed to define and describe the professional activities required for the provision of these medicines at a consistent and measurable level of practice. Our objective was to implement nationally a quality improvement package in relation to the Standards of Practice for the Provision of Non-Prescription Medicines. Approximately 50% of Australian pharmacies (n = 2,706) were randomly selected by local registering authorities. Trained pharmacy educators audited each community pharmacy in the study three times, 7 weeks apart, on the Standards of Practice for the Provision of Non-Prescription Medicines. Visit 1 involved the educator explaining the project and conducting an assessment of the pharmacy's level of compliance. Behaviour of community pharmacists and their staff in relation to these standards was measured by conducting pseudo-patron visits. Pseudo-patron visits were conducted at Visit 2, with the educator providing immediate feedback and coaching and a compliance assessment. Visit 3 involved a compliance assessment, and a second pseudo-patron visit for those pharmacies that had performed poorly at the first visit. At Visit 1, the lowest levels of compliance were with the standards relating to the documentation process (44%) and customer care and advice (46%). By Visit 2, more than 80% of pharmacies had met most criteria. At Visit 3, compliance had significantly improved compared to Visits 1 and 2 (P < 0.001). The lowest levels of compliance were with criteria which required written operating procedures for specific tasks, but these also improved significantly over time (P < 0.001). Professional practice in relation to the handling of pharmacist only and pharmacy medicines improved considerably as measured by the auditing process, and the results indicate that Australian pharmacies are well-equipped to provide high quality service to consumers of these medicines. The acceptability of national implementation of these standards of practice in Australia indicates that such an approach could be taken internationally.

  14. Tilted hexagonal post arrays: DNA electrophoresis in anisotropic media

    PubMed Central

    Chen, Zhen; Dorfman, Kevin D.

    2013-01-01

    Using Brownian dynamics simulations, we show that DNA electrophoresis in a hexagonal array of micron-sized posts changes qualitatively when the applied electric field vector is not coincident with the lattice vectors of the array. DNA electrophoresis in such “tilted” post arrays is superior to the standard “un-tilted” approach; while the time required to achieve a resolution of unity in a tilted post array is similar to that in an un-tilted array at low electric field strengths, this time (i) decreases exponentially with electric field strength in a tilted array and (ii) increases exponentially with electric field strength in an un-tilted array. Although the DNA dynamics in a post array are complicated, the electrophoretic mobility results indicate that the “free path”, i.e., the average distance of ballistic trajectories of point-sized particles launched from random positions in the unit cell until they intersect the next post, is a useful proxy for the detailed DNA trajectories. The analysis of the free path reveals a fundamental connection between anisotropy of the medium and DNA transport therein that goes beyond simply improving the separation device. PMID:23868490
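
    The "free path" proxy defined above can be estimated by Monte Carlo: launch point particles from random positions in the unit cell along the field direction and record the ballistic distance to the first post. A sketch for a hexagonal array; the post radius, spacing, step size, and path cap are illustrative assumptions:

```python
import math
import random

def nearest_post_dist(x, y, d):
    """Distance from (x, y) to the nearest post centre of a hexagonal
    lattice with spacing d (lattice vectors (d, 0) and (d/2, d*sqrt(3)/2))."""
    j = y / (d * math.sqrt(3) / 2)
    i = x / d - j / 2
    best = float("inf")
    for ii in (math.floor(i), math.floor(i) + 1):
        for jj in (math.floor(j), math.floor(j) + 1):
            cx, cy = ii * d + jj * d / 2, jj * d * math.sqrt(3) / 2
            best = min(best, math.hypot(x - cx, y - cy))
    return best

def free_path(theta, d=1.0, r=0.25, step=0.005, cap=20.0, trials=200, seed=5):
    """Mean ballistic distance to the first post along direction theta,
    capped at `cap` to handle the open channels of un-tilted directions."""
    rng = random.Random(seed)
    dx, dy = math.cos(theta), math.sin(theta)
    total = 0.0
    for _ in range(trials):
        while True:   # random launch point in one unit cell, outside any post
            x, y = rng.uniform(0, d), rng.uniform(0, d * math.sqrt(3) / 2)
            if nearest_post_dist(x, y, d) > r:
                break
        s = 0.0
        while s < cap and nearest_post_dist(x + s * dx, y + s * dy, d) > r:
            s += step
        total += s
    return total / trials

# A field aligned with a lattice vector sees open channels (long free path);
# a tilted field does not, consistent with the anisotropy argument above.
fp_aligned, fp_tilted = free_path(0.0), free_path(math.radians(15))
print(fp_aligned, fp_tilted)
```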

  15. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  16. Effects of Random Shadings, Phasing Errors, and Element Failures on the Beam Patterns of Linear and Planar Arrays

    DTIC Science & Technology

    1980-03-14

    failure Sigmar (Or) in line 50, the standard deviation of the relative error of the weights Sigmap (o) in line 60, the standard deviation of the phase...200, the weight structures in the x and y coordinates Q in line 210, the probability of element failure Sigmar (Or) in line 220, the standard...NUMBER OF ELEMENTS =u;2*H 120 PRINT "Pr’obability of elemenit failure al;O 130 PRINT "Standard dtvi&t ion’ oe r.1&tive ýrror of wl; Sigmar 14 0 PRINT

  17. Study on a novel laser target detection system based on software radio technique

    NASA Astrophysics Data System (ADS)

    Song, Song; Deng, Jia-hao; Wang, Xue-tian; Gao, Zhen; Sun, Ji; Sun, Zhi-hui

    2008-12-01

    This paper presents the application of software radio techniques to a laser target detection system with pseudo-random code modulation. Based on the theory of software radio, the basic framework of the system, the hardware platform, and the implementation of the software system are detailed. Also, the block diagram of the system, the DSP circuit, the block diagram of the pseudo-random code generator, and the software flow diagram of the signal processing are designed. Experimental results have shown that the application of software radio techniques provides a novel method to realize the modularization, miniaturization and intelligence of laser target detection systems, and makes upgrades and improvements of the system simpler, more convenient, and cheaper.

  18. Strain-Engineering of Giant Pseudo-Magnetic Fields in Graphene/Boron Nitride (BN) Periodic Nanostructures

    NASA Astrophysics Data System (ADS)

    Hsu, Chen-Chih; Wang, Jiaqing; Teague, Marcus; Chen, Chien-Chang; Yeh, Nai-Chang

    2015-03-01

    Ideal graphene is strain-free, whereas non-trivial strain can induce pseudo-magnetic fields, as predicted theoretically and manifested experimentally. Here we employ nearly strain-free single-domain graphene, grown by plasma-enhanced chemical vapor deposition (PECVD) at low temperatures, to induce controlled strain by placing the PECVD-graphene on substrates containing engineered nanostructures. We fabricate periodic pyramid nanostructures (typically 100-200 nm laterally and 10-60 nm in height) on Si substrates by focused ion beam, and determine the topography of these nanostructures using atomic force microscopy and scanning electron microscopy after we transfer monolayer h-BN followed by PECVD-graphene onto these substrates. We find both layers conform well to the nanostructures, so that we can control the size, arrangement, separation, and shape of the nanostructures to generate desirable pseudo-magnetic fields. We also employ molecular dynamics simulations to determine the displacement of carbon atoms under a given nanostructure. The pseudo-magnetic field thus obtained is ~150 T in the center, relatively homogeneous over 50% of the area, and drops off precipitously near the edge. These findings are extended to arrays of nanostructures and compared with topographic and spectroscopic studies by STM. Supported by NSF.

  19. Recommendations and illustrations for the evaluation of photonic random number generators

    NASA Astrophysics Data System (ADS)

    Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi

    2017-09-01

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single-photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
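
    The NIST draft-standard estimates mentioned above include simple bounds such as the most-common-value (MCV) min-entropy estimator of SP 800-90B, which upper-bounds the probability of the most likely symbol and reports H = -log2(p_u). A simplified sketch (the standard's exact procedure has additional requirements):

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    """Most-common-value min-entropy estimate (SP 800-90B style): bound the
    most likely symbol probability with a 99% upper confidence limit."""
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_u)   # min-entropy in bits per sample

# A heavily biased binary source: far below the 1 bit/sample ideal.
biased = [0] * 900 + [1] * 100
print(round(mcv_min_entropy(biased), 3))
```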

  20. Development of a pseudo phased array technique using EMATs for DM weld testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cobb, Adam C., E-mail: adam.cobb@swri.org; Fisher, Jay L., E-mail: adam.cobb@swri.org; Shiokawa, Nobuyuki

    2015-03-31

    Ultrasonic inspection of dissimilar metal (DM) welds in piping with cast austenitic stainless steel (CASS) has been an area of ongoing research for many years, given its prevalence in the petrochemical and nuclear industries. A typical inspection strategy for pipe welds is to use an ultrasonic phased array system to scan the weld from a sensor located on the outer surface of the pipe. These inspection systems generally refract either longitudinal or shear vertical (SV) waves at varying angles to inspect the weld radially. In DM welds, however, the welding process can produce a columnar grain structure in the CASS material in a specific orientation. This columnar grain structure can skew ultrasonic waves away from their intended path, especially for the SV and longitudinal wave modes. Studies have shown that inspection using the shear horizontal (SH) wave mode significantly reduces the effect of skewing. Electromagnetic acoustic transducers (EMATs) are known to be effective for producing SH waves in field settings. This paper presents an inspection strategy that seeks to reproduce the scanning and imaging capabilities of a commercial phased array system using EMATs. A custom-built EMAT was used to collect data at multiple propagation angles, and a processing strategy known as the synthetic aperture focusing technique (SAFT) was used to combine the data to produce an image. Results are shown using this pseudo phased array technique to inspect samples with a DM weld and artificial defects, demonstrating the potential of this approach in a laboratory setting. Recommendations for future work to transition the technique to the field are also provided.
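
    SAFT, as used above, is essentially delay-and-sum: each image pixel sums every A-scan at the round-trip time of flight from that scan position to the pixel, so echoes from a real scatterer add coherently. A toy 2-D sketch with one simulated point scatterer; the wave speed, geometry, and pulse shape are all hypothetical:

```python
import numpy as np

c, fs = 3.1e3, 20e6                     # assumed shear speed (m/s), sample rate (Hz)
xs = np.linspace(-0.02, 0.02, 21)       # transducer positions along the scan (m)
target = (0.004, 0.015)                 # point scatterer at (x, z), metres

# Simulate A-scans: one Gaussian echo at the round-trip time for each position.
t = np.arange(2000) / fs
ascans = []
for x in xs:
    t0 = 2 * np.hypot(x - target[0], target[1]) / c
    ascans.append(np.exp(-((t - t0) ** 2) / (2 * (2e-7) ** 2)))

# SAFT: delay-and-sum every A-scan onto an (x, z) pixel grid.
gx = np.linspace(-0.01, 0.01, 41)
gz = np.linspace(0.005, 0.025, 41)
image = np.zeros((len(gz), len(gx)))
for a, x in zip(ascans, xs):
    for iz, z in enumerate(gz):
        for ix, px in enumerate(gx):
            idx = int(2 * np.hypot(x - px, z) / c * fs)   # round-trip sample index
            if idx < len(t):
                image[iz, ix] += a[idx]

iz, ix = np.unravel_index(image.argmax(), image.shape)
print(gx[ix], gz[iz])                   # image peak lands near the true scatterer
```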

  1. Seminar on Understanding Digital Control and Analysis in Vibration Test Systems, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A number of techniques for dealing with important technical aspects of the random vibration control problem are described. These include the generation of pseudo-random and true random noise, the control spectrum estimation problem, the accuracy/speed tradeoff, and control correction strategies. System hardware, the operator-system interface, safety features, and operational capabilities of sophisticated digital random vibration control systems are also discussed.

  2. Bio-Inspired Asynchronous Pixel Event Tricolor Vision Sensor.

    PubMed

    Lenero-Bardallo, Juan Antonio; Bryn, D H; Hafliger, Philipp

    2014-06-01

    This article investigates the potential of the first-ever prototype of a vision sensor that combines tricolor stacked photodiodes with the bio-inspired asynchronous pixel event communication protocol known as Address Event Representation (AER). The stacked photodiodes are implemented in a 22 × 22 pixel array in a standard STM 90 nm CMOS process. The dynamic range is larger than 60 dB and the pixel fill factor is 28%. The pixels employ either simple pulse frequency modulation (PFM) or a Time-to-First-Spike (TFS) mode. A heuristic linear combination of the chip's inherent pseudo-colors serves to approximate an RGB color representation. Furthermore, the sensor outputs can be processed to represent radiation in the near-infrared (NIR) band without employing external filters, and to color-encode the direction of motion due to an asymmetry in the update rates of the different diode layers.

  3. Tilted hexagonal post arrays: DNA electrophoresis in anisotropic media.

    PubMed

    Chen, Zhen; Dorfman, Kevin D

    2014-02-01

    Using Brownian dynamics simulations, we show that DNA electrophoresis in a hexagonal array of micron-sized posts changes qualitatively when the applied electric field vector is not coincident with the lattice vectors of the array. DNA electrophoresis in such "tilted" post arrays is superior to the standard "un-tilted" approach; while the time required to achieve a resolution of unity in a tilted post array is similar to that in an un-tilted array at low electric field strengths, this time (i) decreases exponentially with electric field strength in a tilted array and (ii) increases exponentially with electric field strength in an un-tilted array. Although the DNA dynamics in a post array are complicated, the electrophoretic mobility results indicate that the "free path," i.e. the average distance of ballistic trajectories of point-sized particles launched from random positions in the unit cell until they intersect the next post, is a useful proxy for the detailed DNA trajectories. The analysis of the free path reveals a fundamental connection between the anisotropy of the medium and DNA transport therein that goes beyond simply improving the separation device. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
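
    The "free path" proxy can be estimated directly by Monte Carlo. The sketch below is a simplified illustration for a finite field of circular posts; the square post layout, cap distance, and sample count are assumptions, and no periodic tiling or hexagonal geometry is implemented:

```python
import numpy as np

def free_path(centers, radius, angle_deg, n=4000, seed=0, cap=5.0):
    """Average ballistic distance of point particles launched from random
    positions in the unit cell until they intersect a post (disc of given
    radius); trajectories that escape the finite field are capped at `cap`."""
    rng = np.random.default_rng(seed)
    th = np.radians(angle_deg)
    d = np.array([np.cos(th), np.sin(th)])      # field direction
    centers = np.asarray(centers, dtype=float)
    dists = []
    for _ in range(n):
        p = rng.random(2)                       # launch point in [0,1)^2
        if np.min(np.hypot(*(centers - p).T)) < radius:
            continue                            # started inside a post: skip
        t_hit = cap
        for cxy in centers:                     # nearest ray-disc intersection
            q = p - cxy
            b = q @ d
            disc = b * b - (q @ q - radius * radius)
            if disc >= 0.0:
                t = -b - np.sqrt(disc)
                if 0.0 < t < t_hit:
                    t_hit = t
        dists.append(t_hit)
    return float(np.mean(dists))
```

    Denser or larger posts shorten the free path, and rotating the field angle relative to the lattice changes it anisotropically, which is the quantity the abstract relates to electrophoretic mobility.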

  4. Wideband propagation measurements at 30.3 GHz through a pecan orchard in Texas

    NASA Astrophysics Data System (ADS)

    Papazian, Peter B.; Jones, David L.; Espeland, Richard H.

    1992-09-01

    Wideband propagation measurements were made in a pecan orchard in Texas during April and August of 1990 to examine the propagation characteristics of millimeter-wave signals through vegetation. Measurements were made on tree-obstructed paths with and without leaves. The study presents narrowband attenuation data at 9.6 and 28.8 GHz as well as wideband impulse response measurements at 30.3 GHz. The wideband probe (Violette et al., 1983) provides the amplitude and delay of reflected and scattered signals as well as the bit-error rate. This is accomplished using a 500 Mbit/s pseudo-random code to BPSK-modulate a 28.8 GHz carrier. The channel impulse response is then extracted by cross-correlating the received pseudo-random sequence with a locally generated replica.
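
    The correlation step described in the last two sentences can be illustrated with a toy discrete-delay channel; the ±1 chip sequence and two-path channel below are assumptions for illustration, not the probe's actual 500 Mbit/s hardware:

```python
import numpy as np

def pn_channel_sounding(pn, channel_taps):
    """Estimate a channel impulse response by circularly cross-correlating a
    received pseudo-random (PN) sequence with a local replica.
    pn: array of +/-1 chips; channel_taps: {delay_in_chips: amplitude}."""
    n = len(pn)
    rx = np.zeros(n)
    for delay, amp in channel_taps.items():
        rx += amp * np.roll(pn, delay)          # sum of delayed, scaled echoes
    # circular cross-correlation against the replica, one lag per chip
    return np.array([np.dot(rx, np.roll(pn, k)) for k in range(n)]) / n
```

    The correlation peaks sit at the multipath delays with heights proportional to the echo amplitudes; with a true maximal-length sequence the off-peak floor is exactly -1/n rather than merely small.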

  5. Professional opinion concerning the effectiveness of bracing relative to observation in adolescent idiopathic scoliosis.

    PubMed

    Dolan, Lori A; Donnelly, Melanie J; Spratt, Kevin F; Weinstein, Stuart L

    2007-01-01

    To determine if community equipoise exists concerning the effectiveness of bracing in adolescent idiopathic scoliosis. Bracing is the standard of care for adolescent idiopathic scoliosis despite the lack of strong research evidence concerning its effectiveness. Thus, some researchers support the idea of a randomized trial, whereas others think that randomization in the face of a standard of care would be unethical. A random sample of Scoliosis Research Society and Pediatric Orthopaedic Society of North America members were asked to consider 12 clinical profiles and to give their opinion concerning the radiographic outcomes after observation and bracing. An expert panel was created from the respondents. They expressed a wide array of opinions concerning the percentage of patients within each scenario who would benefit from bracing. Agreement was noted concerning the risk due to bracing for post-menarchal patients only. This study found a high degree of variability in opinion among clinicians concerning the effectiveness of bracing, suggesting that a randomized trial of bracing would be ethical.

  6. Comparing pseudo-absences generation techniques in Boosted Regression Trees models for conservation purposes: A case study on amphibians in a protected area.

    PubMed

    Cerasoli, Francesco; Iannella, Mattia; D'Alessandro, Paola; Biondi, Maurizio

    2017-01-01

    Boosted Regression Trees (BRT) is one of the modelling techniques most recently applied to biodiversity conservation, and it can be implemented with presence-only data through the generation of artificial absences (pseudo-absences). In this paper, three pseudo-absence generation techniques are compared, namely generation within a target-group background (TGB), testing both the weighted (WTGB) and unweighted (UTGB) schemes, and generation at random (RDM), evaluating their performance and applicability in distribution modelling and species conservation. The choice of the target group fell on amphibians, because of their rapid decline worldwide and the frequent lack of guidelines for conservation strategies and regional-scale planning, which could instead be provided through an appropriate implementation of species distribution models (SDMs). Bufo bufo, Salamandrina perspicillata and Triturus carnifex were considered as target species, in order to perform our analysis with species having different ecological and distributional characteristics. The study area is the "Gran Sasso-Monti della Laga" National Park, which hosts 15 Natura 2000 sites and represents one of the most important biodiversity hotspots in Europe. Our results show that model calibration improves when using the target-group-based pseudo-absences compared with the random ones, especially when applying the WTGB. In contrast, model discrimination did not vary significantly or consistently among the three approaches across the three target species. Both WTGB and RDM clearly isolate the highly contributing variables, supplying many relevant indications for species conservation actions. Moreover, the assessment of pairwise variable interactions and their three-dimensional visualization further increases the amount of useful information available to protected-area managers.
    Finally, we suggest the use of RDM as an admissible alternative when it is not possible to identify a suitable set of species as a representative target group from which the pseudo-absences can be generated.
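
    The random (RDM) scheme amounts to uniform background sampling with simple rejection; a generic sketch, in which the rectangular extent, rejection radius, and function name are illustrative assumptions rather than the authors' GIS workflow:

```python
import random

def random_pseudo_absences(presences, extent, n, min_dist=0.0, seed=0):
    """Draw n pseudo-absence points uniformly inside a rectangular study
    extent (xmin, ymin, xmax, ymax), rejecting candidates that fall within
    min_dist of any presence record."""
    rng = random.Random(seed)
    xmin, ymin, xmax, ymax = extent
    points = []
    while len(points) < n:
        x, y = rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py in presences):
            points.append((x, y))
    return points
```

    Target-group background (TGB) sampling differs only in the proposal distribution: candidates are drawn from survey locations of related species rather than uniformly, optionally weighted by survey effort (the WTGB variant).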

  7. Habitat classification modeling with incomplete data: Pushing the habitat envelope

    USGS Publications Warehouse

    Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.

    2007-01-01

    Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species', or species feature's (e.g., nest), observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models, which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance, and cross-validation, and were also assessed for ecological relevance.
    For all cases, habitat-envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.

  8. Abbreviation definition identification based on automatic precision estimates.

    PubMed

    Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John

    2008-09-25

    The rapid growth of biomedical literature presents challenges for automatic text processing, and one of these challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE, only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition, our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall, which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well-known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.

  9. Electroacupuncture is not effective in chronic painful neuropathies.

    PubMed

    Penza, Paola; Bricchi, Monica; Scola, Amalia; Campanella, Angela; Lauria, Giuseppe

    2011-12-01

    Objective: To investigate the analgesic efficacy of electroacupuncture (EA) in patients with chronic painful neuropathy. Design: Double-blind, placebo-controlled, cross-over study. Inclusion criteria were diagnosis of peripheral neuropathy, neuropathic pain (visual analog scale > 4) for at least 6 months, and stable analgesic medications for at least 3 months. Patients: Sixteen patients were randomized into two arms to be treated with EA or pseudo-EA (placebo). Intervention: The protocol included 6 weeks of treatment, 12 weeks free of treatment, and then a further 6 weeks of treatment. EA or pseudo-EA was performed weekly during each treatment period. Outcome measures: The primary outcome was the number of patients treated with EA achieving at least 50% pain relief at the end of each treatment compared with pain intensity at baseline. Secondary outcomes were modification in the patient's global impression of change, depression and anxiety, and quality of life. Results: Eleven patients were randomized to EA and five patients to pseudo-EA as the first treatment. Only one patient per group (EA and pseudo-EA) reported 50% pain relief at the end of each treatment compared with pain intensity at baseline. Pain intensity did not differ between EA (5.7 ± 2.3 at baseline and 4.97 ± 3.23 after treatment) and pseudo-EA (4.9 ± 1.9 at baseline and 4.18 ± 2.69 after treatment). There was no difference between patients who received EA as the first treatment and patients initially treated with placebo. There was no change in the secondary outcomes. Conclusions: Our results do not support the use of EA in this population of painful neuropathy patients. Further studies in larger groups of patients are warranted to confirm our observation. Wiley Periodicals, Inc.

  10. Near-field electromagnetic holography for high-resolution analysis of network interactions in neuronal tissue

    PubMed Central

    Kjeldsen, Henrik D.; Kaiser, Marcus; Whittington, Miles A.

    2015-01-01

    Background: Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. New method: Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. Results: The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. Comparison with existing methods: The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Conclusions: Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity. PMID:26026581

  11. Near-field electromagnetic holography for high-resolution analysis of network interactions in neuronal tissue.

    PubMed

    Kjeldsen, Henrik D; Kaiser, Marcus; Whittington, Miles A

    2015-09-30

    Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. 
More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Do pharmacy staff recommend evidenced-based smoking cessation products? A pseudo patron study.

    PubMed

    Chiang, P P C; Chapman, S

    2006-06-01

    To determine whether pharmacy staff recommend evidence-based smoking cessation aids, a pseudo patron visited 50 randomly selected Sydney pharmacies and enquired about the 'best' way to quit smoking and about the efficacy of a non-evidence-based cessation product, NicoBloc. Nicotine replacement therapy was universally stocked and was the first product recommended by 90% of pharmacies. After prompting, 60% of pharmacies either also recommended NicoBloc or deferred to 'customer choice'; about 34% disparaged the product. Evidence-based smoking cessation advice in Sydney pharmacies is fragile and may be compromised by commercial concerns. Smokers should be provided with independent point-of-sale summaries of the evidence of cessation product effectiveness and warned about unsubstantiated claims.

  13. Random vibration analysis of train-bridge under track irregularities and traveling seismic waves using train-slab track-bridge interaction model

    NASA Astrophysics Data System (ADS)

    Zeng, Zhi-Ping; Zhao, Yan-Gang; Xu, Wen-Tao; Yu, Zhi-Wu; Chen, Ling-Kun; Lou, Ping

    2015-04-01

    The frequent use of bridges in high-speed railway lines greatly increases the probability that trains are running on bridges when earthquakes occur. This paper investigates the random vibrations of a high-speed train traversing a slab track on a continuous girder bridge subjected to track irregularities and traveling seismic waves by the pseudo-excitation method (PEM). To derive the equations of motion of the train-slab track-bridge interaction system, the multibody dynamics and finite element method models are used for the train and the track and bridge, respectively. By assuming track irregularities to be fully coherent random excitations with time lags between different wheels and seismic accelerations to be uniformly modulated, non-stationary random excitations with time lags between different foundations, the random load vectors of the equations of motion are transformed into a series of deterministic pseudo-excitations based on PEM and the wheel-rail contact relationship. A computer code is developed to obtain the time-dependent random responses of the entire system. As a case study, the random vibration characteristics of an ICE-3 high-speed train traversing a seven-span continuous girder bridge simultaneously excited by track irregularities and traveling seismic waves are analyzed. The influence of train speed and seismic wave propagation velocity on the random vibration characteristics of the bridge and train are discussed.

  14. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or using an optically simulated random-sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by columns and fix the exposure duration by rows within each 8×8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
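
    The line-wise exposure constraint can be modelled with a simple mask generator; this is a hypothetical reading of the sensor's control scheme (per-column start, per-row duration within each 8×8 block), with all names and sizes assumed:

```python
import numpy as np

def block_exposure_mask(h, w, frames, block=8, seed=0):
    """Pseudo-random space-time sampling mask under line-wise constraints:
    within each block, the exposure start is chosen per column and the
    exposure length per row, so pixels are not individually addressable yet
    the sampling still varies in both space and time."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((frames, h, w), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            start = rng.integers(0, frames, size=block)       # one per column
            length = rng.integers(1, frames + 1, size=block)  # one per row
            for r in range(block):
                for c in range(block):
                    s = start[c]
                    e = min(frames, s + length[r])
                    mask[s:e, by + r, bx + c] = True
    return mask
```

    Reconstruction then amounts to a sparse-coding problem: find coefficients over the learned over-complete dictionary whose masked projection matches the captured coded frames.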

  15. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
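
    One of the standard normalization methods evaluated in such comparisons is quantile normalization, which forces every array to share a common empirical distribution. A generic sketch, not the authors' specific pipeline or batch-adjustment step:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize an (arrays x markers) matrix: replace each value by
    the mean of the values holding the same rank across arrays, so every row
    ends up with an identical empirical distribution."""
    order = np.argsort(X, axis=1)
    ranks = np.argsort(order, axis=1)            # rank of each entry in its row
    reference = np.sort(X, axis=1).mean(axis=0)  # mean sorted profile
    return reference[ranks]
```

    The paper's point is that such global adjustments help only partially: without a randomized array-to-sample assignment, array effects confound group differences and survive normalization as false discoveries.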

  16. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    PubMed Central

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456

  17. Optimal Attitude Control of Agile Spacecraft Using Combined Reaction Wheel and Control Moment Gyroscope Arrays

    DTIC Science & Technology

    2015-12-01

    IMU: Inertial Measurement Unit; PS: Pseudo... ...filters to diminish the effect of gyro corruption in the inertial measurement unit (IMU) [32]. Therefore, s/c states determined by the hardware simulator's IMU hold the required level of accuracy for characterization of the RWCMG system in the current research. Future external state measurement systems

  18. Analysis on pseudo excitation of random vibration for structure of time flight counter

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Li, Dapeng

    2015-03-01

    Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The Pseudo-Excitation Method (PEM) is an effective method for the calculation of random vibration. Because of the complicated, coupled random vibration during rocket or shuttle launch, a new staging white-noise mathematical model is deduced according to the practical launch environment. This deduced model is applied with PEM to analyze a specific structure, the Time of Flight Counter (ToFC). The power spectral density responses and the relevant dynamic characteristic parameters of the ToFC are obtained in terms of the flight acceptance test level. Considering the stiffness of the fixture structure, random vibration experiments are conducted in three directions to compare with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without damage, and the key dynamical parameters of the ToFC are obtained. The revised PEM agrees with the random vibration experiments in dynamical parameters and responses, as shown by the comparative results; the maximum error is within 9%. The sources of error are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures during rocket or shuttle launch.
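
    The principle behind PEM is that, for a stationary excitation with power spectral density S_xx(ω), driving the structure with the deterministic pseudo-excitation √S_xx(ω)·e^(iωt) gives a harmonic pseudo-response whose squared magnitude equals the response PSD |H(ω)|²·S_xx(ω). A single-degree-of-freedom sketch follows; the oscillator and its parameters are illustrative, not the ToFC model:

```python
import numpy as np

def pem_response_psd(freqs, S_xx, m, c, k):
    """Pseudo-excitation method for the SDOF oscillator m*y'' + c*y' + k*y = x(t):
    apply the pseudo-excitation sqrt(S_xx)*exp(i*w*t); the squared magnitude of
    the harmonic pseudo-response equals the response PSD |H(w)|^2 * S_xx(w)."""
    w = 2.0 * np.pi * np.asarray(freqs)
    H = 1.0 / (k - m * w**2 + 1j * c * w)        # receptance FRF
    y_pseudo = H * np.sqrt(np.asarray(S_xx))     # pseudo-response amplitude
    return np.abs(y_pseudo) ** 2
```

    The advantage appears for large coupled models: one deterministic harmonic analysis per frequency replaces the full input-PSD matrix algebra, and non-stationary cases are handled by modulating the pseudo-excitation, as in the staging white-noise model described above.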

  19. A novel pseudo resistor structure for biomedical front-end amplifiers.

    PubMed

    Yu-Chieh Huang; Tzu-Sen Yang; Shun-Hsi Hsu; Xin-Zhuang Chen; Jin-Chern Chiou

    2015-08-01

    This study proposes a novel pseudo resistor structure with a tunable DC bias voltage for biomedical front-end amplifiers (FEAs). In the proposed FEA, a high-pass filter composed of a differential difference amplifier and a pseudo resistor is implemented. The FEA is manufactured using a standard TSMC 0.35 μm CMOS process. In this study, three types of FEAs incorporating three different pseudo resistors were simulated, fabricated, and measured for comparison and for electrocorticography (ECoG) measurement; all the results show that the proposed pseudo resistor is superior to the other two types in bandwidth. In the chip implementation, the lower and upper cutoff frequencies of the high-pass filter with the proposed pseudo resistor are 0.15 Hz and 4.98 kHz, respectively. It also demonstrates lower total harmonic distortion (-58 dB at 1 kHz) and higher stability over a wide supply range (1.8 V and 3.3 V) and control voltage range (0.9 V and 1.65 V) than the others. Moreover, the FEA with the proposed pseudo resistor successfully recorded spike-and-wave discharges of the ECoG signal in an in vivo experiment on a rat with pentylenetetrazol-induced seizures.

  20. The MoEDAL experiment at the LHC. Searching beyond the standard model

    NASA Astrophysics Data System (ADS)

    Pinfold, James L.

    2016-11-01

    MoEDAL is a pioneering experiment designed to search for highly ionizing avatars of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles. Its groundbreaking physics program defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic charge exist; what is the nature of dark matter; and how did the big bang develop. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. The innovative MoEDAL detector employs unconventional methodologies tuned to the prospect of discovery physics. The largely passive MoEDAL detector, deployed at Point 8 on the LHC ring, has a dual nature. First, it acts like a giant camera, comprised of nuclear track detectors - analyzed offline by ultra-fast scanning microscopes - sensitive only to new physics. Second, it is uniquely able to trap the particle messengers of physics beyond the Standard Model for further study. MoEDAL's radiation environment is monitored by a state-of-the-art real-time TimePix pixel detector array. A new MoEDAL sub-detector to extend MoEDAL's reach to millicharged, minimally ionizing particles (MMIPs) is under study. Finally, we describe the next step for MoEDAL, called Cosmic MoEDAL, in which a very large high-altitude array would take the search for highly ionizing avatars of new physics to the higher masses that are available from the cosmos.

  1. False Operation of Static Random Access Memory Cells under Alternating Current Power Supply Voltage Variation

    NASA Astrophysics Data System (ADS)

    Sawada, Takuya; Takata, Hidehiro; Nii, Koji; Nagata, Makoto

    2013-04-01

    Static random access memory (SRAM) cores are susceptible to power supply voltage variation. False operation is investigated among SRAM cells under sinusoidal voltage variation on the power lines introduced by direct RF power injection. A standard 16-kbyte SRAM core in a 90 nm 1.5 V technology is diagnosed with built-in self-test and on-die noise monitor techniques. The bit error rate is shown to be highly sensitive to the frequency of the injected voltage variation, while it is not greatly influenced by differences in frequency and phase relative to the SRAM clocking. It is also observed that the distribution of false bits is substantially random across the cell array.

  2. The application of structural reliability techniques to plume impingement loading of the Space Station Freedom Photovoltaic Array

    NASA Technical Reports Server (NTRS)

    Yunis, Isam S.; Carney, Kelly S.

    1993-01-01

    A new aerospace application of structural reliability techniques is presented, in which the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom photovoltaic arrays. When the space shuttle berths with Space Station Freedom, it must brake and maneuver toward the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented that assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis for this class of problem. The results show that the probability of failure of the current array mast design, during its 15-year life, is minute.
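
    The load-versus-strength comparison underlying such an assessment can be sketched directly; the sampler interface and the distributions used in the example are illustrative assumptions, not the Freedom array model:

```python
import numpy as np

def failure_probability(load_sampler, strength_sampler, n=100_000, seed=0):
    """Monte Carlo structural-reliability estimate: the probability that a
    randomly drawn load exceeds an independently drawn strength."""
    rng = np.random.default_rng(seed)
    load = load_sampler(rng, n)
    strength = strength_sampler(rng, n)
    return float(np.mean(load > strength))
```

    With, say, a normal load of mean 100 and a normal strength of mean 150 (both with standard deviation 10), the failure probability is on the order of 10^-4, illustrating how widely separated distributions yield the "minute" probabilities reported above.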

  3. Elimination of the light shift in rubidium gas cell frequency standards using pulsed optical pumping

    NASA Technical Reports Server (NTRS)

    English, T. C.; Jechart, E.; Kwon, T. M.

    1978-01-01

    Changes in the intensity of the light source in an optically pumped, rubidium, gas cell frequency standard can produce corresponding frequency shifts, with possible adverse effects on the long-term frequency stability. A pulsed optical pumping apparatus was constructed with the intent of investigating the frequency stability in the absence of light shifts. Contrary to original expectations, a small residual frequency shift due to changes in light intensity was experimentally observed. Evidence is given which indicates that this is not a true light-shift effect. Preliminary measurements of the frequency stability of this apparatus, with this small residual pseudo light shift present, are presented. It is shown that this pseudo light shift can be eliminated by using a more homogeneous C-field. This is consistent with the idea that the pseudo light shift is due to inhomogeneity in the physics package (position-shift effect).

  4. Fabrication of plasmonic cavity arrays for SERS analysis

    NASA Astrophysics Data System (ADS)

    Li, Ning; Feng, Lei; Teng, Fei; Lu, Nan

    2017-05-01

    Plasmonic cavity arrays are ideal substrates for surface-enhanced Raman scattering analysis because they provide hot spots with a large volume for analyte molecules. The large hot area increases the probability that analyte molecules land on hot spots and leads to high reproducibility. Developing a simple method for creating cavity arrays is therefore important. Herein, we demonstrate how to fabricate V- and W-shaped cavity arrays by a simple method based on self-assembly. Briefly, the V- and W-shaped cavity arrays are fabricated by KOH etching of silicon (Si) slides patterned with a nanohole array and a nanoring array, respectively. The nanohole array is generated by reactive ion etching of a Si slide assembled with a monolayer of polystyrene (PS) spheres. The nanoring array is generated by reactive ion etching of a Si slide covered with a monolayer of octadecyltrichlorosilane before self-assembling the PS spheres. Both the plasmonic V and W cavity arrays can provide a large hot area, which increases the probability for analyte molecules to deposit on the hot spots. Taking 4-mercaptopyridine as the analyte probe, the enhancement factor reaches 2.99 × 10⁵ and 9.97 × 10⁵ for the plasmonic V cavity and W cavity arrays, respectively. The relative standard deviations of the plasmonic V and W cavity arrays are 6.5% and 10.2%, respectively, according to spectra collected on 20 random spots.
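
    As a hedged illustration of the reproducibility and enhancement metrics quoted above, the spot-to-spot relative standard deviation and a substrate enhancement factor might be computed as follows; all intensities and molecule counts are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical peak intensities collected on 20 random spots (a.u.).
rng = np.random.default_rng(1)
intensities = rng.normal(loc=1000.0, scale=65.0, size=20)

# Relative standard deviation: spot-to-spot reproducibility metric.
rsd = intensities.std(ddof=1) / intensities.mean() * 100  # percent

# Enhancement factor: EF = (I_SERS / N_SERS) / (I_ref / N_ref),
# with placeholder intensities and probed molecule counts.
I_sers, N_sers = 1000.0, 1e6
I_ref, N_ref = 10.0, 3e9
ef = (I_sers / N_sers) / (I_ref / N_ref)
```

    In practice the molecule counts N_SERS and N_ref are themselves estimated from the probed area and molecular footprint, which dominates the uncertainty of the enhancement factor.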

  5. Fabrication of plasmonic cavity arrays for SERS analysis.

    PubMed

    Li, Ning; Feng, Lei; Teng, Fei; Lu, Nan

    2017-05-05

    Plasmonic cavity arrays are ideal substrates for surface-enhanced Raman scattering analysis because they provide hot spots with a large volume for analyte molecules. The large hot area increases the probability that analyte molecules land on hot spots and leads to high reproducibility. Developing a simple method for creating cavity arrays is therefore important. Herein, we demonstrate how to fabricate V- and W-shaped cavity arrays by a simple method based on self-assembly. Briefly, the V- and W-shaped cavity arrays are fabricated by KOH etching of silicon (Si) slides patterned with a nanohole array and a nanoring array, respectively. The nanohole array is generated by reactive ion etching of a Si slide assembled with a monolayer of polystyrene (PS) spheres. The nanoring array is generated by reactive ion etching of a Si slide covered with a monolayer of octadecyltrichlorosilane before self-assembling the PS spheres. Both the plasmonic V and W cavity arrays can provide a large hot area, which increases the probability for analyte molecules to deposit on the hot spots. Taking 4-mercaptopyridine as the analyte probe, the enhancement factor reaches 2.99 × 10⁵ and 9.97 × 10⁵ for the plasmonic V cavity and W cavity arrays, respectively. The relative standard deviations of the plasmonic V and W cavity arrays are 6.5% and 10.2%, respectively, according to spectra collected on 20 random spots.

  6. Pseudo-time-reversal symmetry and topological edge states in two-dimensional acoustic crystals

    PubMed Central

    Mei, Jun; Chen, Zeguo; Wu, Ying

    2016-01-01

    We propose a simple two-dimensional acoustic crystal to realize topologically protected edge states for acoustic waves. The acoustic crystal is composed of a triangular array of core-shell cylinders embedded in a water host. By utilizing the point group symmetry of two doubly degenerate eigenstates at the Γ point, we can construct pseudo-time-reversal symmetry as well as pseudo-spin states in this classical system. We develop an effective Hamiltonian for the associated dispersion bands around the Brillouin zone center, and find the inherent link between the band inversion and the topological phase transition. With numerical simulations, we unambiguously demonstrate the unidirectional propagation of acoustic edge states along the interface between a topologically nontrivial acoustic crystal and a trivial one, and the robustness of the edge states against defects with sharp bends. Our work provides a new design paradigm for manipulating and transporting acoustic waves in a topologically protected manner. Technological applications and devices based on our design are expected in various frequency ranges of interest, spanning from infrasound to ultrasound. PMID:27587311

  7. Wavelet-Based Signal Processing for Monitoring Discomfort and Fatigue

    DTIC Science & Technology

    2008-06-01

    Wigner-Ville distribution (WVD), the short-time Fourier transform (STFT) or spectrogram, the Choi-Williams distribution (CWD), the smoothed pseudo Wigner ... has the advantage of being computationally less expensive than other standard techniques, such as the Wigner-Ville distribution (WVD), the spectrogram ... slopes derived from the spectrogram and the smoothed pseudo Wigner-Ville distribution. Furthermore, slopes derived from the filter bank

  8. Demonstration of Johnson noise thermometry with all-superconducting quantum voltage noise source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp; Urano, Chiharu; Maezawa, Masaaki

    We present a Johnson noise thermometry (JNT) system based on an integrated quantum voltage noise source (IQVNS) that has been fully implemented using superconducting circuit technology. To enable precise measurement of Boltzmann's constant, an IQVNS chip was designed to produce intrinsically calculable pseudo-white noise to calibrate the JNT system. On-chip real-time generation of pseudo-random codes via simple circuits produced pseudo-voltage noise with a harmonic tone interval of less than 1 Hz, which was one order of magnitude finer than the harmonic tone interval of conventional quantum voltage noise sources. We estimated a value for Boltzmann's constant experimentally by performing JNT measurements at the temperature of the triple point of water using the IQVNS chip.

  9. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of the temperature measurement by standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing the multivariate Gaussian distribution for input quantities. This allows taking into account the correlations among resistances at the defining fixed points. Assumption of Gaussian probability density function is acceptable, with respect to the several sources of uncertainties of resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate suitability of the method by validation of its results.
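
    The propagation of distributions with correlated fixed-point resistances can be sketched by sampling a multivariate Gaussian and pushing the samples through a measurement function. The resistance values and covariance below are invented for illustration and are not ITS-90 calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resistances (ohms) at two defining fixed points and a
# covariance expressing their correlated uncertainties (illustrative).
mean = np.array([25.000, 33.5])        # R(triple point of water), R(fixed point)
cov = np.array([[1e-8, 4e-9],
                [4e-9, 2e-8]])

# Draw correlated samples of the two resistances.
samples = rng.multivariate_normal(mean, cov, size=200_000)

# Propagate through the resistance ratio W = R(T) / R(TPW).
w = samples[:, 1] / samples[:, 0]
w_mean, w_std = w.mean(), w.std(ddof=1)
```

    Because the correlation enters the covariance matrix directly, the same code covers both the correlated and uncorrelated cases, which is the practical advantage of the Monte Carlo route over the law of propagation of uncertainty.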

  10. Professional Opinion Concerning the Effectiveness of Bracing Relative to Observation in Adolescent Idiopathic Scoliosis

    PubMed Central

    Dolan, Lori A.; Donnelly, Melanie J.; Spratt, Kevin F.; Weinstein, Stuart L.

    2015-01-01

    Objective To determine if community equipoise exists concerning the effectiveness of bracing in adolescent idiopathic scoliosis. Background Data Bracing is the standard of care for adolescent idiopathic scoliosis despite the lack of strong research evidence concerning its effectiveness. Thus, some researchers support the idea of a randomized trial, whereas others think that randomization in the face of a standard of care would be unethical. Methods A random sample of Scoliosis Research Society and Pediatric Orthopaedic Society of North America members was asked to consider 12 clinical profiles and to give their opinion concerning the radiographic outcomes after observation and bracing. Results An expert panel was created from the respondents. They expressed a wide array of opinions concerning the percentage of patients within each scenario who would benefit from bracing. Agreement was noted concerning the risk due to bracing for post-menarchal patients only. Conclusions This study found a high degree of variability in opinion among clinicians concerning the effectiveness of bracing, suggesting that a randomized trial of bracing would be ethical. PMID:17414008

  11. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of the developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of a PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of other available PRNGs, such as those of MATLAB, FORTRAN, and the Park-Miller algorithm, through specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
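
    Of the three components combined in the GPPRNG, the Xorshift stage is the simplest to illustrate. Below is a minimal CPU-side sketch of a 32-bit xorshift generator (Marsaglia's 13/17/5 shift triple); the middle-square and chaotic-map stages and all GPU memory-mode details are omitted.

```python
def xorshift32(state):
    """One step of a 32-bit xorshift generator (Marsaglia's shifts).

    A minimal CPU sketch of the Xorshift component; the GPU version
    described above combines it with middle-square and chaotic-map
    stages, which are not reproduced here.
    """
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def random_uniforms(seed, n):
    """Yield n floats in [0, 1) from the xorshift stream."""
    out, s = [], seed
    for _ in range(n):
        s = xorshift32(s)
        out.append(s / 2**32)
    return out

u = random_uniforms(seed=2463534242, n=1000)
```

    A nonzero seed is required: the all-zero state is a fixed point of every xorshift generator, which is one reason such generators are combined with other mixing stages in practice.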

  12. Enhancement of DRPE performance with a novel scheme based on new RAC: Principle, security analysis and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.

    2016-02-01

    The double random phase encryption (DRPE) method is a well-known all-optical architecture with many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized compression-and-encryption method is applied simultaneously to the real and imaginary components of the DRPE output plane. The technique consists of an innovative randomized arithmetic coder (RAC) that compresses the DRPE output planes well and at the same time strengthens the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique can process video content and is compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system addresses the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
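
    The idea of encrypting a coder's output with pseudo-random numbers can be illustrated, in a much simplified form, by XOR-ing a compressed byte stream with a hash-derived keystream. This is only a generic stand-in: the paper's RAC randomizes the binary arithmetic coding process itself rather than post-processing its output, and the key, data, and keystream construction below are invented.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key by iterated hashing.

    A generic stand-in for the pseudo-random numbers the RAC uses;
    the actual construction modifies the BAC process internally.
    """
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

cipher = xor_encrypt(b"compressed output plane", b"secret")
plain = xor_encrypt(cipher, b"secret")
```

    The XOR step is its own inverse, which mirrors the reversibility requirement of the crypto-compression pipeline: the decoder must recover the exact BAC output before decompression.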

  13. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    PubMed

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  14. Array invariant-based ranging of a source of opportunity.

    PubMed

    Byun, Gihoon; Kim, J S; Cho, Chomgun; Song, H C; Byun, Sung-Hoon

    2017-09-01

    The feasibility of tracking a ship radiating random and anisotropic noise is investigated using ray-based blind deconvolution (RBD) and array invariant (AI) with a vertical array in shallow water. This work is motivated by a recent report [Byun, Verlinden, and Sabra, J. Acoust. Soc. Am. 141, 797-807 (2017)] that RBD can be applied to ships of opportunity to estimate the Green's function. Subsequently, the AI developed for robust source-range estimation in shallow water can be applied to the estimated Green's function via RBD, exploiting multipath arrivals separated in beam angle and travel time. In this letter, a combination of the RBD and AI is demonstrated to localize and track a ship of opportunity (200-900 Hz) to within a 5% standard deviation of the relative range error along a track at ranges of 1.8-3.4 km, using a 16-element, 56-m long vertical array in approximately 100-m deep shallow water.

  15. Split-plot microarray experiments: issues of design, power and sample size.

    PubMed

    Tsai, Pi-Wen; Lee, Mei-Ling Ting

    2005-01-01

    This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.

  16. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
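
    For reference, the DerSimonian and Laird procedure that the Bayesian methods are compared against estimates the between-study variance by the method of moments. A minimal sketch, with invented study effects and within-study variances:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Method-of-moments estimate of between-study variance tau^2.

    y: study effect estimates; v: their within-study variances.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                       # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)  # weighted mean effect
    q = np.sum(w * (y - ybar) ** 2)   # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    # Truncate at zero, as the moment estimate can be negative.
    return max(0.0, (q - (k - 1)) / c)

# Hypothetical log-odds-ratio estimates and variances for 5 studies.
tau2 = dersimonian_laird([0.1, 0.8, -0.4, 0.6, 0.3],
                         [0.04, 0.05, 0.06, 0.04, 0.05])
```

    With few studies this estimate is highly variable and is often truncated to zero, which is exactly the situation the informative heterogeneity priors above are designed to address.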

  17. Construction of self-supported porous TiO2/NiO core/shell nanorod arrays for electrochemical capacitor application

    NASA Astrophysics Data System (ADS)

    Wu, J. B.; Guo, R. Q.; Huang, X. H.; Lin, Y.

    2013-12-01

    High-quality metal-oxide heterostructured nanoarrays have been receiving great attention for electrochemical energy storage applications. Self-supported TiO2/NiO core/shell nanorod arrays are prepared on carbon cloth via a combination of hydrothermal synthesis and electrodeposition. The obtained core/shell nanorods consist of a nanorod core and an interconnected nanoflake shell, with hierarchical porosity. As cathode materials for pseudo-capacitors, the TiO2/NiO core/shell nanorod arrays display impressive electrochemical performance, with a high capacitance of 611 F g⁻¹ at 2 A g⁻¹ and good cycling stability (89% retention after 5000 cycles). Moreover, compared with single NiO nanoflake arrays on carbon cloth, the TiO2/NiO core/shell nanorod arrays exhibit much better electrochemical properties: higher capacitance, better electrochemical activity, and longer cycling life. This enhanced performance is mainly due to the core/shell nanorod architecture offering fast ion/electron transfer and sufficient contact between the active materials and the electrolyte.
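
    Specific capacitance figures like the one quoted above are conventionally obtained from a galvanostatic discharge curve as C = I·Δt/(m·ΔV). A sketch with hypothetical discharge values chosen to reproduce a 611 F g⁻¹ result (the actual measurement conditions are not given here):

```python
def specific_capacitance(current_a, dt_s, mass_g, dv_v):
    """Specific capacitance C = I * dt / (m * dV) in F per gram,
    from a galvanostatic discharge segment."""
    return current_a * dt_s / (mass_g * dv_v)

# Hypothetical: 2 mA on a 1 mg electrode (i.e. 2 A/g), a 152.75 s
# discharge, and a 0.5 V potential window.
c = specific_capacitance(current_a=0.002, dt_s=152.75,
                         mass_g=0.001, dv_v=0.5)
```

    Capacitance retention after cycling is then simply the ratio of the capacitance computed this way at the final and initial cycles.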

  18. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression.

    PubMed

    Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia

    2014-05-17

    High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository.
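
    The core idea of spike-in-based normalization, adjusting each array so its control probes sit at a common level, can be sketched as below. This is a simplified stand-in for the ExiMiR implementation, which fits more elaborate array-specific corrections; the intensity matrix and control-probe indices are invented.

```python
import numpy as np

def spikein_normalize(intensities, spike_rows):
    """Shift each array so its spike-in controls share a common level.

    intensities: probes x arrays matrix of log2 intensities.
    spike_rows:  row indices of the spike-in control probes.
    """
    x = np.asarray(intensities, float)
    spike_means = x[spike_rows].mean(axis=0)  # per-array control level
    target = spike_means.mean()               # common reference level
    # Subtract each array's deviation from the common level.
    return x - (spike_means - target)

# Invented 4-probe x 3-array matrix; rows 0 and 2 are spike-ins.
raw = np.array([[10.0, 11.0, 9.5],
                [ 8.0,  9.0, 7.5],
                [12.0, 13.0, 11.5],
                [ 6.0,  7.0, 5.5]])
norm = spikein_normalize(raw, spike_rows=[0, 2])
```

    Because the correction is anchored to the controls rather than to all probes, a genuine global decrease in miRNA expression is preserved instead of being normalized away, which is the property the study relies on.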

  19. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression

    PubMed Central

    2014-01-01

    Background High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Results Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the “common reference design” and processed as “pseudo-single-channel”. They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription–polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. 
Conclusions Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository. PMID:24886675

  20. Highly sensitive and area-efficient CMOS image sensor using a PMOSFET-type photodetector with a built-in transfer gate

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Ho; Kim, Kyoung-Do; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung

    2007-02-01

    In this paper, a new CMOS image sensor is presented, which uses a PMOSFET-type photodetector with a transfer gate and offers high, variable sensitivity. The proposed CMOS image sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard CMOS technology and is composed of a 256 × 256 array of 7.05 × 7.10 μm pixels. The unit pixel has the configuration of a pseudo 3-transistor active pixel sensor (APS) with the PMOSFET-type photodetector with a transfer gate, which provides the function of a conventional 4-transistor APS. The generated photocurrent is controlled by the transfer gate of the PMOSFET-type photodetector. The maximum responsivity of the photodetector is larger than 1.0 × 10³ A/W without any optical lens. The fabricated 256 × 256 CMOS image sensor exhibits a good response to low-level illumination as low as 5 lux.

  1. Faster heart rate and muscular oxygen uptake kinetics in type 2 diabetes patients following endurance training.

    PubMed

    Koschate, Jessica; Drescher, Uwe; Brinkmann, Christian; Baum, Klaus; Schiffer, Thorsten; Latsch, Joachim; Brixius, Klara; Hoffmann, Uwe

    2016-11-01

    Cardiorespiratory kinetics were analyzed in type 2 diabetes patients before and after a 12-week endurance exercise-training intervention. It was hypothesized that muscular oxygen uptake and heart rate (HR) kinetics would be faster after the training intervention and that this would be detectable using a standardized work rate protocol with pseudo-random binary sequences. The cardiorespiratory kinetics of 13 male sedentary, middle-aged, overweight type 2 diabetes patients (age, 60 ± 8 years; body mass index, 33 ± 4 kg·m⁻²) were tested before and after the 12-week exercise intervention. Subjects performed endurance training 3 times a week on nonconsecutive days. Pseudo-random binary sequence exercise protocols in combination with time series analysis were used to estimate kinetics. Greater maxima in the cross-correlation functions (CCFmax) represent faster kinetics of the respective parameter. CCFmax of muscular oxygen uptake (pre-training: 0.31 ± 0.03; post-training: 0.37 ± 0.10, P = 0.024) and CCFmax of HR (pre-training: 0.25 ± 0.04; post-training: 0.29 ± 0.06, P = 0.007), as well as peak oxygen uptake (pre-training: 24.4 ± 4.7 mL·kg⁻¹·min⁻¹; post-training: 29.3 ± 6.5 mL·kg⁻¹·min⁻¹, P = 0.004), increased significantly over the course of the exercise intervention. In conclusion, kinetic responses to changing work rates in the moderate-intensity range are similar to metabolic demands occurring in everyday habitual activities. Moderate endurance training accelerated the kinetic responses of HR and muscular oxygen uptake. Furthermore, the applicability of the used method to detect these accelerations was demonstrated.
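
    A pseudo-random binary sequence work-rate input and a cross-correlation maximum of the kind used above can be sketched as follows. The PRBS is generated with a maximal-length linear-feedback shift register, and a hypothetical first-order impulse response stands in for the heart-rate dynamics; none of the protocol parameters are taken from the study.

```python
import numpy as np

def prbs(register_bits=6, taps=(6, 5), n=None):
    """Generate a pseudo-random binary sequence with a Fibonacci LFSR.

    Taps (6, 5) correspond to the standard PRBS6 polynomial and give
    a maximal-length sequence of 2**6 - 1 = 63 bits.
    """
    state = [1] * register_bits
    length = n if n is not None else (2 ** register_bits - 1)
    out = []
    for _ in range(length):
        out.append(state[-1])                     # output bit
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]                 # shift in feedback
    return np.array(out)

# Simulate a first-order response to the PRBS work-rate input and
# locate the cross-correlation maximum (larger = faster kinetics).
u = prbs() * 2 - 1                         # work-rate input, +/-1
h = np.exp(-np.arange(30) / 8.0)           # hypothetical impulse response
y = np.convolve(np.tile(u, 3), h)[63:126]  # one steady-state period
ccf = np.correlate(y - y.mean(), u - u.mean(), mode="full")
ccf_max = ccf.max() / (len(u) * y.std() * u.std())
```

    A faster impulse response (smaller time constant in h) shifts the cross-correlation peak toward zero lag and raises CCFmax, which is the quantity the study compares pre- versus post-training.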

  2. Self-deception as pseudo-rational regulation of belief.

    PubMed

    Michel, Christoph; Newen, Albert

    2010-09-01

    Self-deception is a special kind of motivational dominance in belief-formation. We develop criteria which set paradigmatic self-deception apart from related phenomena of auto-manipulation such as pretense and motivational bias. In self-deception, rational subjects defend or develop beliefs of high subjective importance in response to strong counter-evidence. Self-deceivers make or keep these beliefs tenable by putting prima-facie rational defense-strategies to work against their established standards of rational evaluation. In paradigmatic self-deception, target-beliefs are made tenable via reorganizations of those belief-sets that relate relevant data to target-beliefs. This manipulation of the evidential value of relevant data goes beyond phenomena of motivated perception of data. In self-deception, belief-defense is pseudo-rational. Self-deceivers will typically apply a dual standard of evaluation that remains intransparent to the subject. The developed model of self-deception as pseudo-rational belief-defense is empirically anchored; we therefore hope to have put forward a promising candidate. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  4. Natural electroweak breaking from a mirror symmetry.

    PubMed

    Chacko, Z; Goh, Hock-Seng; Harnik, Roni

    2006-06-16

    We present "twin Higgs models," simple realizations of the Higgs boson as a pseudo Goldstone boson that protect the weak scale from radiative corrections up to scales of order 5-10 TeV. In the ultraviolet these theories have a discrete symmetry which interchanges each standard model particle with a corresponding particle which transforms under a twin or a mirror standard model gauge group. In addition, the Higgs sector respects an approximate global symmetry. When this global symmetry is broken, the discrete symmetry tightly constrains the form of corrections to the pseudo Goldstone Higgs potential, allowing natural electroweak symmetry breaking. Precision electroweak constraints are satisfied by construction. These models demonstrate that, contrary to the conventional wisdom, stabilizing the weak scale does not require new light particles charged under the standard model gauge groups.

  5. Beam combining and SBS suppression in white noise and pseudo-random modulated amplifiers

    NASA Astrophysics Data System (ADS)

    Anderson, Brian; Flores, Angel; Holten, Roger; Ehrenreich, Thomas; Dajani, Iyad

    2015-03-01

    White noise phase modulation (WNS) and pseudo-random binary sequence phase modulation (PRBS) are effective techniques for mitigating nonlinear effects such as stimulated Brillouin scattering (SBS), thereby paving the way for higher power narrow-linewidth fiber amplifiers. However, detailed studies comparing both the coherent beam combination and the SBS suppression of these phase modulation schemes have not been reported. In this study an active fiber cutback experiment is performed comparing the enhancement factor of a PRBS and WNS broadened seed as a function of linewidth and fiber length. Furthermore, two WNS and PRBS modulated fiber lasers are coherently combined to measure and compare the fringe visibility and coherence length as a function of optical path length difference. Notably, the discrete frequency comb of PRBS modulation provides a beam-combining re-coherence effect in which the lasers periodically come back into phase. Significantly, this may reduce path-length matching complexity in coherently combined fiber laser systems.
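
    The re-coherence effect follows from the spectrum of a periodic PRBS being a discrete frequency comb. A stdlib-only Python sketch (the length-7 sequence and repetition count are illustrative choices, not parameters from the paper) confirms that spectral energy falls only on comb lines at multiples of the repetition rate:

```python
import cmath

def lfsr_prbs(taps, state, n):
    """Generate n bits from a Fibonacci LFSR given tap positions."""
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

# Period-7 m-sequence from a 3-bit LFSR with primitive feedback taps,
# mapped to +/-1 phase values and repeated to form a periodic signal.
period = lfsr_prbs(taps=[0, 2], state=[1, 0, 0], n=7)
reps = 8
signal = [1.0 if b else -1.0 for b in period] * reps

spectrum = [abs(c) for c in dft(signal)]
# Energy appears only on comb lines: bins that are multiples of `reps`.
comb = [j for j, m in enumerate(spectrum) if m > 1e-6]
print(all(j % reps == 0 for j in comb))  # True: a discrete frequency comb
```

    A white-noise-broadened source, by contrast, has a continuous spectrum, so the combined lasers never periodically re-phase.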

  6. Pseudo-random generator based on Chinese Remainder Theorem

    NASA Astrophysics Data System (ADS)

    Bajard, Jean Claude; Hördegen, Heinrich

    2009-08-01

    Pseudo-Random Generators (PRG) are fundamental in cryptography. They are used at different levels in cipher protocols and must satisfy certain properties to qualify as robust. NIST proposes criteria and a test suite that characterize the behavior of a PRG. In this work, we present a PRG constructed from conversions between different residue systems representing the elements of GF(2)[X]. In this approach, we use pairs of co-prime polynomials of degree k and a state vector of 2k bits. The algebraic properties are broken by using different independent pairs during the process. Since this method is reversible, we can also use it as a symmetric crypto-system. We evaluate the cost of such a system, taking into account that some operations are commonly implemented on crypto-processors. We give the results of the different NIST tests and explain our choices compared to others found in the literature. We describe the behavior of this PRG and explain how the different rounds are chained to ensure secure randomness.
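
    The algebraic core, converting between residue representations of GF(2)[X] elements via the Chinese Remainder Theorem, can be illustrated with polynomials encoded as integer bitmasks. This is a toy sketch of the CRT identity only (degree-3 moduli chosen for readability), not the paper's cipher construction:

```python
def pdeg(a):
    """Degree of a GF(2)[X] polynomial encoded as an int bitmask (-1 for 0)."""
    return a.bit_length() - 1

def pmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    """Polynomial division over GF(2): returns (quotient, remainder)."""
    q, db = 0, pdeg(b)
    while a and pdeg(a) >= db:
        s = pdeg(a) - db
        q ^= 1 << s
        a ^= b << s
    return q, a

def pgcdext(a, b):
    """Extended Euclid over GF(2)[X]: returns (g, u, v) with u*a + v*b = g."""
    u0, u1, v0, v1 = 1, 0, 0, 1
    while b:
        q, r = pdivmod(a, b)
        a, b = b, r
        u0, u1 = u1, u0 ^ pmul(q, u1)
        v0, v1 = v1, v0 ^ pmul(q, v1)
    return a, u0, v0

def crt2(r1, m1, r2, m2):
    """Recombine residues (r1 mod m1, r2 mod m2) for co-prime m1, m2."""
    g, u, v = pgcdext(m1, m2)
    assert g == 1, "moduli must be co-prime"
    # u*m1 + v*m2 = 1, so v*m2 is 1 mod m1 and u*m1 is 1 mod m2.
    x = pmul(pmul(r1, v), m2) ^ pmul(pmul(r2, u), m1)
    return pdivmod(x, pmul(m1, m2))[1]

# Illustrative co-prime (irreducible) moduli: x^3 + x + 1 and x^3 + x^2 + 1.
m1, m2 = 0b1011, 0b1101
p = 0b101101                      # some state polynomial of degree < 6
r1, r2 = pdivmod(p, m1)[1], pdivmod(p, m2)[1]
print(crt2(r1, m1, r2, m2) == p)  # True: the residues determine p uniquely
```

    Switching between independent co-prime pairs changes the residue representation of the same state, which is the reversibility the abstract exploits for symmetric encryption.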

  7. Enhancing active and passive remote sensing in the ocean using broadband acoustic transmissions and coherent hydrophone arrays

    NASA Astrophysics Data System (ADS)

    Tran, Duong Duy

    The statistics of broadband acoustic signal transmissions in a random continental shelf waveguide are characterized for the fully saturated regime. The probability distribution of broadband signal energies after saturated multi-path propagation is derived using coherence theory. The frequency components obtained from Fourier decomposition of a broadband signal are each assumed to be fully saturated, where the energy spectral density obeys the exponential distribution with 5.6 dB standard deviation and unity scintillation index. When the signal bandwidth and measurement time are respectively larger than the correlation bandwidth and correlation time of its energy spectral density components, the broadband signal energy obtained by integrating the energy spectral density across the signal bandwidth then follows the Gamma distribution with standard deviation smaller than 5.6 dB and scintillation index less than unity. The theory is verified with broadband transmissions in the Gulf of Maine shallow water waveguide in the 300-1200 Hz frequency range. The standard deviations of received broadband signal energies range from 2.7 to 4.6 dB for effective bandwidths up to 42 Hz, while the standard deviations of individual energy spectral density components are roughly 5.6 dB. The energy spectral density correlation bandwidths of the received broadband signals are found to be larger for signals with higher center frequency. Sperm whales in the New England continental shelf and slope were passively localized in both range and bearing, using a single low-frequency (< 2500 Hz), densely sampled, towed horizontal coherent hydrophone array system. Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements.
The dive profile was estimated for a sperm whale in the shallow waters of the Gulf of Maine with 160 m water-column depth, located close to the array's near-field where depth estimation was feasible by employing time difference of arrival of the direct and multiply reflected click signals received on the array. The dependence of broadband energy on bandwidth and measurement time was verified employing recorded sperm whale clicks in the Gulf of Maine.
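
    The bandwidth-averaging statistics described above can be checked with a quick Monte Carlo. A stdlib-only Python sketch (the draw count and the choice of 16 independent spectral components are illustrative assumptions, not values from the thesis):

```python
import math
import random

random.seed(1)

N = 100_000  # Monte Carlo draws (illustrative)
M = 16       # independent spectral components across the band (illustrative)

def mean(xs):
    return sum(xs) / len(xs)

def scint_index(xs):
    """Scintillation index: variance normalized by the squared mean."""
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs]) / m ** 2

# A fully saturated spectral component has exponentially distributed energy:
# scintillation index ~ 1, log-energy std = (10/ln 10) * pi/sqrt(6) ~ 5.6 dB.
single = [random.expovariate(1.0) for _ in range(N)]
logs = [10.0 * math.log10(x) for x in single]
mu = mean(logs)
db_std = math.sqrt(mean([(v - mu) ** 2 for v in logs]))

# Integrating M independent components gives a Gamma-distributed band energy
# with scintillation index 1/M < 1 and a dB-spread below 5.6 dB.
band = [sum(random.expovariate(1.0) for _ in range(M)) for _ in range(N)]

print(round(db_std, 1))                 # ~5.6 dB per component
print(round(scint_index(band) * M, 1))  # ~1.0, i.e. band index ~ 1/M
```

    The simulated single-component spread reproduces the 5.6 dB figure quoted above, and the band-integrated index drops by the number of independent components averaged.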

  8. An integrated open-cavity system for magnetic bead manipulation.

    PubMed

    Abu-Nimeh, F T; Salem, F M

    2013-02-01

    Superparamagnetic beads are increasingly used in biomedical assays to manipulate, transport, and maneuver biomaterials. We present a low-cost integrated system designed in bulk CMOS to manipulate and separate biomedical magnetic beads. The system consists of 8 × 8 coil-arrays suitable for single bead manipulation, or collaborative multi-bead manipulation, using pseudo-parallel executions. We demonstrate the flexibility of the design in terms of different coil sizes, DC current levels, and layout techniques. In one array module example, the size of a single coil is 30 μm × 30 μm and the full array occupies an area of 248 μm × 248 μm in 0.5 μm CMOS technology. The programmable DC current source supports 8 discrete levels up to 1.5 mA. The total power consumption of the entire module is 9 mW when running at full power.

  9. Quasi-random array imaging collimator

    DOEpatents

    Fenimore, E.E.

    1980-08-20

    A hexagonally shaped quasi-random no-two-holes-touching imaging collimator. The quasi-random array imaging collimator eliminates contamination from small angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array, increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction limited applications. Mosaicking is also disclosed for reducing fabrication effort.

  10. Random array grid collimator

    DOEpatents

    Fenimore, E.E.

    1980-08-22

    A hexagonally shaped quasi-random no-two-holes-touching grid collimator. The quasi-random array grid collimator eliminates contamination from small angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array, increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction limited applications. Mosaicking is also disclosed for reducing fabrication effort.

  11. Pseudo 2-transistor active pixel sensor using an n-well/gate-tied p-channel metal oxide semiconductor field effect transistor-type photodetector with built-in transfer gate

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Ho; Seo, Min-Woong; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung

    2008-11-01

    In this paper, a pseudo 2-transistor active pixel sensor (APS) has been designed and fabricated by using an n-well/gate-tied p-channel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector with built-in transfer gate. The proposed sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard complementary metal oxide semiconductor (CMOS) logic process. The pseudo 2-transistor APS consists of two NMOSFETs and one photodetector which can amplify the generated photocurrent. The area of the pseudo 2-transistor APS is 7.1 × 6.2 μm². The sensitivity of the proposed pixel is 49 V/(lux·s). By using this pixel, a smaller pixel area and a higher level of sensitivity can be realized when compared with a conventional 3-transistor APS which uses a pn junction photodiode.

  12. Non-prescription medicines: a process for standards development and testing in community pharmacy.

    PubMed

    Benrimoj, Shalom Charlie I; Gilbert, Andrew; Quintrell, Neil; Neto, Abilio C de Almeida

    2007-08-01

    The objective of the study was to develop and test standards of practice for handling non-prescription medicines. In consultation with pharmacy registering authorities, key professional and consumer groups and selected community pharmacists, standards of practice were developed in the areas of Resource Management; Professional Practice; Pharmacy Design and Environment; and Rights and Needs of Customers. These standards defined and described minimum professional activities required in the provision of non-prescription medicines at a consistent and measurable level of practice. Seven standards were described and further defined by 20 criteria, including practice indicators. The Standards were tested in 40 community pharmacies in two States and, after further adaptation, endorsed by all Australian pharmacy registering authorities and major Australian pharmacy and consumer organisations. The consultation process effectively engaged practicing pharmacists in developing standards to enable community pharmacists to meet their legislative and professional responsibilities. Community pharmacies were audited against this set of standards of practice for handling non-prescription medicines at baseline, mid-intervention and post-intervention. Behavior of community pharmacists and their staff in relation to these standards was measured by conducting pseudo-patron visits to participating pharmacies. The testing process demonstrated a significant improvement in the quality of service delivered by staff in community pharmacies in the management of requests involving non-prescription medicines. The use of pseudo-patron visits, as a training tool with immediate feedback, was an acceptable and effective method of achieving changes in practice. Feedback from staff in the pharmacies regarding the pseudo-patron visits was very positive.
Results demonstrated the methodology employed was effective in increasing overall compliance with the Standards from a rate of 47.4% to 70.0% (P < 0.01). This project led to a recommendation for the development and execution of a national implementation strategy.

  13. The Random Telegraph Signal Behavior of Intermittently Stuck Bits in SDRAMs

    NASA Astrophysics Data System (ADS)

    Chugg, Andrew Michael; Burnell, Andrew J.; Duncan, Peter H.; Parker, Sarah; Ward, Jonathan J.

    2009-12-01

    This paper reports behavior analogous to the Random Telegraph Signal (RTS) seen in the leakage currents from radiation-induced hot pixels in Charge Coupled Devices (CCDs), but in the context of stuck bits in Synchronous Dynamic Random Access Memories (SDRAMs). Our analysis suggests that pseudo-random sticking and unsticking of the SDRAM bits is due to thermally induced fluctuations in leakage current through displacement damage complexes in depletion regions that were created by high-energy neutron and proton interactions. It is shown that the number of observed stuck bits increases exponentially with temperature, due to the general increase in the leakage currents through the damage centers with temperature. Nevertheless, some stuck bits are seen to pseudo-randomly stick and unstick in the context of a continuously rising trend of temperature, thus demonstrating that their damage centers can exist in multiple widely spaced, discrete levels of leakage current, which is highly consistent with RTS. This implies that these intermittently stuck bits (ISBs) are a displacement damage phenomenon and are unrelated to microdose issues, which is confirmed by the observation that they also occur in unbiased irradiation. Finally, we note that observed variations in the periodicity of the sticking and unsticking behavior on several timescales are most readily explained by multiple leakage current pathways through displacement damage complexes spontaneously and independently opening and closing under the influence of thermal vibrations.
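
    The two-level switching described here is the classic random telegraph signal: exponentially distributed dwell times in each of a few discrete current levels. A minimal simulation (all levels and time constants below are illustrative, not fitted to the SDRAM data) produces a trace that occupies only the discrete leakage levels:

```python
import random

random.seed(7)

def rts(levels, mean_dwell, total_time, dt=1.0):
    """Two-level random telegraph signal with exponential dwell times."""
    t, state, out = 0.0, 0, []
    t_switch = random.expovariate(1.0 / mean_dwell[0])
    while t < total_time:
        out.append(levels[state])   # sample the current leakage level
        t += dt
        if t >= t_switch:           # dwell time elapsed: toggle state
            state = 1 - state
            t_switch = t + random.expovariate(1.0 / mean_dwell[state])
    return out

# Leakage current pseudo-randomly toggling between two discrete levels (a.u.).
trace = rts(levels=(1.0, 5.0), mean_dwell=(30.0, 10.0), total_time=2000.0)
print(sorted(set(trace)))  # only the two discrete levels appear
```

    Raising the temperature in the paper's picture shortens the dwell times and raises both levels, which is why more bits cross the stuck-bit threshold while still toggling.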

  14. Sex difference in human fingertip recognition of micron-level randomness as unpleasant.

    PubMed

    Nakatani, M; Kawasoe, T; Denda, M

    2011-08-01

    We investigated sex difference in evaluation, using the human fingertip, of the tactile impressions of three different micron-scale patterns laser-engraved on plastic plates. There were two ordered (periodical) patterns consisting of ripples on a scale of a few micrometres and one pseudo-random (non-periodical) pattern; these patterns were considered to mimic the surface geometry of healthy and damaged human hair, respectively. In the first experiment, 10 women and 10 men ran a fingertip over each surface and determined which of the three plates felt most unpleasant. All 10 female participants reported the random pattern, but not the ordered patterns, as unpleasant, whereas the majority of the male participants did not. In the second experiment, 9 of 10 female participants continued to report the pseudo-random pattern as unpleasant even after their fingertip had been coated with a collodion membrane. In the third experiment, participants were asked to evaluate the magnitude of the tactile impression for each pattern. The results again indicated that female participants tend to report a greater magnitude of unpleasantness than male participants. Our findings indicate that the female participants could readily detect microgeometric surface characteristics and that they evaluated the random pattern as more unpleasant. Possible physical and perceptual mechanisms involved are discussed. © 2011 The Authors. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  15. The cosmic microwave background and pseudo-Nambu-Goldstone bosons: Searching for Lorentz violations in the cosmos

    NASA Astrophysics Data System (ADS)

    Leon, David; Kaufman, Jonathan; Keating, Brian; Mewes, Matthew

    2017-01-01

    One of the most powerful probes of new physics is the polarized cosmic microwave background (CMB). The detection of a nonzero polarization angle rotation between the CMB surface of last scattering and today could provide evidence of Lorentz-violating physics. The purpose of this paper is two-fold. First, we review one popular mechanism for polarization rotation of CMB photons: the pseudo-Nambu-Goldstone boson (PNGB). Second, we propose a method to use the POLARBEAR experiment to constrain Lorentz-violating physics in the context of the Standard Model Extension (SME), a framework to standardize a large class of potential Lorentz-violating terms in particle physics.

  16. Novel pseudo-random number generator based on quantum random walks.

    PubMed

    Yang, Yu-Guang; Zhao, Qian-Qian

    2016-02-04

    In this paper, we investigate the potential application of quantum computation for constructing pseudo-random number generators (PRNGs) and further construct a novel PRNG based on quantum random walks (QRWs), a famous quantum computation model. The PRNG merely relies on the equations used in the QRWs, so the generation algorithm is simple and the computation speed is fast. The proposed PRNG was subjected to statistical tests such as the NIST suite and successfully passed them. Compared with the representative PRNG based on quantum chaotic maps (QCM), the present QRWs-based PRNG has advantages in statistical complexity and recurrence. For example, for sequences of 8-bit words 16 Mbits long, the normalized Shannon entropy and the statistical complexity of the QRWs-based PRNG are 0.999699456771172 and 1.799961178212329e-04, respectively; the corresponding values for the QCM-based PRNG are 0.999448131481064 and 3.701210794388818e-04. Thus the statistical complexity and the normalized entropy of the QRWs-based PRNG are closer to 0 and 1, respectively, than those of the QCM-based PRNG as the number of words in the analyzed sequence increases. This provides a new clue for constructing PRNGs and also extends the applications of quantum computation.

  17. Novel pseudo-random number generator based on quantum random walks

    PubMed Central

    Yang, Yu-Guang; Zhao, Qian-Qian

    2016-01-01

    In this paper, we investigate the potential application of quantum computation for constructing pseudo-random number generators (PRNGs) and further construct a novel PRNG based on quantum random walks (QRWs), a famous quantum computation model. The PRNG merely relies on the equations used in the QRWs, so the generation algorithm is simple and the computation speed is fast. The proposed PRNG was subjected to statistical tests such as the NIST suite and successfully passed them. Compared with the representative PRNG based on quantum chaotic maps (QCM), the present QRWs-based PRNG has advantages in statistical complexity and recurrence. For example, for sequences of 8-bit words 16 Mbits long, the normalized Shannon entropy and the statistical complexity of the QRWs-based PRNG are 0.999699456771172 and 1.799961178212329e-04, respectively; the corresponding values for the QCM-based PRNG are 0.999448131481064 and 3.701210794388818e-04. Thus the statistical complexity and the normalized entropy of the QRWs-based PRNG are closer to 0 and 1, respectively, than those of the QCM-based PRNG as the number of words in the analyzed sequence increases. This provides a new clue for constructing PRNGs and also extends the applications of quantum computation. PMID:26842402
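
    The normalized Shannon entropy figure of merit quoted above can be computed for any generator by histogramming its output as 8-bit words. A stdlib-only sketch using Python's built-in Mersenne Twister as a stand-in source (the 2^20-word sample size is illustrative; the paper quotes values for 16 Mbits):

```python
import math
import random

def normalized_shannon_entropy(words, bits=8):
    """Shannon entropy of a symbol stream, normalized by log2(alphabet size)."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    n = len(words)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / bits

random.seed(42)
stream = [random.getrandbits(8) for _ in range(2 ** 20)]  # ~8 Mbits of output
h = normalized_shannon_entropy(stream)
print(h)  # close to 1 for a statistically good generator
```

    The empirical entropy always sits slightly below 1 for a finite sample (the deficit shrinks as roughly (alphabet size)/(2N ln 2) bits), which is why longer sequences discriminate generators better.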

  18. Reflectometer for pseudo-Brewster angle spectrometry (BAIRS)

    NASA Astrophysics Data System (ADS)

    Potter, Roy F.

    2000-10-01

    A simple, robust reflectometer, pre-set for several angles of incidence (AOI), has been designed and used for determining the optical parameters of opaque samples having a specular surface. A single, linear polarizing element permits the measurement of perpendicular (s) and parallel (p) reflectance at each AOI. The BAIRS algorithm determines the empirical optical parameters for the subject surface at the pseudo-Brewster AOI, based on the measurement of p/s at two AOIs, and, in turn, the optical constants n and k (or ε1 and ε2). Radiation sources in current use are a stabilized tungsten-halide lamp or a deuterium lamp for the visible and near-UV spectral regions. Silica fiber optics and lenses deliver input and output radiation from the source and to a CCD-array scanned diffraction spectrometer. Results for a sample of GaAs will be presented along with a discussion of dispersion features in the optical constant spectra.

  19. Application of Immunosignatures for Diagnosis of Valley Fever

    PubMed Central

    Navalkar, Krupa Arun; Johnston, Stephen Albert; Woodbury, Neal; Galgiani, John N.; Magee, D. Mitchell; Chicacz, Zbigniew

    2014-01-01

    Valley fever (VF) is difficult to diagnose, partly because the symptoms of VF are confounded with those of other community-acquired pneumonias. Confirmatory diagnostics detect IgM and IgG antibodies against coccidioidal antigens via immunodiffusion (ID). The false-negative rate can be as high as 50% to 70%, with 5% of symptomatic patients never showing detectable antibody levels. In this study, we tested whether the immunosignature diagnostic can resolve VF false negatives. An immunosignature is the pattern of antibody binding to random-sequence peptides on a peptide microarray. A 10,000-peptide microarray was first used to determine whether valley fever patients can be distinguished from 3 other cohorts with similar infections. After determining the VF-specific peptides, a small 96-peptide diagnostic array was created and tested. The performances of the 10,000-peptide array and the 96-peptide diagnostic array were compared to that of the ID diagnostic standard. The 10,000-peptide microarray classified the VF samples from the other 3 infections with 98% accuracy. It also classified VF false-negative patients with 100% sensitivity in a blinded test set versus 28% sensitivity for ID. The immunosignature microarray has potential for simultaneously distinguishing valley fever patients from those with other fungal or bacterial infections. The same 10,000-peptide array can diagnose VF false-negative patients with 100% sensitivity. The smaller 96-peptide diagnostic array was less specific for diagnosing false negatives. We conclude that the performance of the immunosignature diagnostic exceeds that of the existing standard, and the immunosignature can distinguish related infections and might be used in lieu of existing diagnostics. PMID:24964807

  20. Local Random Quantum Circuits are Approximate Polynomial-Designs

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.; Horodecki, Michał

    2016-09-01

    We prove that local random quantum circuits acting on n qubits composed of O(t^10 n^2) many nearest-neighbor two-qubit gates form an approximate unitary t-design. Previously it was unknown whether random quantum circuits were a t-design for any t > 3. The proof is based on an interplay of techniques from quantum many-body theory, representation theory, and the theory of Markov chains. In particular we employ a result of Nachtergaele for lower bounding the spectral gap of frustration-free quantum local Hamiltonians; a quasi-orthogonality property of permutation matrices; a result of Oliveira which extends to the unitary group the path-coupling method for bounding the mixing time of random walks; and a result of Bourgain and Gamburd showing that dense subgroups of the special unitary group, composed of elements with algebraic entries, are ∞-copy tensor-product expanders. We also consider pseudo-randomness properties of local random quantum circuits of small depth and prove that circuits of depth O(t^10 n) constitute a quantum t-copy tensor-product expander. The proof also rests on techniques from quantum many-body theory, in particular on the detectability lemma of Aharonov, Arad, Landau, and Vazirani. We give applications of the results to cryptography, equilibration of closed quantum dynamics, and the generation of topological order. In particular we show the following pseudo-randomness property of generic quantum circuits: Almost every circuit U of size O(n^k) on n qubits cannot be distinguished from a Haar uniform unitary by circuits of size O(n^((k-9)/11)) that are given oracle access to U.

  1. The Space Telescope SI C&DH system. [Scientific Instrument Control and Data Handling Subsystem

    NASA Technical Reports Server (NTRS)

    Gadwal, Govind R.; Barasch, Ronald S.

    1990-01-01

    The Hubble Space Telescope Scientific Instrument Control and Data Handling Subsystem (SI C&DH) is designed to interface with five scientific instruments of the Space Telescope to provide ground and autonomous control and collect health and status information using the Standard Telemetry and Command Components (STACC) multiplex data bus. It also formats high-throughput science data into packets. The packetized data are interleaved, Reed-Solomon encoded for error correction, and pseudo-randomly encoded. An inner convolutional code with the outer Reed-Solomon code provides excellent error-correction capability. The subsystem is designed with the capacity for orbital replacement in order to meet a mission life of fifteen years. The spacecraft computer and the SI C&DH computer coordinate the activities of the spacecraft and the scientific instruments to achieve the mission objectives.

  2. Spatiotemporal norepinephrine mapping using a high-density CMOS microelectrode array.

    PubMed

    Wydallis, John B; Feeny, Rachel M; Wilson, William; Kern, Tucker; Chen, Tom; Tobet, Stuart; Reynolds, Melissa M; Henry, Charles S

    2015-10-21

    A high-density amperometric electrode array containing 8192 individually addressable platinum working electrodes with an integrated potentiostat fabricated using Complementary Metal Oxide Semiconductor (CMOS) processes is reported. The array was designed to enable electrochemical imaging of chemical gradients with high spatiotemporal resolution. Electrodes are arranged over a 2 mm × 2 mm surface area into 64 subarrays consisting of 128 individual Pt working electrodes as well as Pt pseudo-reference and auxiliary electrodes. Amperometric measurements of norepinephrine in tissue culture media were used to demonstrate the ability of the array to measure concentration gradients in complex media. Poly(dimethylsiloxane) microfluidics were incorporated to control the chemical concentrations in time and space, and the electrochemical response at each electrode was monitored to generate electrochemical heat maps, demonstrating the array's imaging capabilities. A temporal resolution of 10 ms can be achieved by simultaneously monitoring a single subarray of 128 electrodes. The entire 2 mm × 2 mm area can be electrochemically imaged in 64 seconds by cycling through all subarrays at a rate of 1 Hz per subarray. Monitoring diffusional transport of norepinephrine is used to demonstrate the spatiotemporal resolution capabilities of the system.

  3. A pseudo-thermodynamic description of dispersion for nanocomposites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yan; Beaucage, Gregory; Vogtt, Karsten

    Dispersion in polymer nanocomposites is determined by the kinetics of mixing and chemical affinity. Compounds like reinforcing filler/elastomer blends display some similarity to colloidal solutions in that the filler particles are close to randomly dispersed through processing. It is attractive to apply a pseudo-thermodynamic approach taking advantage of this analogy between the kinetics of mixing for polymer compounds and thermally driven dispersion for colloids. In order to demonstrate this pseudo-thermodynamic approach, two polybutadienes and one polyisoprene were milled with three carbon blacks and two silicas. These samples were examined using small-angle x-ray scattering as a function of filler concentration to determine a pseudo-second order virial coefficient, A2, which is used as an indicator for compatibility of the filler and polymer. It is found that A2 follows the expected behavior with lower values for smaller primary particles indicating that smaller particles are less compatible and more difficult to mix. The measured values of A2 can be used to specify repulsive interaction potentials for coarse grain DPD simulations of filler/elastomer systems. In addition, new methods to quantify the filler percolation threshold and filler mesh size as a function of filler concentration are obtained. Moreover, the results represent a new approach to understanding and predicting compatibility in polymer nanocomposites based on a pseudo-thermodynamic approach.

  4. A pseudo-thermodynamic description of dispersion for nanocomposites

    DOE PAGES

    Jin, Yan; Beaucage, Gregory; Vogtt, Karsten; ...

    2017-09-18

    Dispersion in polymer nanocomposites is determined by the kinetics of mixing and chemical affinity. Compounds like reinforcing filler/elastomer blends display some similarity to colloidal solutions in that the filler particles are close to randomly dispersed through processing. It is attractive to apply a pseudo-thermodynamic approach taking advantage of this analogy between the kinetics of mixing for polymer compounds and thermally driven dispersion for colloids. In order to demonstrate this pseudo-thermodynamic approach, two polybutadienes and one polyisoprene were milled with three carbon blacks and two silicas. These samples were examined using small-angle x-ray scattering as a function of filler concentration to determine a pseudo-second order virial coefficient, A2, which is used as an indicator for compatibility of the filler and polymer. It is found that A2 follows the expected behavior with lower values for smaller primary particles indicating that smaller particles are less compatible and more difficult to mix. The measured values of A2 can be used to specify repulsive interaction potentials for coarse grain DPD simulations of filler/elastomer systems. In addition, new methods to quantify the filler percolation threshold and filler mesh size as a function of filler concentration are obtained. Moreover, the results represent a new approach to understanding and predicting compatibility in polymer nanocomposites based on a pseudo-thermodynamic approach.

  5. Frequency stabilization for multilocation optical FDM networks

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Kavehrad, Mohsen

    1993-04-01

    In a multi-location optical FDM network, the frequency of each user's transmitter can be offset-locked, through a Fabry-Perot, to an absolute frequency standard which is distributed to the users. To lock the local Fabry-Perot to the frequency standard, the standard has to be frequency-dithered by a sinusoidal signal and the sinusoidal reference has to be transmitted to the user location since the lock-in amplifier in the stabilization system requires the reference for synchronous detection. We proposed two solutions to avoid transmitting the reference. One uses an extraction circuit to obtain the sinusoidal signal from the incoming signal. A nonlinear circuit following the photodiode produces a strong second-order harmonic of the sinusoidal signal and a phase-locked loop is locked to it. The sinusoidal reference is obtained by a divide-by-2 circuit. The phase ambiguity (0° or 180°) is resolved by using a selection circuit and an initial scan. The other method uses a pseudo-random sequence instead of a sinusoidal signal to dither the frequency standard and a surface-acoustic-wave (SAW) matched-filter instead of a lock-in amplifier to obtain the frequency error. The matched-filter serves as a correlator and does not require the dither reference.

  6. Feedback shift register sequences versus uniformly distributed random sequences for correlation chromatography

    NASA Technical Reports Server (NTRS)

    Kaljurand, M.; Valentin, J. R.; Shao, M.

    1996-01-01

    Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived according to the feedback shift register algorithm (i.e., pseudo-random binary sequences, PRBS) and uniformly distributed random binary sequences (URBS). These two sequence types are compared. By applying the "cleaning" data-processing technique to the correlograms that result from these sequences, we show that the S/N of the correlogram is much higher when the PRBS is used than when URBS are used.
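
    The S/N advantage of PRBS inputs rests on the near-ideal two-valued circular autocorrelation of m-sequences, which makes deconvolution of the correlogram well conditioned. A stdlib-only sketch with an illustrative degree-4 register:

```python
def msequence(taps, state, period):
    """Bits of a Fibonacci LFSR m-sequence (state is a list of bits)."""
    out = []
    for _ in range(period):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def circular_autocorr(x, lag):
    n = len(x)
    return sum(x[i] * x[(i + lag) % n] for i in range(n))

# Degree-4 LFSR with primitive feedback taps: period 2**4 - 1 = 15.
bits = msequence(taps=[0, 3], state=[1, 0, 0, 0], period=15)
seq = [1 if b else -1 for b in bits]

acf = [circular_autocorr(seq, k) for k in range(15)]
print(acf[0], acf[1:])  # 15 at zero lag, -1 at every other lag
```

    A uniformly random binary sequence of the same length has off-peak autocorrelation values that fluctuate around zero with standard deviation sqrt(N) rather than sitting flat at -1, which is the source of the extra correlogram noise.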

  7. Pseudo-random properties of a linear congruential generator investigated by b-adic diaphony

    NASA Astrophysics Data System (ADS)

    Stoev, Peter; Stoilova, Stanislava

    2017-12-01

    In this paper we continue the study of the diaphony defined in the b-adic number system and extend it in different directions. We investigate this diaphony as a tool for estimating the pseudo-random properties of some of the most widely used random number generators. This is done by evaluating the distribution of specially constructed two-dimensional nets built from the generated random numbers. The aim is to assess how suitable the generated numbers are for calculations in numerical methods such as Monte Carlo integration.
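
    The two-dimensional nets in question are formed from consecutive generator outputs. A minimal sketch with the textbook Lehmer ("minimal standard") LCG, an illustrative choice rather than the specific generator studied, together with a crude uniformity check in place of the diaphony:

```python
def lcg(seed, n, a=16807, m=2**31 - 1):
    """Minimal-standard Lehmer LCG returning n values in [0, 1)."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m
        out.append(x / m)
    return out

# Two-dimensional net from non-overlapping consecutive output pairs.
u = lcg(seed=12345, n=20_000)
pts = list(zip(u[0::2], u[1::2]))

# Crude uniformity check: occupancy of a 4x4 grid of cells should be
# close to len(pts) / 16 for a well-distributed net.
cells = [0] * 16
for x, y in pts:
    cells[4 * int(4 * x) + int(4 * y)] += 1
print(min(cells), max(cells))  # both near 10000 / 16 = 625
```

    The b-adic diaphony plays the role of this cell-count check in a much more refined way: it is a single number measuring how far the empirical net deviates from the uniform distribution at all b-adic resolutions simultaneously.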

  8. Polypyrrole/titanium oxide nanotube arrays composites as an active material for supercapacitors.

    PubMed

    Kim, Min Seok; Park, Jong Hyeok

    2011-05-01

    The authors present the first reported use of vertically oriented titanium oxide nanotube/polypyrrole (PPy) nanocomposites to increase the specific capacitance of TiO2 based energy storage devices. To increase their electrical storage capacity, titanium oxide nanotubes were coated with PPy and their morphologies were characterized. The incorporation of PPy increased the specific capacitance of the titanium oxide nanotube based supercapacitor system, due to their increased surface area and additional pseudo-capacitance.

  9. Visual Evoked Cortical Potential (VECP) Elicited by Sinusoidal Gratings Controlled by Pseudo-Random Stimulation

    PubMed Central

    Araújo, Carolina S.; Souza, Givago S.; Gomes, Bruno D.; Silveira, Luiz Carlos L.

    2013-01-01

    The contributions of contrast detection mechanisms to the visual evoked cortical potential (VECP) have been investigated by studying the contrast-response and spatial frequency-response functions. Previously, the use of m-sequences for stimulus control has been almost restricted to multifocal electrophysiology stimulation, which in some aspects differs substantially from conventional VECPs. Single stimulation with spatial contrast temporally controlled by m-sequences has not been extensively tested or compared to multifocal techniques. Our purpose was to evaluate the influence of spatial frequency and contrast of sinusoidal gratings on the VECP elicited by pseudo-random stimulation. Nine normal subjects were stimulated by achromatic sinusoidal gratings driven by a pseudo-random binary m-sequence at seven spatial frequencies (0.4–10 cpd) and three stimulus sizes (4°, 8°, and 16° of visual angle). At 8° subtense, six contrast levels were used (3.12–99%). The first-order kernel (K1) did not provide a consistent measurable signal across the spatial frequencies and contrasts that were tested (the signal was very small or absent), while the second-order kernel first (K2.1) and second (K2.2) slices exhibited reliable responses over the stimulus range. The main differences between results obtained with K2.1 and K2.2 were in the contrast gain as measured in the amplitude versus contrast and amplitude versus spatial frequency functions. The results indicated that K2.1 was dominated by the M-pathway, although for some stimulus conditions some P-pathway contribution could be found, while the second slice reflected the P-pathway contribution. The present work extends previous findings on the visual pathway contributions to the VECP elicited by pseudo-random stimulation to a wider range of spatial frequencies. PMID:23940546

  10. Long period pseudo random number sequence generator

    NASA Technical Reports Server (NTRS)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo-random numbers, (A sub K). There is an exponentiator in GF(2 sup m) for the normal basis representation of elements in a finite field GF(2 sup m), each represented by m binary digits, having two inputs and an output from which the sequence (A sub K) of pseudo-random numbers is taken. One of the two inputs is connected to receive the outputs (E sub K) of a maximal-length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A sub 0) in GF(2 sup m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs, and the delay circuit input is connected to the output of the exponentiator. After the exponentiator initially receives the primitive element (A sub 0) in GF(2 sup m) through the switch, the switch can be switched to cause the exponentiator to receive as its input the delayed output A(K-1) from the exponentiator, thereby generating (A sub K) continuously at the output of the exponentiator. The exponentiator in GF(2 sup m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to compute the partial products U(sub i) = A(sup 2(sup i)) (for n(sub i) = 1) or 1 (for n(sub i) = 0).
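
    The recursion A(K) = A(K-1)^(E sub K) can be sketched in software. The code below is a loose illustration, not the patented circuit: it works in a polynomial-basis GF(2^4) with p(x) = x^4 + x + 1 rather than a normal basis with a Massey-Omura multiplier, and it draws the exponents E_K from repeated multiplication by the primitive element x, a stand-in for the maximal-length shift register:

```python
M = 4
POLY = 0b10011  # p(x) = x^4 + x + 1, primitive over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication of field elements, reduced modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def gf_pow(a, e):
    """Square-and-multiply: the product of U_i = a^(2^i) over the set bits
    n_i of e (in a normal basis, each squaring is a mere cyclic shift)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# A_0 = x (the primitive element); E_K = x^K as a 4-bit integer, never zero.
a, e, seq = 0b0010, 1, []
for _ in range(10):
    e = gf_mul(e, 0b0010)
    a = gf_pow(a, e)
    seq.append(a)
```

    Because the field has no zero divisors and the exponents are always nonzero, every generated A_K remains a nonzero element of GF(2^4).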

  11. The data preprocessing in apparent resistivity pseudo-section construction of two-dimensional electrical resistivity tomography survey

    NASA Astrophysics Data System (ADS)

    Zhou, Q.

    2015-12-01

    Although three-dimensional (3-D) electrical resistivity tomography (ERT) surveys have become popular for site characterization and process monitoring, two-dimensional (2-D) ERT surveys are still often used in the field. This is because a 2-D ERT survey is relatively easy to carry out, and the focus of site characterization is often on a 2-D cross section rather than on the full 3-D subsurface structure. Examples of such practice include tunnel-line and fault-crossing surveys. In these cases, depending on the properties of the surface soil, a 2-D ERT survey with a pole-pole array may occasionally yield good-quality data, but it often produces a data set that mixes valid measurements with erroneous ones affected by electrode contact and by far electrodes that are not placed far enough away. Without preprocessing, the apparent resistivity pseudo-section constructed from such a data set may deviate considerably from the real one, and the information obtained from it may be misleading or even completely incorrect. In this study, we developed a far-electrode dynamic correction method for preprocessing raw data from 2-D pole-pole ERT surveys. With this method, we can not only find and delete abnormal data points easily, but also locate the coordinates of the far electrodes actually working in the field, thereby removing far-electrode effects and making the best use of apparently anomalous data points. The method also allows us to judge the effects of electrode contact and to avoid using the affected data points in the subsequent apparent resistivity pseudo-section construction. With this preprocessing, the constructed apparent resistivity pseudo-section is demonstrated to be closer to the real one, which makes the subsequent inversion more robust. We will introduce this far-electrode dynamic correction method and show application examples in the meeting.

  12. Pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Wu, Shaochuan; Tan, Xuezhi

    2007-11-01

    By analyzing various kinds of address configuration algorithms, this paper proposes a new pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks. In PRDAC, the first node that initializes the network randomly chooses a nonlinear shift register that generates an m-sequence. When another node joins the network, the initial node acts as an IP address configuration server: it computes an IP address from this shift register, allocates the address, and communicates the generator polynomial of the shift register to the new node. In this way, any node that has already obtained an IP address can act as a server and allocate addresses to subsequently joining nodes. PRDAC can also efficiently avoid IP conflicts and handle network partition and merging, as prophet address (PA) allocation and the dynamic configuration and distribution protocol (DCDP) do. Furthermore, PRDAC has lower algorithmic complexity, lower computational complexity and weaker assumptions than PA. In addition, PRDAC radically avoids address conflicts and maximizes the utilization of the IP address space. Analysis and simulation results show that PRDAC converges rapidly, has low overhead and is immune to topological structure.
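
    The conflict-free property rests on a standard fact about maximal-length shift registers: over one period, the register state visits every nonzero n-bit value exactly once. A toy sketch of this idea follows (the addressing scheme and the 4-bit register are illustrative assumptions, not the protocol's actual parameters):

```python
def lfsr_states(n=4, seed=0b1111):
    """Successive states of a 4-stage maximal-length shift register
    (feedback polynomial x^4 + x + 1); each nonzero state appears once."""
    state = seed
    for _ in range((1 << n) - 1):
        yield state
        fb = (state & 1) ^ ((state >> 1) & 1)
        state = (state >> 1) | (fb << (n - 1))

# Hypothetical server: hand each joining node the next register state as a
# host address; the uniqueness of the states rules out address conflicts.
addresses = ["10.0.0.%d" % s for s in lfsr_states()]
print(len(addresses), len(set(addresses)))  # → 15 15
```

    The full period of 2^n - 1 distinct states is also what maximizes the utilization of the address space: no address is skipped and none is handed out twice.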

  13. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

    Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so called CR-Calculus and the adaptive array theory. With this approach it is possible to better control the process performances allowing the step-by-step Jacobian Matrix update. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.

  14. Replica amplification of nucleic acid arrays

    DOEpatents

    Church, George M.

    2002-01-01

    A method of producing a plurality of a nucleic acid array, comprising, in order, the steps of amplifying in situ nucleic acid molecules of a first randomly-patterned, immobilized nucleic acid array comprising a heterogeneous pool of nucleic acid molecules affixed to a support, transferring at least a subset of the nucleic acid molecules produced by such amplifying to a second support, and affixing the subset so transferred to the second support to form a second randomly-patterned, immobilized nucleic acid array, wherein the nucleic acid molecules of the second array occupy positions that correspond to those of the nucleic acid molecules from which they were amplified on the first array, so that the first array serves as a template to produce a plurality, is disclosed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, Jean-Luc, E-mail: jlvay@lbl.gov; Haber, Irving; Godfrey, Brendan B.

    Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates analytically the solution over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty for efficient parallelization owing to global communications associated with global FFTs on the entire computational domains. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of the wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell’s equations and the finite speed of light for limiting the communications of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.

  16. A pseudo differential Gm-C complex filter with frequency tuning for IEEE 802.15.4 applications

    NASA Astrophysics Data System (ADS)

    Xin, Cheng; Lungui, Zhong; Haigang, Yang; Fei, Liu; Tongqiang, Gao

    2011-07-01

    This paper presents a CMOS Gm-C complex filter for a low-IF receiver of the IEEE 802.15.4 standard. A pseudo-differential OTA with reconfigurable common-mode feedback and common-mode feed-forward is proposed, as well as a frequency tuning method based on a relaxation oscillator. A detailed analysis of the non-ideality of the OTA and of the frequency tuning method is elaborated. The analysis and measurement results show that the center frequency of the complex filter can be tuned accurately. The chip was fabricated in a standard 0.35 μm CMOS process with a single 3.3 V power supply. The filter consumes 2.1 mA, has a measured in-band group delay ripple of less than 0.16 μs and an IRR larger than 28 dB at 2 MHz apart, which meets the requirements of the IEEE 802.15.4 standard.

  17. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.

  18. Pseudo Random Stimulus Response of Combustion Systems.

    DTIC Science & Technology

    1980-01-01

    is also applicable to the coalescence/dispersion (C/D) micromixing model. In the C/D model, micromixing is simulated by considering the reacting...the turbulent fluctuations on the local heat release rate. Thus the micromixing 'noise' measurements will not be valid; however, deductions

  19. Variable word length encoder reduces TV bandwidth requirements

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.

    1965-01-01

    Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.

  20. Even and odd normalized zero modes in random interacting Majorana models respecting the parity P and the time-reversal-symmetry T

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile

    2018-06-01

    For random interacting Majorana models where the only symmetries are the parity P and the time-reversal-symmetry T, various approaches are compared to construct exact even and odd normalized zero modes Γ in finite size, i.e. Hermitian operators that commute with the Hamiltonian, that square to the identity, and that commute (even) or anticommute (odd) with the parity P. Even normalized zero-modes are well known under the name of ‘pseudo-spins’ in the field of many-body-localization or more precisely ‘local integrals of motion’ (LIOMs) in the many-body-localized-phase where the pseudo-spins happen to be spatially localized. Odd normalized zero-modes are popular under the name of ‘Majorana zero modes’ or ‘strong zero modes’. Explicit examples for small systems are described in detail. Applications to real-space renormalization procedures based on blocks containing an odd number of Majorana fermions are also discussed.

  1. Random network model of electrical conduction in two-phase rock

    NASA Astrophysics Data System (ADS)

    Fuji-ta, Kiyoshi; Seki, Masayuki; Ichiki, Masahiro

    2018-05-01

    We developed a cell-type lattice model to clarify the interconnected conductivity mechanism of two-phase rock. We quantified electrical conduction networks in rock and evaluated electrical conductivity models of the two-phase interaction. Considering the existence ratio of conductive and resistive cells in the model, we generated natural matrix cells simulating a natural mineral distribution pattern, using Mersenne Twister random numbers. The most important and prominent feature of the model simulation is a drastic increase in the pseudo-conductivity index for conductor ratio R > 0.22. This index in the model increased from 10^-4 to 10^0 between R = 0.22 and 0.9, a change of four orders of magnitude. We compared our model responses with results from previous model studies. Although the pseudo-conductivity computed by the model differs slightly from that of the previous model, model responses can account for the conductivity change. Our modeling is thus effective for quantitatively estimating the degree of interconnection of rock and minerals.
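
    The simulation idea can be sketched with a standard site-percolation toy model. This is an illustration only: the authors' cell-type lattice and its connectivity rules, which produce the onset near R = 0.22, are not reproduced here, and the lattice size and trial count are arbitrary. Python's `random` module itself uses a Mersenne Twister:

```python
import random

def spanning_fraction(R, size=24, trials=40, seed=2018):
    """Fraction of random size x size lattices, with conductor probability R,
    in which a conductive path connects the top row to the bottom row
    (4-neighbour flood fill). Python's random module is a Mersenne Twister."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < R for _ in range(size)] for _ in range(size)]
        # flood fill from every conductive cell in the top row
        stack = [(0, j) for j in range(size) if grid[0][j]]
        seen = set(stack)
        spanned = False
        while stack:
            i, j = stack.pop()
            if i == size - 1:
                spanned = True
                break
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < size and 0 <= nj < size
                        and grid[ni][nj] and (ni, nj) not in seen):
                    seen.add((ni, nj))
                    stack.append((ni, nj))
        hits += spanned
    return hits / trials
```

    Sweeping R from low to high values with this function reproduces the qualitative behaviour of the paper: essentially no interconnection at small conductor ratios, and near-certain interconnection well above the threshold.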

  2. First international two-way satellite time and frequency transfer experiment employing dual pseudo-random noise codes.

    PubMed

    Tseng, Wen-Hung; Huang, Yi-Jiun; Gotoh, Tadahiro; Hobiger, Thomas; Fujieda, Miho; Aida, Masanori; Li, Tingyu; Lin, Shinn-Yan; Lin, Huang-Tien; Feng, Kai-Ming

    2012-03-01

    Two-way satellite time and frequency transfer (TWSTFT) is one of the main techniques used to compare atomic time scales over long distances. To both improve the precision of TWSTFT and decrease the satellite link fee, a new software-defined modem with dual pseudo-random noise (DPN) codes has been developed. In this paper, we demonstrate the first international DPN-based TWSTFT experiment over a period of 6 months. The results of DPN exhibit excellent performance, which is competitive with the Global Positioning System (GPS) precise point positioning (PPP) technique in the short-term and consistent with the conventional TWSTFT in the long-term. Time deviations of less than 75 ps are achieved for averaging times from 1 s to 1 d. Moreover, the DPN data has less diurnal variation than that of the conventional TWSTFT. Because the DPN-based system has advantages of higher precision and lower bandwidth cost, it is one of the most promising methods to improve international time-transfer links.

  3. Night-to-Night Sleep Variability in Older Adults With Chronic Insomnia: Mediators and Moderators in a Randomized Controlled Trial of Brief Behavioral Therapy (BBT-I)

    PubMed Central

    Chan, Wai Sze; Williams, Jacob; Dautovich, Natalie D.; McNamara, Joseph P.H.; Stripling, Ashley; Dzierzewski, Joseph M.; Berry, Richard B.; McCoy, Karin J.M.; McCrae, Christina S.

    2017-01-01

    Study Objectives: Sleep variability is a clinically significant variable in understanding and treating insomnia in older adults. The current study examined changes in sleep variability in the course of brief behavioral therapy for insomnia (BBT-I) in older adults who had chronic insomnia. Additionally, the current study examined the mediating mechanisms underlying reductions of sleep variability and the moderating effects of baseline sleep variability on treatment responsiveness. Methods: Sixty-two elderly participants were randomly assigned to either BBT-I or self-monitoring and attention control (SMAC). Sleep was assessed by sleep diaries and actigraphy from baseline to posttreatment and at 3-month follow-up. Mixed models were used to examine changes in sleep variability (within-person standard deviations of weekly sleep parameters) and the hypothesized mediation and moderation effects. Results: Variabilities in sleep diary-assessed sleep onset latency (SOL) and actigraphy-assessed total sleep time (TST) significantly decreased in BBT-I compared to SMAC (Pseudo R2 = .12, .27; P = .018, .008). These effects were mediated by reductions in bedtime and wake time variability and time in bed. Significant time × group × baseline sleep variability interactions on sleep outcomes indicated that participants who had higher baseline sleep variability were more responsive to BBT-I; their actigraphy-assessed TST, SOL, and sleep efficiency improved to a greater degree (Pseudo R2 = .15 to .66; P < .001 to .044). Conclusions: BBT-I is effective in reducing sleep variability in older adults who have chronic insomnia. Increased consistency in bedtime and wake time and decreased time in bed mediate reductions of sleep variability. Baseline sleep variability may serve as a marker of high treatment responsiveness to BBT-I. 
Clinical Trial Registration: ClinicalTrials.gov, Identifier: NCT02967185 Citation: Chan WS, Williams J, Dautovich ND, McNamara JP, Stripling A, Dzierzewski JM, Berry RB, McCoy KJ, McCrae CS. Night-to-night sleep variability in older adults with chronic insomnia: mediators and moderators in a randomized controlled trial of brief behavioral therapy (BBT-I). J Clin Sleep Med. 2017;13(11):1243–1254. PMID:28992829

  4. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
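
    One of the simplest randomness-extraction algorithms of the kind mentioned above is the von Neumann extractor, sketched below. This is a generic illustration under assumed conditions: the biased source here is simulated, not measurements from a resonant tunnelling diode.

```python
import random

def von_neumann_extract(bits):
    """Debias a raw bit stream: for each non-overlapping pair, emit the first
    bit when the pair is 01 or 10 and discard 00 and 11. The output is
    unbiased whenever the raw bits are independent and identically biased."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

rng = random.Random(7)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(10000)]  # ~80% ones
out = von_neumann_extract(raw)
print(sum(raw) / len(raw), sum(out) / len(out))  # heavily biased vs ~0.5
```

    The price of the debiasing is throughput: for bias p, only 2p(1-p) of the input pairs yield an output bit, which is one reason hardware sources aim to be as close to unbiased as possible before extraction.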

  5. Flight parameter estimation using instantaneous frequency and time delay measurements from a three-element planar acoustic array.

    PubMed

    Lo, Kam W

    2016-05-01

    The acoustic signal emitted by a turbo-prop aircraft consists of a strong narrowband tone superimposed on a broadband random component. A ground-based three-element planar acoustic array can be used to estimate the full set of flight parameters of a turbo-prop aircraft in transit by measuring the time delay (TD) between the signal received at the reference sensor and the signal received at each of the other two sensors of the array over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the reference sensor to improve the precision of the flight parameter estimates. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the aircraft velocity and altitude can be greatly reduced when IF measurements are used together with TD measurements. Two flight parameter estimation algorithms that utilize both IF and TD measurements are formulated and their performances are evaluated using both simulated and real data.

  6. Sound propagation in a monodisperse bubble cloud: from the crystal to the glass.

    PubMed

    Devaud, M; Hocquet, T; Leroy, V

    2010-05-01

    We present a theoretical study of the propagation of a monochromatic pressure wave in an unbounded monodisperse bubbly liquid. We begin with the case of a regular bubble array--a bubble crystal--for which we derive a dispersion relation. In order to interpret the different branches of this relation, we introduce a formalism, the radiative picture, which is the adaptation to acoustics of the standard splitting of the electric field in an electrostatic and a radiative part in Coulomb gauge. In the case of an irregular or completely random array--a bubble glass--and at wavelengths large compared to the size of the bubble array spatial inhomogeneities, the difference between order and disorder is not felt by the pressure wave: a dispersion relation still holds, coinciding with that of a bubble crystal with the same bubble size and air volume fraction at the centre of its first Brillouin zone. This relation is discussed and compared to that obtained by Foldy in the framework of his multiscattering approach.

  7. Synthetic resistivity calculations for the canonical depth-to-bedrock problem: A critical examination of the thin interbed problem and electrical equivalence theories

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Knight, R.

    2009-05-01

    One of the key factors in the sensible inference of subsurface geologic properties from both field and laboratory experiments is the ability to quantify the linkages between the inherently fine-scale structures, such as bedding planes and fracture sets, and their macroscopic expression through geophysical interrogation. Central to this idea is the concept of a "minimal sampling volume" over which a given geophysical method responds to an effective medium property whose value is dictated by the geometry and distribution of sub-volume heterogeneities as well as the experiment design. In this contribution we explore the concept of effective resistivity volumes for the canonical depth-to-bedrock problem subject to industry-standard DC resistivity survey designs. Four models representing a sedimentary overburden and flat bedrock interface were analyzed through numerical experiments of six different resistivity arrays. In each of the four models, the sedimentary overburden consists of thinly interbedded resistive and conductive laminations, with equivalent volume-averaged resistivity but differing lamination thickness, geometry, and layering sequence. The numerical experiments show striking differences in the apparent resistivity pseudo-sections which belie the volume-averaged equivalence of the models. These models constitute the synthetic data set offered for inversion in this Back to Basics Resistivity Modeling session and offer the promise to further our understanding of how the sampling volume, as affected by survey design, can be constrained by joint-array inversion of resistivity data.

  8. A microlens-array based pupil slicer and double scrambler for MAROON-X

    NASA Astrophysics Data System (ADS)

    Seifahrt, Andreas; Stürmer, Julian; Bean, Jacob L.

    2016-07-01

    We report on the design and construction of a microlens-array (MLA)-based pupil slicer and double scrambler for MAROON-X, a new fiber-fed, red-optical, high-precision radial-velocity spectrograph for one of the twin 6.5 m Magellan Telescopes in Chile. We have constructed a 3x slicer based on a single cylindrical MLA and show that geometric efficiencies of >=85% can be achieved, limited by the fill factor and optical surface quality of the MLA. We present here the final design of the 3x pupil slicer and double scrambler for MAROON-X, based on a dual MLA design with (a)spherical lenslets. We also discuss the techniques used to create a pseudo-slit of rectangular core fibers with low FRD levels.

  9. Predictors of outcome from computer-based treatment for substance use disorders: Results from a randomized clinical trial.

    PubMed

    Kim, Sunny Jung; Marsch, Lisa A; Guarino, Honoria; Acosta, Michelle C; Aponte-Melendez, Yesenia

    2015-12-01

    Although empirical evidence for the effectiveness of technology-mediated interventions for substance use disorders is rapidly growing, the role of baseline characteristics of patients in predicting treatment outcomes of a technology-based therapy is largely unknown. Participants were randomly assigned to either standard methadone maintenance treatment or reduced standard treatment combined with the computer-based therapeutic education system (TES). An array of demographic and behavioral characteristics of participants (N=160) was measured at baseline. Opioid abstinence and treatment retention were measured weekly for a 52-week intervention period. Generalized linear models and Cox regression were used to estimate the predictive roles of baseline characteristics in predicting treatment outcomes. We found significant predictors of opioid abstinence and treatment retention within and across conditions. Among 21 baseline characteristics of participants, employment status, anxiety, and ambivalent attitudes toward substance use predicted better opioid abstinence in the reduced-standard-plus-TES condition compared to standard treatment. Participants who had used cocaine/crack in the past 30 days at baseline showed lower dropout rates in standard treatment, whereas those who had not used exhibited lower dropout rates in the reduced-standard-plus-TES condition. This study is the first randomized controlled trial to evaluate, over a 12-month period, how various aspects of participant characteristics impact outcomes for treatments that do or do not include technology-based therapy. Compared to standard treatment alone, including TES as part of the care was preferable for patients who were employed, highly anxious, and ambivalent about substance use, and did not produce worse outcomes for any subgroup of participants. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Development of compact integral field unit for spaceborne solar spectro-polarimeter

    NASA Astrophysics Data System (ADS)

    Suematsu, Y.; Koyama, M.; Sukegawa, T.; Enokida, Y.; Saito, K.; Okura, Y.; Nakayasu, T.; Ozaki, S.; Tsuneta, S.

    2017-11-01

    A 1.5-m class aperture Solar Ultra-violet Visible and IR telescope (SUVIT) and its instruments for the Japanese next space solar mission SOLAR-C [1] are under study to obtain critical physical parameters in the lower solar atmosphere. For precise magnetic field measurements covering a field of view of 3 arcmin x 3 arcmin, full Stokes polarimetry at three magnetically sensitive lines in the wavelength range of 525 nm to 1083 nm is proposed, using a four-slit spectrograph with a two-dimensional image scanning mechanism: one slit is a true slit and the other three are pseudo-slits from an integral field unit (IFU). To suit this configuration, besides a fiber-bundle IFU, a compact mirror slicer IFU is designed and being developed. Integral field spectroscopy (IFS), which is realized with an IFU, is a two-dimensional spectroscopy, providing spectra simultaneously for each spatial direction of an extended two-dimensional field. The scientific advantages of the IFS for studies of localized and transient solar surface phenomena are obvious. There are in general three methods [2][3] to realize the IFS, depending on the image slicing device: a micro-lenslet array, an optical fiber bundle, or a narrow rectangular image slicer array. So far, there are many applications of the IFS in ground-based astronomical observations [4]. Regarding solar instrumentation, IFS with a micro-lenslet array was demonstrated by Suematsu et al. [5]; IFS with a densely packed rectangular fiber bundle with thin clads was realized [6], is being developed for the 4-m aperture solar telescope DKIST by Lin [7], and is being considered for the space solar telescope SOLAR-C by Katsukawa et al. [8]; and IFS with a mirror slicer array was presented by Ren et al. [9] and is under study for an up-coming large-aperture solar telescope in Europe by Calcines et al. [10]
    From the viewpoint of high-efficiency spectroscopy, wide wavelength coverage, precision spectropolarimetry and space application, an image slicer consisting of all-reflective optics is the best option among the three. However, image slicers are presently limited either by their risk in the case of classical glass polishing techniques (see Vivès et al. [11] for recent developments) or by their optical performance when constituted by metallic mirrors. For space instruments, small-sized units are much advantageous, which demands that the width of each slicer mirror be as narrow as an optimal slit width (< 100 micron) of the spectrograph, which is usually hard to manufacture with glass polishing techniques. On the other hand, Canon is developing a novel technique, for products such as high-performance gratings, that is applicable to manufacturing high-optical-performance metallic mirrors of small dimensions. For the space-borne spectrograph of SUVIT to be aboard SOLAR-C, we designed an IFS made of a micro image slicer of 45 arrayed 30-micron-thick metal mirrors and a pseudo-pupil metal mirror array re-formatting three pseudo-slits; the design is feasible for an optical configuration sharing a spectrograph with a conventional real slit. According to the optical design, Canon manufactured a prototype IFU for evaluation, demonstrating high performance of the micro image slicer and pupil mirrors: sufficiently small micro-roughness for visible-light spectrographs, sharp edges for efficient image slicing, surface figure for high image quality, etc. In the following, we describe the optical design of the IFU feasible for a space-borne spectrograph, the manufacturing method developed by Canon to attain high optical performance of the metal mirrors, and the resulting performance of the prototype IFU in detail.

  11. Scope of Various Random Number Generators in Ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are tested in a heuristic based on an ant system approach for the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is just to seek an answer to the controversial performance ranking of the generators in a probabilistic/statistical sense.
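To make the comparison concrete, here is a minimal sketch (my illustration, not the paper's code) of how a quasi-random (Halton/van der Corput) versus pseudo-random (Mersenne Twister) generator can drive the probabilistic city-selection step of an ant system; the weights stand in for pheromone-times-visibility products:

```python
import random

def halton(index, base=2):
    """Van der Corput/Halton low-discrepancy value for a given index (quasi-random)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def choose_next_city(weights, u):
    """Roulette-wheel selection used in ant systems: pick index j with
    probability weights[j]/sum(weights), driven by a uniform variate u."""
    total = sum(weights)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w / total
        if u < acc:
            return j
    return len(weights) - 1

# Pheromone-times-visibility weights for three candidate cities (illustrative numbers).
weights = [0.5, 0.3, 0.2]

rng = random.Random(42)                      # pseudo-random (Mersenne Twister)
picks_pseudo = [choose_next_city(weights, rng.random()) for _ in range(1000)]
picks_quasi = [choose_next_city(weights, halton(i + 1)) for i in range(1000)]
```

Both streams should reproduce the target selection frequencies; the quasi-random stream does so with lower discrepancy, which is exactly the property whose effect on tour quality the study ranks.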

  12. Multifunctional Architectures Constructing of PANI Nanoneedle Arrays on MoS2 Thin Nanosheets for High-Energy Supercapacitors.

    PubMed

    Zhu, Jixin; Sun, Wenping; Yang, Dan; Zhang, Yu; Hoon, Hng Huey; Zhang, Hua; Yan, Qingyu

    2015-09-02

    Multifunctional MoS2@PANI (polyaniline) pseudo-supercapacitor electrodes consisting of MoS2 thin nanosheets and PANI nanoarrays are fabricated via a large-scale approach. A superior capacitance retention of up to 91% is achieved after 4000 cycles, and a high energy density of 106 Wh kg(-1) is delivered at a power density of 106 kW kg(-1). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. GOES-R Proving Ground Activities at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew

    2011-01-01

    SPoRT is actively involved in GOES-R Proving Ground activities in a number of ways: (1) Applying the paradigm of product development, user training, and interaction to foster engagement with end users at NOAA forecast offices and national centers. (2) Providing unique capabilities in collaboration with other GOES-R Proving Ground partners: (a) hybrid GOES-MODIS imagery; (b) pseudo-GLM via regional lightning mapping arrays; (c) developing new RGB imagery from EUMETSAT guidelines.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hauer, John F.; Mittelstadt, William; Martin, Kenneth E.

    During 2005 and 2006 the Western Electricity Coordinating Council (WECC) performed three major tests of western system dynamics. These tests used a Wide Area Measurement System (WAMS) based primarily on Phasor Measurement Units (PMUs) to determine response to events including the insertion of the 1400-MW Chief Joseph braking resistor, probing signals, and ambient events. Test security was reinforced through real-time analysis of wide area effects, and high-quality data provided dynamic profiles for interarea modes across the entire western interconnection. The tests established that low-level optimized pseudo-random ±20-MW probing with the Pacific DC Intertie (PDCI) roughly doubles the apparent noise that is natural to the power system, providing sharp dynamic information with negligible interference to system operations. Such probing is an effective alternative to use of the 1400-MW Chief Joseph dynamic brake, and it is under consideration as a standard means for assessing dynamic security.
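The ±20-MW probing signal described is a pseudo-random binary sequence. As an illustrative sketch (not WECC's actual implementation), a maximal-length PRBS can be generated with a linear-feedback shift register; the PRBS7 below uses the standard polynomial x^7 + x^6 + 1:

```python
def prbs7(nbits, state=0x7F):
    """Maximal-length PRBS7 (polynomial x^7 + x^6 + 1): period 2**7 - 1 = 127,
    with 64 ones and 63 zeros per period."""
    out = []
    for _ in range(nbits):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # feedback from taps 7 and 6
        state = ((state << 1) | bit) & 0x7F       # shift left, inject feedback
        out.append(bit)
    return out

seq = prbs7(127)
# A +/-20 MW probing waveform maps bits to power injections of opposite sign:
probe = [20 if b else -20 for b in seq]
```

The near-zero mean and flat spectrum of such a sequence are what let the probing raise the apparent system noise only modestly while still exciting the interarea modes.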

  15. Hopping transport through an array of Luttinger liquid stubs

    NASA Astrophysics Data System (ADS)

    Chudnovskiy, A. L.

    2004-01-01

    We consider thermally activated transport across an array of parallel one-dimensional quantum wires of finite length (quantum stubs). The disorder enters as a random tunneling between the nearest-neighbor stubs as well as a random shift of the bottom of the energy band in each stub. Whereas one-particle wave functions are localized across the array, the plasmons are delocalized, which affects the variable-range hopping. A perturbative analytical expression for the low-temperature resistance across the array is obtained for a particular choice of plasmon dispersion.

  16. Application of global positioning system to determination of tectonic plate movements and crustal deformations

    NASA Technical Reports Server (NTRS)

    Anderle, R. J.

    1978-01-01

    It is shown that pseudo-range measurements to four GPS satellites based on correlation of the pseudo random code transmissions from the satellites can be used to determine the relative position of ground stations which are separated by several hundred kilometers to a precision at the centimeter level. Carrier signal measurements during the course of passage of satellites over a pair of stations also yield centimeter precision in the relative position, but oscillator instabilities limit the accuracy. The accuracy of solutions based on either type of data is limited by unmodeled tropospheric refraction effects which would reach 5 centimeters at low elevation angles for widely separated stations.
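The positioning principle described, four pseudo-ranges determining three coordinates plus a receiver clock bias, can be sketched with a small Gauss-Newton solver (the satellite geometry and numbers below are illustrative, not from the paper):

```python
import math

def gauss_solve(A, b):
    """Solve a small linear system by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def solve_position(sats, rhos, guess=(0.0, 0.0, 0.0, 0.0), iters=10):
    """Gauss-Newton for receiver (x, y, z) and clock-bias range b from
    four pseudo-ranges rho_i = |sat_i - p| + b."""
    x, y, z, b = guess
    for _ in range(iters):
        J, r = [], []
        for (sx, sy, sz), rho in zip(sats, rhos):
            d = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
            J.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0])
            r.append(d + b - rho)
        dx = gauss_solve(J, [-ri for ri in r])
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b

# Synthetic example: satellites ~20,000 km away, receiver near the origin,
# clock bias equivalent to 300 m of range.
sats = [(20000e3, 0, 0), (0, 20000e3, 0), (0, 0, 20000e3),
        (12000e3, 12000e3, 12000e3)]
truth = (1000.0, -2000.0, 500.0, 300.0)
rhos = [math.sqrt((truth[0] - sx) ** 2 + (truth[1] - sy) ** 2
                  + (truth[2] - sz) ** 2) + truth[3] for sx, sy, sz in sats]
est = solve_position(sats, rhos)
```

With noise-free pseudo-ranges the solver recovers position and clock bias essentially exactly; in the paper's setting the residual error budget is instead dominated by unmodeled tropospheric refraction.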

  17. The statistics of Pearce element diagrams and the Chayes closure problem

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1988-05-01

    Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones; they are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near-zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array.
Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.
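The closure problem motivating the null population of random arrays can be demonstrated directly: ratios formed from independent random "elements" that share a denominator are spuriously correlated. A Monte Carlo sketch (my illustration of the effect, not the paper's Chayes or Thompson construction):

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

rng = random.Random(1)
corrs = []
for _ in range(200):
    # Open arrays: three mutually independent "elements" X, Y, Z for 30 samples.
    X = [rng.gauss(10, 1) for _ in range(30)]
    Y = [rng.gauss(10, 1) for _ in range(30)]
    Z = [rng.gauss(10, 1) for _ in range(30)]
    # Ratios sharing the denominator Z correlate even though X and Y are independent.
    corrs.append(pearson([x / z for x, z in zip(X, Z)],
                         [y / z for y, z in zip(Y, Z)]))
mean_corr = statistics.fmean(corrs)   # substantially above zero
```

This induced correlation is why significance must be judged against a properly constructed random population of closed arrays rather than against the usual null of zero correlation.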

  18. Computational Models for Belief Revision, Group Decision-Making and Cultural Shifts

    DTIC Science & Technology

    2010-10-25

    "social" networks; the green numbers are pseudo-trees or artificial (non-social) constructions. The dashed blue line indicates the range of Erdos-Renyi ... non-social networks such as Erdos-Renyi random graphs or the more passive non-cognitive spreading of disease or information flow, as mentioned

  19. Neurobehavioral testing in subarachnoid hemorrhage: A review of methods and current findings in rodents.

    PubMed

    Turan, Nefize; Miller, Brandon A; Heider, Robert A; Nadeem, Maheen; Sayeed, Iqbal; Stein, Donald G; Pradilla, Gustavo

    2017-11-01

    The most important aspect of a preclinical study seeking to develop a novel therapy for neurological diseases is whether the therapy produces any clinically relevant functional recovery. For this purpose, neurobehavioral tests are commonly used to evaluate the neuroprotective efficacy of treatments in a wide array of cerebrovascular diseases and neurotrauma. Their use, however, has been limited in experimental subarachnoid hemorrhage studies. After several randomized, double-blinded, controlled clinical trials repeatedly failed to produce a benefit in functional outcome despite some improvement in angiographic vasospasm, more rigorous methods of neurobehavioral testing became critical to provide a more comprehensive evaluation of the functional efficacy of proposed treatments. While several subarachnoid hemorrhage studies have incorporated an array of neurobehavioral assays, a standardized methodology has not been agreed upon. Here, we review neurobehavioral tests for rodents and their potential application to subarachnoid hemorrhage studies. Developing a standardized neurobehavioral testing regimen in rodent studies of subarachnoid hemorrhage would allow for better comparison of results between laboratories and a better prediction of what interventions would produce functional benefits in humans.

  20. A random variance model for detection of differential gene expression in small microarray experiments.

    PubMed

    Wright, George W; Simon, Richard M

    2003-12-12

    Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene by gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
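The core idea, blending each gene's variance estimate with a prior variance estimated across all genes, can be sketched as follows. The simple degrees-of-freedom weighting below is an illustrative empirical-Bayes form of such a "moderated" test statistic, not the paper's exact inverse-gamma estimator:

```python
import math

def moderated_t(group1, group2, s0_sq, d0):
    """Two-sample t with an empirical-Bayes-shrunk variance: the pooled
    gene variance s^2 (d residual degrees of freedom) is blended with a
    prior variance s0^2 carrying d0 pseudo-degrees of freedom estimated
    across all genes."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    ss = sum((x - m1) ** 2 for x in group1) + sum((x - m2) ** 2 for x in group2)
    d = n1 + n2 - 2
    s_sq = ss / d
    s_tilde_sq = (d * s_sq + d0 * s0_sq) / (d + d0)   # shrunken variance
    return (m1 - m2) / math.sqrt(s_tilde_sq * (1 / n1 + 1 / n2))

# With tiny samples, shrinkage toward the across-gene prior variance
# stabilizes the denominator of the test statistic:
t = moderated_t([2.0, 2.1, 1.9], [1.0, 1.1, 0.9], s0_sq=0.02, d0=4)
```

The resulting statistic is, as the abstract says, a minor variation of the standard linear-model t: only the variance in the denominator changes.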

  1. Study of statistical coding for digital TV

    NASA Technical Reports Server (NTRS)

    Gardenhire, L. W.

    1972-01-01

    The results are presented for a detailed study to determine a pseudo-optimum statistical code to be installed in a digital TV demonstration test set. Studies of source encoding were undertaken, using redundancy removal techniques in which the picture is reproduced within a preset tolerance. A method of source encoding, which preliminary studies show to be encouraging, is statistical encoding. A pseudo-optimum code was defined and the associated performance of the code was determined. The format was fixed at 525 lines per frame, 30 frames per second, as per commercial standards.
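Statistical encoding of this kind assigns short codewords to frequent values. As an illustration of the principle only (not the study's pseudo-optimum code), a Huffman code for a hypothetical distribution of TV difference-signal values:

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code by repeatedly merging the two least probable
    nodes. Returns {symbol: bitstring}."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)          # unique int so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

# Hypothetical probabilities of pixel-difference values (illustrative only).
code = huffman_code({"0": 0.6, "+1": 0.15, "-1": 0.15, "other": 0.1})
```

The most probable value gets a one-bit codeword, so the average bit rate drops below the fixed-length rate whenever the source statistics are skewed.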

  2. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). 
Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
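The design-weighting idea can be sketched in miniature: weight each site's likelihood contribution by the inverse of its selection probability. The one-parameter example below (a plain occupancy proportion, ignoring imperfect detection and covariates) is my simplification of the pseudo-likelihood approach, not the paper's full site-occupancy model:

```python
def pseudo_mle_occupancy(detections, inclusion_probs):
    """Design-weighted (pseudo-likelihood) estimate of an occupancy
    proportion: each site's Bernoulli log-likelihood term is weighted by
    w_i = 1/pi_i, the inverse of its selection probability. For a single
    proportion the weighted MLE reduces to the weighted mean."""
    weights = [1.0 / p for p in inclusion_probs]
    return sum(w * y for w, y in zip(weights, detections)) / sum(weights)

# Sites from two strata: legacy sites were over-sampled (pi = 0.5)
# relative to new probability-design sites (pi = 0.1).
detections = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
pis = [0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
psi_weighted = pseudo_mle_occupancy(detections, pis)
psi_naive = sum(detections) / len(detections)   # ignores the design
```

Because the over-sampled legacy stratum happens to have a higher detection rate, the unweighted estimate is biased upward relative to the design-weighted one, which is the paper's central point.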

  3. A novel ultrasonic NDE for shrink fit welded structures using interface waves.

    PubMed

    Lee, Jaesun; Park, Junpil; Cho, Younho

    2016-05-01

    Reactor vessel inspection is a critical part of safety maintenance in a nuclear power plant. The inspection of shrink fit welded structures in a reactor nozzle can be a challenging task due to the complicated geometry. Nozzle inspection using pseudo interface waves allows us to inspect the nozzle from outside of the nuclear reactor. In this study, layered concentric pipes were manufactured with perfect shrink fit conditions using stainless steel 316. The displacement distributions were calculated with boundary conditions for a shrink fit welded structure. A multi-transducer guided wave phased array system was employed to monitor the welding quality of the nozzle end at a distance from a fixed position. The complicated geometry of a shrink fit welded structure can be overcome by using the pseudo interface waves in identifying the location and size of defects. The experimental results demonstrate the feasibility of detecting weld delamination and defects. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Small Private Key MQPKS on an Embedded Microprocessor

    PubMed Central

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results reported at CHES2012. PMID:24651722
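The key-size reduction rests on regenerating a long private key on demand from a short seed with a deterministic generator. A sketch of that idea, using SHA-256 in counter mode as a stand-in for the paper's AES-accelerator-based generator (the seed size and key length below are illustrative):

```python
import hashlib

def prg(seed: bytes, nbytes: bytes and int) -> bytes:
    """Deterministic pseudo-random generator in counter mode: hash the seed
    concatenated with an incrementing counter. SHA-256 here stands in for
    the block-cipher (AES) primitive used in the paper; the same seed
    always regenerates the same stream."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

# A long private key (here 1 kB) is stored as only a 16-byte seed and
# expanded whenever a signature is computed:
seed = b"\x01" * 16
key_material = prg(seed, 1024)
```

Storing the seed instead of the expanded key is what yields the ~99.9% private-key size reduction claimed, at the cost of regenerating key material during signing.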

  5. Small private key MQPKS on an embedded microprocessor.

    PubMed

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-03-19

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results in CHES2012.

  6. HARMONIC SPACE ANALYSIS OF PULSAR TIMING ARRAY REDSHIFT MAPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roebber, Elinore; Holder, Gilbert, E-mail: roebbere@physics.mcgill.ca

    2017-01-20

    In this paper, we propose a new framework for treating the angular information in the pulsar timing array (PTA) response to a gravitational wave (GW) background based on standard cosmic microwave background techniques. We calculate the angular power spectrum of the all-sky gravitational redshift pattern induced at the Earth for both a single bright source of gravitational radiation and a statistically isotropic, unpolarized Gaussian random GW background. The angular power spectrum is the harmonic transform of the Hellings and Downs curve. We use the power spectrum to examine the expected variance in the Hellings and Downs curve in both cases. Finally, we discuss the extent to which PTAs are sensitive to the angular power spectrum and find that the power spectrum sensitivity is dominated by the quadrupole anisotropy of the gravitational redshift map.
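The Hellings and Downs curve referenced here has a standard closed form for the expected correlation of timing residuals versus pulsar angular separation; the angular power spectrum the paper works with is its harmonic transform. A quick evaluation (normalization conventions vary between papers; the pulsar auto-correlation delta term is omitted):

```python
import math

def hellings_downs(theta):
    """Expected cross-correlation of timing residuals for two pulsars
    separated by angle theta, with x = (1 - cos theta)/2:
    HD(theta) = (3/2) x ln x - x/4 + 1/2."""
    x = (1.0 - math.cos(theta)) / 2.0
    if x == 0.0:
        return 0.5
    return 1.5 * x * math.log(x) - 0.25 * x + 0.5

# Co-located pulsars correlate at 0.5; antipodal pairs at 0.25; the curve
# crosses zero near ~49 degrees and reaches its minimum near ~82 degrees.
values = [hellings_downs(math.radians(d)) for d in (0, 49, 82, 180)]
```

Because this curve has no monopole or dipole component, its harmonic transform starts at the quadrupole, consistent with the paper's finding that quadrupole anisotropy dominates the sensitivity.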

  7. High speed visible light communication using blue GaN laser diodes

    NASA Astrophysics Data System (ADS)

    Watson, S.; Viola, S.; Giuliano, G.; Najda, S. P.; Perlin, P.; Suski, T.; Marona, L.; Leszczyński, M.; Wisniewski, P.; Czernecki, R.; Targowski, G.; Watson, M. A.; White, H.; Rowe, D.; Laycock, L.; Kelly, A. E.

    2016-10-01

    GaN-based laser diodes have been developed over the last 20 years making them desirable for many security and defence applications, in particular, free space laser communications. Unlike their LED counterparts, laser diodes are not limited by their carrier lifetime which makes them attractive for high speed communication, whether in free space, through fiber or underwater. Gigabit data transmission can be achieved in free space by modulating the visible light from the laser with a pseudo-random bit sequence (PRBS), with recent results approaching 5 Gbit/s error free data transmission. By exploiting the low-loss in the blue part of the spectrum through water, data transmission experiments have also been conducted to show rates of 2.5 Gbit/s underwater. Different water types have been tested to monitor the effect of scattering and to see how this affects the overall transmission rate and distance. This is of great interest for communication with unmanned underwater vehicles (UUV) as the current method using acoustics is much slower and vulnerable to interception. These types of laser diodes can typically reach 50-100 mW of power which increases the length at which the data can be transmitted. This distance could be further improved by making use of high power laser arrays. Highly uniform GaN substrates with low defectivity allow individually addressable laser bars to be fabricated. This could ultimately increase optical power levels to 4 W for a 20-emitter array. Overall, the development of GaN laser diodes will play an important part in free space optical communications and will be vital in the advancement of security and defence applications.

  8. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    By using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. First, the secret message is transformed into a two-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Last, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
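The first step of the scheme, packing the secret bit stream into the higher bits of an array while the lower bits hold fixed values, can be sketched as follows. This is a simplified stand-in for the paper's method: the double-random-phase encoding and host-image superposition steps are omitted, and the 2-bits-per-element layout is my choice:

```python
def embed_text(host, text):
    """Hide a text's bits in the high bits of 8-bit array elements: the top
    2 bits of each element carry two message bits, and the low 6 bits are
    set to a fixed mid-level value."""
    bits = []
    for ch in text.encode("ascii"):
        bits.extend((ch >> k) & 1 for k in range(7, -1, -1))
    stego = list(host)
    for i in range(0, len(bits), 2):
        pair = (bits[i] << 1) | (bits[i + 1] if i + 1 < len(bits) else 0)
        stego[i // 2] = (pair << 6) | 0x20   # low bits hold a fixed value
    return stego

def extract_text(stego, nchars):
    """Recover nchars ASCII characters from the top 2 bits of each element."""
    bits = []
    for v in stego[: nchars * 4]:
        bits.extend(((v >> 7) & 1, (v >> 6) & 1))
    chars = [sum(b << (7 - k) for k, b in enumerate(bits[i:i + 8]))
             for i in range(0, nchars * 8, 8)]
    return bytes(chars).decode("ascii")

host = [128] * 64                  # stand-in for one row of a host array
stego = embed_text(host, "secret")
recovered = extract_text(stego, 6)
```

In the full scheme this packed array, not the plain text, is what gets double-random-phase encoded before being superimposed on the host image.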

  9. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
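As a reading aid, the condition in question takes roughly the following form (my reconstruction from the standard usage of the term, with weights a_{nk} and an increasing function h; consult the paper for the authors' exact statement):

```latex
\textbf{Definition ($h$-integrability w.r.t. $\{a_{nk}\}$).}
An array $\{X_{nk}\}$ of random variables is $h$-integrable with respect to
an array of constants $\{a_{nk}\}$ if
\[
  \sup_{n}\sum_{k} a_{nk}\,\mathbb{E}|X_{nk}| < \infty
  \quad\text{and}\quad
  \lim_{n\to\infty}\sum_{k} a_{nk}\,
  \mathbb{E}\bigl(|X_{nk}|\,\mathbf{1}\{|X_{nk}|>h(n)\}\bigr) = 0,
\]
where $h(n)\uparrow\infty$.
```

Truncating at the growing level h(n) rather than at a fixed level is what makes this weaker than Cesàro uniform integrability.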

  10. Design and implementation of ATCA-based 100Gbps DP-QPSK optical signal test instrument

    NASA Astrophysics Data System (ADS)

    Su, Shaojing; Qin, Jiangyi; Huang, Zhiping; Liu, Chenwu

    2014-11-01

    In order to accomplish the receiving task of a 100 Gbps Dual Polarization-Quadrature Phase Shift Keying (DP-QPSK) optical signal acquisition instrument and to improve its acquisition performance, this paper investigates DP-QPSK modulation principles, demodulation techniques and the key technologies of optical signal acquisition. The theory of DP-QPSK optical signal transmission is studied and the DP-QPSK optical signal transmission model is derived. Clock and data recovery in high-speed data acquisition and offset correction of multi-channel data are also investigated. Through reasonable hardware circuit design and software system construction, utilizing the high-performance Advanced Telecom Computing Architecture (ATCA), this paper proposes a 100 Gbps DP-QPSK optical signal acquisition instrument based on ATCA. The implementations of the key modules are presented by comparison and argumentation. Following a modular design, the instrument is divided into eight modules with the following functions: (1) DP-QPSK coherent detection and demodulation module; (2) deceleration module; (3) FPGA (Field Programmable Gate Array) module; (4) storage module; (5) data transmission module; (6) clock module; (7) power module; (8) JTAG debugging and configuration module. Furthermore, this paper puts forward two solutions to test the performance of the optical signal acquisition instrument. The first scenario is based on a standard STM-256 optical signal format and exploits the SignalTap tool of the Quartus II software to monitor the optical signal data. The second scenario uses a pseudo-random signal series to generate data; the acquisition module acquires a certain amount of signal data, which are then transferred to a computer over Gigabit Ethernet for analysis. Both test results show that the bit error rate of the optical signal acquisition instrument is low and that the instrument fully meets the requirements of the signal receiving system. This design also has practical significance for real applications.

  11. Multi-beam range imager for autonomous operations

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.

    1993-01-01

    For space operations from the Space Station Freedom, a real-time range imager will be very valuable for refuelling, docking, and space exploration operations. For these applications, as well as many other robotics and remote ranging applications, a small, portable, power-efficient, robust range imager capable of ranging over a few tens of km with 10 cm accuracy is needed. The system developed is based on a well-known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, of the order of a few MHz, to enhance the signal-to-noise ratio and to ease the stringent systems engineering requirements while accomplishing a very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high-precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-principle experiment.
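PN ranging recovers the round-trip delay as the lag that maximizes the correlation between the transmitted code and the received echo. A minimal sketch with a PRBS7 code (illustrative only; noise, pulse shaping, and the sub-chip resolution-enhancement step are omitted):

```python
def prbs7(nbits, state=0x7F):
    """PRBS7 m-sequence (x^7 + x^6 + 1), period 127, used as the PN code."""
    out = []
    for _ in range(nbits):
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return out

def circ_correlate(code, echo):
    """Circular cross-correlation of a +/-1 PN code with the received echo;
    the lag of the correlation peak is the round-trip delay in chip units."""
    N = len(code)
    c = [1 if b else -1 for b in code]
    corrs = [sum(c[(i - lag) % N] * echo[i] for i in range(N))
             for lag in range(N)]
    return max(range(N), key=lambda lag: corrs[lag])

code = prbs7(127)                  # one period of the PN code, 127 chips
delay = 23                         # simulated round-trip delay in chips
echo = [1 if code[(i - delay) % 127] else -1 for i in range(127)]
estimated = circ_correlate(code, echo)
```

The m-sequence's nearly two-valued autocorrelation (127 at the true lag, -1 elsewhere) is what makes the peak unambiguous even at low signal-to-noise ratio.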

  12. Multi-kW coherent combining of fiber lasers seeded with pseudo random phase modulated light

    NASA Astrophysics Data System (ADS)

    Flores, Angel; Ehrehreich, Thomas; Holten, Roger; Anderson, Brian; Dajani, Iyad

    2016-03-01

    We report efficient coherent beam combining of five kilowatt-class fiber amplifiers with a diffractive optical element (DOE). Based on a master oscillator power amplifier (MOPA) configuration, the amplifiers were seeded with pseudo random phase modulated light. Each non-polarization-maintaining fiber amplifier was optically path-length matched and provided approximately 1.2 kW of near diffraction-limited output power (measured M2<1.1). A low power sample of each laser was utilized for active linear polarization control. A low power sample of the combined beam after the DOE provided an error signal for active phase locking, which was performed via Locking of Optical Coherence by Single-Detector Electronic-Frequency Tagging (LOCSET). After phase stabilization, the beams were coherently combined via the 1x5 DOE. A total combined output power of 4.9 kW was achieved with 82% combining efficiency and excellent beam quality (M2<1.1). The intrinsic DOE splitter loss was 5%. Losses due in part to non-ideal polarization, ASE content, uncorrelated wavefront errors, and misalignment errors contributed to the efficiency reduction.

  13. Least squares deconvolution for leak detection with a pseudo random binary sequence excitation

    NASA Astrophysics Data System (ADS)

    Nguyen, Si Tran Nguyen; Gong, Jinzhe; Lambert, Martin F.; Zecchin, Aaron C.; Simpson, Angus R.

    2018-01-01

    Leak detection and localisation is critical for water distribution system pipelines. This paper examines the use of the time-domain impulse response function (IRF) for leak detection and localisation in a pressurised water pipeline with a pseudo random binary sequence (PRBS) signal excitation. Compared to the conventional step wave generated using a single fast operation of a valve closure, a PRBS signal offers advantageous correlation properties, in that the signal has very low autocorrelation for lags different from zero and low cross correlation with other signals including noise and other interference. These properties result in a significant improvement in the IRF signal to noise ratio (SNR), leading to more accurate leak localisation. In this paper, the estimation of the system IRF is formulated as an optimisation problem in which the l2 norm of the IRF is minimised to suppress the impact of noise and interference sources. Both numerical and experimental data are used to verify the proposed technique. The resultant estimated IRF provides not only accurate leak location estimation, but also good sensitivity to small leak sizes due to the improved SNR.
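The estimation described, solving for the IRF while minimising its l2 norm to suppress noise, corresponds to ridge-regularized deconvolution. A small sketch on synthetic noise-free data (my illustration of the technique; the paper's hydraulic model and optimisation details differ):

```python
def convolve(u, h):
    """Linear convolution of input u with impulse response h."""
    y = [0.0] * (len(u) + len(h) - 1)
    for i, ui in enumerate(u):
        for j, hj in enumerate(h):
            y[i + j] += ui * hj
    return y

def ridge_deconvolve(u, y, m, lam):
    """Estimate an m-tap impulse response h from y = u * h by solving the
    regularized normal equations (A^T A + lam*I) h = A^T y, where A is the
    convolution (Toeplitz) matrix of u. The l2 penalty lam suppresses
    noise amplification in the deconvolution."""
    n = len(y)
    A = [[u[i - j] if 0 <= i - j < len(u) else 0.0 for j in range(m)]
         for i in range(n)]
    M = [[sum(A[i][r] * A[i][c] for i in range(n)) + (lam if r == c else 0.0)
          for c in range(m)] for r in range(m)]
    b = [sum(A[i][r] * y[i] for i in range(n)) for r in range(m)]
    # Gaussian elimination with partial pivoting on the m x m system.
    aug = [M[r][:] + [b[r]] for r in range(m)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, m):
            f = aug[r][c] / aug[c][c]
            for k in range(c, m + 1):
                aug[r][k] -= f * aug[c][k]
    h = [0.0] * m
    for r in range(m - 1, -1, -1):
        h[r] = (aug[r][m] - sum(aug[r][k] * h[k]
                                for k in range(r + 1, m))) / aug[r][r]
    return h

# PRBS-like +/-1 excitation and a short synthetic pipeline IRF.
u = [1, -1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1]
h_true = [0.9, 0.4, -0.2, 0.1]
y = convolve(u, h_true)
h_est = ridge_deconvolve(u, y, m=4, lam=1e-9)
```

The flat spectrum of the PRBS excitation keeps A^T A well conditioned, which is precisely the SNR advantage over a single step wave that the abstract highlights.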

  14. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary: Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. 
Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the source used. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of functions for generating pseudo-random numbers provided in Mathematica. Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. 
The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters, Vol. 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
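    The package's core task, generating random density matrices, can be sketched with the standard Ginibre-ensemble construction (which induces the Hilbert–Schmidt measure). This is a generic sketch using NumPy's PRNG in place of the quantum sources TRQS targets, and the dimension is an arbitrary choice:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    d = 4  # Hilbert-space dimension (illustrative)

    # Ginibre matrix with i.i.d. complex normal entries; TRQS would draw
    # these samples from a quantum RNG rather than a pseudo-random one.
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

    # rho = G G^dagger / tr(G G^dagger) is a valid density matrix
    rho = g @ g.conj().T
    rho /= np.trace(rho).real

    print(np.allclose(rho, rho.conj().T))          # True: Hermitian
    print(np.isclose(np.trace(rho).real, 1.0))     # True: unit trace
    print(np.linalg.eigvalsh(rho).min() >= -1e-12) # True: positive semidefinite
    ```

    Swapping the normal-sample source from a PRNG to a hardware or on-line QRNG changes only where the entries of `g` come from; the construction of the state is unchanged.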

  15. Mitigating Upsets in SRAM-Based FPGAs from the Xilinx Virtex 2 Family

    NASA Technical Reports Server (NTRS)

    Swift, G. M.; Yui, C. C.; Carmichael, C.; Koga, R.; George, J. S.

    2003-01-01

    Static random access memory (SRAM) upset rates in field programmable gate arrays (FPGAs) from the Xilinx Virtex 2 family have been tested for radiation effects on configuration memory, block RAM and the power-on-reset (POR) and SelectMAP single event functional interrupts (SEFIs). Dynamic testing has shown the effectiveness and value of Triple Module Redundancy (TMR) and partial reconfiguration when used in conjunction. Continuing dynamic testing for more complex designs and other Virtex 2 capabilities (e.g., I/O standards, digital clock managers (DCM), etc.) is scheduled.

  16. Study of pseudo noise CW diode laser for ranging applications

    NASA Technical Reports Server (NTRS)

    Lee, Hyo S.; Ramaswami, Ravi

    1992-01-01

    A new Pseudo Random Noise (PN) modulated CW diode laser radar system is being developed for real time ranging of targets at both close and large distances (greater than 10 km) to satisfy a wide range of applications: from robotics to future space applications. Results from computer modeling and statistical analysis, along with some preliminary data obtained from a prototype system, are presented. The received signal is averaged for a short time to recover the target response function. It is found that even with uncooperative targets, based on the design parameters used (200-mW laser and 20-cm receiver), accurate ranging is possible up to about 15 km, beyond which signal to noise ratio (SNR) becomes too small for real time analog detection.
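    The ranging principle, recovering the round-trip delay by correlating the return against shifted replicas of the transmitted PN code, can be sketched as follows (toy code length, chip rate, and SNR; not the 200-mW/20-cm system parameters from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    c = 3.0e8                     # speed of light, m/s
    chip_rate = 10e6              # assumed chip rate, Hz
    n = 1023
    code = rng.integers(0, 2, n) * 2 - 1   # +/-1 code, stand-in for a PN sequence

    true_delay = 137                       # round-trip delay, in chips
    # weak, noisy echo: attenuated shifted code buried in receiver noise
    rx = 0.02 * np.roll(code, true_delay) + rng.normal(0.0, 0.05, n)

    # correlate against every cyclic shift; the correlation peak marks the delay
    corr = np.array([np.dot(rx, np.roll(code, k)) for k in range(n)])
    est_delay = int(np.argmax(corr))
    range_m = c * (est_delay / chip_rate) / 2.0   # one-way range

    print(est_delay)  # recovers true_delay = 137
    print(range_m)    # ~2055 m
    ```

    The correlation gain (a factor of the code length N) is what lets the echo, far below the per-chip noise floor here, be detected; averaging repeated code periods, as the abstract describes, raises the effective SNR further.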

  17. Optimal design of aperiodic, vertical silicon nanowire structures for photovoltaics.

    PubMed

    Lin, Chenxi; Povinelli, Michelle L

    2011-09-12

    We design a partially aperiodic, vertically-aligned silicon nanowire array that maximizes photovoltaic absorption. The optimal structure is obtained using a random walk algorithm with a transfer-matrix-based electromagnetic forward solver. The optimal, aperiodic structure exhibits a 2.35 times enhancement in ultimate efficiency compared to its periodic counterpart. The spectral behavior mimics that of a periodic array with larger lattice constant. For our system, we find that randomly-selected, aperiodic structures invariably outperform the periodic array.
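    The optimization loop, random perturbations of the structure parameters with a trial kept only when the forward solver reports a better ultimate efficiency, can be sketched with a toy objective standing in for the transfer-matrix solver (all parameters here are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ultimate_efficiency(radii):
        """Toy stand-in objective; the paper evaluates candidates with a
        transfer-matrix electromagnetic forward solver instead."""
        return -np.sum((radii - 0.7) ** 2)

    radii = rng.uniform(0.0, 1.0, 8)        # e.g. 8 normalized wire radii in a supercell
    best = ultimate_efficiency(radii)
    for _ in range(2000):
        trial = np.clip(radii + rng.normal(0.0, 0.05, radii.size), 0.0, 1.0)
        f = ultimate_efficiency(trial)
        if f > best:                        # greedy random walk: keep only improvements
            radii, best = trial, f
    print(round(best, 3))
    ```

    Because each candidate requires a full electromagnetic solve, the step size and iteration budget trade exploration against solver cost; the greedy accept rule above is the simplest variant of such a random walk.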

  18. Lifestyle Modification for Resistant Hypertension: The TRIUMPH Randomized Clinical Trial

    PubMed Central

    Blumenthal, James A.; Sherwood, Andrew; Smith, Patrick J.; Mabe, Stephanie; Watkins, Lana; Lin, Pao-Hwa; Craighead, Linda W.; Babyak, Michael; Tyson, Crystal; Young, Kenlyn; Ashworth, Megan; Kraus, William; Liao, Lawrence; Hinderliter, Alan

    2015-01-01

    Background Resistant hypertension (RH) is a growing health burden in this country affecting as many as one in five adults being treated for hypertension. RH is associated with increased risk of adverse cardiovascular disease (CVD) events and all-cause mortality. Strategies to reduce blood pressure in this high risk population are a national priority. Methods TRIUMPH is a single site, prospective, randomized clinical trial (RCT) to evaluate the efficacy of a center-based lifestyle intervention consisting of exercise training, a reduced sodium and calorie DASH eating plan, and weight management compared to standardized education and physician advice in treating patients with RH. Patients (N=150) will be randomized in a 2:1 ratio to receive either a 4-month supervised lifestyle intervention delivered in the setting of a cardiac rehabilitation center or to a standardized behavioral counseling session to simulate real-world medical practice. The primary end point is clinic blood pressure; secondary endpoints include ambulatory blood pressure and an array of CVD biomarkers including left ventricular hypertrophy, arterial stiffness, baroreceptor reflex sensitivity, insulin resistance, lipids, sympathetic nervous system activity, and inflammatory markers. Lifestyle habits, blood pressure and CVD risk factors also will be measured at one year follow-up. Conclusions The TRIUMPH randomized clinical trial (ClinicalTrials.gov NCT02342808) is designed to test the efficacy of an intensive, center-based lifestyle intervention compared to a standardized education and physician advice counseling session on blood pressure and CVD biomarkers in patients with RH after 4 months of treatment, and will determine whether lifestyle changes can be maintained for a year. PMID:26542509

  19. Screening unlabeled DNA targets with randomly ordered fiber-optic gene arrays.

    PubMed

    Steemers, F J; Ferguson, J A; Walt, D R

    2000-01-01

    We have developed a randomly ordered fiber-optic gene array for rapid, parallel detection of unlabeled DNA targets with surface immobilized molecular beacons (MB) that undergo a conformational change accompanied by a fluorescence change in the presence of a complementary DNA target. Microarrays are prepared by randomly distributing MB-functionalized 3-microm diameter microspheres in an array of wells etched in a 500-microm diameter optical imaging fiber. Using several MBs, each designed to recognize a different target, we demonstrate the selective detection of genomic cystic fibrosis related targets. Positional registration and fluorescence response monitoring of the microspheres was performed using an optical encoding scheme and an imaging fluorescence microscope system.

  20. Location of Vibrio anguillarum resistance-associated trait loci in half-smooth tongue sole Cynoglossus semilaevis at its microsatellite linkage map

    NASA Astrophysics Data System (ADS)

    Tang, Zhihong; Guo, Li; Liu, Yang; Shao, Changwei; Chen, Songlin; Yang, Guanpin

    2016-11-01

    A cultured female half-smooth tongue sole ( Cynoglossus semilaevis) was crossed with a wild male, yielding the first filial generation of pseudo-testcrossing from which 200 fish were randomly selected to locate the Vibrio anguillarum resistance trait in half-smooth tongue sole at its microsatellite linkage map. In total, 129 microsatellites were arrayed into 18 linkage groups, each containing at least 4 markers. The map reconstructed was 852.85 cM in length with an average spacing of 7.68 cM, covering 72.07% of that expected (1 183.35 cM). The V. anguillarum resistance trait was a composite rather than a unit trait, which was tentatively partitioned into Survival time in Hours After V. anguillarum Infection (SHAVI) and Immunity of V. Anguillarum Infection (IVAI). Above a logarithm of the odds (LOD) threshold of 2.5, 18 loci relative to SHAVI and 3 relative to IVAI were identified. The 3 loci relative to IVAI explained 18.78%, 5.87% and 6.50% of the total phenotypic variation in immunity. The microsatellites bounding the 3 quantitative trait loci (QTLs) of IVAI may in the future aid the selection of V. anguillarum-immune half-smooth tongue sole varieties, and facilitate cloning the gene(s) controlling such immunity.

  1. Multi-iPPseEvo: A Multi-label Classifier for Identifying Human Phosphorylated Proteins by Incorporating Evolutionary Information into Chou's General PseAAC via Grey System Theory.

    PubMed

    Qiu, Wang-Ren; Zheng, Quan-Shu; Sun, Bi-Qian; Xiao, Xuan

    2017-03-01

    Predicting phosphorylation protein is a challenging problem, particularly when query proteins have multi-label features meaning that they may be phosphorylated at two or more different types of amino acids. In fact, human proteins are usually phosphorylated at serine, threonine and tyrosine. By introducing the "multi-label learning" approach, a novel predictor has been developed that can be used to deal with systems containing both single- and multi-label phosphorylation proteins. Here we proposed a predictor called Multi-iPPseEvo by (1) incorporating the protein sequence evolutionary information into the general pseudo amino acid composition (PseAAC) via the grey system theory, (2) balancing out the skewed training datasets by the asymmetric bootstrap approach, and (3) constructing an ensemble predictor by fusing an array of individual random forest classifiers through a voting system. Rigorous cross-validations via a set of multi-label metrics indicate that the multi-label phosphorylation predictor is very promising and encouraging. The current approach represents a new strategy to deal with the multi-label biological problems, and the software is freely available for academic use at http://www.jci-bioinfo.cn/Multi-iPPseEvo. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
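    One of the listed ingredients, balancing a skewed training set by bootstrap resampling, can be sketched as follows. This is a generic upsampling of the minority class, an illustrative reading of the balancing step rather than the paper's exact asymmetric bootstrap procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def balance_by_bootstrap(X, y):
        """Resample the minority class with replacement until both classes
        have the same size (illustrative; not the paper's exact procedure)."""
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
        boot = rng.choice(small, size=len(large), replace=True)
        idx = np.concatenate([large, boot])
        return X[idx], y[idx]

    X = rng.normal(size=(100, 5))            # stand-in feature matrix
    y = (rng.random(100) < 0.2).astype(int)  # skewed labels (~20% positives)
    Xb, yb = balance_by_bootstrap(X, y)
    print(yb.mean())  # 0.5: classes now balanced
    ```

    Each balanced resample would then train one random forest, and the ensemble's per-label votes are fused to produce the final multi-label prediction.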

  2. Ultrasound therapy transducers with space-filling non-periodic arrays.

    PubMed

    Raju, Balasundar I; Hall, Christopher S; Seip, Ralf

    2011-05-01

    Ultrasound transducers designed for therapeutic purposes such as tissue ablation, histotripsy, or drug delivery require large apertures for adequate spatial localization while providing sufficient power and steerability without the presence of secondary grating lobes. In addition, it is highly preferred to minimize the total number of channels and to maintain simplicity in electrical matching network design. To this end, we propose array designs that are both space-filling and non-periodic in the placement of the elements. Such array designs can be generated using the mathematical concept of non-periodic or aperiodic tiling (tessellation) and can lead to reduced grating lobes while maintaining full surface area coverage to deliver maximum power. For illustration, we designed two 2-D space-filling therapeutic arrays with 128 elements arranged on a spherical shell. One was based on the two-shape Penrose rhombus tiling, and the other was based on a single rectangular shape arranged non-periodically. The steerability performance of these arrays was studied using acoustic field simulations. For comparison, we also studied two other arrays, one with circular elements distributed randomly, and the other a periodic array with square elements. Results showed that the two space-filling non-periodic arrays were able to steer to treat a volume of 16 x 16 x 20 mm while ensuring that the grating lobes were under -10 dB compared with the main lobe. The rectangular non-periodic array was able to generate two and a half times higher power than the random circles array. The rectangular array was then fabricated by patterning the array using laser scribing methods and its steerability performance was validated using hydrophone measurements. 
This work demonstrates that the concept of space-filling aperiodic/non-periodic tiling can be used to generate therapy arrays that are able to provide higher power for the same total transducer area compared with random arrays while maintaining acceptable grating lobe levels.
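    The grating-lobe argument can be illustrated with a 1-D array-factor sketch: a coarse periodic pitch produces full-height grating lobes, while randomizing the element positions over the same aperture suppresses them. Pitch, element count, and jitter below are toy assumptions; the paper's apertures are 2-D tilings on a spherical shell:

    ```python
    import numpy as np

    lam = 1.0
    d = 1.5 * lam                    # coarse pitch -> grating lobes when periodic
    n = 64
    x_per = np.arange(n) * d
    rng = np.random.default_rng(2)
    x_ape = x_per + rng.uniform(-0.4, 0.4, n) * d   # jittered (non-periodic) placement

    theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)

    def af_db(x):
        """Normalized far-field array factor in dB for element positions x."""
        phases = np.exp(1j * 2 * np.pi / lam * np.outer(np.sin(theta), x))
        return 20 * np.log10(np.abs(phases.sum(axis=1)) / n + 1e-12)

    per, ape = af_db(x_per), af_db(x_ape)
    side = np.abs(np.sin(theta)) > 0.1   # everything away from the main lobe
    print(per[side].max())  # ~0 dB: full grating lobes at sin(theta) = +/- lam/d
    print(ape[side].max())  # well suppressed for the jittered layout
    ```

    The space-filling tilings in the paper achieve a similar suppression while, unlike the random-circles layout, wasting none of the radiating surface, which is where the power advantage comes from.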

  3. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  4. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high speed modulation sources and detectors for optical fiber telecommunication devices.
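    As a generic illustration of turning noisy analog samples into unbiased bits (not the postprocessing actually used in the paper), a comparator followed by von Neumann extraction removes any bias introduced by an imperfect threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # stand-in for digitized, amplified vacuum-noise samples (zero-mean Gaussian)
    v = rng.normal(0.0, 1.0, 200_000)
    raw = (v > 0.1).astype(int)   # deliberately biased comparator threshold

    # von Neumann extraction over non-overlapping pairs:
    # discard 00 and 11, map 01 -> 0 and 10 -> 1
    pairs = raw[: len(raw) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    bits = pairs[keep, 0]
    print(abs(bits.mean() - 0.5))  # small: bias removed despite the skewed threshold
    ```

    The price of this simple extractor is throughput (at best one output bit per four raw bits), which is why high-rate QRNGs typically use more efficient randomness extractors.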

  5. Codimension-1 Sliding Bifurcations of a Filippov Pest Growth Model with Threshold Policy

    NASA Astrophysics Data System (ADS)

    Tang, Sanyi; Tang, Guangyao; Qin, Wenjie

    A Filippov system is proposed to describe the stage structured nonsmooth pest growth with threshold policy control (TPC). The TPC measure is represented by the total density of both juveniles and adults being chosen as an index for decisions on when to implement chemical control strategies. The proposed Filippov system can have three pieces of sliding segments and three pseudo-equilibria, which result in rich sliding mode bifurcations and local sliding bifurcations including boundary node (boundary focus, or boundary saddle) and tangency bifurcations. As the threshold density varies, the model exhibits the interesting global sliding bifurcations sequentially: touching → buckling → crossing → sliding homoclinic orbit to a pseudo-saddle → crossing → touching bifurcations. In particular, bifurcation of a homoclinic orbit to a pseudo-saddle with a figure of eight shape, to a pseudo-saddle-node or to a standard saddle-node has been observed for some parameter sets. This implies that control outcomes are sensitive to the threshold level, and hence it is crucial to choose the threshold level to initiate control strategy. One more sliding segment (or pseudo-equilibrium) is induced by the total density of a population guided switching policy, compared to only the juvenile density guided policy, implying that this control policy is more effective in terms of preventing multiple pest outbreaks or causing the density of pests to stabilize at a desired level such as an economic threshold.
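    The threshold-policy idea, switching the control vector field on only when the population density exceeds a set threshold, can be sketched with a scalar toy model (logistic growth with switched harvesting; the paper's model is stage-structured with both juveniles and adults, and all parameters here are illustrative):

    ```python
    # Toy Filippov-type system: logistic pest growth with extra mortality q*x
    # applied only above the economic threshold ET (a discontinuous right-hand side).
    r, K, q, ET = 1.0, 10.0, 0.8, 4.0
    x, dt = 0.5, 0.01
    for _ in range(5000):
        control = q * x if x > ET else 0.0   # threshold policy control
        x += dt * (r * x * (1.0 - x / K) - control)
    print(round(x, 2))  # chatters onto the sliding segment near x = ET = 4
    ```

    Because the uncontrolled equilibrium (K = 10) lies above ET while the controlled one (K(1 - q/r) = 2) lies below it, both vector fields point toward the switching surface and the trajectory slides along x = ET, the simplest example of the pseudo-equilibria the paper analyzes.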

  6. Computed narrow-band azimuthal time-reversing array retrofocusing in shallow water.

    PubMed

    Dungan, M R; Dowling, D R

    2001-10-01

    The process of acoustic time reversal sends sound waves back to their point of origin in reciprocal acoustic environments even when the acoustic environment is unknown. The properties of the time-reversed field commonly depend on the frequency of the original signal, the characteristics of the acoustic environment, and the configuration of the time-reversing transducer array (TRA). In particular, vertical TRAs are predicted to produce horizontally confined foci in environments containing random volume refraction. This article validates and extends this prediction to shallow water environments via monochromatic Monte Carlo propagation simulations (based on parabolic equation computations using RAM). The computational results determine the azimuthal extent of a TRA's retrofocus in shallow-water sound channels either having random bottom roughness or containing random internal-wave-induced sound speed fluctuations. In both cases, randomness in the environment may reduce the predicted azimuthal angular width of the vertical TRA retrofocus to as little as several degrees (compared to 360 degrees for uniform environments) for source-array ranges from 5 to 20 km at frequencies from 500 Hz to 2 kHz. For both types of randomness, power law scalings are found to collapse the calculated azimuthal retrofocus widths for shallow sources over a variety of acoustic frequencies, source-array ranges, water column depths, and random fluctuation amplitudes and correlation scales. Comparisons are made between retrofocusing on shallow and deep sources, and in strongly and mildly absorbing environments.

  7. Causal inference in survival analysis using pseudo-observations.

    PubMed

    Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T

    2017-07-30

    Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right-censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. The pseudo-observations can be used as a replacement of the outcomes without censoring when applying 'standard' causal inference methods, such as (1) or (2) earlier. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoetic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3-year overall survival probability and the 3-year risk of chronic graft-versus-host disease. Copyright © 2017 John Wiley & Sons, Ltd.
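    The pseudo-observation construction itself is simple to sketch: jackknife pseudo-values of the Kaplan–Meier estimator at a fixed time point, which can then be fed to standard regression machinery. This is an illustrative O(n²) implementation with naive tie handling, not the authors' software:

    ```python
    import numpy as np

    def km_surv(time, event, t):
        """Kaplan-Meier estimate of S(t)."""
        order = np.argsort(time)
        time, event = time[order], event[order]
        s, n = 1.0, len(time)
        for i in range(n):
            if time[i] > t:
                break
            if event[i]:
                s *= 1.0 - 1.0 / (n - i)   # factor (1 - d/at_risk) at each event
        return s

    def pseudo_obs(time, event, t):
        """Jackknife pseudo-observations: n*S(t) - (n-1)*S_{-i}(t)."""
        n = len(time)
        full = km_surv(time, event, t)
        idx = np.arange(n)
        return np.array([n * full - (n - 1) * km_surv(time[idx != i], event[idx != i], t)
                         for i in range(n)])

    rng = np.random.default_rng(9)
    t_event = rng.exponential(1.0, 200)
    t_cens = rng.exponential(2.0, 200)
    time = np.minimum(t_event, t_cens)
    event = (t_event <= t_cens).astype(int)

    po = pseudo_obs(time, event, 1.0)
    print(round(po.mean(), 3))   # close to the KM estimate of S(1)
    ```

    With no censoring the pseudo-observations reduce exactly to the indicators 1(T_i > t); with censoring they play the same role, so regressing them on treatment (e.g. with an identity link) recovers a causal contrast on the survival probability as in approach (1).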

  8. A review of the solar array manufacturing industry costing standards

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The solar array manufacturing industry costing standards model is designed to compare the cost of producing solar arrays using alternative manufacturing processes. Constructive criticism of the methodology used is intended to enhance its implementation as a practical design tool. Three main elements of the procedure include workbook format and presentation, theoretical model validity and standard financial parameters.

  9. Tomographical imaging using uniformly redundant arrays

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1979-01-01

    An investigation is conducted of the behavior of two types of uniformly redundant array (URA) when used for close-up imaging. One URA pattern is a quadratic residue array whose characteristics for imaging planar sources have been simulated by Fenimore and Cannon (1978), while the second is based on m sequences that have been simulated by Gunson and Polychronopulos (1976) and by MacWilliams and Sloane (1976). Close-up imaging is necessary in order to obtain depth information for tomographical purposes. The properties of the two URA patterns are compared with a random array of equal open area. The goal considered in the investigation is to determine if a URA pattern exists which has the desirable defocus properties of the random array while maintaining artifact-free image properties for in-focus objects.
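    The defining property of a URA, an aperture whose periodic autocorrelation is perfectly flat away from zero lag, can be shown with the 1-D quadratic-residue construction (the paper's coded apertures are 2-D, but they are built from the same number-theoretic idea):

    ```python
    import numpy as np

    p = 11                                    # prime with p % 4 == 3
    qr = {(x * x) % p for x in range(1, p)}   # quadratic residues mod p
    a = np.array([1 if i in qr else 0 for i in range(p)])   # open/closed aperture

    acorr = np.array([np.sum(a * np.roll(a, k)) for k in range(p)])
    print(a)      # [0 1 0 1 1 1 0 0 0 1 0]
    print(acorr)  # [5 2 2 2 2 2 2 2 2 2 2] -- perfectly flat sidelobes
    ```

    Because every nonzero shift of the pattern overlaps itself the same number of times (the quadratic residues form a difference set), decoding a URA-coded image yields a delta-like point response with no coding artifacts, which is exactly the in-focus property being weighed against the random array's defocus behavior.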

  10. Enhanced polarization of (11-22) semi-polar InGaN nanorod array structure

    NASA Astrophysics Data System (ADS)

    Athanasiou, M.; Smith, R. M.; Hou, Y.; Zhang, Y.; Gong, Y.; Wang, T.

    2015-10-01

    By means of a cost effective nanosphere lithography technique, an InGaN/GaN multiple quantum well structure grown on (11-22) semipolar GaN has been fabricated into two dimensional nanorod arrays which form a photonic crystal (PhC) structure. Such a PhC structure demonstrates not only significantly increased emission intensity, but also an enhanced polarization ratio of the emission. This is due to an effective inhibition of the emission in slab modes and then redistribution to the vertical direction, thus minimizing the light scattering processes that lead to randomizing of the optical polarization. The PhC structure is designed based on a standard finite-difference-time-domain simulation, and then optically confirmed by detailed time-resolved photoluminescence measurements. The results presented pave the way for the fabrication of semipolar InGaN/GaN based emitters with both high efficiency and highly polarized emission.

  11. Computational Study of the Blood Flow in Three Types of 3D Hollow Fiber Membrane Bundles

    PubMed Central

    Zhang, Jiafeng; Chen, Xiaobing; Ding, Jun; Fraser, Katharine H.; Ertan Taskin, M.; Griffith, Bartley P.; Wu, Zhongjun J.

    2013-01-01

    The goal of this study is to develop a computational fluid dynamics (CFD) modeling approach to better estimate the blood flow dynamics in the bundles of the hollow fiber membrane based medical devices (i.e., blood oxygenators, artificial lungs, and hemodialyzers). Three representative types of arrays, square, diagonal, and random with the porosity value of 0.55, were studied. In addition, a 3D array with the same porosity was studied. The flow fields between the individual fibers in these arrays at selected Reynolds numbers (Re) were simulated with CFD modeling. Hemolysis is not significant in the fiber bundles but the platelet activation may be essential. For each type of array, the average wall shear stress is linearly proportional to the Re. For the same Re but different arrays, the average wall shear stress also exhibits a linear dependency on the pressure difference across arrays, while Darcy's law prescribes a power-law relationship, therefore, underestimating the shear stress level. For the same Re, the average wall shear stress of the diagonal array is approximately 3.1, 1.8, and 2.0 times larger than that of the square, random, and 3D arrays, respectively. A coefficient C is suggested to correlate the CFD predicted data with the analytical solution, and C is 1.16, 1.51, and 2.05 for the square, random, and diagonal arrays in this paper, respectively. It is worth noting that C is strongly dependent on the array geometrical properties, whereas it is weakly dependent on the flow field. Additionally, the 3D fiber bundle simulation results show that the three-dimensional effect is not negligible. Specifically, velocity and shear stress distribution can vary significantly along the fiber axial direction. PMID:24141394

  12. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of ATLAS detector and LHC accelerator. Targeting the feature of the output signal of TGC detector, we have designed a simulation signal source. The core of the design is based on field programmable gate array, randomly outputting 256-channel simulation signals. The signal is generated by true random number generator. The source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random number is uniform in histogram, and the whole system has high reliability.
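    A histogram-uniformity check like the one reported can be sketched with a chi-square statistic over byte values (PRNG output standing in here for bytes captured from the ring-oscillator TRNG):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    samples = rng.integers(0, 256, 100_000)   # stand-in for captured TRNG bytes
    counts = np.bincount(samples, minlength=256)
    expected = len(samples) / 256.0
    chi2 = float(np.sum((counts - expected) ** 2 / expected))
    # For 255 degrees of freedom a uniform source gives chi2 near 255
    # (mean 255, standard deviation ~ sqrt(510) ~ 22.6).
    print(round(chi2, 1))
    ```

    A chi2 value far above ~300 would flag bias in the generator's histogram; in practice a full validation would also run a standard battery such as the NIST SP 800-22 tests.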

  14. On the design of random metasurface based devices.

    PubMed

    Dupré, Matthieu; Hsu, Liyi; Kanté, Boubacar

    2018-05-08

    Metasurfaces are generally designed by placing scatterers in periodic or pseudo-periodic grids. We propose and discuss design rules for functional metasurfaces with randomly placed anisotropic elements that randomly sample a well-defined phase function. By analyzing the focusing performance of random metasurface lenses as a function of their density and the density of the phase-maps used to design them, we find that the performance of 1D metasurfaces is mostly governed by their density while 2D metasurfaces strongly depend on both the density and the near-field coupling configuration of the surface. The proposed approach is used to design all-polarization random metalenses at near infrared frequencies. Challenges, as well as opportunities of random metasurfaces compared to periodic ones are discussed. Our results pave the way to new approaches in the design of nanophotonic structures and devices from lenses to solar energy concentrators.

  15. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test and model based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSD's, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal test. A viewgraph presentation on a model-test based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.

  16. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely lightweight and, consequently, very flexible. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photovoltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photovoltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques were utilized, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, the Craig-Bampton superelement approach, and modal ordering schemes. The grounding problems associated with the geometric stiffness are emphasized.

  18. Development of an Ultrasonic Phased Array System for Wellbore Integrity Evaluation and Near-Wellbore Fracture Network Mapping of Injection and Production Wells in Geothermal Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Foster, Benjamin; Kisner, Roger A

    2016-01-01

    This paper documents our progress developing an ultrasound phased array system in combination with a model-based iterative reconstruction (MBIR) algorithm to inspect the health of and characterize the composition of the near-wellbore region for geothermal reservoirs. The main goal for this system is to provide a near-wellbore in-situ characterization capability that will significantly improve wellbore integrity evaluation and near-wellbore fracture network mapping. A more detailed image of the fracture network near the wellbore in particular will enable the selection of optimal locations for stimulation along the wellbore, provide critical data that can be used to improve stimulation design, and provide a means for measuring evolution of the fracture network to support long-term management of reservoir operations. Development of such a measurement capability supports current hydrothermal operations as well as the successful demonstration of Engineered Geothermal Systems (EGS). The paper will include the design of the phased array system, the performance specifications, and the characterization methodology. In addition, we will describe the MBIR forward model derived for the phased array system and the propagation of compressional waves through a pseudo-homogeneous medium.

  19. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow higher particle energies to be studied with more accurate physical models, and improve statistics, as more particle tracks can be simulated in a shorter response time.
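    The role of the parallel pseudo-random number generator libraries mentioned above (SPRNG and DCMT) is to give every process an independent, non-overlapping stream. A hedged sketch of the same idea using NumPy's seed-spawning facility (an illustration, not the MC4 code):

```python
import numpy as np

def make_streams(n_workers, root_seed=12345):
    """Spawn statistically independent child generators, one per
    worker, from a single root seed."""
    root = np.random.SeedSequence(root_seed)
    return [np.random.default_rng(s) for s in root.spawn(n_workers)]

# Each worker draws from its own stream; the whole run is
# reproducible because everything derives from one root seed.
streams = make_streams(4)
samples = [gen.random(3) for gen in streams]
```

    Each MPI rank would construct only its own generator from the shared root seed, so no two ranks ever draw from the same stream.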

  20. Probabilistic Air Segmentation and Sparse Regression Estimated Pseudo CT for PET/MR Attenuation Correction

    PubMed Central

    Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David

    2015-01-01

    Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For non-air regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and the Dixon segmentation, CT segmentation, and population-averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75 in the whole brain, gray matter, and white matter, respectively, which was significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had percentage errors within ±2%, ±5%, and ±10%, respectively, by using PASSR, which was significantly higher than with the other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
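    The error metric described, the normalized mean absolute difference in PET intensity relative to the continuous-CT reference, can be sketched generically. This is one plausible reading of the stated definition, not the authors' code:

```python
import numpy as np

def mape(candidate, reference):
    """Mean absolute percentage error: mean absolute difference
    normalized by the mean reference intensity, in percent."""
    candidate = np.asarray(candidate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(candidate - reference)) / np.mean(reference)

def fraction_within(candidate, reference, tol_percent):
    """Fraction of voxels whose percentage error falls within
    +/- tol_percent (the 2%, 5%, 10% figures quoted above)."""
    candidate = np.asarray(candidate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    err = 100.0 * (candidate - reference) / reference
    return float(np.mean(np.abs(err) <= tol_percent))
```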

  1. Undermining position effects in choices from arrays, with implications for police lineups.

    PubMed

    Palmer, Matthew A; Sauer, James D; Holt, Glenys A

    2017-03-01

    Choices from arrays are often characterized by position effects, such as edge-aversion. We investigated position effects when participants attempted to pick a suspect from an array similar to a police photo lineup. A reanalysis of data from 2 large-scale field studies showed that choices made under realistic conditions, closely matching eyewitness identification decisions in police investigations, displayed edge-aversion and a bias to choose from the top row (Study 1). In a series of experiments (Studies 2a-2c and 3), participants guessing the location of a suspect exhibited edge-aversion regardless of whether the lineup was constructed to maximize the chances of the suspect being picked, to ensure the suspect did not stand out, or randomly. Participants favored top locations only when the lineup was constructed to maximize the chances of the suspect being picked. In Studies 4 and 5, position effects disappeared when (a) response options were presented in an array with no obvious center, edges, or corners, and (b) instructions stated that the suspect was placed randomly. These findings show that position effects are influenced by a combination of task instructions and array shape. Randomizing the location of the suspect and modifying the shape of the lineup array may reduce misidentification. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Sub-1-V-60 nm vertical body channel MOSFET-based six-transistor static random access memory array with wide noise margin and excellent power delay product and its optimization with the cell ratio on static random access memory cell

    NASA Astrophysics Data System (ADS)

    Ogasawara, Ryosuke; Endoh, Tetsuo

    2018-04-01

    In this study, with the aim to achieve a wide noise margin and an excellent power delay product (PDP), a vertical body channel (BC)-MOSFET-based six-transistor (6T) static random access memory (SRAM) array is evaluated by changing the number of pillars in each part of a SRAM cell, that is, by changing the cell ratio in the SRAM cell. This 60 nm vertical BC-MOSFET-based 6T SRAM array realizes 0.84 V operation under the best PDP and up to 31% improvement of PDP compared with the 6T SRAM array based on a 90 nm planar MOSFET whose gate length and channel width are the same as those of the 60 nm vertical BC-MOSFET. Additionally, the vertical BC-MOSFET-based 6T SRAM array achieves an 8.8% wider read static noise margin (RSNM), a 16% wider write margin (WM), and an 89% smaller leakage. Moreover, it is shown that changing the cell ratio brings larger improvements of RSNM, WM, and write time in the vertical BC-MOSFET-based 6T SRAM array.

  3. Can a pseudo-Nambu-Goldstone Higgs lead to symmetry non-restoration?

    NASA Astrophysics Data System (ADS)

    Kilic, Can; Swaminathan, Sivaramakrishnan

    2016-01-01

    The calculation of finite temperature contributions to the scalar potential in a quantum field theory is similar to the calculation of loop corrections at zero temperature. In natural extensions of the Standard Model where loop corrections to the Higgs potential cancel between Standard Model degrees of freedom and their symmetry partners, it is interesting to contemplate whether finite temperature corrections also cancel, raising the question of whether a broken phase of electroweak symmetry may persist at high temperature. It is well known that this does not happen in supersymmetric theories because the thermal contributions of bosons and fermions do not cancel each other. However, for theories with same spin partners, the answer is less obvious. Using the Twin Higgs model as a benchmark, we show that although thermal corrections do cancel at the level of quadratic divergences, subleading corrections still drive the system to a restored phase. We further argue that our conclusions generalize to other well-known extensions of the Standard Model where the Higgs is rendered natural by being the pseudo-Nambu-Goldstone mode of an approximate global symmetry.

  4. The Limitation of Species Range: A Consequence of Searching Along Resource Gradients

    PubMed Central

    Rowell, Jonathan T.

    2009-01-01

    Ecological modelers have long puzzled over the spatial distribution of species. The random walk or diffusive approach to dispersal has yielded important results for biology and mathematics, yet it has been inadequate in explaining all phenomenological features. Ranges can terminate non-smoothly absent a complementary shift in the characteristics of the environment. Also unexplained is the absence of a species from nearby areas of adequate, or even abundant, resources. In this paper, I show how local searching behavior, keyed to a density-dependent fitness, can limit the speed and extent of a species' spread. In contrast to standard diffusive processes, pseudo-rational movement facilitates the clustering of populations. It also can be used to estimate the speed of an expanding population range, explain expansion stall, and provide a mechanism by which a population can colonize seemingly removed regions: biogeographic islands in a continental framework. Finally, I discuss the effect of resource degradation and different resource impact/utilization curves on the model. PMID:19303032

  5. Enhancement of A5/1 encryption algorithm

    NASA Astrophysics Data System (ADS)

    Thomas, Ria Elin; Chandhiny, G.; Sharma, Katyayani; Santhi, H.; Gayathri, P.

    2017-11-01

    Mobiles have become an integral part of today's world. Various standards have been proposed for mobile communication, one of them being GSM. With the rising number of mobile-based crimes, it is necessary to improve the security of the information passed in the form of voice or data. GSM uses A5/1 for its encryption. Various attacks have been implemented that exploit the vulnerabilities present within the A5/1 algorithm. Thus, in this paper, we examine these vulnerabilities and propose an enhanced A5/1 (E-A5/1) that improves the security provided by the A5/1 algorithm by XORing the generated keystream with a pseudo-random number, without increasing the time complexity. Understanding the weaknesses of the base algorithm and improving on its security will help in future releases of the A5 family of algorithms.
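    The proposed hardening step, XORing the generated keystream with an extra pseudo-random stream, can be sketched generically. The mask generator below (SHA-256 in counter mode) is a placeholder of our choosing, not the E-A5/1 specification:

```python
import hashlib

def mask_stream(n_bytes, seed):
    """Deterministic pseudo-random byte stream; a stand-in for the
    pseudo-random source the E-A5/1 proposal XORs in."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

def enhance_keystream(a51_keystream, seed):
    """XOR the base keystream with the extra stream; a single linear
    pass, so the asymptotic time complexity is unchanged."""
    mask = mask_stream(len(a51_keystream), seed)
    return bytes(k ^ m for k, m in zip(a51_keystream, mask))
```

    Encryption is otherwise unchanged: the plaintext is XORed with the enhanced keystream, and decryption is symmetric because XOR is an involution.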

  6. Continuous-Time Random Walk Models of DNA Electrophoresis in a Post Array: II. Mobility and Sources of Band Broadening

    PubMed Central

    Olson, Daniel W.; Dutta, Sarit; Laachi, Nabil; Tian, Mingwei; Dorfman, Kevin D.

    2011-01-01

    Using the two-state, continuous-time random walk model, we develop expressions for the mobility and the plate height during DNA electrophoresis in an ordered post array that delineate the contributions due to (i) the random distance between collisions and (ii) the random duration of a collision. These contributions are expressed in terms of the means and variances of the underlying stochastic processes, which we evaluate from a large ensemble of Brownian dynamics simulations performed using different electric fields and molecular weights in a hexagonal array of 1 μm posts with a 3 μm center-to-center distance. If we fix the molecular weight, we find that the collision frequency governs the mobility. In contrast, the average collision duration is the most important factor for predicting the mobility as a function of DNA size at constant Péclet number. The plate height is reasonably well-described by a single post rope-over-pulley model, provided that the extension of the molecule is small. Our results only account for dispersion inside the post array and thus represent a theoretical lower bound on the plate height in an actual device. PMID:21290387
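    In the two-state picture, the mean velocity (and hence the mobility) follows from the means of the underlying stochastic processes: free-flight distance, free-flight duration, and collision duration. A toy numerical sketch in the spirit of the model, with invented ensemble statistics, not the paper's expressions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-cycle statistics for many simulated molecules
# (illustrative units and distributions, not simulation output).
dist_between = rng.exponential(3.0, 10_000)  # distance between collisions
t_free = rng.exponential(1.0, 10_000)        # free-flight duration
t_collision = rng.exponential(2.0, 10_000)   # time spent hooked on a post

# Over many free-flight/collision cycles, the mean velocity is the
# mean distance per cycle divided by the mean total cycle time.
v_mean = dist_between.mean() / (t_free.mean() + t_collision.mean())
```

    The variances of the same quantities would enter the dispersion (plate height) terms in an analogous way.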

  7. Three-dimensional cross-linked carbon network wrapped with ordered polyaniline nanowires for high-performance pseudo-supercapacitors

    NASA Astrophysics Data System (ADS)

    Hu, Huan; Liu, Shuwu; Hanif, Muddasir; Chen, Shuiliang; Hou, Haoqing

    2014-12-01

    The polyaniline (PANI)-based pseudo-supercapacitor has been extensively studied due to its good conductivity, ease of synthesis, low-cost monomer, tunable properties, and remarkable specific capacitance. In this work, a three-dimensional cross-linked carbon network (3D-CCN) was used as a contact-resistance-free substrate for PANI-based pseudo-supercapacitors. Ordered PANI nanowires (PaNWs) were grown on the 3D-CCN to form PaNWs/3D-CCN composites by in-situ polymerization. The PaNWs/3D-CCN composites exhibited a specific capacitance (Cs) of 1191.8 F g-1 at a current density of 0.5 A g-1 and a superior rate capability with 66.4% capacitance retention at 100.0 A g-1. The high specific capacitance is attributed to the thin PaNW coating and the spaced PANI nanowire array, which ensure a higher utilization of PANI owing to the ease of diffusion of protons through and on the PANI nanowires. In addition, the unique 3D-CCN serves as a high-conductivity platform (or skeleton) with no contact resistance for fast electron transfer and facile charge transport within the composites. Therefore, the binder-free composites can sustain rapid gains and losses of electrons and ions, even at high current density. As a result, the specific capacitance and rate capability of our composites are remarkably higher than those of other PANI composites.
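    Specific capacitance figures such as those quoted are conventionally extracted from galvanostatic charge-discharge curves via Cs = I*dt/(m*dV). A sketch with illustrative numbers, not the paper's measured data:

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance from a galvanostatic discharge:
    Cs = I * dt / (m * dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Illustrative: 0.5 mA on a 1 mg electrode (0.5 A/g current
# density), 1600 s discharge over a 0.8 V window.
cs = specific_capacitance(current_a=0.5e-3, discharge_time_s=1600.0,
                          mass_g=1e-3, voltage_window_v=0.8)
```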

  8. CRF: detection of CRISPR arrays using random forest.

    PubMed

    Wang, Kai; Liang, Chun

    2017-01-01

    CRISPRs (clustered regularly interspaced short palindromic repeats) are particular repeat sequences found in a wide range of bacterial and archaeal genomes. Several tools are available for detecting CRISPR arrays in the genomes of both domains. Here we developed a new web-based CRISPR detection tool named CRF (CRISPR Finder by Random Forest). Unlike other CRISPR detection tools, CRF uses a random forest classifier to filter out invalid CRISPR arrays from all putative candidates, which enhances detection accuracy. In particular, triplet elements that combine both sequence content and structure information were extracted from CRISPR repeats for classifier training. The classifier achieved high accuracy and sensitivity. Moreover, CRF offers a highly interactive web interface for robust data visualization that is not available in other CRISPR detection tools. After detection, the query sequence, the CRISPR array architecture, and the sequences and secondary structures of CRISPR repeats and spacers can be visualized for visual examination and validation. CRF is freely available at http://bioinfolab.miamioh.edu/crf/home.php.
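    As a hedged illustration of the sequence-content half of such triplet features, the sketch below counts overlapping nucleotide triplets (3-mers) of a repeat sequence as a fixed-length feature vector for a classifier; the actual CRF features also encode secondary-structure information, which is not reproduced here:

```python
from collections import Counter
from itertools import product

ALPHABET = "ACGT"
TRIPLETS = ["".join(p) for p in product(ALPHABET, repeat=3)]  # 64 features

def triplet_features(seq):
    """Overlapping 3-mer frequency vector for a repeat sequence,
    one fixed-length row suitable for random forest training."""
    seq = seq.upper()
    counts = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    total = max(sum(counts[t] for t in TRIPLETS), 1)
    return [counts[t] / total for t in TRIPLETS]
```

    Rows like these, one per candidate array, are what a random forest would be trained on to separate valid from invalid CRISPR candidates.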

  9. ArrayInitiative - a tool that simplifies creating custom Affymetrix CDFs

    PubMed Central

    2011-01-01

    Background Probes on a microarray represent a frozen view of a genome and are quickly outdated when new sequencing studies extend our knowledge, resulting in significant measurement error when analyzing any microarray experiment. There are several bioinformatics approaches to improve probe assignments, but without in-house programming expertise, standardizing these custom array specifications as a usable file (e.g. as Affymetrix CDFs) is difficult, owing mostly to the complexity of the specification file format. However, without correctly standardized files there is a significant barrier for testing competing analysis approaches since this file is one of the required inputs for many commonly used algorithms. The need to test combinations of probe assignments and analysis algorithms led us to develop ArrayInitiative, a tool for creating and managing custom array specifications. Results ArrayInitiative is a standalone, cross-platform, rich client desktop application for creating correctly formatted, custom versions of manufacturer-provided (default) array specifications, requiring only minimal knowledge of the array specification rules and file formats. Users can import default array specifications, import probe sequences for a default array specification, design and import a custom array specification, export any array specification to multiple output formats, export the probe sequences for any array specification and browse high-level information about the microarray, such as version and number of probes. The initial release of ArrayInitiative supports the Affymetrix 3' IVT expression arrays we currently analyze, but as an open source application, we hope that others will contribute modules for other platforms. Conclusions ArrayInitiative allows researchers to create new array specifications, in a standard format, based upon their own requirements. This makes it easier to test competing design and analysis strategies that depend on probe definitions. 
Since the custom array specifications are easily exported to the manufacturer's standard format, researchers can analyze these customized microarray experiments using established software tools, such as those available in Bioconductor. PMID:21548938

  10. Datacube Services in Action, Using Open Source and Open Standards

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Misev, D.

    2016-12-01

    Array Databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered Array Databases, is available in open source at www.rasdaman.org. Its declarative query language extends SQL with array operators which are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use on databases individually holding dozens of terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman is the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and standardization impact of open-source rasdaman, as well as experiences made.

  11. Analysis of magnetic-dipole transitions in tungsten plasmas using detailed and configuration-average descriptions

    NASA Astrophysics Data System (ADS)

    Na, Xieyu; Poirier, Michel

    2017-06-01

    This paper is devoted to the analysis of transition arrays of magnetic-dipole (M1) type in highly charged ions. Such transitions play a significant role in highly ionized plasmas, for instance in the tungsten plasma present in tokamak devices. Using recently published formulas and their implementation in the Flexible Atomic Code for M1-transition array shifts and widths, absorption and emission spectra arising from transitions inside the 3*n complex of highly charged tungsten ions are analyzed. A comparison of magnetic-dipole transitions with electric-dipole (E1) transitions shows that, while the latter are better described by transition array formulas, M1 absorption and emission structures reveal some insufficiency of these formulas. It is demonstrated that the detailed spectra account for significantly richer structures than those predicted by the transition array formalism. This is because M1 transitions may occur between levels inside the same relativistic configuration, while such inner-configuration transitions are not accounted for by the currently available averaging expression. In addition, because of configuration interaction, transition processes involving more than one electron jump, such as 3p1/23d5/2 → 3p3/23d3/2, are possible but not accounted for in the transition array formulas. These missing transitions are collected in pseudo-arrays using a post-processing method described in this paper. The relative influence of inner- and inter-configuration transitions is carefully analyzed for tungsten ions with net charge around 50. The need for additional theoretical development is emphasized.

  12. IT-26IDENTIFICATION OF PSEUDO-PROGRESSION IN NEW DIAGNOSED GLIOBLASTOMA (GBM) IN A RANDOMIZED PHASE 2 OF ICT-107: MRI AND PATHOLOGY CORRELATION

    PubMed Central

    Phuphanich, Surasak; Yu, John; Bannykh, Serguei; Zhu, Jay-Jiguang

    2014-01-01

    BACKGROUND: Previous reports of pseudo-progression in patients with brain tumors after therapeutic vaccines in pediatric and adult glioma (Pollack, JCO online June 2, 2014, and Okada, JCO Jan 20, 2011; 29: 330-336) demonstrated that RANO criteria for tumor progression may not be adequate for immunotherapy trials. Similar observations have also been made with checkpoint inhibitors in melanoma and NSCLC. METHODS: We identified 2 patients who developed tumor progression by RANO criteria and underwent surgery following enrollment in a phase 2 randomized trial of ICT-107 (an autologous vaccine consisting of patient dendritic cells pulsed with peptides from AIM-2, TRP-2, HER2/neu, IL-13Ra2, gp100, and MAGE1) after radiation and temozolomide (TMZ). RESULTS: The first case is a 69-year-old Chinese male who underwent a first surgery, gross total resection of a right occipital GBM, on 10/26/2011. Subsequently he received 19 cycles of TMZ and 9 vaccines/placebo. MRI from 7/2/2013 showed enhancement surrounding the surgical cavity. After a second surgery, pathology showed only rare residual tumor cells with macrophages and CD8-positive cells. He continued on the vaccine program, and MRI showed further progression with finger-like extension into the parietal lobe 4 months later. A third surgery also showed extensive reactive changes with no active tumor cells. In the second case, a 62-year-old male who underwent a first surgery of the right temporal lobe on 7/11/2011 developed 2 areas of enhancement after 6 cycles of TMZ and 7 vaccines/placebo on 4/18/2012. With a second surgery, pathology showed reactive gliosis without active tumor. The subject continued in the trial. CONCLUSION: Pseudo-progression was confirmed by pathology in these 2 patients at 20 and 9 months, which is delayed compared with the pseudo-progression observed in patients treated with concurrent XRT/TMZ (3-6 months). Future iRANO criteria development is essential for immunotherapy trials.
Accurately identifying and managing such patients is necessary to avoid premature termination of therapy.

  13. High-density, microsphere-based fiber optic DNA microarrays.

    PubMed

    Epstein, Jason R; Leung, Amy P K; Lee, Kyong Hoon; Walt, David R

    2003-05-01

    A high-density fiber optic DNA microarray has been developed consisting of oligonucleotide-functionalized, 3.1-microm-diameter microspheres randomly distributed on the etched face of an imaging fiber bundle. The fiber bundles are composed of 6000-50000 fused optical fibers, and each fiber terminates in an etched well. The microwell array is capable of housing complementary-sized microspheres, each containing thousands of copies of a unique oligonucleotide probe sequence. The array fabrication process results in random microsphere placement, so determining the position of microspheres in the random array requires an optical encoding scheme. This array platform provides many advantages over other array formats. The microsphere-stock suspension concentration added to the etched fiber can be controlled to provide inherent sensor redundancy, and examining identical microspheres has a beneficial effect on the signal-to-noise ratio. As other sequences of interest are discovered, new microsphere sensing elements can be added to existing microsphere pools, and new arrays can be fabricated incorporating the new sequences without altering the existing detection capabilities. These microarrays contain the smallest feature sizes (3 microm) of any DNA array, allowing interrogation of extremely small sample volumes. Reducing the feature size results in higher local target molecule concentrations, creating rapid and highly sensitive assays. The microsphere array platform is also flexible in its applications; research has included DNA-protein interaction profiles, microbial strain differentiation, and non-labeled target interrogation with molecular beacons. Fiber optic microsphere-based DNA microarrays have a simple fabrication protocol, enabling their expansion into other applications, such as single cell-based assays.

  14. Applying Neural Networks in Optical Communication Systems: Possible Pitfalls

    NASA Astrophysics Data System (ADS)

    Eriksson, Tobias A.; Bulow, Henning; Leven, Andreas

    2017-12-01

    We investigate the risk of overestimating the performance gain when applying neural-network-based receivers in systems with pseudo-random bit sequences or with limited memory depths, resulting in repeated short patterns. We show that with such sequences, a large artificial gain can be obtained, which comes from pattern prediction rather than from predicting or compensating the studied channel or phenomenon.
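    The pitfall is easy to reproduce: a maximal-length pseudo-random bit sequence from a degree-n LFSR is exactly determined by its previous n bits, so any learner with enough memory scores perfectly by memorizing the pattern, not by learning the channel. A minimal sketch with a PRBS7 (taps x^7 + x^6 + 1), in which a lookup table stands in for the neural network:

```python
def prbs7(n_bits, state=0b1111111):
    """PRBS7 output bits from a 7-bit LFSR (period 127)."""
    out = []
    for _ in range(n_bits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | newbit) & 0x7F
        out.append(newbit)
    return out

bits = prbs7(4000)
window = 7

# "Train": memorize the next bit for every 7-bit context seen in
# the first half of the data.
table = {}
for i in range(len(bits) // 2 - window):
    table[tuple(bits[i:i + window])] = bits[i + window]

# "Test": the second half is predicted perfectly -- pure pattern
# prediction with zero channel knowledge.
start = len(bits) // 2
correct = sum(table.get(tuple(bits[i:i + window])) == bits[i + window]
              for i in range(start, len(bits) - window))
accuracy = correct / (len(bits) - window - start)
```

    On a true random sequence the same memorization scheme would do no better than chance, which is exactly why PRBS-trained gains can be artificial.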

  15. PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    In this paper, the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm, is investigated. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
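    The Lozi map itself is two lines; a sketch of its use as a chaotic pseudo-random number generator follows. The rescaling and the default control parameters a, b are illustrative choices; the paper's point is precisely that tuning them changes PSO performance:

```python
def lozi_prng(n, a=1.7, b=0.5, x=0.1, y=0.1, burn_in=100):
    """Generate n pseudo-random numbers in [0, 1) by iterating the
    Lozi map  x' = 1 - a*|x| + y,  y' = b*x  and rescaling x."""
    out = []
    for i in range(burn_in + n):
        x, y = 1.0 - a * abs(x) + y, b * x
        if i >= burn_in:
            # rescale the attractor (x roughly in [-2, 2]) to [0, 1)
            out.append(((x + 2.0) / 4.0) % 1.0)
    return out
```

    In a chaos-driven PSO, these numbers would replace the uniform random draws in the velocity-update rule.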

  16. A cryptographic hash function based on chaotic network automata

    NASA Astrophysics Data System (ADS)

    Machicao, Jeaneth; Bruno, Odemir M.

    2017-12-01

    Chaos theory has been used to develop several cryptographic methods relying on the pseudo-random properties extracted from simple nonlinear systems such as cellular automata (CA). Cryptographic hash functions (CHF) are commonly used to check data integrity. A CHF "compresses" an arbitrarily long message (input) into a much smaller representation called a hash value or message digest (output), designed to prevent recovery of the original message from the hash value. This paper proposes a chaos-based CHF inspired by an encryption method based on chaotic CA rule B1357-S2468. Here, we propose a hybrid model that combines CA and networks, called network automata (CNA), whose chaotic spatio-temporal outputs are used to compute a hash value. Following the Merkle and Damgård model of construction, a portion of the message is entered as the initial condition of the network automata, and the remaining parts of the message are iteratively entered to perturb the system. The chaotic network automaton shuffles the message using flexible control parameters, so that the generated hash value is highly sensitive to the message. As demonstrated in our experiments, the proposed model has excellent pseudo-randomness and sensitivity properties with acceptable performance when compared to conventional hash functions.
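    The Merkle-Damgård style of construction described above can be sketched with a toy compression step. Here elementary CA rule 30 stands in for the paper's chaotic network automaton, so this illustrates the construction only, not the proposed CHF:

```python
def rule30_step(state):
    """One step of elementary cellular automaton rule 30 on a
    circular list of bits: new = left XOR (center OR right)."""
    n = len(state)
    return [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
            for i in range(n)]

def toy_hash(message, digest_bits=64, rounds=16):
    """Iteratively absorb one message byte at a time into the CA
    state, running the chaotic rule between absorptions."""
    state = [0] * digest_bits
    for byte in message:
        for j in range(8):            # XOR the byte into the state
            state[j] ^= (byte >> j) & 1
        for _ in range(rounds):       # let the chaos mix it in
            state = rule30_step(state)
    return "".join(map(str, state))
```

    A real CHF would add length padding and a stronger, parameterized automaton; this sketch only shows how message blocks iteratively perturb a chaotic state.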

  17. Randomised controlled trial to evaluate the efficacy and usability of a computerised phone-based lifestyle coaching system for primary and secondary prevention of stroke.

    PubMed

    Spassova, Lübomira; Vittore, Debora; Droste, Dirk W; Rösch, Norbert

    2016-02-09

    One of the most effective current approaches to preventing stroke events is the reduction of lifestyle risk factors, such as unhealthy diet, physical inactivity and smoking. In this study, we assessed the efficacy and usability of the phone-based Computer-aided Prevention System (CAPSYS) in supporting the reduction of lifestyle-related risk factors. A single-centre two-arm clinical trial was performed between January 2013 and February 2014, based on individual follow-up periods of six months with 94 patients at high risk of stroke, randomly assigned to an intervention group (IC: 48; advised to use the CAPSYS system) or a standard care group (SC: 46). Study parameters, such as blood pressure, blood values (HDL, LDL, HbA1c, glycaemia and triglycerides), weight, height, physical activity as well as nutrition and smoking habits were captured through questionnaires and medical records at baseline and post-intervention and analysed to detect significant changes. The usability of the intervention was assessed based on the standardised System Usability Scale (SUS) complemented by a more system-specific user satisfaction and feedback questionnaire. The statistical evaluation of primary measures revealed significant decreases of systolic blood pressure (mean of the differences = -9 mmHg; p = 0.03; 95% CI = [-17.29, -0.71]), LDL (pseudo-median of the differences = -7.9 mg/dl; p = 0.04; 95% CI = [-18.5, -0.5]) and triglyceride values (pseudo-median of the differences = -12.5 mg/dl; p = 0.04; 95% CI = [-26, -0.5]) in the intervention group, while no such changes could be observed in the control group. Furthermore, we detected a statistically significant increase in self-reported fruit and vegetable consumption (pseudo-median of the differences = 5.4 servings/week; p = 0.04; 95% CI = [0.5, 10.5]) and a decrease in sweets consumption (pseudo-median of the differences = -2 servings/week; p = 0.04; 95% CI = [-4, -0.00001]) in the intervention group. 
The usability assessment showed that the CAPSYS system was, in general, highly accepted by the users (average SUS score: 80.1). The study provided encouraging results indicating that a computerised phone-based lifestyle coaching system, such as CAPSYS, can support the usual treatment in reducing cerebro-cardiovascular risk factors and that such an approach is well applicable in practice. ClinicalTrials.gov Identifier: NCT02444715.
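    The "pseudo-median of the differences" reported above is the one-sample Hodges-Lehmann estimator associated with the Wilcoxon signed-rank test: the median of all pairwise Walsh averages. A minimal sketch (the sample values are hypothetical, not study data):

```python
from itertools import combinations_with_replacement
from statistics import median

def pseudo_median(differences):
    """One-sample Hodges-Lehmann estimator: the median of all Walsh
    averages (x_i + x_j)/2 over pairs with i <= j. This is the
    'pseudo-median' reported alongside Wilcoxon signed-rank CIs."""
    walsh = [(a + b) / 2
             for a, b in combinations_with_replacement(differences, 2)]
    return median(walsh)

# e.g. per-patient LDL changes in mg/dl (hypothetical values)
print(pseudo_median([-20, -11, -8, -5, 3]))   # → -8.0
```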

  18. High power transcranial beam steering for ultrasonic brain therapy

    PubMed Central

    Pernot, Mathieu; Aubry, Jean-François; Tanter, Mickaël; Thomas, Jean-Louis; Fink, Mathias

    2003-01-01

    A sparse phased array is specially designed for non-invasive ultrasound transskull brain therapy. The array is made of 200 single elements corresponding to a new generation of high power transducers developed in collaboration with Imasonic (Besançon, France). Each element has a surface of 0.5 cm2 and works at a 0.9 MHz central frequency with a maximum 20 W.cm−2 intensity on the transducer surface. In order to optimize the steering capabilities of the array, several transducer distributions on a spherical surface are simulated: hexagonal, annular, and quasi-random distributions. Using a quasi-random distribution significantly reduces the grating lobes. Furthermore, the simulations show the capability of the quasi-random array to electronically move the focal spot in the vicinity of the geometrical focus (up to +/− 15 mm). Based on the simulation study, the array is constructed and tested. The skull aberrations are corrected by using a time reversal mirror with amplitude correction achieved thanks to an implantable hydrophone, and a sharp focus is obtained through a human skull. Several lesions are induced in fresh liver and brain samples through human skulls, demonstrating the accuracy and the steering capabilities of the system. PMID:12974575
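    The grating-lobe suppression that motivates the quasi-random layout can be illustrated with a one-dimensional array-factor calculation; the 64-element linear geometry, 1.5-wavelength pitch, and jitter amplitude below are illustrative stand-ins for the paper's 200-element spherical array:

```python
import cmath, math, random

def array_factor(positions_wl, sin_theta):
    """Normalized far-field array factor |AF| for isotropic elements at the
    given positions (in wavelengths), phased for a broadside focus."""
    n = len(positions_wl)
    af = sum(cmath.exp(2j * math.pi * x * sin_theta) for x in positions_wl)
    return abs(af) / n

random.seed(42)
n, pitch = 64, 1.5                      # pitch of 1.5 wavelengths (> lambda/2)
periodic = [i * pitch for i in range(n)]
jittered = [x + random.uniform(-0.5, 0.5) for x in periodic]  # quasi-random

grating = 1.0 / pitch                    # grating lobe at sin(theta) = 1/1.5
print(round(array_factor(periodic, grating), 6))   # → 1.0 (full grating lobe)
print(array_factor(jittered, grating))             # strongly suppressed
```

    Randomizing the element positions spreads the grating-lobe energy into a low pedestal while the main (broadside) lobe is unchanged.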

  19. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a double-random phase-encoding technique-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003

  20. High power transcranial beam steering for ultrasonic brain therapy

    NASA Astrophysics Data System (ADS)

    Pernot, M.; Aubry, J.-F.; Tanter, M.; Thomas, J.-L.; Fink, M.

    2003-08-01

    A sparse phased array is specially designed for non-invasive ultrasound transskull brain therapy. The array is made of 200 single elements corresponding to a new generation of high power transducers developed in collaboration with Imasonic (Besançon, France). Each element has a surface of 0.5 cm2 and works at 0.9 MHz central frequency with a maximum 20 W cm-2 intensity on the transducer surface. In order to optimize the steering capabilities of the array, several transducer distributions on a spherical surface are simulated: hexagonal, annular and quasi-random distributions. Using a quasi-random distribution significantly reduces the grating lobes. Furthermore, the simulations show the capability of the quasi-random array to electronically move the focal spot in the vicinity of the geometrical focus (up to +/-15 mm). Based on the simulation study, the array is constructed and tested. The skull aberrations are corrected by using a time reversal mirror with amplitude correction achieved thanks to an implantable hydrophone, and a sharp focus is obtained through a human skull. Several lesions are induced in fresh liver and brain samples through human skulls, demonstrating the accuracy and the steering capabilities of the system.

  1. Three-dimensional scanning near field optical microscopy (3D-SNOM) imaging of random arrays of copper nanoparticles: implications for plasmonic solar cell enhancement.

    PubMed

    Ezugwu, Sabastine; Ye, Hanyang; Fanchini, Giovanni

    2015-01-07

    In order to investigate the suitability of random arrays of nanoparticles for plasmonic enhancement in the visible-near infrared range, we introduced three-dimensional scanning near-field optical microscopy (3D-SNOM) imaging as a useful technique to probe the intensity of near-field radiation scattered by random systems of nanoparticles at heights up to several hundred nm from their surface. We demonstrated our technique using random arrays of copper nanoparticles (Cu-NPs) of different particle diameters and concentrations. Bright regions in the 3D-SNOM images, corresponding to constructive interference of forward-scattered plasmonic waves, were obtained at heights Δz ≥ 220 nm from the surface for random arrays of Cu-NPs of ∼60-100 nm in diameter. These heights are too large for Cu-NPs placed in contact with the active layer to harvest light in thin organic solar cells, which are typically no thicker than 200 nm. Using a 200 nm transparent spacer between the system of Cu-NPs and the solar cell active layer, we demonstrate that forward-scattered light can be conveyed into 200 nm thin-film solar cells. This architecture increases the solar cell photoconversion efficiency by a factor of 3. Our 3D-SNOM technique is general enough to be suitable for a large number of other applications in nanoplasmonics.

  2. Alcohol prevention at sporting events: study protocol for a quasi-experimental control group study.

    PubMed

    Durbeej, Natalie; Elgán, Tobias H; Jalling, Camilla; Gripenberg, Johanna

    2016-06-06

    Alcohol intoxication and overserving of alcohol at sporting events are of great concern, given the relationships between alcohol consumption, public disturbances, and violence. During recent years this matter has been on the agenda for Swedish policymakers, authorities and key stakeholders, with demands that actions be taken. There is promising potential for utilizing an environmental approach to alcohol prevention as a strategy to reduce the level of alcohol intoxication among spectators at sporting events. Examples of prevention strategies may be community mobilization, Responsible Beverage Service training, policy work, and improved controls and sanctions. This paper describes the design of a quasi-experimental control group study to examine the effects of a multi-component community-based alcohol intervention at matches in the Swedish Premier Football League. A baseline assessment was conducted during 2015 and at least two follow-up assessments will be conducted in 2016 and 2017. The two largest cities in Sweden are included in the study, with Stockholm as the intervention area and Gothenburg as the control area. The setting is Licensed Premises (LP) inside and outside Swedish football arenas, in addition to arena entrances. Spectators are randomly selected and invited to participate in the study by providing a breath alcohol sample as a proxy for Blood Alcohol Concentration (BAC). Actors are hired and trained by an expert panel to act out a standardized scene of severe pseudo-intoxication. Four types of cross-sectional data are generated: (i) BAC levels among ≥4200 spectators; frequency of alcohol service to pseudo-intoxicated patrons attempting to purchase alcohol at LPs (ii) outside the arenas (≥200 attempts) and (iii) inside the arenas (≥200 attempts); and (iv) frequency of security staff interventions towards pseudo-intoxicated patrons attempting to enter the arenas (≥200 attempts). 
There is an urgent need nationally and internationally to reduce alcohol-related problems at sporting events, and it is essential to test prevention strategies to reduce intoxication levels among spectators. This project makes an important contribution not only to the research community, but also to enabling public health officials, decision-makers, authorities, the general public, and the sports community, to implement appropriate evidence-based strategies.

  3. Prediction of antenna array performance from subarray measurements

    NASA Technical Reports Server (NTRS)

    Huisjen, M. A.

    1978-01-01

    Computer runs were used to determine the effect of mechanical distortions on array pattern performance. Subarray gain data, along with feed-network insertion-loss and insertion-phase data, were combined with Ruze's analysis of random errors to predict the gain of a full array.
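    Ruze's analysis predicts the gain reduction caused by random phase or surface errors of rms ε as G/G0 = exp(-(4πε/λ)²). A minimal sketch of the calculation:

```python
import math

def ruze_gain_loss_db(rms_error, wavelength):
    """Ruze's formula: gain reduction from random surface/phase errors,
    G/G0 = exp(-(4*pi*eps/lambda)**2), returned in dB (a negative number)."""
    g_ratio = math.exp(-(4.0 * math.pi * rms_error / wavelength) ** 2)
    return 10.0 * math.log10(g_ratio)

# e.g. a lambda/50 rms error costs roughly a quarter of a dB of gain
print(round(ruze_gain_loss_db(1.0, 50.0), 3))   # → -0.274
```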

  4. Background sampling and transferability of species distribution model ensembles under climate change

    NASA Astrophysics Data System (ADS)

    Iturbide, Maialen; Bedia, Joaquín; Gutiérrez, José Manuel

    2018-07-01

    Species Distribution Models (SDMs) constitute an important tool to assist decision-making in environmental conservation and planning. A popular application of these models is the projection of species distributions under climate change conditions. Yet a range of methodological SDM factors still limit the transferability of these models, contributing significantly to the overall uncertainty of the resulting projections. An important source of uncertainty often neglected in climate change studies comes from the use of background data (a.k.a. pseudo-absences) for model calibration. Here, we study sensitivity to pseudo-absence sampling as a determinant of SDM stability and transferability under climate change conditions, focusing on European-wide projections of Quercus robur as an illustrative case study. We explore the uncertainty in future projections derived from ten pseudo-absence realizations and three popular SDMs (GLM, Random Forest and MARS). The contribution of the pseudo-absence realization to the uncertainty was higher in peripheral regions and clearly differed among the tested SDMs across the whole study domain, with MARS the most sensitive (projections differing by up to 40% across realizations) and GLM the most stable. As a result, we conclude that parsimonious SDMs are preferable in this context, avoiding complex methods (such as MARS) which may exhibit poor model transferability. Accounting for this new source of SDM-dependent uncertainty is crucial when forming multi-model ensembles to undertake climate change projections.
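    The sampling step whose realization-to-realization variability drives the uncertainty discussed above can be sketched in a few lines: draw background ("pseudo-absence") cells uniformly at random, excluding presence cells. The grid and coordinates below are hypothetical:

```python
import random

def sample_pseudo_absences(presences, grid_w, grid_h, n, seed):
    """Draw n pseudo-absence cells uniformly at random from a grid_w x grid_h
    grid, excluding known presence cells. Each seed gives one 'realization';
    refitting an SDM per realization exposes its sampling sensitivity."""
    rng = random.Random(seed)
    presence_set = set(presences)
    candidates = [(x, y) for x in range(grid_w) for y in range(grid_h)
                  if (x, y) not in presence_set]
    return rng.sample(candidates, n)

presences = [(2, 3), (5, 1), (7, 7)]
background = sample_pseudo_absences(presences, 10, 10, 20, seed=1)
```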

  5. Investigating expectation effects using multiple physiological measures

    PubMed Central

    Siller, Alexander; Ambach, Wolfgang; Vaitl, Dieter

    2015-01-01

    The study aimed at experimentally investigating whether the human body can anticipate future events under improved methodological conditions. Previous studies have reported contradictory results for the phenomenon typically called presentiment. If the positive findings are accurate, they call into doubt our views about human perception, and if they are inaccurate, a plausible conventional explanation might be based on the experimental design of the previous studies, in which expectation due to item sequences was misinterpreted as presentiment. To address these points, we opted to collect several physiological variables, to test different randomization types and to manipulate subjective significance individually. For the latter, we combined a mock crime scenario, in which participants had to steal specific items, with a concealed information test (CIT), in which the participants had to conceal their knowledge when interrogated about items they had stolen or not stolen. We measured electrodermal activity, respiration, finger pulse, heart rate (HR), and reaction times. The participants (n = 154) were assigned randomly to four different groups. Items presented in the CIT were either drawn with replacement (full) or without replacement (pseudo) and were either presented category-wise (cat) or regardless of categories (nocat). To understand how these item sequences influence expectation and modulate physiological reactions, we compared the groups with respect to effect sizes for stolen vs. not stolen items. Group pseudo_cat yielded the highest effect sizes, and pseudo_nocat yielded the lowest. We could not find any evidence of presentiment but did find evidence of physiological correlates of expectation. Due to the design differing fundamentally from previous studies, these findings do not allow for conclusions on the question whether the expectation bias is being confounded with presentiment. PMID:26500600

  6. Secure self-calibrating quantum random-bit generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorentino, M.; Santori, C.; Spillane, S. M.

    2007-03-15

    Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
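    The min-entropy that the tomographic bound targets is H_min = -log2(max_i p_i): n raw samples then yield at most n·H_min distillable random bits. A minimal sketch of the arithmetic:

```python
import math

def min_entropy_bits(probabilities):
    """Min-entropy H_min = -log2(max_i p_i), the worst-case measure that
    determines how many truly random bits can be distilled per raw sample."""
    return -math.log2(max(probabilities))

# a slightly biased bit source: p(0) = 0.6, p(1) = 0.4
h = min_entropy_bits([0.6, 0.4])
print(round(h, 3))          # → 0.737 bits per raw sample
```

    An unbiased source gives exactly 1 bit per sample; any bias lowers the distillable fraction, which is why a certified lower bound on H_min makes the extractor output trustworthy.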

  7. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Low rank approach to computing first and higher order derivatives using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-07-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
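    The rank-probing idea can be sketched without any AD package: evaluate derivatives at a few random inputs (finite differences below stand in for AD) and count the dimension of the subspace they span. The 5-input toy model is hypothetical; its output depends on only two input directions, so the effective rank is 2:

```python
import random

def numerical_rank(rows, tol=1e-8):
    """Rank of a small matrix (list of row lists) via Gaussian elimination
    with partial pivoting; entries below tol are treated as zero."""
    m = [row[:] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        pivot = max(range(rank, len(m)),
                    key=lambda r: abs(m[r][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < tol:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def grad(f, x, h=1e-6):
    """Forward-difference gradient of scalar f at point x (AD stand-in)."""
    fx = f(x)
    return [(f(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h
            for i in range(len(x))]

# hypothetical model: 5 inputs, output depends on only 2 directions
f = lambda x: (x[0] + 2 * x[1]) ** 2 + (x[2] - x[3]) ** 3

random.seed(0)
grads = [grad(f, [random.uniform(-1, 1) for _ in range(5)]) for _ in range(8)]
print(numerical_rank(grads, tol=1e-3))   # effective rank → 2
```

    Once the rank (here 2) is known, derivatives need only be taken with respect to 2 pseudo variables spanning that subspace instead of all 5 inputs.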

  9. Scope of Various Random Number Generators in ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are experimented with in a heuristic based on an ant-system approach to the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is mainly to seek an answer to the controversial question of which generator is best in terms of quality of the result (accuracy) as well as cost of producing the result (time/computational complexity) in a probabilistic/statistical sense.
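    One concrete quasi-random candidate for such a comparison is the Halton (van der Corput) low-discrepancy sequence; a minimal generator follows, with the ant-system wiring left aside:

```python
def halton(index, base):
    """Van der Corput / Halton low-discrepancy value for a 1-based index:
    reflect the base-b digits of the index about the radix point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# first points of the 2-D Halton sequence (coprime bases 2 and 3), a common
# quasi-random replacement for pseudo-random draws in stochastic heuristics
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
print(points[0])   # → (0.5, 0.3333333333333333)
```

    Unlike a pseudo-random stream, consecutive Halton points fill the unit square evenly by construction, which is the property such generator comparisons probe.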

  10. Comparative studies on adsorptive removal of heavy metal ions by biosorbent, bio-char and activated carbon obtained from low cost agro-residue.

    PubMed

    Kırbıyık, Çisem; Pütün, Ayşe Eren; Pütün, Ersan

    2016-01-01

    In this study, Fe(III) and Cr(III) metal ion adsorption processes were carried out with three adsorbents in batch experiments and their adsorption performance was compared. These adsorbents were sesame stalk without pretreatment, bio-char derived from thermal decomposition of biomass, and activated carbon which was obtained from chemical activation of biomass. Scanning electron microscopy and Fourier transform-infrared techniques were used for characterization of adsorbents. The optimum conditions for the adsorption process were obtained by observing the influences of solution pH, adsorbent dosage, initial solution concentration, contact time and temperature. The optimum adsorption efficiencies were determined at pH 2.8 and pH 4.0 for Fe(III) and Cr(III) metal ion solutions, respectively. The experimental data were modelled by different isotherm models and the equilibria were well described by the Langmuir adsorption isotherm model. The pseudo-first-order, pseudo-second-order kinetic, intra-particle diffusion and Elovich models were applied to analyze the kinetic data and to evaluate rate constants. The pseudo-second-order kinetic model gave a better fit than the others. The thermodynamic parameters, such as Gibbs free energy change ΔG°, standard enthalpy change ΔH° and standard entropy change ΔS° were evaluated. The thermodynamic study showed the adsorption was a spontaneous endothermic process.
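    The pseudo-second-order model is commonly fitted in its linearized form t/q_t = 1/(k2*qe^2) + t/qe, so the slope and intercept of t/q_t plotted against t give qe and k2. A sketch with synthetic (not experimental) data:

```python
def fit_pseudo_second_order(t, qt):
    """Linearized pseudo-second-order fit: t/qt = 1/(k2*qe**2) + t/qe.
    Ordinary least squares on (t, t/qt) gives slope = 1/qe and
    intercept = 1/(k2*qe**2); returns (qe, k2)."""
    y = [ti / qi for ti, qi in zip(t, qt)]
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    slope = num / den
    intercept = ybar - slope * tbar
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe ** 2)
    return qe, k2

# synthetic uptake curve generated from qe = 25 mg/g, k2 = 0.004 g/(mg*min)
qe_true, k2_true = 25.0, 0.004
t = [5, 10, 20, 40, 60, 120]
qt = [qe_true ** 2 * k2_true * ti / (1 + qe_true * k2_true * ti) for ti in t]
qe_fit, k2_fit = fit_pseudo_second_order(t, qt)   # recovers 25.0 and 0.004
```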

  11. Comparison of Vertical Soundings and Sidewall Air Temperature Measurements in a Small Alpine Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteman, Charles D.; Eisenbach, Stefan; Pospichal, Bernhard

    2004-11-01

    Tethered balloon soundings from two sites on the floor of a 1-km diameter limestone sinkhole in the Eastern Alps are compared with pseudo-vertical temperature ‘soundings’ from three lines of temperature data loggers on the basin’s northwest, southwest and southeast sidewalls. Under stable nighttime conditions with low background winds, the pseudo-vertical profiles from all three lines were good proxies for free air temperature soundings over the basin center, with a mean nighttime cold temperature bias of about 0.4°C and a standard deviation of 0.4°C. Cold biases were highest in the upper basin, where relatively warm air subsides to replace air that spills out of the basin through the lowest altitude saddle. On a windy night, standard deviations increased to 1-2°C. After sunrise, the varying exposures of the data loggers to sunlight made the pseudo-vertical profiles less useful as proxies for free air soundings. The good correspondence between sidewall and free air temperatures during high static stability conditions suggests that sidewall soundings will prove useful in monitoring temperatures and vertical temperature gradients in the sinkhole. The sidewall soundings can produce more frequent profiles at less cost than tethersondes or rawinsondes, and provide valuable advantages for some types of meteorological analyses.

  12. The standardized EEG electrode array of the IFCN.

    PubMed

    Seeck, Margitta; Koessler, Laurent; Bast, Thomas; Leijten, Frans; Michel, Christoph; Baumgartner, Christoph; He, Bin; Beniczky, Sándor

    2017-10-01

    Standardized EEG electrode positions are essential for both clinical applications and research. The aim of this guideline is to update and expand the unifying nomenclature and standardized positioning for EEG scalp electrodes. Electrode positions were based on 20% and 10% of standardized measurements from anatomical landmarks on the skull. However, standard recordings do not cover the anterior and basal temporal lobes, which is the most frequent source of epileptogenic activity. Here, we propose a basic array of 25 electrodes including the inferior temporal chain, which should be used for all standard clinical recordings. The nomenclature in the basic array is consistent with the 10-10 system. High-density scalp EEG arrays (64-256 electrodes) allow source imaging with even sub-lobar precision. This supplementary exam should be requested whenever necessary, e.g. search for epileptogenic activity in negative standard EEG or for presurgical evaluation. In the near future, nomenclature for high-density electrode arrays beyond the 10-10 system needs to be defined, to allow comparison and standardized recordings across centers. Contrary to the established belief that smaller heads need fewer electrodes, in young children at least as many electrodes as in adults should be applied, owing to smaller skull thickness and the risk of spatial aliasing. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  13. Effects of heart rate variability biofeedback during exposure to fear-provoking stimuli within spider-fearful individuals: study protocol for a randomized controlled trial.

    PubMed

    Schäfer, Sarah K; Ihmig, Frank R; Lara H, Karen A; Neurohr, Frank; Kiefer, Stephan; Staginnus, Marlene; Lass-Hennemann, Johanna; Michael, Tanja

    2018-03-16

    Specific phobias are among the most common anxiety disorders. Exposure therapy is the treatment of choice for specific phobias. However, not all patients respond equally well to it. Hence, current research focuses on therapeutic add-ons to increase and consolidate the effects of exposure therapy. One potential therapeutic add-on is biofeedback to increase heart rate variability (HRV). A recent meta-analysis shows beneficial effects of HRV biofeedback interventions on stress and anxiety symptoms. Therefore, the purpose of the current trial is to evaluate the effects of HRV biofeedback, which is practiced before and utilized during exposure, in spider-fearful individuals. Further, this trial is the first to differentiate between the effects of a HRV biofeedback intervention and those of a low-load working memory (WM) task. Eighty spider-fearful individuals participate in the study. All participants receive a training session in which they practice two tasks (HRV biofeedback and a motor pseudo-biofeedback task or two motor pseudo-biofeedback tasks). Afterwards, they train both tasks at home for 6 days. One week later, during the exposure session, they watch 16 1-min spider video clips. Participants are divided into four groups: group 1 practices the HRV biofeedback and one motor pseudo-task before exposure and utilizes HRV biofeedback during exposure. Group 2 receives the same training, but continues the pseudo-biofeedback task during exposure. Group 3 practices two pseudo-biofeedback tasks and continues one of them during exposure. Group 4 trains in two pseudo-biofeedback tasks and has no additional task during exposure. The primary outcome is fear of spiders (measured by the Fear of Spiders Questionnaire and the Behavioral Approach Test). Secondary outcomes are physiological measures based on electrodermal activity, electrocardiogram and respiration. 
This RCT is the first one to investigate the effects of using a pre-trained HRV biofeedback during exposure in spider-fearful individuals. The study critically contrasts the effects of the biofeedback intervention with those of pseudo-tasks, which also require WM capacity, but which do not have a physiological base. If HRV biofeedback is effective in reducing fear of spiders, it would represent an easy-to-use tool to improve exposure-therapy outcomes. Deutsches Register Klinischer Studien, DRKS00012278 . Registered on 23 May 2017, amendment on 5 October 2017.

  14. Far field beam pattern of one MW combined beam of laser diode array amplifiers for space power transmission

    NASA Technical Reports Server (NTRS)

    Kwon, Jin H.; Lee, Ja H.

    1989-01-01

    The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. The numerical calculation shows that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, relative to a reference of 80 percent receiving efficiency in the center spot. Random phase differences among the laser diodes due to probable manufacturing errors can be tolerated up to about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser-diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.
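    The tolerance to random phase errors can be estimated from the coherent combining efficiency eta = |mean(e^{i*phi})|^2, which for Gaussian phase errors of rms sigma wavelengths approaches exp(-(2*pi*sigma)^2). A Monte-Carlo sketch (20,000 emitters rather than the paper's ~200,000, for speed; this reproduces only the phase-error effect, not the failure or pointing terms):

```python
import cmath, math, random

def combining_efficiency(phase_sigma_wl, n=20000, seed=7):
    """Monte-Carlo on-axis combining efficiency of n emitters with Gaussian
    random phase errors of rms phase_sigma_wl (in wavelengths):
    eta = |mean(exp(i*phi))|**2, approaching exp(-(2*pi*sigma)**2)."""
    rng = random.Random(seed)
    s = sum(cmath.exp(2j * math.pi * rng.gauss(0.0, phase_sigma_wl))
            for _ in range(n))
    return abs(s / n) ** 2

eta = combining_efficiency(0.05)    # lambda/20 rms phase error, eta ≈ 0.91
```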

  15. Wavefront Control and Image Restoration with Less Computing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2010-01-01

    PseudoDiversity is a method of recovering the wavefront in a sparse- or segmented-aperture optical system typified by an interferometer or a telescope equipped with an adaptive primary mirror consisting of controllably movable segments. (PseudoDiversity should not be confused with a radio-antenna-arraying method called pseudodiversity.) As in the cases of other wavefront-recovery methods, the streams of wavefront data generated by means of PseudoDiversity are used as feedback signals for controlling electromechanical actuators of the various segments so as to correct wavefront errors and thereby, for example, obtain a clearer, steadier image of a distant object in the presence of atmospheric turbulence. There are numerous potential applications in astronomy, remote sensing from aircraft and spacecraft, targeting missiles, sighting military targets, and medical imaging (including microscopy) through such intervening media as cells or water. In comparison with prior wavefront-recovery methods used in adaptive optics, PseudoDiversity involves considerably simpler equipment and procedures and less computation. For PseudoDiversity, there is no need to install separate metrological equipment or to use any optomechanical components beyond those that are already parts of the optical system to which the method is applied. In PseudoDiversity, the actuators of a subset of the segments or subapertures are driven to make the segments dither in the piston, tilt, and tip degrees of freedom. Each aperture is dithered at a unique frequency at an amplitude of a half wavelength of light. During the dithering, images on the focal plane are detected and digitized at a rate of at least four samples per dither period. In the processing of the image samples, the use of different dither frequencies makes it possible to determine the separate effects of the various dithered segments or apertures. 
The digitized image-detector outputs are processed in the spatial-frequency (Fourier-transform) domain to obtain measures of the piston, tip, and tilt errors over each segment or subaperture. Once these measures are known, they are fed back to the actuators to correct the errors. In addition, measures of errors that remain after correction by use of the actuators are further utilized in an algorithm in which the image is phase-corrected in the spatial-frequency domain and then transformed back to the spatial domain at each time step and summed with the images from all previous time steps to obtain a final image having a greater signal-to-noise ratio (and, hence, a visual quality) higher than would otherwise be attainable.
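    The frequency-tagging idea (each segment dithered at its own frequency, then separated in the Fourier domain) can be sketched with a toy one-dimensional demodulation; the tone frequencies and amplitudes below are illustrative, standing in for the per-segment piston/tip/tilt signals:

```python
import math

def demodulate(signal, freqs, fs):
    """Recover the amplitude of each dither tone by correlating the sampled
    detector signal with sin/cos at that tone's frequency; a toy stand-in
    for the per-segment demodulation described above."""
    n = len(signal)
    amps = []
    for f in freqs:
        c = sum(s * math.cos(2 * math.pi * f * t / fs)
                for t, s in enumerate(signal))
        d = sum(s * math.sin(2 * math.pi * f * t / fs)
                for t, s in enumerate(signal))
        amps.append(2.0 * math.hypot(c, d) / n)
    return amps

fs, n = 64.0, 256                      # well over 4 samples per dither period
freqs = [4.0, 6.0, 9.0]                # one unique dither frequency per segment
true_amps = [1.0, 0.5, 0.25]
signal = [sum(a * math.sin(2 * math.pi * f * t / fs)
              for a, f in zip(true_amps, freqs)) for t in range(n)]
print([round(a, 3) for a in demodulate(signal, freqs, fs)])  # → [1.0, 0.5, 0.25]
```

    Because each frequency falls on an exact DFT bin over the record length, the tones are orthogonal and each segment's contribution is recovered independently, which is exactly why unique dither frequencies disentangle the segments.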

  16. Pseudo-radar algorithms with two extremely wet months of disdrometer data in the Paris area

    NASA Astrophysics Data System (ADS)

    Gires, A.; Tchiguirinskaia, I.; Schertzer, D.

    2018-05-01

    Disdrometer data collected during the two extremely wet months of May and June 2016 at the Ecole des Ponts ParisTech are used to gain insight into radar algorithms. The rain rate and pseudo-radar quantities (horizontal and vertical reflectivity, specific differential phase shift) are all estimated over several durations with the help of drop size distributions (DSD) collected at 30 s time steps. The pseudo-radar quantities are defined with simplifying hypotheses, in particular on the DSD homogeneity. First, it appears that the parameters of the standard radar relations Zh - R, R - Kdp and R - Zh - Zdr for these pseudo-radar quantities exhibit strong variability between events and even within an event. Second, an innovative methodology is implemented that relies on checking the ability of a given algorithm to reproduce the scale-invariant multifractal behaviour (on scales from 30 s to a few hours) observed on rainfall time series. In this framework, the classical hybrid model (Zh - R for low rain rates and R - Kdp for high ones) performs best, as do local estimates of the radar relations' parameters. However, we emphasise that, owing to the hypotheses on which they rely, these observations cannot be straightforwardly extended to real radar quantities.
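    The hybrid model mentioned above switches between the two power-law relations according to rain intensity. A rough sketch under assumed coefficients (the paper fits its own per event; the Marshall-Palmer-style A, B, the C, D pair, and the 10 mm/h switch point below are only illustrative placeholders):

```python
import numpy as np

# Illustrative power-law coefficients, NOT the paper's fitted values.
A, B = 200.0, 1.6        # Zh = A * R**B  (Marshall-Palmer-like Zh-R)
C, D = 29.7, 0.85        # R  = C * Kdp**D (textbook-style R-Kdp)

def rain_rate_hybrid(zh_dbz, kdp, threshold=10.0):
    """Hybrid estimator: Zh-R below a rain-rate threshold (mm/h),
    R-Kdp above it, mirroring the 'classical hybrid model'."""
    zh_linear = 10.0 ** (np.asarray(zh_dbz) / 10.0)   # dBZ -> mm^6 m^-3
    r_from_z = (zh_linear / A) ** (1.0 / B)
    r_from_kdp = C * np.maximum(kdp, 0.0) ** D
    return np.where(r_from_z < threshold, r_from_z, r_from_kdp)
```

For example, a reflectivity consistent with 5 mm/h stays on the Zh-R branch, while a heavy-rain reflectivity hands the estimate over to the Kdp branch.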

  17. Fermionic dark matter with pseudo-scalar Yukawa interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghorbani, Karim, E-mail: k-ghorbani@araku.ac.ir

    2015-01-01

    We consider a renormalizable extension of the standard model whose fermionic dark matter (DM) candidate interacts with a real singlet pseudo-scalar via a pseudo-scalar Yukawa term, while we assume that the full Lagrangian is CP-conserving at the classical level. When the pseudo-scalar boson develops a non-zero vacuum expectation value, spontaneous CP violation occurs, and this provides a CP-violating interaction of the dark sector with the SM particles through mixing between the Higgs-like boson and the SM-like Higgs boson. This scenario suggests a minimal number of free parameters. Focusing mainly on the indirect detection observables, we calculate the dark matter annihilation cross section and then compute the DM relic density in the range up to m{sub DM} = 300 GeV. We then find viable regions in the parameter space constrained by the observed DM relic abundance as well as the invisible Higgs decay width in the light of the 125 GeV Higgs discovery at the LHC. We find that within the constrained region of the parameter space, there exists a model with dark matter mass m{sub DM} ∼ 38 GeV annihilating predominantly into b quarks, which can explain the Fermi-LAT galactic gamma-ray excess.

  18. Adsorptive Removal of Cadmium (II) from Aqueous Solution by Multi-Carboxylic-Functionalized Silica Gel: Equilibrium, Kinetics and Thermodynamics

    NASA Astrophysics Data System (ADS)

    Li, Min; Meng, Xiaojing; Yuan, Jinhai; Deng, Wenwen; Liang, Xiuke

    2018-01-01

    In the present study, the adsorption behavior of cadmium (II) ion from aqueous solution onto multi-carboxylic-functionalized silica gel (SG-MCF) has been investigated in detail by means of batch and column experiments. Batch experiments were performed to evaluate the effects of various experimental parameters, such as pH value, contact time and initial concentration, on the adsorption capacity for cadmium (II) ion. The kinetic data were analyzed on the basis of the pseudo-first-order and pseudo-second-order kinetic models; the pseudo-second-order model describes the adsorption process better than the pseudo-first-order model. Equilibrium isotherms for the adsorption of cadmium (II) ion were analyzed with the Freundlich and Langmuir isotherm models; the results indicate that the Langmuir model credibly represents the data for cadmium (II) ion adsorption from aqueous solution onto the SG-MCF. Various thermodynamic parameters of the adsorption process, including the free energy of adsorption (ΔG0), the enthalpy of adsorption (ΔH0) and the standard entropy change (ΔS0), were calculated to predict the nature of adsorption. The positive value of the enthalpy change and the negative value of the free energy change indicate that the process is endothermic and spontaneous.
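    The pseudo-second-order model used above is commonly fit through its linearized form t/q_t = 1/(k2·qe²) + t/qe, regressing t/q_t on t. A hedged sketch of that fit (variable names and the synthetic data are ours, not the paper's measurements):

```python
import numpy as np

def fit_pseudo_second_order(t, q):
    """Fit the linearized pseudo-second-order kinetic model
    t/q_t = 1/(k2*qe**2) + t/qe by least squares.
    Returns (qe, k2): equilibrium capacity and rate constant."""
    y = t / q
    slope, intercept = np.polyfit(t, y, 1)   # slope = 1/qe
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe ** 2)
    return qe, k2

# Synthetic check: generate data from known qe, k2 and recover them.
qe_true, k2_true = 25.0, 0.01
t = np.linspace(1.0, 120.0, 30)              # contact time, min
q = (k2_true * qe_true ** 2 * t) / (1.0 + k2_true * qe_true * t)
qe_fit, k2_fit = fit_pseudo_second_order(t, q)
```

In practice the quality of this linear fit (R² of t/q_t versus t) is what studies like this one use to conclude that the pseudo-second-order model describes the data better.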

  19. A perturbation method to the tent map based on Lyapunov exponent and its application

    NASA Astrophysics Data System (ADS)

    Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu

    2015-10-01

    Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic map, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves the good cryptographic properties of a pseudo-random sequence. It thus weakens the strong correlation caused by finite precision and effectively compensates for the dynamics degradation of digital chaotic systems. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from the Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
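    The feedback loop described above (tent map output, through a Chebyshev map, back into the tent parameter) can be sketched in a few lines. All parameter values, the acceptance range, and the rescaling choices below are our illustrative assumptions, not the paper's tuned settings:

```python
import math

def perturbed_tent_sequence(x0=0.37, a0=0.499, k=4, n=1000,
                            lo=0.2, hi=0.8):
    """Sketch of the scheme: the tent-map output drives a Chebyshev
    map, and when the Chebyshev value (rescaled to (0,1)) lands in
    [lo, hi] it replaces the tent parameter, so `a` keeps changing."""
    x, a = x0, a0
    out = []
    for _ in range(n):
        # Tent map with parameter a in (0, 1)
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        out.append(x)
        # Chebyshev map T_k on [-1, 1], then rescale to [0, 1]
        y = math.cos(k * math.acos(2.0 * x - 1.0))
        y01 = 0.5 * (y + 1.0)
        if lo <= y01 <= hi:
            a = y01                      # dynamic parameter perturbation
    return out

seq = perturbed_tent_sequence()
```

The acceptance window keeps the perturbed parameter away from the degenerate endpoints 0 and 1, which is one plausible reading of the paper's "certain range" condition.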

  20. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    NASA Astrophysics Data System (ADS)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

    RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  1. Fast physical-random number generation using laser diode's frequency noise: influence of frequency discriminator

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo

    2018-02-01

    Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever-more-advanced attempts to breach a system's protections, cryptologists have devised methods that are considered virtually impossible to decode because they use a limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as frequency discriminator, the examination pass rates were determined using the NIST FIPS 140-2 test at each bit, and the random number generation (RNG) speed was noted.
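    The FIPS 140-2 battery referenced above operates on 20,000-bit samples; its simplest member, the monobit test, just counts ones against fixed bounds. A minimal sketch (the seeded software RNG below is only a stand-in for the physical laser-noise source):

```python
import random

def fips140_2_monobit(bits):
    """FIPS 140-2 monobit test: a 20,000-bit sample passes when the
    number of ones lies strictly between 9725 and 10275."""
    assert len(bits) == 20000
    ones = sum(bits)
    return 9725 < ones < 10275

rng = random.Random(1234)                    # stand-in for the physical source
bits = [rng.getrandbits(1) for _ in range(20000)]
ok = fips140_2_monobit(bits)
```

The full battery adds poker, runs, and long-run tests over the same 20,000-bit block; a generator must pass all of them at each sampled position, which is what the abstract's per-bit pass rate measures.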

  2. A first-passage scheme for determination of overall rate constants for non-diffusion-limited suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Shih-Yuan; Yen, Yi-Ming

    2002-02-01

    A first-passage scheme is devised to determine the overall rate constant of suspensions under the non-diffusion-limited condition. The original first-passage scheme, developed for diffusion-limited processes, is modified to account for the finite incorporation rate at the inclusion surface by using a concept of the nonzero survival probability of the diffusing entity at entity-inclusion encounters. This nonzero survival probability is obtained by solving a relevant boundary value problem. The new first-passage scheme is validated by an excellent agreement between overall rate constant results from the present development and from an accurate boundary collocation calculation for the three common spherical arrays [J. Chem. Phys. 109, 4985 (1998)], namely simple cubic, body-centered cubic, and face-centered cubic arrays, for a wide range of P and f. Here, P is a dimensionless quantity characterizing the relative rate of diffusion versus surface incorporation, and f is the volume fraction of the inclusion. The scheme is further applied to random spherical suspensions and to investigate the effect of inclusion coagulation on overall rate constants. It is found that randomness in inclusion arrangement tends to lower the overall rate constant for f up to the near close-packing value of the regular arrays because of the inclusion screening effect. This screening effect becomes stronger for regular arrays when f is near and above the close-packing value of the regular arrays, and consequently the overall rate constant of the random array exceeds that of the regular array. Inclusion coagulation also induces the inclusion screening effect, and leads to lower overall rate constants.
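    The key modification, letting the walker survive a surface encounter with some probability instead of being absorbed on first contact, can be illustrated with a deliberately simple 1-D walker. This toy is ours; the paper works with 3-D spherical suspensions and derives the survival probability from a boundary value problem:

```python
import random

def partial_absorption_walk(p_react=0.3, x0=5, wall=10, seed=7, trials=2000):
    """Toy 1-D illustration of the modified first-passage idea: a walker
    on sites 0..wall reflects at the outer wall; on touching the
    inclusion surface (site 0) it reacts with probability p_react
    (modeling the finite incorporation rate), otherwise it survives the
    encounter and keeps diffusing.  Returns the mean reaction time."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, steps = x0, 0
        while True:
            x += rng.choice((-1, 1))
            steps += 1
            if x > wall:
                x = wall - 1                 # bounce off outer boundary
            elif x <= 0:
                if rng.random() < p_react:
                    break                    # reaction: walker absorbed
                x = 1                        # survives the encounter
        total += steps
    return total / trials
```

Lowering `p_react` (slower surface incorporation, smaller P in the paper's notation) forces more encounters before reaction and lengthens the mean reaction time, which is the qualitative behavior the modified scheme captures.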

  3. Wavelet images and Chou's pseudo amino acid composition for protein classification.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2012-08-01

    The last decade has seen an explosion in the collection of protein data. To actualize the potential offered by this wealth of data, it is important to develop machine systems capable of classifying and extracting features from proteins. Reliable machine systems for protein classification offer many benefits, including the promise of finding novel drugs and vaccines. In developing our system, we analyze and compare several feature extraction methods used in protein classification that are based on the calculation of texture descriptors starting from a wavelet representation of the protein. We then feed these texture-based representations of the protein into an AdaBoost ensemble of neural networks or a support vector machine classifier. In addition, we perform experiments that combine our feature extraction methods with a standard method based on Chou's pseudo amino acid composition. Using several datasets, we show that our best approach outperforms standard methods. The Matlab code of the proposed protein descriptors is available at http://bias.csr.unibo.it/nanni/wave.rar.

  4. Sonography of the chest using linear-array versus sector transducers: Correlation with auscultation, chest radiography, and computed tomography.

    PubMed

    Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli

    2016-07-08

    The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe, in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR) using thoracic CT as a gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:383-389, 2016.

  5. MethLAB: a graphical user interface package for the analysis of array-based DNA methylation data.

    PubMed

    Kilaru, Varun; Barfield, Richard T; Schroeder, James W; Smith, Alicia K; Conneely, Karen N

    2012-03-01

    Recent evidence suggests that DNA methylation changes may underlie numerous complex traits and diseases. The advent of commercial, array-based methods to interrogate DNA methylation has led to a profusion of epigenetic studies in the literature. Array-based methods, such as the popular Illumina GoldenGate and Infinium platforms, estimate the proportion of DNA methylated at single-base resolution for thousands of CpG sites across the genome. These arrays generate enormous amounts of data, but few software resources exist for efficient and flexible analysis of these data. We developed a software package called MethLAB (http://genetics.emory.edu/conneely/MethLAB) using R, an open source statistical language that can be edited to suit the needs of the user. MethLAB features a graphical user interface (GUI) with a menu-driven format designed to efficiently read in and manipulate array-based methylation data in a user-friendly manner. MethLAB tests for association between methylation and relevant phenotypes by fitting a separate linear model for each CpG site. These models can incorporate both continuous and categorical phenotypes and covariates, as well as fixed or random batch or chip effects. MethLAB accounts for multiple testing by controlling the false discovery rate (FDR) at a user-specified level. Standard output includes a spreadsheet-ready text file and an array of publication-quality figures. Considering the growing interest in and availability of DNA methylation data, there is a great need for user-friendly open source analytical tools. With MethLAB, we present a timely resource that will allow users with no programming experience to implement flexible and powerful analyses of DNA methylation data.
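    MethLAB's core analysis, one linear model per CpG site followed by FDR control, can be approximated in a few lines. A sketch under our own assumptions (plain NumPy, a normal approximation for the p-values, and the Benjamini-Hochberg step-up rule, which is the usual way "controlling the FDR" is implemented):

```python
from math import erf, sqrt

import numpy as np

def per_site_association(meth, pheno):
    """Fit a separate simple linear model (methylation ~ phenotype) at
    each CpG site and return two-sided p-values for the slope.
    meth: (n_samples, n_sites); pheno: (n_samples,)."""
    n, _ = meth.shape
    X = np.column_stack([np.ones(n), pheno])
    beta, *_ = np.linalg.lstsq(X, meth, rcond=None)      # (2, n_sites)
    resid = meth - X @ beta
    sigma2 = (resid ** 2).sum(axis=0) / (n - 2)
    xtx_inv = np.linalg.inv(X.T @ X)
    se = np.sqrt(sigma2 * xtx_inv[1, 1])
    t = beta[1] / se
    # Normal approximation to the t distribution keeps this stdlib-free.
    return np.array([1.0 - erf(abs(ti) / sqrt(2.0)) for ti in t])

def benjamini_hochberg(p, fdr=0.05):
    """Boolean mask of discoveries at the given FDR level (step-up rule)."""
    p = np.asarray(p)
    order = np.argsort(p)
    m = len(p)
    thresh = fdr * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# Synthetic demo: site 0 truly associated with the phenotype, rest noise.
rng = np.random.default_rng(0)
pheno = rng.normal(size=200)
meth = 0.1 * rng.normal(size=(200, 5))
meth[:, 0] += 2.0 * pheno
pvals = per_site_association(meth, pheno)
hits = benjamini_hochberg(pvals, fdr=0.05)
```

MethLAB's models additionally handle covariates and batch effects; extending the design matrix `X` with extra columns is the natural generalization.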

  6. Analysis of multivariate longitudinal kidney function outcomes using generalized linear mixed models.

    PubMed

    Jaffa, Miran A; Gebregziabher, Mulugeta; Jaffa, Ayad A

    2015-06-14

    Renal transplant patients are mandated to have continuous assessment of their kidney function over time to monitor disease progression determined by changes in blood urea nitrogen (BUN), serum creatinine (Cr), and estimated glomerular filtration rate (eGFR). Multivariate analysis of these outcomes that aims at identifying the differential factors that affect disease progression is of great clinical significance. Thus our study aims at demonstrating the application of different joint modeling approaches with random coefficients on a cohort of renal transplant patients and presenting a comparison of their performance through a pseudo-simulation study. The objective of this comparison is to identify the model with the best performance and to determine whether accuracy compensates for complexity in the different multivariate joint models. We propose a novel application of multivariate Generalized Linear Mixed Models (mGLMM) to analyze multiple longitudinal kidney function outcomes collected over 3 years on a cohort of 110 renal transplantation patients. The correlated outcomes BUN, Cr, and eGFR and the effects of various covariates, such as patient's gender, age and race, on these markers were determined holistically using different mGLMMs. The performance of the various mGLMMs that encompass shared random intercept (SHRI), shared random intercept and slope (SHRIS), separate random intercept (SPRI) and separate random intercept and slope (SPRIS) models was assessed to identify the one that has the best fit and most accurate estimates. A bootstrap pseudo-simulation study was conducted to gauge the tradeoff between the complexity and accuracy of the models. Accuracy was determined using two measures: the mean of the differences between the estimates from the bootstrapped datasets and the true beta obtained from the application of each model on the renal dataset, and the mean of the squares of these differences.
The results showed that SPRI provided the most accurate estimates and did not exhibit any computational or convergence problems. Higher accuracy was demonstrated when the level of complexity increased from the shared random coefficient models to the separate random coefficient alternatives, with SPRI having the best fit and most accurate estimates.

  7. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  8. Performance measurements of the first RAID prototype

    NASA Technical Reports Server (NTRS)

    Chervenak, Ann L.

    1990-01-01

    The performance of RAID the First, a prototype Redundant Array of Inexpensive Disks (RAID), is examined. A hierarchy of bottlenecks was discovered in the system that limits overall performance. The most serious is memory system contention on the Sun 4/280 host CPU, which limits array bandwidth to 2.3 MBytes/sec. The array performs more successfully on small random operations, achieving nearly 300 I/Os per second before the Sun 4/280 becomes CPU limited. Other bottlenecks in the system are the VME backplane, bandwidth on the disk controller, and overheads associated with the SCSI protocol. All are examined in detail. The main conclusion is that to achieve the potential bandwidth of arrays, more powerful CPUs alone will not suffice. Just as important are adequate host memory bandwidth and support for high bandwidth on disk controllers. Current disk controllers are more often designed to achieve large numbers of small random operations rather than high bandwidth. Operating systems also need to change to support high bandwidth from disk arrays. In particular, they should transfer data in larger blocks and should support asynchronous I/O to improve sequential write performance.

  9. Narrow linewidth short cavity Brillouin random laser based on Bragg grating array fiber and dynamical population inversion gratings

    NASA Astrophysics Data System (ADS)

    Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.

    2018-06-01

    We report on random lasing observed with a 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical of lasing. An additional piece of Er-doped fiber included in the laser cavity enables stable laser generation with a linewidth narrower than 10 kHz.

  10. Micropillar arrays as a high-throughput screening platform for therapeutics in multiple sclerosis.

    PubMed

    Mei, Feng; Fancy, Stephen P J; Shen, Yun-An A; Niu, Jianqin; Zhao, Chao; Presley, Bryan; Miao, Edna; Lee, Seonok; Mayoral, Sonia R; Redmond, Stephanie A; Etxeberria, Ainhoa; Xiao, Lan; Franklin, Robin J M; Green, Ari; Hauser, Stephen L; Chan, Jonah R

    2014-08-01

    Functional screening for compounds that promote remyelination represents a major hurdle in the development of rational therapeutics for multiple sclerosis. Screening for remyelination is problematic, as myelination requires the presence of axons. Standard methods do not resolve cell-autonomous effects and are not suited for high-throughput formats. Here we describe a binary indicant for myelination using micropillar arrays (BIMA). Engineered with conical dimensions, micropillars permit resolution of the extent and length of membrane wrapping from a single two-dimensional image. Confocal imaging acquired from the base to the tip of the pillars allows for detection of concentric wrapping observed as 'rings' of myelin. The platform is formatted in 96-well plates, amenable to semiautomated random acquisition and automated detection and quantification. Upon screening 1,000 bioactive molecules, we identified a cluster of antimuscarinic compounds that enhance oligodendrocyte differentiation and remyelination. Our findings demonstrate a new high-throughput screening platform for potential regenerative therapeutics in multiple sclerosis.

  11. Parrondo Games with Two-Dimensional Spatial Dependence

    NASA Astrophysics Data System (ADS)

    Ethier, S. N.; Lee, Jiyeon

    Parrondo games with one-dimensional (1D) spatial dependence were introduced by Toral and extended to the two-dimensional (2D) setting by Mihailović and Rajković. MN players are arranged in an M × N array. There are three games, the fair, spatially independent game A, the spatially dependent game B, and game C, which is a random mixture or non-random pattern of games A and B. Of interest is μB (or μC), the mean profit per turn at equilibrium to the set of MN players playing game B (or game C). Game A is fair, so if μB ≤ 0 and μC > 0, then we say the Parrondo effect is present. We obtain a strong law of large numbers (SLLN) and a central limit theorem (CLT) for the sequence of profits of the set of MN players playing game B (or game C). The mean and variance parameters are computable for small arrays and can be simulated otherwise. The SLLN justifies the use of simulation to estimate the mean. The CLT permits evaluation of the standard error of a simulated estimate. We investigate the presence of the Parrondo effect for both small arrays and large ones. One of the findings of Mihailović and Rajković was that “capital evolution depends to a large degree on the lattice size.” We provide evidence that this conclusion is partly incorrect. A paradoxical feature of the 2D game B that does not appear in the 1D setting is that, for fixed M and N, the mean function μB is not necessarily a monotone function of its parameters.
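    For readers unfamiliar with the effect itself, the original capital-dependent Parrondo games (Harmer and Abbott's formulation, not the spatially dependent 2-D games analyzed in this paper) exhibit the same paradox and are easy to simulate; game B alone loses while the random mixture C wins:

```python
import random

def simulate(game, steps=1_000_000, eps=0.005, seed=42):
    """Mean profit per turn for the original capital-dependent
    Parrondo games, a simpler stand-in for the spatially dependent
    2-D games analyzed in the paper."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        g = rng.choice("AB") if game == "C" else game
        if g == "A":
            p = 0.5 - eps                       # fair coin minus bias
        elif capital % 3 == 0:
            p = 0.10 - eps                      # bad coin of game B
        else:
            p = 0.75 - eps                      # good coin of game B
        capital += 1 if rng.random() < p else -1
    return capital / steps

mu_B = simulate("B")    # losing on its own
mu_C = simulate("C")    # random A/B mixture: winning
```

This is exactly the kind of simulation the abstract's SLLN justifies for estimating the mean, with the CLT supplying a standard error for the estimate.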

  12. Breakdown of the coherence effects and Fermi liquid behavior in YbAl3 nanoparticles

    NASA Astrophysics Data System (ADS)

    Echevarria-Bonet, C.; Rojas, D. P.; Espeso, J. I.; Rodríguez Fernández, J.; Rodríguez Fernández, L.; Bauer, E.; Burdin, S.; Magalhães, S. G.; Fernández Barquín, L.

    2018-04-01

    A change in the Kondo lattice behavior of bulk YbAl3 has been observed when the alloy is shaped into nanoparticles (≈12 nm). Measurements of the electrical resistivity show inhibited coherence effects and deviation from the standard Fermi liquid behavior (T 2-dependence). These results are interpreted as being due to the effect of the disruption of the periodicity of the array of Kondo ions provoked by the size reduction process. Additionally, the ensemble of randomly placed nanoparticles also triggers an extra source of electronic scattering at very low temperatures (≈15 K) due to quantum interference effects.

  13. Higgs boson as a top-mode pseudo-Nambu-Goldstone boson

    NASA Astrophysics Data System (ADS)

    Fukano, Hidenori S.; Kurachi, Masafumi; Matsuzaki, Shinya; Yamawaki, Koichi

    2014-09-01

    In the spirit of the top-quark condensation, we propose a model which has a naturally light composite Higgs boson, "tHiggs" (ht0), to be identified with the 126 GeV Higgs discovered at the LHC. The tHiggs, a bound state of the top quark and its flavor (vectorlike) partner, emerges as a pseudo-Nambu-Goldstone boson (NGB), a "top-mode pseudo-Nambu-Goldstone boson," together with the exact NGBs to be absorbed into the W and Z bosons as well as another (heavier) top-mode pseudo-Nambu-Goldstone boson (a CP-odd composite scalar, At0). Those five composite (exact/pseudo-) NGBs are dynamically produced simultaneously by a single supercritical four-fermion interaction having U(3)×U(1) symmetry which includes the electroweak symmetry, where the vacuum is aligned by a small explicit breaking term so as to break the symmetry down to a subgroup, U(2)×U(1)', in a way not to retain the electroweak symmetry, in sharp contrast to the little Higgs models. The explicit breaking term for the vacuum alignment gives rise to a mass of the tHiggs, which is protected by the symmetry and hence naturally controlled against radiative corrections. A realistic top-quark mass is easily realized, similarly to the top-seesaw mechanism, by introducing an extra (subcritical) four-fermion coupling which explicitly breaks the residual U(2)'×U(1)' symmetry, with U(2)' being an extra symmetry besides the above U(3)L×U(1). We present a phenomenological Lagrangian of the top-mode pseudo-Nambu-Goldstone bosons along with the Standard Model particles, which will be useful for the study of the collider phenomenology. The coupling property of the tHiggs is shown to be consistent with the currently available data reported from the LHC. Several phenomenological consequences and constraints from experiments are also addressed.

  14. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    PubMed

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUVmax was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
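    The figure of merit above, RMSE over per-lesion percentage quantification errors, is simple arithmetic; a minimal sketch (the error values below are made up for illustration, not the study's data):

```python
import math

def rmse(errors_percent):
    """Root-mean-squared error over per-lesion percentage errors,
    the metric used to compare attenuation-correction maps."""
    return math.sqrt(sum(e * e for e in errors_percent) / len(errors_percent))

# Symmetric +/-3% errors give an RMSE of 3%.
val = rmse([3.0, -3.0, 3.0, -3.0])
```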

  15. High resolution imaging of the Venus night side using a Rockwell 128x128 HgCdTe array

    NASA Technical Reports Server (NTRS)

    Hodapp, K.-W.; Sinton, W.; Ragent, B.; Allen, D.

    1989-01-01

    The University of Hawaii operates an infrared camera with a 128x128 HgCdTe detector array on loan from JPL's High Resolution Imaging Spectrometer (HIRIS) project. The characteristics of this camera system are discussed. The infrared camera was used to obtain images of the night side of Venus prior to and after inferior conjunction in 1988. The images confirm Allen and Crawford's (1984) discovery of bright features on the dark hemisphere of Venus visible in the H and K bands. Our images of these features are the best obtained to date. Researchers derive a pseudo rotation period of 6.5 days for these features and 1.74-micron brightness temperatures between 425 K and 480 K. The features are produced by nonuniform absorption in the middle cloud layer (47 to 57 km altitude) of thermal radiation from the lower Venus atmosphere (20 to 30 km altitude). A more detailed analysis of the data is in progress.

  16. A programmable systolic array correlator as a trigger processor for electron pairs in rich (ring image Cherenkov) counters

    NASA Astrophysics Data System (ADS)

    Männer, R.

    1989-12-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128 x 128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows circles of closed electron pairs to be rejected. The trigger decision is generated if a pseudo adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8 x 8 PEs is being developed using a VENUS design system and will be produced in 2-μm CMOS technology.
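    The correlate-then-peak pipeline described above is straightforward to sketch in software: correlate the binary detector image with an annulus of the known radius, then take the maximum of the correlation map as the circle center. A toy sketch of that idea (our own kernel construction and image, not the hardware algorithm's exact arithmetic):

```python
import numpy as np

def ring_kernel(radius, width=1.0, size=None):
    """Binary annulus used to correlate the detector image with the
    expected circle circumference."""
    size = size or 2 * radius + 3
    c = size // 2
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - c, y - c)
    return (np.abs(r - radius) <= width / 2).astype(float)

def correlate_and_peak(image, kernel):
    """Brute-force 'valid' correlation followed by peak detection,
    mimicking the PE array's correlate-then-peak pipeline."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    peak = np.unravel_index(np.argmax(out), out.shape)
    return out, peak

# Place one ring in a 32x32 image and recover its position.
img = np.zeros((32, 32))
k = ring_kernel(radius=5)
kh = k.shape[0]
img[10:10 + kh, 12:12 + kh] += k          # ring placed at offset (10, 12)
out, peak = correlate_and_peak(img, k)    # peak lands at that offset
```

The hardware additionally correlates with a filled disc to reject unwanted circle classes and counts surviving peaks for the trigger; both are further correlation and threshold passes over the same map.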

  17. MethLAB

    PubMed Central

    Kilaru, Varun; Barfield, Richard T; Schroeder, James W; Smith, Alicia K

    2012-01-01

    Recent evidence suggests that DNA methylation changes may underlie numerous complex traits and diseases. The advent of commercial, array-based methods to interrogate DNA methylation has led to a profusion of epigenetic studies in the literature. Array-based methods, such as the popular Illumina GoldenGate and Infinium platforms, estimate the proportion of DNA methylated at single-base resolution for thousands of CpG sites across the genome. These arrays generate enormous amounts of data, but few software resources exist for efficient and flexible analysis of these data. We developed a software package called MethLAB (http://genetics.emory.edu/conneely/MethLAB) using R, an open source statistical language that can be edited to suit the needs of the user. MethLAB features a graphical user interface (GUI) with a menu-driven format designed to efficiently read in and manipulate array-based methylation data in a user-friendly manner. MethLAB tests for association between methylation and relevant phenotypes by fitting a separate linear model for each CpG site. These models can incorporate both continuous and categorical phenotypes and covariates, as well as fixed or random batch or chip effects. MethLAB accounts for multiple testing by controlling the false discovery rate (FDR) at a user-specified level. Standard output includes a spreadsheet-ready text file and an array of publication-quality figures. Considering the growing interest in and availability of DNA methylation data, there is a great need for user-friendly open source analytical tools. With MethLAB, we present a timely resource that will allow users with no programming experience to implement flexible and powerful analyses of DNA methylation data. PMID:22430798
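
    The FDR-control step described above can be sketched independently of the GUI. Below is a minimal Benjamini-Hochberg step-up procedure in plain Python; the per-CpG p-values are hypothetical, standing in for the output of one linear model per site.

```python
def bh_fdr(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up: return indices of sites whose
    p-values are significant at FDR level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank r with p_(r) <= (r/m) * alpha
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])  # reject the k smallest p-values

# hypothetical per-CpG-site association p-values
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(bh_fdr(pvals, alpha=0.05))  # [0, 1]
```

    Only the two smallest p-values clear their rank-scaled thresholds (0.005 and 0.010), so sites 0 and 1 are declared significant at FDR 0.05.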

  18. Measuring and Modeling the Growth Dynamics of Self-Catalyzed GaP Nanowire Arrays.

    PubMed

    Oehler, Fabrice; Cattoni, Andrea; Scaccabarozzi, Andrea; Patriarche, Gilles; Glas, Frank; Harmand, Jean-Christophe

    2018-02-14

    The bottom-up fabrication of regular nanowire (NW) arrays on a masked substrate is technologically relevant, but the growth dynamics are rather complex due to the superposition of severe shadowing effects that vary with array pitch, NW diameter, NW height, and growth duration. By inserting GaAsP marker layers at regular time intervals during the growth of a self-catalyzed GaP NW array, we are able to retrieve precisely the time evolution of the diameter and height of a single NW. We then propose a simple numerical scheme which fully computes the shadowing effects at play in infinite arrays of NWs. By comparing the simulated and experimental results, we infer that re-emission of Ga from the mask is necessary to sustain the NW growth, while Ga migration on the mask must be negligible. When compared to random cosine or random uniform re-emission from the mask, the simple case of specular reflection on the mask gives the most accurate account of the Ga balance during the growth.

  19. Hiding message into DNA sequence through DNA coding and chaotic maps.

    PubMed

    Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman

    2014-09-01

    The paper proposes an improved reversible substitution method to hide data in a deoxyribonucleic acid (DNA) sequence. Four measures are taken to enhance robustness and enlarge the hiding capacity: encoding the secret message by DNA coding, encrypting it with a pseudo-random sequence, generating the relative hiding locations with a piecewise linear chaotic map, and embedding the encoded and encrypted message into a randomly selected DNA sequence using the complementary rule. The key space and the hiding capacity are analyzed. Experimental results indicate that the proposed method has a better performance than the competing methods with respect to robustness and capacity.
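
    A piecewise linear chaotic map (PWLCM) of the kind named above can serve as the pseudo-random source. The following is a minimal sketch; the seed x0 and control parameter p are illustrative stand-ins for the key, not values from the paper.

```python
def pwlcm(x, p):
    """Piecewise linear chaotic map on [0, 1] with control parameter 0 < p < 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # the map is symmetric about x = 0.5

def keystream(x0, p, n, burn_in=100):
    """Derive n pseudo-random bits from the chaotic orbit; (x0, p) act as the key."""
    x, bits = x0, []
    for i in range(burn_in + n):
        x = pwlcm(x, p)
        if i >= burn_in:
            bits.append(1 if x >= 0.5 else 0)
    return bits

stream = keystream(x0=0.27, p=0.3, n=16)
print(stream)  # 16 key-dependent pseudo-random bits
```

    The same (x0, p) key reproduces the same keystream, which is what makes the substitution reversible for the legitimate receiver.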

  20. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    DOEpatents

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
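
    The "randomly or pseudo-randomly designated" subset of pixel locations can be sketched as a reproducible scan plan. This is a toy illustration of the designation step only; the patent's reconstruction step (e.g., a compressed-sensing recovery) is omitted, and all parameter values are hypothetical.

```python
import random

def sample_pixel_locations(rows, cols, fraction, seed=0):
    """Pseudo-randomly designate a subset of pixel locations for the
    electron beam or probe to visit (undersampled acquisition)."""
    rng = random.Random(seed)  # fixed seed -> reproducible scan plan
    all_pixels = [(r, c) for r in range(rows) for c in range(cols)]
    k = int(fraction * len(all_pixels))
    return rng.sample(all_pixels, k)  # k distinct locations

plan = sample_pixel_locations(64, 64, fraction=0.2)
print(len(plan), len(set(plan)))  # 819 819: distinct locations out of 4096
```

    In practice the actually visited locations can deviate from the plan, which is why the claims separately recite determining the actual pixel locations before reconstruction.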

  1. Ring correlations in random networks.

    PubMed

    Sadjadi, Mahdi; Thorpe, M F

    2016-12-01

    We examine the correlations between rings in random network glasses in two dimensions as a function of their separation. Initially, we use the topological separation (measured by the number of intervening rings), but this leads to pseudo-long-range correlations due to a lack of topological charge neutrality in the shells surrounding a central ring. This effect is associated with the noncircular nature of the shells. It is, therefore, necessary to use the geometrical distance between ring centers. Hence we find a generalization of the Aboav-Weaire law out to larger distances, with the correlations between rings decaying away when two rings are more than about three rings apart.

  2. Teaching Psychopathology in a Galaxy Far, Far Away: The Light Side of the Force.

    PubMed

    Friedman, Susan Hatters; Hall, Ryan C W

    2015-12-01

    Star Wars films are among the top box office hits of all time. The films have been popular internationally for almost 40 years. As such, both trainees and attending psychiatrists are likely to be aware of them. This article highlights the vast array of psychopathology in Star Wars films which can be useful in teaching, even when the characters are considered the "good guys". Examples include histrionic, obsessive-compulsive, and dependent personality traits; perinatal psychiatric disorders; prodromal schizophrenia; pseudo-dementia; frontal lobe lesions; pathological gambling; and even malingering. As such, Star Wars has tremendous potential to teach psychiatric trainees about mental health issues.

  3. SNPConvert: SNP Array Standardization and Integration in Livestock Species.

    PubMed

    Nicolazzi, Ezequiel Luis; Marras, Gabriele; Stella, Alessandra

    2016-06-09

    One of the main advantages of single nucleotide polymorphism (SNP) array technology is providing genotype calls for a specific number of SNP markers at a relatively low cost. Since its first application in animal genetics, the number of available SNP arrays for each species has been constantly increasing. However, in contrast to whole genome sequence data analysis, SNP array data do not have a common set of file formats or coding conventions for allele calling. Therefore, the standardization and integration of SNP array data from multiple sources have become an obstacle, especially for users with basic or no programming skills. Here, we describe the difficulties related to handling SNP array data, focusing on file formats, SNP allele coding, and mapping. We also present the SNPConvert suite, a multi-platform, open-source, and user-friendly set of tools to overcome these issues. This tool, which can be integrated with open-source and open-access tools already available, is a first step towards an integrated system to standardize and integrate any type of raw SNP array data. The tool is available at: https://github.com/nicolazzie/SNPConvert.git.

  4. C-MOS bulk metal design handbook. [LSI standard cell (circuits)

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1977-01-01

    The LSI standard cell array technique was used in the fabrication of more than 20 CMOS custom arrays. This technique consists of a series of computer programs and design automation techniques referred to as the Computer Aided Design And Test (CADAT) system that automatically translate a partitioned logic diagram into a set of instructions for driving an automatic plotter which generates precision mask artwork for complex LSI arrays of CMOS standard cells. The standard cell concept for producing LSI arrays begins with the design, layout, and validation of a group of custom circuits called standard cells. Once validated, these cells are given identification or pattern numbers and are permanently stored. To use one of these cells in a logic design, the user calls for the desired cell by pattern number. The Place, Route in Two Dimension (PR2D) computer program is then used to automatically generate the metalization and/or tunnels to interconnect the standard cells into the required function. Data sheets that describe the function, artwork, and performance of each of the standard cells, the general procedure for implementation of logic in CMOS standard cells, and additional detailed design information are presented.

  5. A New Method for Generating Probability Tables in the Unresolved Resonance Region

    DOE PAGES

    Holcomb, Andrew M.; Leal, Luiz C.; Rahnema, Farzad; ...

    2017-04-18

    A new method for constructing probability tables in the unresolved resonance region (URR) has been developed. This new methodology is an extensive modification of the single-level Breit-Wigner (SLBW) pseudo-resonance pair sequence method commonly used to generate probability tables in the URR. The new method uses a Monte Carlo process to generate many pseudo-resonance sequences by first sampling the average resonance parameter data in the URR and then converting the sampled resonance parameters to the more robust R-matrix limited (RML) format. Furthermore, for each sampled set of pseudo-resonance sequences, the temperature-dependent cross sections are reconstructed on a small grid around the energy of reference using the Reich-Moore formalism and the Leal-Hwang Doppler broadening methodology. We then use the effective cross sections calculated at the energies of reference to construct probability tables in the URR. The RML cross-section reconstruction algorithm has been rigorously tested for a variety of isotopes, including 16O, 19F, 35Cl, 56Fe, 63Cu, and 65Cu. The new URR method also produced normalized cross-section factor probability tables for 238U that were found to be in agreement with current standards. The modified 238U probability tables were shown to produce results in excellent agreement with several standard benchmarks, including the IEU-MET-FAST-007 (BIG TEN), IEU-MET-FAST-003, and IEU-COMP-FAST-004 benchmarks.

  6. Mapping Topographic Structure in White Matter Pathways with Level Set Trees

    PubMed Central

    Kent, Brian P.; Rinaldo, Alessandro; Yeh, Fang-Cheng; Verstynen, Timothy

    2014-01-01

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees–which provide a concise representation of the hierarchical mode structure of probability density functions–offer a statistically-principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output. PMID:24714673

  7. A new communications technique for the nonvocal person, using the Apple II Computer.

    PubMed

    Seamone, W

    1982-01-01

    The purpose of this paper is to describe a technique for nonvocal personal communication for the severely handicapped person, using the Apple II computer system and standard commercially available software diskettes (Visi-Calc). The user's input in a pseudo-Morse code is generated via minute chin motions or limited finger motions applied to a suitably configured two-switch device, and input via the JHU/APL Morse code interface card. The commands and features of the program's row-column matrix, originally intended and widely used for financial management, are used here to call up and modify a large array of stored sentences which can be useful in personal communication. It is not known at this time if the system is in fact cost-effective for the sole purpose of nonvocal communication, since system tradeoff studies have not been made relative to other techniques. However, in some instances an Apple computer may be already available for other purposes at the institution or in the home, and the system described could simply be another utilization of that personal computer. In any case, the system clearly does not meet the requirement of portability. No special components (except for the JHU/APL Morse interface card) and no special programming experience are required to duplicate the communications technique described.

  8. The MoEDAL Experiment at the LHC - a New Light on the Terascale Frontier

    NASA Astrophysics Data System (ADS)

    Pinfold, J. L.

    2015-07-01

    MoEDAL is a pioneering experiment designed to search for highly ionizing avatars of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles. Its groundbreaking physics program defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic charge exist; what is the nature of dark matter; and, how did the big-bang develop. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. The innovative MoEDAL detector employs unconventional methodologies tuned to the prospect of discovery physics. The largely passive MoEDAL detector, deployed at Point 8 on the LHC ring, has a dual nature. First, it acts like a giant camera, comprised of nuclear track detectors - analyzed offline by ultra fast scanning microscopes - sensitive only to new physics. Second, it is uniquely able to trap the particle messengers of physics beyond the Standard Model for further study. MoEDAL's radiation environment is monitored by a state-of-the-art real-time TimePix pixel detector array. A new MoEDAL sub-detector to extend MoEDAL's reach to millicharged, minimally ionizing, particles (MMIPs) is under study.

  9. Echo-Planar Imaging-Based, J-Resolved Spectroscopic Imaging for Improved Metabolite Detection in Prostate Cancer

    DTIC Science & Technology

    2016-12-01

    [Extraction fragments] Hu et al. employed pseudo-random phase-encoding blips during the EPSI readout to create nonuniform sampling along the spatial dimension. Recoverable section headings include "J-resolved MRSI with Nonuniform Undersampling and Compressed Sensing," "Prior-knowledge Fitting for Metabolite Quantitation," and "Future Directions"; the fragments describe nonuniform undersampling (NUS) of k-space and subsequent reconstruction using compressed sensing (CS).

  10. Ballistic Missile Defense Glossary Version 3.0.

    DTIC Science & Technology

    1997-06-01

    [Extraction fragments from the glossary] Background suppression: the suppression of background noise for the improvement of an object signal. Battlefield Area Evaluation (USA term). Best and Final Offer. Focal plane array (FPA): objects within the field of the lens are focused onto a matrix of photon-sensitive detectors which, when combined with low-noise preamplifiers, provides image data. Satellite constellation entry: orbital planes with an orbit period of 12 hours at 10,900 nautical miles altitude, each satellite transmitting three L-band, pseudo-random noise-coded signals.

  11. Mining and Querying Multimedia Data

    DTIC Science & Technology

    2011-09-29

    [Extraction fragments] ...able to capture more subtle spatial variations such as repetitiveness. Local feature descriptors such as SIFT [74] and SURF [12] have also been widely used. Parameters were empirically set to s = 90%, r = 50%, K = 20, where small variations lead to little perturbation of the output; the pseudo-code of the algorithm is given. A three-layer graph is constructed from clustering outputs, and a slight variation of the random-walk-with-restart algorithm is executed on it.

  12. Analysis of heart rate and oxygen uptake kinetics studied by two different pseudo-random binary sequence work rate amplitudes.

    PubMed

    Drescher, U; Koschate, J; Schiffer, T; Schneider, S; Hoffmann, U

    2017-06-01

    The aim of the study was to compare the kinetic responses of heart rate (HR), pulmonary (V̇O2pulm) and predicted muscular (V̇O2musc) oxygen uptake between two different pseudo-random binary sequence (PRBS) work rate (WR) amplitudes, both below the anaerobic threshold. Eight healthy individuals performed two PRBS WR protocols implying changes between 30 W and 80 W and between 30 W and 110 W. HR and V̇O2pulm were measured beat-to-beat and breath-by-breath, respectively. V̇O2musc was estimated applying the approach of Hoffmann et al. (Eur J Appl Physiol 113: 1745-1754, 2013), considering a circulatory model for venous return and cross-correlation functions (CCF) for the kinetics analysis. HR and V̇O2musc kinetics seem to be independent of WR intensity (p > 0.05). V̇O2pulm kinetics show prominent differences in the lag of the CCF maximum (39 ± 9 s vs. 31 ± 4 s; p < 0.05). A mean difference of 14 W between the PRBS WR amplitudes impacts venous return significantly, while HR and V̇O2musc kinetics remain unchanged. Copyright © 2017 Elsevier B.V. All rights reserved.
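
    The PRBS/CCF machinery used in such protocols can be illustrated with a toy example: a maximal-length 7-bit PRBS mapped to the 30 W/80 W levels and cross-correlated with a copy of itself delayed by a pure transport lag. The physiological model of Hoffmann et al. is far richer; this sketch only shows how a lag is read off the CCF maximum.

```python
def prbs7(seed=1, length=127):
    """Maximal-length 7-bit PRBS (period 127) from a Fibonacci LFSR
    with recurrence s[n] = s[n-6] XOR s[n-7]."""
    state = seed  # any non-zero 7-bit value
    out = []
    for _ in range(length):
        out.append(state & 1)                # output the oldest bit
        fb = (state & 1) ^ ((state >> 1) & 1)
        state = (state >> 1) | (fb << 6)     # shift, feed back into the top bit
    return out

def ccf(x, y):
    """Circular cross-correlation of two equal-length series (means removed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return [sum((x[i] - mx) * (y[(i + lag) % n] - my) for i in range(n))
            for lag in range(n)]

bits = prbs7()
wr = [30 if b == 0 else 80 for b in bits]  # two-level PRBS work-rate signal
resp = wr[-5:] + wr[:-5]                   # toy response: pure 5-sample delay
lags = ccf(wr, resp)
print(lags.index(max(lags)))  # prints 5: the CCF maximum recovers the lag
```

    The sharp single-peaked autocorrelation of a maximal-length PRBS is what makes the lag of the CCF maximum a usable kinetics marker.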

  13. Reversible ratchet effects for vortices in conformal pinning arrays

    DOE PAGES

    Reichhardt, Charles; Ray, Dipanjan; Reichhardt, Cynthia Jane Olson

    2015-05-04

    A conformal transformation of a uniform triangular pinning array produces a structure called a conformal crystal which preserves the sixfold ordering of the original lattice but contains a gradient in the pinning density. Here we use numerical simulations to show that vortices in type-II superconductors driven with an ac drive over gradient pinning arrays produce the most pronounced ratchet effect over a wide range of parameters for a conformal array, while square gradient or random gradient arrays with equivalent pinning densities give reduced ratchet effects. In the conformal array, the larger spacing of the pinning sites in the direction transverse to the ac drive permits easy funneling of interstitial vortices for one driving direction, producing the enhanced ratchet effect. In the square array, the transverse spacing between pinning sites is uniform, giving no asymmetry in the funneling of the vortices as the driving direction switches, while in the random array, there are numerous easy-flow channels present for either direction of drive. We find multiple ratchet reversals in the conformal arrays as a function of vortex density and ac amplitude, and correlate the features with a reversal in the vortex ordering, which is greater for motion in the ratchet direction. In conclusion, the enhanced conformal pinning ratchet effect can also be realized for colloidal particles moving over a conformal array, indicating the general usefulness of conformal structures for controlling the motion of particles.

  14. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds application in image database browsing, teleconferencing, medical, and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulation using several test images are presented.
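
    The colormap-plus-index-array structure described above can be made concrete with a tiny example (all values illustrative). Only the data structure is sketched here, not the PT coder itself.

```python
# colormap: a small table of RGB triples
colormap = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
# index image: a 2-D array of indices into the colormap, one per pixel
index_image = [
    [0, 1, 1, 0],
    [2, 3, 3, 2],
]

def to_rgb(index_image, colormap):
    """Expand a colour-mapped (pseudo-color) image to full RGB for display."""
    return [[colormap[i] for i in row] for row in index_image]

rgb = to_rgb(index_image, colormap)
print(rgb[1][2])  # (0, 0, 255): pixel (1, 2) holds index 3, the blue entry
```

    A PT coder for this format must refine the index array, not the RGB values directly, which is the source of the "unique problems" the thesis addresses: nearby index values need not correspond to nearby colors.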

  15. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios could be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the Broadband Platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently, we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.

  16. High Angular Resolution Microwave Sensing with Large, Sparse, Random Arrays

    DTIC Science & Technology

    1983-11-01

    [Report cover fragments] High Angular Resolution Microwave Sensing with Large, Sparse, Random Arrays: Final Scientific Report, Air Force Office of Scientific Research, AFOSR 82-0012, Nov 1983. Valley Forge Research Center, The Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia.

  17. Lack of association between the pseudo deficiency mutation in the arylsulfatase A gene on chromosome 22 with schizophrenia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, P.L.; Chetty, V.; Kasch, L.

    Arylsulfatase-A deficiency causes the neurodegenerative lysosomal storage disease metachromatic leukodystrophy. In the late-onset variant, schizophrenia-like psychosis is a frequent finding and sometimes given as the initial diagnosis. A mutant allele, pseudo-deficiency, causes deficient enzyme activity but no apparent clinical effect. It occurs at a high frequency and consists of two tightly-linked A→G transitions: one causing the loss of a glycosylation site (PDg); and one causing the loss of a polyadenylation signal (PDa). Since this gene was mapped to chromosome 22q13-qter, a region implicated in a potential linkage with schizophrenia, we hypothesized that this common mutation may be a predisposing genetic factor for schizophrenia. We studied a random sample of schizophrenic patients for a possible increase in the frequency of the pseudo-deficiency mutations, and multiplex families to verify whether the mutations are linked to schizophrenia. Among 50 Caucasian patients identified through out-patient and in-patient clinics, the frequencies for the three alleles PDg + PDa together, PDg alone, and PDa alone were 11%, 5%, and 0%, respectively. The corresponding frequencies among 100 Caucasian controls were 7.5%, 6%, and 0%, respectively, the differences between the patients and controls being insignificant (χ² tests: 0.10

  18. Modeling of batch sorber system: kinetic, mechanistic, and thermodynamic modeling

    NASA Astrophysics Data System (ADS)

    Mishra, Vishal

    2017-10-01

    The present investigation has dealt with the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase. Various rate models were evaluated to elucidate the kinetics of copper and zinc biosorptions, and the results indicated that the pseudo-second-order model was more appropriate than the pseudo-first-order model. The curve of the initial sorption rate versus the initial concentration of copper and zinc ions also complemented the results of the pseudo-second-order model. Models used for the mechanistic modeling were the intra-particle model of pore diffusion and Bangham's model of film diffusion. The results of the mechanistic modeling together with the values of pore and film diffusivities indicated that the preferential mode of the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase was film diffusion. The results of the intra-particle model showed that the biosorption of the copper and zinc ions was not dominated by the pore diffusion, which was due to macro-pores with open-void spaces present on the surface of egg-shell particles. The thermodynamic modeling reproduced the fact that the sorption of copper and zinc was spontaneous, exothermic with the increased order of the randomness at the solid-liquid interface.
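
    The pseudo-second-order comparison above rests on the linearization t/q = 1/(k·qe²) + t/qe. A sketch with synthetic, noise-free data (the qe and k values are hypothetical, not the paper's fitted parameters) shows how the equilibrium capacity and rate constant are recovered from the slope and intercept:

```python
# pseudo-second-order kinetics: dq/dt = k (qe - q)^2, with exact solution
# q(t) = k qe^2 t / (1 + k qe t)  and linearisation  t/q = 1/(k qe^2) + t/qe
qe_true, k_true = 12.0, 0.05  # hypothetical equilibrium uptake and rate constant
ts = [5.0, 10.0, 20.0, 40.0, 80.0, 160.0]
qs = [k_true * qe_true ** 2 * t / (1 + k_true * qe_true * t) for t in ts]

# ordinary least squares on (t, t/q)
ys = [t / q for t, q in zip(ts, qs)]
n = len(ts)
tbar, ybar = sum(ts) / n, sum(ys) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
intercept = ybar - slope * tbar

qe_fit = 1.0 / slope            # equilibrium sorption capacity
k_fit = slope ** 2 / intercept  # pseudo-second-order rate constant
print(round(qe_fit, 3), round(k_fit, 3))  # recovers 12.0 and 0.05
```

    With experimental data the fit is judged by the linearity of t/q versus t, which is how studies like this one decide between the pseudo-first- and pseudo-second-order models.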

  19. On the significance of δ13C correlations in ancient sediments

    NASA Astrophysics Data System (ADS)

    Derry, Louis A.

    2010-08-01

    A graphical analysis of the correlations between δc and εTOC was introduced by Rothman et al. (2003) to obtain estimates of the carbon isotopic composition of inputs to the oceans and the organic carbon burial fraction. Applied to Cenozoic data, the method agrees with independent estimates, but with Neoproterozoic data the method yields results that cannot be accommodated by standard models of sedimentary carbon isotope mass balance. We explore the sensitivity of the graphical correlation method and find that the variance ratio between δc and δo is an important control on the correlation of δc and ε. If the variance ratio σc/σo ≥ 1, highly correlated arrays very similar to those obtained from the data are produced by independent random variables. The Neoproterozoic data show such variance patterns, and the regression parameters for the Neoproterozoic data are statistically indistinguishable from the randomized model at the 95% confidence interval. The projection of the data into δc-ε space cannot distinguish between signal and noise, such as post-depositional alteration, under these circumstances. There appears to be no need to invoke unusual carbon cycle dynamics to explain the Neoproterozoic δc-ε array. The Cenozoic data have σc/σo < 1 and the δc vs. ε correlation is probably geologically significant, but the analyzed sample size is too small to yield statistically significant results.
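
    The variance-ratio effect can be reproduced numerically. With δc and δo drawn as independent Gaussians and ε taken as δc − δo (a simplification of the Rothman et al. construction), the δc-ε correlation approaches (σc/σo)/sqrt(1 + (σc/σo)²), which exceeds 0.7 once σc/σo ≥ 1 even though there is no true signal:

```python
import random

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

rng = random.Random(42)
n = 5000
results = {}
for ratio in (0.5, 1.0, 3.0):                    # variance ratio sigma_c/sigma_o
    dc = [rng.gauss(0, ratio) for _ in range(n)]  # independent "carbonate" values
    do = [rng.gauss(0, 1.0) for _ in range(n)]    # independent "organic" values
    eps = [c - o for c, o in zip(dc, do)]         # epsilon = delta_c - delta_o
    results[ratio] = corr(dc, eps)
print({r: round(v, 2) for r, v in results.items()})
# spurious correlations near 0.45, 0.71, and 0.95 despite zero true signal
```

    This is the paper's central caution: a tight δc-ε array is not by itself evidence of a geologically meaningful relationship when σc/σo ≥ 1.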

  20. Maternal Risk Exposure and Adult Daughters' Health, Schooling, and Employment: A Constructed Cohort Analysis of 50 Developing Countries.

    PubMed

    Li, Qingfeng; Tsui, Amy O

    2016-06-01

    This study analyzes the relationships between maternal risk factors present at the time of daughters' births-namely, young mother, high parity, and short preceding birth interval-and their subsequent adult developmental, reproductive, and socioeconomic outcomes. Pseudo-cohorts are constructed using female respondent data from 189 cross-sectional rounds of Demographic and Health Surveys conducted in 50 developing countries between 1986 and 2013. Generalized linear models are estimated to test the relationships and calculate cohort-level outcome proportions with the systematic elimination of the three maternal risk factors. The simulation exercise for the full sample of 2,546 pseudo-cohorts shows that the combined elimination of risk exposures is associated with lower mean proportions of adult daughters experiencing child mortality, having a small infant at birth, and having a low body mass index. Among sub-Saharan African cohorts, the estimated changes are larger, particularly for years of schooling. The pseudo-cohort approach can enable longitudinal testing of life course hypotheses using large-scale, standardized, repeated cross-sectional data and with considerable resource efficiency.

  1. Maternal Risk Exposure and Adult Daughters’ Health, Schooling, and Employment: A Constructed Cohort Analysis of 50 Developing Countries

    PubMed Central

    Li, Qingfeng; Tsui, Amy O.

    2016-01-01

    This study analyzes the relationships between maternal risk factors present at the time of daughters’ births—namely, young mother, high parity, and short preceding birth interval—and their subsequent adult developmental, reproductive, and socioeconomic outcomes. Pseudo-cohorts are constructed using female respondent data from 189 cross-sectional rounds of Demographic and Health Surveys conducted in 50 developing countries between 1986 and 2013. Generalized linear models are estimated to test the relationships and calculate cohort-level outcome proportions with the systematic elimination of the three maternal risk factors. The simulation exercise for the full sample of 2,546 pseudo-cohorts shows that the combined elimination of risk exposures is associated with lower mean proportions of adult daughters experiencing child mortality, having a small infant at birth, and having a low body mass index. Among sub-Saharan African cohorts, the estimated changes are larger, particularly for years of schooling. The pseudo-cohort approach can enable longitudinal testing of life course hypotheses using large-scale, standardized, repeated cross-sectional data and with considerable resource efficiency. PMID:27154342

  2. One-loop pseudo-Goldstone masses in the minimal SO(10) Higgs model

    NASA Astrophysics Data System (ADS)

    Gráf, Lukáš; Malinský, Michal; Mede, Timon; Susič, Vasja

    2017-04-01

    We calculate the prominent perturbative contributions shaping the one-loop scalar spectrum of the minimal renormalizable nonsupersymmetric SO(10) Higgs model whose unified gauge symmetry is spontaneously broken by an adjoint scalar. Focusing on its potentially realistic 45 ⊕ 126 variant in which the rank is reduced by a vacuum expectation value of the 5-index antisymmetric self-dual tensor, we provide a thorough analysis of the corresponding Coleman-Weinberg one-loop effective potential, paying particular attention to the masses of the potentially tachyonic pseudo-Goldstone bosons transforming as (1, 3, 0) and (8, 1, 0) under the standard model (SM) gauge group. The results confirm the assumed existence of extended regions in the parameter space supporting a locally stable SM-like quantum vacuum inaccessible at the tree level. The effective potential tedium is compared to that encountered in the previously studied 45 ⊕ 16 SO(10) Higgs model where the polynomial corrections to the relevant pseudo-Goldstone masses turn out to be easily calculable within a very simplified purely diagrammatic approach.

  3. Integrating Scientific Array Processing into Standard SQL

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  4. The "covariation method" for estimating the parameters of the standard Dynamic Energy Budget model II: Properties and preliminary patterns

    NASA Astrophysics Data System (ADS)

    Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effect of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
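    The role of low-weight pseudo-data in a weighted least-squares fit can be illustrated with a toy straight-line model; all numbers and weights below are hypothetical, and no DEB machinery is involved:

```python
import numpy as np

# Toy illustration of the covariation idea: real observations and low-weight
# "pseudo-data" (prior guesses for the parameters) enter one weighted
# least-squares fit. Model: y = a * x + b. Numbers are invented.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])          # real data, informative about a, b
a_pseudo, b_pseudo = 1.0, 1.0               # prior (pseudo-data) values
w_real, w_pseudo = 1.0, 0.01                # pseudo-data get a small weight

# Stack real rows and pseudo-data rows into one design matrix
A = np.vstack([np.column_stack([x, np.ones_like(x)]),  # real data rows
               [[1.0, 0.0],                            # row pinning a
                [0.0, 1.0]]])                          # row pinning b
t = np.concatenate([y, [a_pseudo, b_pseudo]])
w = np.concatenate([np.full(x.size, w_real), [w_pseudo, w_pseudo]])

# Weighted least squares via row scaling
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], t * sw, rcond=None)
a_hat, b_hat = coef
print(a_hat, b_hat)   # close to the data-driven fit, barely pulled to priors
```

    Because the real data are informative about both parameters, the pseudo-data barely move the estimates; raising `w_pseudo` (or removing data) shifts the fit toward the priors, which is the elasticity effect the abstract describes.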

  5. Mapping protein-protein interactions using yeast two-hybrid assays.

    PubMed

    Mehla, Jitender; Caufield, J Harry; Uetz, Peter

    2015-05-01

    Yeast two-hybrid (Y2H) screens are an efficient system for mapping protein-protein interactions and whole interactomes. The screens can be performed using random libraries or collections of defined open reading frames (ORFs) called ORFeomes. This protocol describes both library and array-based Y2H screening, with an emphasis on array-based assays. Array-based Y2H is commonly used to test a number of "prey" proteins for interactions with a single "bait" (target) protein or pool of proteins. The advantage of this approach is the direct identification of interacting protein pairs without further downstream experiments: The identity of the preys is known and does not require further confirmation. In contrast, constructing and screening a random prey library requires identification of individual prey clones and systematic retesting. Retesting is typically performed in an array format. © 2015 Cold Spring Harbor Laboratory Press.

  6. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz. the Matlab rand, and the quasi-random Halton generator, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
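    The digit-block construction described above can be sketched in a few lines. Here the golden ratio's digits are generated with Python's `decimal` module; the block length, digit count, and test integral (the mean of x² for uniform x, exactly 1/3) are choices made for this illustration, not the paper's setup:

```python
from decimal import Decimal, getcontext

# Treat consecutive digit blocks of an irrational constant as a uniform(0,1)
# source for Monte Carlo integration. The golden ratio is used because its
# digits are easy to generate with `decimal`; the paper also uses pi.
getcontext().prec = 5010
phi = (Decimal(1) + Decimal(5).sqrt()) / 2      # 1.6180339887...
digits = str(phi).split(".")[1][:5000]          # fractional-part digits

block = 5                                       # digits per sample
samples = [int(digits[i:i + block]) / 10**block
           for i in range(0, len(digits) - block + 1, block)]

# Monte Carlo estimate of the integral of x^2 on [0, 1] (exact value 1/3)
estimate = sum(x * x for x in samples) / len(samples)
print(len(samples), estimate)                   # 1000 samples, estimate near 1/3
```

    With 1000 samples the one-sigma Monte Carlo error is roughly 0.01, so a digit source of good quality should land within a few hundredths of 1/3.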

  7. A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints

    NASA Astrophysics Data System (ADS)

    Wei, Helin; Wang, Kuisheng

    2011-11-01

    Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of plastic ball grid array (PBGA) packages is demonstrated. The key solder joint parameters and the cyclic temperature range related to reliability are considered. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
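    In the spirit of the probabilistic treatment described above, a Monte Carlo sketch of an Engelmaier-type fatigue law is shown below. The model form N_f = 0.5 (Δγ / 2ε_f')^(1/c) is the standard Coffin-Manson-type relation, but every numeric value (geometry, CTE mismatch, fatigue constants, input spreads) is an illustrative placeholder, not data from the paper:

```python
import numpy as np

# Hedged sketch of probabilistic fatigue-life prediction: propagate random
# geometry and temperature inputs through an Engelmaier-type relation
# Nf = 0.5 * (dgamma / (2 * eps_f))**(1 / c). All numbers are illustrative.
rng = np.random.default_rng(0)
n = 100_000

dT = rng.normal(100.0, 10.0, n)        # K, cyclic temperature swing (random)
h = rng.normal(0.5, 0.05, n)           # mm, solder joint standoff height (random)
L = 10.0                               # mm, distance from neutral point
dalpha = 14e-6                         # 1/K, CTE mismatch (assumed)

dgamma = (L / h) * dalpha * dT         # cyclic shear strain range
eps_f = 0.325                          # fatigue ductility coefficient (typical)
c = -0.442                             # fatigue ductility exponent (typical)
Nf = 0.5 * (dgamma / (2 * eps_f)) ** (1 / c)

print(Nf.mean(), Nf.std())             # the spread grows with input variation
```

    Sampling more input variables (pad diameter, modulus, dwell times) widens the output distribution, which is the "standard deviation increases" effect reported in the abstract.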

  8. Improved electronic measurement of the Boltzmann constant by Johnson noise thermometry

    NASA Astrophysics Data System (ADS)

    Qu, Jifeng; Benz, Samuel P.; Pollarolo, Alessio; Rogalla, Horst; Tew, Weston L.; White, Rod; Zhou, Kunli

    2015-10-01

    The unit of thermodynamic temperature, the kelvin, will be redefined in 2018 by fixing the value of the Boltzmann constant, k. The present CODATA recommended value of k is determined predominantly by acoustic gas-thermometry results. To provide a value of k based on different physical principles, purely electronic measurements of k were performed by using a Johnson noise thermometer to compare the thermal noise power of a 200 Ω sensing resistor immersed in a triple-point-of-water cell to the noise power of a quantum-accurate pseudo-random noise waveform of nominally equal noise power. Measurements integrated over a bandwidth of 575 kHz and a total integration time of about 33 d gave a measured value of k = 1.3806513(53) × 10⁻²³ J K⁻¹, for which the relative standard uncertainty is 3.9 × 10⁻⁶ and the relative offset from the CODATA 2010 value is +1.8 × 10⁻⁶.
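    The key property of the comparison waveform can be demonstrated in a few lines: a multitone signal with equal-amplitude tones and pseudo-random phases looks noise-like in the time domain while its total power is set exactly by the synthesis coefficients. Tone count, record length, and seed below are arbitrary, not NIST's values:

```python
import numpy as np

# Synthesize an equal-amplitude frequency comb with pseudo-random phases and
# verify (via Parseval's relation) that its time-domain power is exactly the
# value fixed by the synthesis coefficients.
rng = np.random.default_rng(42)
n_tones, n_samples = 256, 8192
amps = np.zeros(n_samples // 2 + 1, dtype=complex)
phases = rng.uniform(0.0, 2.0 * np.pi, n_tones)
amps[1:n_tones + 1] = np.exp(1j * phases)      # unit-amplitude comb, random phase

x = np.fft.irfft(amps, n=n_samples)            # noise-like time-domain waveform

power_time = np.mean(x ** 2)                   # measured power
power_exact = 2 * n_tones / n_samples ** 2     # power fixed by the synthesis
print(power_time, power_exact)                 # identical up to rounding
```

    This exactly-known power is what lets such a waveform serve as a calculable reference against an unknown thermal noise source.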

  9. Wide-band (2.5 - 10.5 µm), high-frame rate IRFPAs based on high-operability MCT on silicon

    NASA Astrophysics Data System (ADS)

    Crosbie, Michael J.; Giess, Jean; Gordon, Neil T.; Hall, David J.; Hails, Janet E.; Lees, David J.; Little, Christopher J.; Phillips, Tim S.

    2010-04-01

    We have previously presented results from our mercury cadmium telluride (MCT, Hg₁₋ₓCdₓTe) growth-on-silicon substrate technology for different applications, including negative luminescence, long-waveband and mid/long dual-waveband infrared imaging. In this paper, we review recent developments in QinetiQ's combined molecular beam epitaxy (MBE) and metal-organic vapor phase epitaxy (MOVPE) MCT growth on silicon, including MCT defect density, uniformity and reproducibility. We also present a new small-format (128 × 128) focal plane array (FPA) for high frame-rate applications. A custom high-speed readout integrated circuit (ROIC) was developed with a large pitch and large charge storage, aimed at producing a very high performance FPA (NETD ~10 mK) operating at frame rates up to 2 kHz for the full array. The array design allows random addressing, which allows the maximum frame rate to be increased as the window size is reduced. A broadband (2.5-10.5 μm) MCT heterostructure was designed and grown by the MBE/MOVPE technique onto silicon substrates. FPAs were fabricated using our standard techniques; wet-etched mesa diodes passivated with epitaxial CdTe and flip-chip bonded to the ROIC. The resulting focal plane arrays were characterized at the maximum frame rate and shown to have the high operabilities and low NETD values characteristic of our LWIR MCT on silicon technology.

  10. Battlefield decision aid for acoustical ground sensors with interface to meteorological data sources

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Noble, John M.; VanAartsen, Bruce H.; Szeto, Gregory L.

    2001-08-01

    The performance of acoustical ground sensors depends heavily on the local atmospheric and terrain conditions. This paper describes a prototype physics-based decision aid, called the Acoustic Battlefield Aid (ABFA), for predicting these environmental effects. ABFA integrates advanced models for acoustic propagation, atmospheric structure, and array signal processing into a convenient graphical user interface. The propagation calculations are performed in the frequency domain on user-definable target spectra. The solution method involves a parabolic approximation to the wave equation combined with a terrain diffraction model. Sensor performance is characterized with Cramér-Rao lower bounds (CRLBs). The CRLB calculations include randomization of signal energy and wavefront orientation resulting from atmospheric turbulence. Available performance characterizations include signal-to-noise ratio, probability of detection, direction-finding accuracy for isolated receiving arrays, and location-finding accuracy for networked receiving arrays. A suite of integrated tools allows users to create new target descriptions from standard digitized audio files and to design new sensor array layouts. These tools optionally interface with the ARL Database/Automatic Target Recognition (ATR) Laboratory, providing access to an extensive library of target signatures. ABFA also includes a Java-based capability for network access of near real-time data from surface weather stations or forecasts from the Army's Integrated Meteorological System. As an example, the detection footprint of an acoustical sensor, as it evolves over a 13-hour period, is calculated.

  11. Resonant dielectric metamaterials

    DOEpatents

    Loui, Hung; Carroll, James; Clem, Paul G; Sinclair, Michael B

    2014-12-02

    A resonant dielectric metamaterial comprises a first and a second set of dielectric scattering particles (e.g., spheres) having different permittivities arranged in a cubic array. The array can be an ordered or randomized array of particles. The resonant dielectric metamaterials are low-loss 3D isotropic materials with negative permittivity and permeability. Such isotropic double negative materials offer polarization and direction independent electromagnetic wave propagation.

  12. Analysis and suppression of passive noise in surface microseismic data

    NASA Astrophysics Data System (ADS)

    Forghani-Arani, Farnoush

    Surface microseismic surveys are gaining popularity in monitoring the hydraulic fracturing process. The effectiveness of these surveys, however, is strongly dependent on the signal-to-noise ratio of the acquired data. Cultural and industrial noise generated during hydraulic fracturing operations usually dominates the data, thereby decreasing the effectiveness of using these data in identifying and locating microseismic events. Hence, noise suppression is a critical step in surface microseismic monitoring. In this thesis, I focus on two important aspects of using surface-recorded microseismic data: first, I take advantage of the unwanted surface noise to understand its characteristics and extract information about the propagation medium from it; second, I propose effective techniques to suppress the surface noise while preserving the waveforms that contain information about the source of microseisms. Automated event identification on passive seismic data using only a few receivers is challenging, especially when the record lengths span long durations of time. I introduce an automatic event identification algorithm that is designed specifically for detecting events in passive data acquired with a small number of receivers. I demonstrate that the conventional STA/LTA (Short-Term Average/Long-Term Average) algorithm is not sufficiently effective at event detection in the common case of low signal-to-noise ratio. With a cross-correlation based method as an extension of the STA/LTA algorithm, even low signal-to-noise events (that were not detectable with conventional STA/LTA) were revealed. Surface microseismic data contain surface waves (generated primarily by hydraulic fracturing activities) and body waves in the form of microseismic events. It is challenging to analyze the surface waves in the recorded data directly because of the randomness of their sources and their unknown source signatures.
    I use seismic interferometry to extract the surface-wave arrivals. Interferometry is a powerful tool to extract waves (including body waves and surface waves) that propagate from any receiver in the array (called a pseudo source) to the other receivers across the array. Since most of the noise sources in surface microseismic data lie on the surface, seismic interferometry yields pseudo-source gathers dominated by surface-wave energy. The dispersive characteristics of these surface waves are important properties that can be used to extract information necessary for suppressing them. I demonstrate the application of interferometry to surface passive data recorded during the hydraulic fracturing operation of a tight gas reservoir and extract the dispersion properties of surface waves corresponding to a pseudo-shot gather. Comparison of the dispersion characteristics of the surface waves from the pseudo-shot gather with those of an active shot gather shows interesting similarities and differences. The dispersion character (e.g. velocity change with frequency) of the fundamental mode was observed to have the same behavior for both the active and passive data. However, for the higher-mode surface waves, the dispersion properties are extracted at different frequency ranges. Conventional noise suppression techniques for passive data are mostly stacking-based: they rely on enhancing the amplitude of the signal by stacking the waveforms at the receivers, and are unable to preserve the waveforms at the individual receivers necessary for estimating the microseismic source location and source mechanism. Here, I introduce a technique based on the tau-p transform that effectively identifies and separates microseismic events from surface-wave noise in the tau-p domain. This technique is superior to conventional stacking-based noise suppression techniques because it preserves the waveforms at individual receivers.
    Application of this methodology to microseismic events with isotropic and double-couple source mechanisms shows substantial improvement in the signal-to-noise ratio. Imaging of the processed field data also shows improved imaging of the hypocenter location of the microseismic source. In the case of the double-couple source mechanism, I suggest two approaches for unifying the polarities at the receivers: a cross-correlation approach and a semblance-based prediction approach. The semblance-based approach is more effective at unifying the polarities, especially for low signal-to-noise ratio data.
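    The STA/LTA baseline discussed above can be sketched as follows; the sampling rate, window lengths, and synthetic event are invented for illustration, not taken from the thesis data:

```python
import numpy as np

# Minimal STA/LTA detector: trigger where the short-term average of the
# signal energy rises above the long-term average.
rng = np.random.default_rng(1)
fs = 500                                     # Hz, sampling rate (assumed)
trace = rng.normal(0.0, 1.0, 10 * fs)        # 10 s of background noise
trace[2500:2550] += 8.0 * np.hanning(50)     # synthetic "event" at t = 5 s

def sta_lta(x, n_sta, n_lta):
    """STA/LTA ratio for windows ending at each sample (cumulative-sum trick)."""
    e = x ** 2
    c = np.concatenate([[0.0], np.cumsum(e)])
    sta = (c[n_sta:] - c[:-n_sta]) / n_sta   # short windows, all positions
    lta = (c[n_lta:] - c[:-n_lta]) / n_lta   # long windows, all positions
    return sta[n_lta - n_sta:] / lta         # align both at the window end

ratio = sta_lta(trace, n_sta=25, n_lta=500)
peak = ratio.argmax() + 500 - 1              # convert back to a sample index
print(peak / fs)                             # near 5 s, where the event was put
```

    When the event amplitude drops toward the noise floor, this ratio loses its peak, which is the low-SNR failure mode the cross-correlation extension addresses.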

  13. Microseismic Monitoring of Stimulating Shale Gas Reservoir in SW China: 1. An Improved Matching and Locating Technique for Downhole Monitoring

    NASA Astrophysics Data System (ADS)

    Meng, Xiaobo; Chen, Haichao; Niu, Fenglin; Tang, Youcai; Yin, Chen; Wu, Furong

    2018-02-01

    We introduce an improved matching and locating technique to detect and locate microseismic events (-4 < ML < 0) associated with hydraulic fracturing treatment. We employ a set of representative master events to act as template waveforms and detect slave events that strongly resemble master events through stacking cross correlograms of both P and S waves between the template waveforms and the continuous records of the monitoring array. Moreover, the residual moveout in the cross correlograms across the array is used to locate slave events relative to the corresponding master event. In addition, P wave polarization constraint is applied to resolve the lateral extent of slave events in the case of unfavorable array configuration. We first demonstrate the detectability and location accuracy of the proposed approach with a pseudo-synthetic data set. Compared to the matched filter analysis, the proposed approach can significantly enhance detectability at low false alarm rate and yield robust location estimates of very low SNR events, particularly along the vertical direction. Then, we apply the method to a real microseismic data set acquired in the Weiyuan shale reservoir of China in November of 2014. The expanded microseismic catalog provides more easily interpretable spatiotemporal evolution of microseismicity, which is investigated in detail in a companion paper.
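    The core of a matched-filter detector of the kind this technique builds on can be sketched as a normalized cross-correlation scan of a master-event template along a continuous record; the waveform, noise level, and event position below are illustrative:

```python
import numpy as np

# Toy matched-filter detection: slide a master-event template along a
# record and find the window with maximum normalized cross-correlation.
rng = np.random.default_rng(7)
template = np.sin(2 * np.pi * 30 * np.arange(0, 0.1, 1e-3)) * np.hanning(100)
record = rng.normal(0.0, 1.0, 5000)
record[3000:3100] += 3.0 * template        # buried "slave" event

def ncc(record, template):
    """Normalized cross-correlation of the template with every window."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    out = np.empty(len(record) - m + 1)
    for i in range(len(out)):
        w = record[i:i + m]
        out[i] = np.dot((w - w.mean()) / w.std(), t) / m
    return out

cc = ncc(record, template)
print(cc.argmax(), cc.max())               # detection near sample 3000
```

    Stacking such correlograms over many channels and over both P and S windows, as the abstract describes, is what pushes detectability below the single-channel noise floor.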

  14. Highly organised and dense vertical silicon nanowire arrays grown in porous alumina template on <100> silicon wafers

    PubMed Central

    2013-01-01

    In this work, nanoimprint lithography combined with standard anodization etching is used to make perfectly organised triangular arrays of vertical cylindrical alumina nanopores on standard <100>-oriented silicon wafers. Both the pore diameter and the period of the porous alumina array are well controlled and can be tuned: the periods vary from 80 to 460 nm, and the diameters vary from 15 nm to any required diameter. These porous thin layers are then successfully used as templates for the guided epitaxial growth of organised mono-crystalline silicon nanowire arrays in a chemical vapour deposition chamber. We report silicon nanowire densities up to 9 × 10⁹ cm⁻², organised in highly regular arrays with an excellent diameter distribution. All process steps are demonstrated on surfaces up to 2 × 2 cm². Specific emphasis was placed on selecting techniques compatible with microelectronic fabrication standards, adaptable to large-surface samples and with a reasonable cost. Achievements made in the quality of the porous alumina array, and therefore of the silicon nanowire array, widen the number of potential applications for this technology, such as optical detectors or biological sensors. PMID:23773702

  15. Arsenic metabolism efficiency has a causal role in arsenic toxicity: Mendelian randomization and gene-environment interaction.

    PubMed

    Pierce, Brandon L; Tong, Lin; Argos, Maria; Gao, Jianjun; Farzana, Jasmine; Roy, Shantanu; Paul-Brutus, Rachelle; Rahaman, Ronald; Rakibuz-Zaman, Muhammad; Parvez, Faruque; Ahmed, Alauddin; Quasem, Iftekhar; Hore, Samar K; Alam, Shafiul; Islam, Tariqul; Harjes, Judith; Sarwar, Golam; Slavkovich, Vesna; Gamble, Mary V; Chen, Yu; Yunus, Mohammad; Rahman, Mahfuzar; Baron, John A; Graziano, Joseph H; Ahsan, Habibul

    2013-12-01

    Arsenic exposure through drinking water is a serious global health issue. Observational studies suggest that individuals who metabolize arsenic efficiently are at lower risk for toxicities such as arsenical skin lesions. Using two single nucleotide polymorphisms (SNPs) in the 10q24.32 region (near AS3MT) that show independent associations with metabolism efficiency, Mendelian randomization can be used to assess whether the association between metabolism efficiency and skin lesions is likely to be causal. Using data on 2060 arsenic-exposed Bangladeshi individuals, we estimated associations for two 10q24.32 SNPs with relative concentrations of three urinary arsenic species (representing metabolism efficiency): inorganic arsenic (iAs), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA). SNP-based predictions of iAs%, MMA% and DMA% were tested for association with skin lesion status among 2483 cases and 2857 controls. Causal odds ratios for skin lesions were 0.90 (95% confidence interval [CI]: 0.87, 0.95), 1.19 (CI: 1.10, 1.28) and 1.23 (CI: 1.12, 1.36) for a one standard deviation increase in DMA%, MMA% and iAs%, respectively. We demonstrated genotype-arsenic interaction, with metabolism-related variants showing stronger associations with skin lesion risk among individuals with high arsenic exposure (synergy index: 1.37; CI: 1.11, 1.62). We provide strong evidence for a causal relationship between arsenic metabolism efficiency and skin lesion risk. Mendelian randomization can be used to assess the causal role of arsenic exposure and metabolism in a wide array of health conditions. Developing interventions that increase arsenic metabolism efficiency is likely to reduce the impact of arsenic exposure on health.
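    The logic of Mendelian randomization can be illustrated with a toy simulation: a genetic variant shifts the exposure, an unobserved confounder biases the naive exposure-outcome regression, and the Wald ratio of the genotype-outcome to genotype-exposure effects recovers the causal slope. All effect sizes here are invented:

```python
import numpy as np

# Toy Mendelian-randomization simulation with a single instrument G,
# confounder U, exposure X, outcome Y. True causal effect of X on Y: 0.2.
rng = np.random.default_rng(3)
n = 200_000
G = rng.binomial(2, 0.3, n)                  # SNP dosage (the instrument)
U = rng.normal(0.0, 1.0, n)                  # unobserved confounder
X = 0.5 * G + 1.0 * U + rng.normal(0, 1, n)  # exposure (e.g. a metabolism trait)
Y = 0.2 * X - 1.0 * U + rng.normal(0, 1, n)  # outcome

def slope(a, b):
    """Regression slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

naive = slope(X, Y)                          # confounded: even gets the sign wrong
wald = slope(G, Y) / slope(G, X)             # instrumental-variable estimate
print(naive, wald)                           # wald recovers ~0.2
```

    The genotype is independent of U by randomization at meiosis, which is exactly why the ratio estimator is immune to the confounding that distorts the naive regression.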

  16. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
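    A minimal genetic-algorithm sketch of pseudo-range inversion is given below. The population size echoes the abstract's N=1000, but the satellite geometry, the noise-free ranges, and the specific selection, crossover and mutation operators are simplified illustrations, not the paper's implementation:

```python
import numpy as np

# Evolve candidate XYZ positions to minimize the squared misfit between
# predicted and observed ranges from four satellites. Geometry is invented.
rng = np.random.default_rng(5)
sats = np.array([[15e6, 10e6, 20e6],
                 [-10e6, 18e6, 19e6],
                 [12e6, -14e6, 21e6],
                 [-8e6, -9e6, 22e6]])           # satellite positions, m
truth = np.array([1.2e6, -0.8e6, 0.3e6])        # station position to recover
ranges = np.linalg.norm(sats - truth, axis=1)   # noise-free pseudo-ranges

def cost(pop):
    """Sum of squared range residuals for each candidate position."""
    d = np.linalg.norm(sats[None] - pop[:, None], axis=2)
    return ((d - ranges) ** 2).sum(axis=1)

pop = rng.uniform(-5e6, 5e6, (1000, 3))          # N = 1000 individuals
for _ in range(200):
    order = np.argsort(cost(pop))
    parents = pop[order[:300]]                   # selection: keep the best 30%
    pa = parents[rng.integers(0, 300, 1000)]
    pb = parents[rng.integers(0, 300, 1000)]
    mix = rng.random((1000, 1))
    pop = mix * pa + (1 - mix) * pb              # blend crossover
    mutate = rng.random(1000) < 0.35             # mutation rate ~35%
    pop[mutate] += rng.normal(0, 1e4, (mutate.sum(), 3))
    pop[0] = parents[0]                          # elitism: keep the best so far

best = pop[cost(pop).argmin()]
err = np.linalg.norm(best - truth)
print(err)                                       # position error, metres
```

    Unlike a linearized least-squares solver, nothing here requires exactly four observations: dropping one satellite only changes the cost function, which is the flexibility the abstract emphasizes.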

  17. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.
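    The border-length objective can be made concrete with binary stand-ins for probes: the border length of a layout is the total number of positions at which adjacent grid cells disagree (each disagreement forces a mask border at some synthesis step). The toy below also shows the flavor of threading: laying probes out in Gray-code order along a serpentine path beats a naive lexicographic layout. This is a 4×4 illustration, not the paper's algorithm:

```python
from itertools import product

def hamming(a, b):
    """Number of positions at which two probe strings differ."""
    return sum(x != y for x, y in zip(a, b))

def border_length(grid):
    """Total Hamming distance between horizontally/vertically adjacent cells."""
    rows, cols = len(grid), len(grid[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                total += hamming(grid[r][c], grid[r][c + 1])
            if r + 1 < rows:
                total += hamming(grid[r][c], grid[r + 1][c])
    return total

probes = ["".join(p) for p in product("01", repeat=4)]   # 16 "probes" -> 4x4 grid

# Naive layout: lexicographic order, row-major
naive = [probes[i * 4:(i + 1) * 4] for i in range(4)]

# Threaded layout: Gray-code order on a serpentine path, so neighbours
# along the path differ in exactly one position
gray = [format(i ^ (i >> 1), "04b") for i in range(16)]
threaded = [gray[i * 4:(i + 1) * 4] for i in range(4)]
for r in range(1, 4, 2):
    threaded[r] = threaded[r][::-1]              # reverse every other row

print(border_length(naive), border_length(threaded))     # 32 vs 24
```

    Even on this tiny grid the threaded arrangement cuts the border length by a quarter; the paper's contribution is doing this kind of optimization at realistic array scales.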

  18. Wideband metamaterial array with polarization-independent and wide incident angle for harvesting ambient electromagnetic energy and wireless power transfer

    NASA Astrophysics Data System (ADS)

    Zhong, Hui-Teng; Yang, Xue-Xia; Song, Xing-Tang; Guo, Zhen-Yue; Yu, Fan

    2017-11-01

    In this work, we introduce the design, demonstration, and discussion of a wideband, polarization-independent, wide-angle metamaterial array for harvesting ambient electromagnetic (EM) energy and for wireless power transfer. The array consists of unit cells with one square ring and four metal bars. In comparison to published metamaterial arrays for harvesting EM energy or for wireless transfer, this design has a wide operation bandwidth, with a half-power bandwidth (HPBW) of 110% (6.2 GHz-21.4 GHz), which overcomes the narrow-band operation induced by the resonance characteristic of the metamaterial. At normal incidence, the simulated maximum harvesting efficiency was 96% and the HPBW was 110% for a randomly polarized wave. As the incident angle increases to 45°, the maximum efficiency remains higher than 88% and the HPBW higher than 83% for a randomly polarized wave. Furthermore, experimental verification of the designed metamaterial array was conducted, and the measured results were in reasonable agreement with the simulated ones.

  19. Simulated near-field mapping of ripple pattern supported metal nanoparticles arrays for SERS optimization

    NASA Astrophysics Data System (ADS)

    Arya, Mahima; Bhatnagar, Mukul; Ranjan, Mukesh; Mukherjee, Subroto; Nath, Rabinder; Mitra, Anirban

    2017-11-01

    An analytical model has been developed using a modified Yamaguchi model along with a wavelength-dependent plasmon line-width correction. The model has been used to calculate the near-field response of random nanoparticles on a plane surface and of elongated and spherical silver nanoparticle arrays supported on ion-beam-produced ripple-patterned templates. The calculated near-field mapping for elongated nanoparticle arrays on the ripple-patterned surface shows the maximum number of hot spots, with a higher near-field enhancement (NFE) compared to the spherical nanoparticle arrays and to randomly distributed nanoparticles on the plane surface. The results from the simulations show a similar trend for the NFE when compared to the far-field reflection spectra. The nature of the wavelength-dependent NFE is also found to be in agreement with the observed experimental results from surface-enhanced Raman spectroscopy (SERS). The calculated and measured optical responses unambiguously reveal the importance of interparticle gap and ordering: a high-intensity Raman signal is obtained for the ordered elongated nanoparticle arrays, as against the non-ordered and aligned configurations of spherical nanoparticles on the rippled surface.

  20. Graphical Internet Access on a Budget: Making a Pseudo-SLIP Connection.

    ERIC Educational Resources Information Center

    McCulley, P. Michael

    1995-01-01

    Examines The Internet Adapter (TIA), an Internet protocol that allows computers to be directly on the Internet and access graphics over standard telephone lines using high-speed modems. Compares TIA's system requirements, performance, and costs to other Internet connections. Sidebars describe connections other than TIA and how to find information…

  1. Polarimetric imaging of biological tissues based on the indices of polarimetric purity.

    PubMed

    Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C; Moreno, Ignacio; Campos, Juan

    2018-04-01

    We highlight the interest of using the indices of polarimetric purity (IPPs) for the inspection of biological tissues. The IPPs were recently proposed in the literature, and they provide a further synthesis of the depolarizing properties of samples. Compared with standard polarimetric images of biological samples, IPP-based images lead to larger image contrast for some biological structures and to a further physical interpretation of the depolarizing mechanisms inherent to the samples. In addition, unlike other methods, their calculation does not require advanced algebraic operations (as is the case for polar decompositions), and they yield three indicators that are easy to implement. We also propose a pseudo-colored encoding of the IPP information that leads to an improved visualization of samples. This last technique opens the possibility of tailored adjustment of tissue contrast by using customized pseudo-colored images. The potential of the IPP approach is experimentally highlighted throughout the manuscript by studying 3 different ex-vivo samples. A significant image contrast enhancement is obtained by using the IPP-based methods, compared to standard polarimetric images. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
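    For reference, the IPPs can be computed from a Mueller matrix via the eigenvalues of its associated 4×4 coherency matrix, following Gil's published definitions; the two matrices tested below are textbook limiting cases, not data from this paper:

```python
import numpy as np

# Indices of polarimetric purity (after J. J. Gil): the ordered eigenvalues
# of the coherency matrix H give P1 <= P2 <= P3 in [0, 1]. A pure
# (non-depolarizing) sample has P1 = P2 = P3 = 1; an ideal depolarizer, 0.
sigma = [np.eye(2),
         np.array([[1, 0], [0, -1]]),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]])]

def ipps(M):
    H = sum(M[i, j] * np.kron(sigma[i], sigma[j].conj())
            for i in range(4) for j in range(4)) / 4.0
    lam = np.sort(np.linalg.eigvalsh(H))[::-1]     # l1 >= l2 >= l3 >= l4
    t = lam.sum()
    return ((lam[0] - lam[1]) / t,
            (lam[0] + lam[1] - 2 * lam[2]) / t,
            (lam[0] + lam[1] + lam[2] - 3 * lam[3]) / t)

print(ipps(np.eye(4)))                # pure sample: all three indices ~1
print(ipps(np.diag([1.0, 0, 0, 0])))  # ideal depolarizer: all three ~0
```

    Mapping each index to a colour channel, as the pseudo-colouring described above does, is a per-pixel operation on these three numbers.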

  2. Phase-retrieval attack free cryptosystem based on cylindrical asymmetric diffraction and double-random phase encoding

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Li, Xiaowei; Hu, Yuhen; Wang, Qiong-Hua

    2018-03-01

    A phase-retrieval attack free cryptosystem based on cylindrical asymmetric diffraction and double-random phase encoding (DRPE) is proposed. The plaintext is abstracted as a cylinder, while the observed diffraction and holographic surfaces are concentric cylinders. The plaintext can therefore be encrypted through a two-step asymmetric diffraction process with double pseudo-random phase masks located on the object surface and the first diffraction surface. After inverse diffraction from the holographic surface to the object surface, the plaintext can be reconstructed using a decryption process. Since diffraction propagating from the inner cylinder to the outer cylinder differs from that in the reversed direction, the proposed cryptosystem is asymmetric and hence free of phase-retrieval attack. Numerical simulation results demonstrate the flexibility and effectiveness of the proposed cryptosystem.
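    For contrast with the cylindrical scheme, classic planar DRPE fits in a few lines of NumPy. This is the symmetric baseline whose phase-retrieval vulnerability motivates the paper, not the proposed cryptosystem:

```python
import numpy as np

# Planar double-random phase encoding: multiply the image by a random phase
# mask, apply a second random phase mask in the Fourier plane, and invert.
# Decryption simply undoes the two masks in reverse order.
rng = np.random.default_rng(9)
img = rng.random((64, 64))                       # stand-in "plaintext" image
m1 = np.exp(2j * np.pi * rng.random((64, 64)))   # phase mask at the input plane
m2 = np.exp(2j * np.pi * rng.random((64, 64)))   # phase mask at the Fourier plane

cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)           # encrypt
decoded = np.fft.ifft2(np.fft.fft2(cipher) * m2.conj())     # undo mask 2
recovered = np.abs(decoded * m1.conj())                     # undo mask 1

print(np.allclose(recovered, img))               # exact reconstruction
```

    Because the planar transform pair is linear and symmetric, known-plaintext phase-retrieval attacks can recover the masks; replacing the Fourier transforms with asymmetric inner-to-outer cylindrical diffraction is precisely what removes that symmetry in the proposed scheme.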

  3. Random number generators tested on quantum Monte Carlo simulations.

    PubMed

    Hongo, Kenta; Maezono, Ryo; Miura, Kenichi

    2010-08-01

We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application: ground-state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne Twister generator (MT19937) are tested and compared with the RANLUX generator at five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (the highest luxury level) within the statistical error bars, at a lower computational cost for generating the sequence. We also tested the notoriously flawed linear congruential generator (LCG) RANDU for comparison. (c) 2010 Wiley Periodicals, Inc.
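As a minimal illustration of the kind of comparison described, note that Python's standard `random` module is itself an MT19937 implementation, one of the generators tested. The toy estimator below (a stand-in integrand, not the quantum Monte Carlo energy of the paper) shows how a mean and its statistical error bar are computed; comparing generators amounts to checking that their means agree within such error bars.

```python
import random
import math

# Toy Monte Carlo estimate of E[x^2] for x ~ U(0,1); exact value is 1/3.
# Python's `random.Random` is MT19937, one of the generators in the study.
rng = random.Random(12345)
n = 200_000
samples = [rng.random() ** 2 for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / (n - 1)
stderr = math.sqrt(var / n)        # statistical error bar used for comparison
print(f"{mean:.5f} +/- {stderr:.5f}")
```

Rerunning with a different generator (or a different seed) should yield a mean consistent with 1/3 within a few multiples of `stderr`; RANDU-style generators fail such checks in higher-dimensional tests.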

  4. Coma cluster ultradiffuse galaxies are not standard radio galaxies

    NASA Astrophysics Data System (ADS)

    Struble, Mitchell F.

    2018-02-01

    Matching members in the Coma cluster catalogue of ultradiffuse galaxies (UDGs) from SUBARU imaging with a very deep radio continuum survey source catalogue of the cluster using the Karl G. Jansky Very Large Array (VLA) within a rectangular region of ∼1.19 deg2 centred on the cluster core reveals matches consistent with random. An overlapping set of 470 UDGs and 696 VLA radio sources in this rectangular area finds 33 matches within a separation of 25 arcsec; dividing the sample into bins with separations bounded by 5, 10, 20 and 25 arcsec finds 1, 4, 17 and 11 matches. An analytical model estimate, based on the Poisson probability distribution, of the number of randomly expected matches within these same separation bounds is 1.7, 4.9, 19.4 and 14.2, each, respectively, consistent with the 95 per cent Poisson confidence intervals of the observed values. Dividing the data into five clustercentric annuli of 0.1° and into the four separation bins, finds the same result. This random match of UDGs with VLA sources implies that UDGs are not radio galaxies by the standard definition. Those VLA sources having integrated flux >1 mJy at 1.4 GHz in Miller, Hornschemeier and Mobasher without SDSS galaxy matches are consistent with the known surface density of background radio sources. We briefly explore the possibility that some unresolved VLA sources near UDGs could be young, compact, bright, supernova remnants of Type Ia events, possibly in the intracluster volume.
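The random-match estimate in this record follows from a uniform-density Poisson argument: the expected number of chance matches within separation r is the number of UDG-VLA pairs times the fractional area of a circle of radius r. The sketch below uses the numbers quoted in the abstract (470 UDGs, 696 sources, ~1.19 deg^2) and approximately reproduces the quoted per-bin expectations of 1.7, 4.9, 19.4 and 14.2; the small residual differences presumably reflect edge or masking corrections in the paper.

```python
import math

# Survey numbers quoted in the abstract; uniform source density assumed.
n_udg, n_vla, area_deg2 = 470, 696, 1.19

def expected_matches(r_arcsec):
    """Expected number of random UDG-VLA matches within separation r."""
    r_deg = r_arcsec / 3600.0
    return n_udg * n_vla * math.pi * r_deg ** 2 / area_deg2

bounds = [5, 10, 20, 25]                       # separation bin edges (arcsec)
cum = [expected_matches(r) for r in bounds]
per_bin = [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
print([round(x, 1) for x in per_bin])          # ~[1.7, 5.0, 20.0, 15.0]
```

The observed total of 33 matches within 25 arcsec is then compared against the ~41 expected by chance via Poisson confidence intervals, which is why the matches are "consistent with random".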

  5. Uncoordinated MAC for Adaptive Multi Beam Directional Networks: Analysis and Evaluation

    DTIC Science & Technology

    2016-08-01

    control (MAC) policies for emerging systems that are equipped with fully digital antenna arrays which are capable of adaptive multi-beam directional...Adaptive Beam- forming, Multibeam, Directional Networking, Random Access, Smart Antennas I. INTRODUCTION Fully digital beamforming antenna arrays that...are capable of adaptive multi-beam communications are quickly becoming a reality. These antenna arrays allow users to form multiple simultaneous

  6. Cross-correlation least-squares reverse time migration in the pseudo-time domain

    NASA Astrophysics Data System (ADS)

    Li, Qingyang; Huang, Jianping; Li, Zhenchun

    2017-08-01

The least-squares reverse time migration (LSRTM) method is becoming increasingly popular for its higher image resolution and amplitude fidelity. However, LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, its large computational cost, and the mismatch of amplitudes between synthetic and observed data. To overcome these shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces depth/velocity ambiguities but also reduces the effect of velocity errors on the imaging results, relaxing the accuracy requirements that least-squares migration (LSM) places on the migration velocity model. The pseudo-time-domain algorithm eliminates irregular wavelength sampling in the vertical direction, reducing the number of vertical grid points and the memory required during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field data applications, matching the recorded amplitudes is very difficult because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the strong amplitude-matching requirement of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is sensitive only to the similarity between the predicted and observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability to complex models.
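The key property of a normalized cross-correlation objective, that it measures waveform similarity while ignoring overall amplitude, can be seen in a few lines. This is our illustration with a toy trace, not the paper's implementation:

```python
import numpy as np

def ncc(pred, obs):
    """Normalized cross-correlation between predicted and observed traces."""
    return np.dot(pred, obs) / (np.linalg.norm(pred) * np.linalg.norm(obs))

t = np.linspace(0, 1, 500)
observed = np.sin(40 * t) * np.exp(-3 * t)    # toy seismic trace
predicted = 0.1 * observed                    # correct shape, wrong amplitude

print(ncc(predicted, observed))               # ~1: amplitude mismatch ignored
```

A least-squares (L2) misfit would heavily penalize the factor-of-10 amplitude error above, whereas the normalized objective scores the prediction as a perfect match, which is exactly why it relaxes amplitude-matching requirements for field data.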

  7. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe the experimental image acquisition system with which we measured spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.

  8. Monitoring on-orbit calibration stability of the Terra MODIS and Landsat 7 ETM+ sensors using pseudo-invariant test sites

    USGS Publications Warehouse

    Chander, G.; Xiong, X.(J.); Choi, T.(J.); Angal, A.

    2010-01-01

The ability to detect and quantify changes in the Earth's environment depends on sensors that can provide calibrated, consistent measurements of the Earth's surface features through time. A critical step in this process is to put image data from different sensors onto a common radiometric scale. This work focuses on monitoring the long-term on-orbit calibration stability of the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors using the Committee on Earth Observation Satellites (CEOS) reference standard pseudo-invariant test sites (Libya 4, Mauritania 1/2, Algeria 3, Libya 1, and Algeria 5). These sites have been frequently used as radiometric targets because of their temporally stable surface conditions. This study was performed using all cloud-free calibrated images from the Terra MODIS and the L7 ETM+ sensors, acquired from launch to December 2008. Homogeneous regions of interest (ROIs) were selected in the calibrated images, and the mean target statistics were derived from sensor measurements in terms of top-of-atmosphere (TOA) reflectance. For each band pair, a set of fitted coefficients (slope and offset) is provided to monitor the long-term stability over very stable pseudo-invariant test sites. The average percent differences in intercept from the long-term trends obtained from the ETM+ TOA reflectance estimates relative to the MODIS for all the CEOS reference standard test sites range from 2.5% to 15%. This gives an estimate of the collective differences due to the Relative Spectral Response (RSR) characteristics of each sensor, the bi-directional reflectance distribution function (BRDF), the spectral signature of the ground target, and atmospheric composition. The lifetime TOA reflectance trends from both sensors over 10 years are extremely stable, changing by no more than 0.4% per year in TOA reflectance over the CEOS reference standard test sites.

  9. A 750 GeV portal: LHC phenomenology and dark matter candidates

    DOE PAGES

    D’Eramo, Francesco; de Vries, Jordy; Panci, Paolo

    2016-05-16

We study the effective field theory obtained by extending the Standard Model field content with two singlets: a 750 GeV (pseudo-)scalar and a stable fermion. Accounting for collider productions initiated by both gluon and photon fusion, we investigate where the theory is consistent with both the LHC diphoton excess and bounds from Run 1. We analyze dark matter phenomenology in such regions, including relic density constraints as well as collider, direct, and indirect bounds. Scalar portal dark matter models are very close to limits from direct detection and mono-jet searches if gluon fusion dominates, and not constrained at all otherwise. In conclusion, pseudo-scalar models are challenged by photon line limits and mono-jet searches in most of the parameter space.

  10. A 750 GeV portal: LHC phenomenology and dark matter candidates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Eramo, Francesco; de Vries, Jordy; Panci, Paolo

We study the effective field theory obtained by extending the Standard Model field content with two singlets: a 750 GeV (pseudo-)scalar and a stable fermion. Accounting for collider productions initiated by both gluon and photon fusion, we investigate where the theory is consistent with both the LHC diphoton excess and bounds from Run 1. We analyze dark matter phenomenology in such regions, including relic density constraints as well as collider, direct, and indirect bounds. Scalar portal dark matter models are very close to limits from direct detection and mono-jet searches if gluon fusion dominates, and not constrained at all otherwise. In conclusion, pseudo-scalar models are challenged by photon line limits and mono-jet searches in most of the parameter space.

  11. Model of Semidiurnal Pseudo Tide in the High-Latitude Upper Mesosphere

    NASA Technical Reports Server (NTRS)

    Talaat, E. R.; Mayr, H. G.

    2011-01-01

We present numerical results for the m = 1 meridional winds of semidiurnal oscillations in the high-latitude upper mesosphere, which are generated in the Numerical Spectral Model (NSM) without solar excitation of the tides. Identified through heuristic computer runs, the pseudo tides attain amplitudes that are, at times, as large as the non-migrating tides produced with standard solar forcing. Under the influence of parameterized gravity waves, the nonlinear NSM generates internal oscillations, such as the quasi-biennial oscillation, that are produced with periods favored by the dynamical properties of the system. At polar latitudes, the Coriolis force would favor the excitation of the 12-hour periodicity. This oscillation may help explain the large non-migrating semidiurnal tides observed in the region with ground-based and satellite measurements.

  12. Inter-station coda wavefield studies using a novel icequake database on Erebus volcano

    NASA Astrophysics Data System (ADS)

    Chaput, J. A.; Campillo, M.; Roux, P.; Aster, R. C.

    2013-12-01

    Recent theoretical advances pertaining to the properties of multiply scattered wavefields have yielded a plethora of numerical and controlled source studies aiming to better understand what information may be derived from these otherwise chaotic signals. Practically, multiply scattered wavefields are difficult to compare to numerically derived models due to a combination of source paucity/directionality and array density limitations, particularly in passive seismology scenarios. Furthermore, in situations where data quantities are abundant, such as for ambient noise correlations, it remains very difficult to recover pseudo-Green's function symmetry in the ballistic components of the wavefield, let alone in the coda of the correlations. In this study, we use a large network of short period and broadband instruments on Erebus volcano to show that actual Green's function recovery is indeed possible in some cases. We make use of a large database of small impulsive icequakes distributed randomly on the summit plateau and, using fundamental theoretical properties of equipartitioned wavefields and interstation icequake coda correlations, are able to directly derive notoriously difficult quantities such as the bulk elastic mean free path for the volcano, demonstrations of correlation coda symmetry and its dependence on the number of icequakes used, and a theoretically predicted coherent backscattering amplification factor associated with weak localization. We furthermore show that stable equipartition and H^2/V^2 ratios may be consistently observed for icequake coda, and we perform simple depth inversions of these frequency dependent quantities to compare with known structures.

  13. Residential photovoltaic module and array requirements study

    NASA Technical Reports Server (NTRS)

    Nearhoof, S. L.; Oster, J. R.

    1979-01-01

    Design requirements for photovoltaic modules and arrays used in residential applications were identified. Building codes and referenced standards were reviewed for their applicability to residential photovoltaic array installations. Four installation types were identified - integral (replaces roofing), direct (mounted on top of roofing), stand-off (mounted away from roofing), and rack (for flat or low slope roofs, or ground mounted). Installation costs were developed for these mounting types as a function of panel/module size. Studies were performed to identify optimum module shapes and sizes and operating voltage cost drivers. It is concluded that there are no perceived major obstacles to the use of photovoltaic modules in residential arrays. However, there is no applicable building code category for residential photovoltaic modules and arrays and additional work with standards writing organizations is needed to develop residential module and array requirements.

  14. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
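The "structured estimate" of the array correlation matrix (ACM) described above can be sketched concretely: for an equispaced linear array the exact ACM is Hermitian Toeplitz, so a noisy sample estimate can be projected onto that structure by averaging along its diagonals. This is our illustration of the spatial-averaging idea, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                                  # number of array elements
x = rng.normal(size=n) + 1j * rng.normal(size=n)       # one snapshot of element outputs

R_noisy = np.outer(x, x.conj())                        # standard (unstructured) estimate

# Average each diagonal: r[k] = mean of elements with (column - row) = k.
r = [np.mean(np.diagonal(R_noisy, offset=k)) for k in range(n)]
R_struct = np.empty((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        R_struct[i, j] = r[j - i] if j >= i else np.conj(r[i - j])

# The structured estimate is Hermitian Toeplitz by construction,
# matching the form of the exact ACM for an equispaced linear array.
assert np.allclose(R_struct, R_struct.conj().T)
```

Averaging over the n - |k| noisy entries of each diagonal reduces the variance of each correlation-lag estimate, which is the mechanism behind the reduced sensitivity to the look-direction signal analyzed in the record.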

  15. Two-Way Satellite Time and Frequency Transfer Using 1 MChips/s Codes

    DTIC Science & Technology

    2009-11-01

    Abstract The Ku-band transatlantic and Europe-to-Europe two-way satellite time and frequency transfer ( TWSTFT ) operations used 2.5 MChip/s...pseudo-random codes with 3.5 MHz bandwidth until the end of July 2009. The cost of TWSTFT operation is associated with the bandwidth used on a...geostationary satellite. The transatlantic and Europe-to-Europe TWSTFT operations faced a significant increase in cost for using 3.5 MHz bandwidth on a new

  16. Information Encoding on a Pseudo Random Noise Radar Waveform

    DTIC Science & Technology

    2013-03-01

Search-result snippet (front-matter figure list): quadrature mirror filter bank (QMFB) tree diagram; QMFB layer 3 contour plot for 7-bit Barker code binary phase shift ... test signal; block diagram of the FFT accumulation method (FAM) time-smoothing method to estimate the spectral correlation ...; correlator output for a WGN pulse in an AWGN channel (effectiveness of correlation for SNR = -10 dB).

  17. NASA-STD-4005 and NASA-HDBK-4006, LEO Spacecraft Solar Array Charging Design Standard

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.

    2007-01-01

    Two new NASA Standards are now official. They are the NASA LEO Spacecraft Charging Design Standard (NASA-STD-4005) and the NASA LEO Spacecraft Charging Design Handbook (NASA-HDBK-4006). They give the background and techniques for controlling solar array-induced charging and arcing in LEO. In this paper, a brief overview of the new standards is given, along with where they can be obtained and who should be using them.

  18. Linkage analysis by genotyping of sibling populations: a genetic map for the potato cyst nematode constructed using a "pseudo-F2" mapping strategy.

    PubMed

    Rouppe van der Voort, J N; van Eck, H J; van Zandvoort, P M; Overmars, H; Helder, J; Bakker, J

    1999-07-01

    A mapping strategy is described for the construction of a linkage map of a non-inbred species in which individual offspring genotypes are not amenable to marker analysis. After one extra generation of random mating, the segregating progeny was propagated, and bulked populations of offspring were analyzed. Although the resulting population structure is different from that of commonly used mapping populations, we show that the maximum likelihood formula for a normal F2 is applicable for the estimation of recombination. This "pseudo-F2" mapping strategy, in combination with the development of an AFLP assay for single cysts, facilitated the construction of a linkage map for the potato cyst nematode Globodera rostochiensis. Using 12 pre-selected AFLP primer combinations, a total of 66 segregating markers were identified, 62 of which were mapped to nine linkage groups. These 62 AFLP markers are randomly distributed and cover about 65% of the genome. An estimate of the physical size of the Globodera genome was obtained from comparisons of the number of AFLP fragments obtained with the values for Caenorhabditis elegans. The methodology presented here resulted in the first genomic map for a cyst nematode. The low value of the kilobase/centimorgan (kb/cM) ratio for the Globodera genome will facilitate map-based cloning of genes that mediate the interaction between the nematode and its host plant.

  19. High-density fiber-optic DNA random microsphere array.

    PubMed

    Ferguson, J A; Steemers, F J; Walt, D R

    2000-11-15

    A high-density fiber-optic DNA microarray sensor was developed to monitor multiple DNA sequences in parallel. Microarrays were prepared by randomly distributing DNA probe-functionalized 3.1-microm-diameter microspheres in an array of wells etched in a 500-microm-diameter optical imaging fiber. Registration of the microspheres was performed using an optical encoding scheme and a custom-built imaging system. Hybridization was visualized using fluorescent-labeled DNA targets with a detection limit of 10 fM. Hybridization times of seconds are required for nanomolar target concentrations, and analysis is performed in minutes.

  20. Lowering data retention voltage in static random access memory array by post fabrication self-improvement of cell stability by multiple stress application

    NASA Astrophysics Data System (ADS)

    Mizutani, Tomoko; Takeuchi, Kiyoshi; Saraya, Takuya; Kobayashi, Masaharu; Hiramoto, Toshiro

    2018-04-01

We propose a new version of the post-fabrication static random access memory (SRAM) self-improvement technique, which utilizes multiple stress application. Using a device matrix array (DMA) test element group (TEG) with intrinsic-channel fully depleted (FD) silicon-on-thin-buried-oxide (SOTB) six-transistor (6T) SRAM cells fabricated in a 65 nm technology, we demonstrate that lowering of the data retention voltage (DRV) is achieved more effectively than with the previously proposed single-stress technique.

  1. Power generation in random diode arrays

    NASA Astrophysics Data System (ADS)

    Shvydka, Diana; Karpov, V. G.

    2005-03-01

We discuss nonlinear disordered systems, random diode arrays (RDAs), which can represent such objects as large-area photovoltaics and ion channels of biological membranes. Our numerical modeling has revealed several interesting properties of RDAs. In particular, the geometrical distribution of nonuniformities across an RDA has only a minor effect on its integral characteristics, which are determined by the RDA parameter statistics. At the same time, the dispersion of the integral characteristics versus system size exhibits a nontrivial scaling dependence. Our theoretical interpretation remains limited and is based on the picture of eddy currents flowing through weak diodes in the RDA.

  2. Soliton creation, propagation, and annihilation in aeromechanical arrays of one-way coupled bistable elements

    NASA Astrophysics Data System (ADS)

    Rosenberger, Tessa; Lindner, John F.

We study the dynamics of mechanical arrays of bistable elements coupled one-way by wind. Unlike earlier hydromechanical unidirectional arrays, our aeromechanical one-way arrays are simpler, easier to study, and exhibit a broader range of phenomena. Soliton-like waves propagate in one direction at speeds proportional to wind speed. Periodic boundaries enable solitons to annihilate in pairs in even arrays, where adjacent elements are attracted to opposite stable states. Solitons propagate indefinitely in odd arrays, where pairing is frustrated. Large noise spontaneously creates soliton-antisoliton pairs, as predicted by prior computer simulations. Soliton annihilation times increase quadratically with initial separations, as expected for random-walk models of soliton collisions.

  3. Sorption reaction mechanism of some hazardous radionuclides from mixed waste by impregnated crown ether onto polymeric resin.

    PubMed

    Shehata, F A; Attallah, M F; Borai, E H; Hilal, M A; Abo-Aly, M M

    2010-02-01

A novel impregnated polymeric resin was tested as an adsorbent material for the removal of some hazardous radionuclides from radioactive liquid waste, and its applicability to the treatment of low-level liquid radioactive waste was investigated. The material was prepared by loading 4,4'(5')-di-t-butylbenzo-18-crown-6 (DtBB18C6) onto poly(acrylamide-acrylic acid-acrylonitrile)-N,N'-methylenediacrylamide (P(AM-AA-AN)-DAM). The removal of (134)Cs, (60)Co, (65)Zn, and ((152+154))Eu onto P(AM-AA-AN)-DAM/DtBB18C6 was investigated using a batch equilibrium technique with respect to pH, contact time, and temperature. Five kinetic models were applied to determine the rate of sorption and to investigate the sorption mechanism: pseudo-first-order, pseudo-second-order, intra-particle diffusion, homogeneous particle diffusion (HPDM), and Elovich. The kinetic results indicate that the pseudo-second-order model is applicable, that sorption is controlled by a particle diffusion mechanism, and that the process is chemisorption. The obtained values of the thermodynamic parameters DeltaH degrees, DeltaS degrees, and DeltaG degrees indicate the endothermic nature of sorption, increased randomness at the solid/solution interface, and the spontaneous nature of the sorption processes. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
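A pseudo-second-order fit of the kind used in such sorption studies is straightforward: the model q(t) = qe^2 k t / (1 + qe k t) linearizes to t/q(t) = 1/(k qe^2) + t/qe, so a straight-line fit of t/q against t recovers the equilibrium capacity qe and rate constant k. The sketch below uses synthetic, hypothetical parameter values, not data from the record.

```python
import numpy as np

qe_true, k_true = 2.5, 0.8           # hypothetical capacity (mg/g) and rate constant
t = np.linspace(0.5, 30, 60)         # contact times (min)
q = qe_true**2 * k_true * t / (1 + qe_true * k_true * t)   # pseudo-second-order uptake

# Linearized form: t/q = 1/(k*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(qe_fit, k_fit)                 # recovers ~2.5 and ~0.8
```

With experimental data, the quality of this linear fit (its R^2) is what supports the conclusion that the pseudo-second-order model, and hence chemisorption, governs the process.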

  4. Directed assembly of gold nanowires on silicon via reorganization and simultaneous fusion of randomly distributed gold nanoparticles.

    PubMed

    Reinhardt, Hendrik M; Bücker, Kerstin; Hampp, Norbert A

    2015-05-04

    Laser-induced reorganization and simultaneous fusion of nanoparticles is introduced as a versatile concept for pattern formation on surfaces. The process takes advantage of a phenomenon called laser-induced periodic surface structures (LIPSS) which originates from periodically alternating photonic fringe patterns in the near-field of solids. Associated photonic fringe patterns are shown to reorganize randomly distributed gold nanoparticles on a silicon wafer into periodic gold nanostructures. Concomitant melting due to optical heating facilitates the formation of continuous structures such as periodic gold nanowire arrays. Generated patterns can be converted into secondary structures using directed assembly or self-organization. This includes for example the rotation of gold nanowire arrays by arbitrary angles or their fragmentation into arrays of aligned gold nanoparticles.

  5. Cobalt selenide hollow nanorods array with exceptionally high electrocatalytic activity for high-efficiency quasi-solid-state dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Jin, Zhitong; Zhang, Meirong; Wang, Min; Feng, Chuanqi; Wang, Zhong-Sheng

    2018-02-01

In quasi-solid-state dye-sensitized solar cells (QSDSSCs), electron transport through a random network of catalyst particles in the counter electrode (CE), and electrolyte diffusion therein, are limited by the grain boundaries of the catalyst particles, diminishing the electrocatalytic performance of the CE and the corresponding photovoltaic performance of the QSDSSC. We demonstrate herein an ordered Co0.85Se hollow nanorods array film as a Pt-free CE for QSDSSCs. The Co0.85Se hollow nanorods array displays excellent electrocatalytic activity for the reduction of I3- in the quasi-solid-state electrolyte, with extremely low charge-transfer resistance at the CE/electrolyte interface, and the diffusion of redox species within the Co0.85Se hollow nanorods array CE is fast. The QSDSSC device with the Co0.85Se hollow nanorods array CE produces a much higher photovoltaic conversion efficiency (8.35%) than that with the randomly packed Co0.85Se nanorods CE (4.94%) and also exceeds the control device with a Pt CE (7.75%). Moreover, the QSDSSC device based on the Co0.85Se hollow nanorods array CE presents good long-term stability, with only a 4% drop in power conversion efficiency after 1086 h of one-sun soaking.

  6. Spectral statistics and scattering resonances of complex primes arrays

    NASA Astrophysics Data System (ADS)

    Wang, Ren; Pinheiro, Felipe A.; Dal Negro, Luca

    2018-01-01

We introduce a class of aperiodic arrays of electric dipoles generated from the distribution of prime numbers in complex quadratic fields (Eisenstein and Gaussian primes) as well as quaternion primes (Hurwitz and Lifschitz primes), and study the nature of their scattering resonances using the vectorial Green's matrix method. In these systems we demonstrate several distinctive spectral properties, such as the absence of level repulsion in the strongly scattering regime, critical statistics of level spacings, and the existence of critical modes, which are extended fractal modes with long lifetimes not supported by either random or periodic systems. Moreover, we show that one can predict important physical properties, such as the existence of spectral gaps, by analyzing the eigenvalue distribution of the Green's matrix of the arrays in the complex plane. Our results unveil the importance of aperiodic correlations in prime number arrays for the engineering of gapped photonic media that support far richer mode localization and spectral properties compared to usual periodic and random media.
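The Gaussian-prime point sets underlying one of these array classes are easy to generate from the standard number-theoretic criterion: a Gaussian integer a+bi is prime iff its norm a^2+b^2 is a rational prime, or one component is zero and the other has absolute value a rational prime congruent to 3 mod 4. The window size below is an arbitrary choice for illustration.

```python
import math

def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def is_gaussian_prime(a: int, b: int) -> bool:
    """Gaussian-prime criterion: prime norm, or a zero component with the
    other component a prime congruent to 3 mod 4 in absolute value."""
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    return is_prime(a * a + b * b)

# Dipole positions: Gaussian primes in a small window of the complex plane.
points = [(a, b) for a in range(-10, 11) for b in range(-10, 11)
          if is_gaussian_prime(a, b)]
print(len(points))
```

Each point would then carry an electric dipole, and the spectral statistics discussed in the record follow from the eigenvalues of the Green's matrix built on these positions.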

  7. Preliminary assessment of several parameters to measure and compare usefulness of the CEOS reference pseudo-invariant calibration sites

    USGS Publications Warehouse

    Chander, Gyanesh; Angal, Amit; Xiong, Xiaoxiong; Helder, Dennis L.; Mishra, Nischal; Choi, Taeyoung; Wu, Aisheng

    2010-01-01

Test sites are central to any future quality assurance and quality control (QA/QC) strategy. The Committee on Earth Observation Satellites (CEOS) Working Group for Calibration and Validation (WGCV) Infrared Visible Optical Sensors (IVOS) worked with collaborators around the world to establish a core set of CEOS-endorsed, globally distributed, reference standard test sites (both instrumented and pseudo-invariant) for the post-launch calibration of space-based optical imaging sensors. The pseudo-invariant calibration sites (PICS) have high reflectance and are usually made up of sand dunes with low aerosol loading and practically no vegetation. The goal of this paper is to provide a preliminary assessment of "several parameters" that can be used on an operational basis to compare and measure the usefulness of reference sites all over the world. The data from Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing-1 (EO-1) Hyperion sensors over the CEOS PICS were used to perform a preliminary assessment of several parameters, such as usable area, data availability, top-of-atmosphere (TOA) reflectance, at-sensor brightness temperature, spatial uniformity, temporal stability, spectral stability, and typical spectrum observed over the sites.

  8. An Anti-Electromagnetic Attack PUF Based on a Configurable Ring Oscillator for Wireless Sensor Networks

    PubMed Central

    Lu, Zhaojun; Li, Dongfang; Liu, Hailong; Gong, Mingyang; Liu, Zhenglin

    2017-01-01

Wireless sensor networks (WSNs) are an emerging technology employed in some crucial applications. However, limited resources and physical exposure to attackers make security a challenging issue for a WSN. A ring-oscillator-based physical unclonable function (RO PUF) is a potential option for protecting the security of sensor nodes because it can efficiently generate random responses for a key extraction mechanism, which avoids storing secret keys in non-volatile memory. In order to deploy an RO PUF in a WSN, hardware efficiency, randomness, uniqueness, and reliability should be taken into account. In addition, resistance to electromagnetic (EM) analysis attack is important to guarantee the security of the RO PUF itself. In this paper, we propose a novel architecture of configurable RO PUF based on exclusive-or (XOR) gates. First, it dramatically increases hardware efficiency compared with other types of RO PUFs. Second, it mitigates the vulnerability to EM analysis attack by placing the adjacent RO arrays in accordance with cosine and sine waves so that the frequency of each RO cannot be detected. We implement our proposal in Xilinx Artix-7 field programmable gate arrays (FPGAs) and conduct a set of experiments to evaluate the quality of the responses. The results show that the responses pass the National Institute of Standards and Technology (NIST) statistical test and have good uniqueness and reliability under different environments. Therefore, the proposed configurable RO PUF is suitable for establishing a key extraction mechanism in a WSN. PMID:28914756

  9. An Anti-Electromagnetic Attack PUF Based on a Configurable Ring Oscillator for Wireless Sensor Networks.

    PubMed

    Lu, Zhaojun; Li, Dongfang; Liu, Hailong; Gong, Mingyang; Liu, Zhenglin

    2017-09-15

Wireless sensor networks (WSNs) are an emerging technology employed in some crucial applications. However, limited resources and physical exposure to attackers make security a challenging issue for a WSN. A ring-oscillator-based physical unclonable function (RO PUF) is a potential option for protecting the security of sensor nodes because it can efficiently generate random responses for a key extraction mechanism, which avoids storing secret keys in non-volatile memory. In order to deploy an RO PUF in a WSN, hardware efficiency, randomness, uniqueness, and reliability should be taken into account. In addition, resistance to electromagnetic (EM) analysis attack is important to guarantee the security of the RO PUF itself. In this paper, we propose a novel architecture of configurable RO PUF based on exclusive-or (XOR) gates. First, it dramatically increases hardware efficiency compared with other types of RO PUFs. Second, it mitigates the vulnerability to EM analysis attack by placing the adjacent RO arrays in accordance with cosine and sine waves so that the frequency of each RO cannot be detected. We implement our proposal in Xilinx Artix-7 field programmable gate arrays (FPGAs) and conduct a set of experiments to evaluate the quality of the responses. The results show that the responses pass the National Institute of Standards and Technology (NIST) statistical test and have good uniqueness and reliability under different environments. Therefore, the proposed configurable RO PUF is suitable for establishing a key extraction mechanism in a WSN.
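The basic RO-PUF response mechanism referred to in these two records can be modeled in a few lines: each challenge selects a pair of ring oscillators, and the response bit records which of the two runs faster, with chip-unique manufacturing variation modeled as a random per-RO frequency offset. All numbers below (RO count, nominal frequency, variation) are illustrative assumptions, not values from the paper.

```python
import random

def make_chip(n_ro: int, seed: int):
    """Model one chip: nominal 200 MHz ROs plus chip-unique process variation."""
    rng = random.Random(seed)
    return [200e6 + rng.gauss(0, 1e6) for _ in range(n_ro)]

def response(chip, challenges):
    """One bit per challenge: 1 if RO i is faster than RO j, else 0."""
    return [1 if chip[i] > chip[j] else 0 for i, j in challenges]

chips = [make_chip(64, seed) for seed in (1, 2)]       # two simulated chips
pairs = [(i, i + 1) for i in range(0, 64, 2)]          # 32 disjoint RO pairs
r0, r1 = response(chips[0], pairs), response(chips[1], pairs)
hd = sum(a != b for a, b in zip(r0, r1))               # inter-chip Hamming distance
print(len(r0), hd)   # uniqueness: ideally hd sits near half the response length
```

Reliability corresponds to the same chip reproducing the same response across environments, and uniqueness to a near-50% inter-chip Hamming distance; the record's EM countermeasure addresses an attacker trying to recover the individual RO frequencies that produce these bits.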

  10. Study on the algorithm of computational ghost imaging based on discrete Fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the DFT measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. Simulations then verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the DFT matrix equals that of the random measurement matrix: the PSNR of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is unaffected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, achieving a denoising capability higher than that of the CGI algorithm. Overall, the FGI algorithm improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
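The sampling-and-inversion step can be sketched in a 1-D toy (sizes and variable names are illustrative; the paper works with 2-D images):

```python
import numpy as np

# Illuminate an N-pixel object with M patterns taken from the rows of a DFT
# matrix, then invert with the Moore-Penrose pseudo-inverse.
N, M = 32, 32
rng = np.random.default_rng(0)
obj = rng.random(N)                                # unknown object reflectance

k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k[:M], k) / N)   # M x N DFT measurement matrix

y = F @ obj                                        # bucket-detector measurements
recon = np.real(np.linalg.pinv(F) @ y)             # pseudo-inverse reconstruction

print(np.allclose(recon, obj, atol=1e-8))
```

With M equal to N the measurement matrix is a full DFT and the pseudo-inverse recovers the object exactly; reducing M makes the system underdetermined, which is where the PSNR differences discussed above arise.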

  11. Effect of increasing disorder on domains of the 2d Coulomb glass.

    PubMed

    Bhandari, Preeti; Malik, Vikas

    2017-12-06

    We have studied a two dimensional lattice model of Coulomb glass for a wide range of disorders at [Formula: see text]. The system was first annealed using Monte Carlo simulation. Further minimization of the total energy of the system was done using an algorithm developed by Baranovskii et al, followed by cluster flipping to obtain the pseudo-ground states. We have shown that the energy required to create a domain of linear size L in d dimensions is proportional to [Formula: see text]. Using Imry-Ma arguments given for the random field Ising model, one gets critical dimension [Formula: see text] for the Coulomb glass. The investigation of domains in the transition region shows a discontinuity in staggered magnetization, which is an indication of a first-order type transition from the charge-ordered phase to the disordered phase. The structure and nature of the random field fluctuations of the second largest domain in the Coulomb glass are inconsistent with the assumptions of Imry and Ma, as was also reported for the random field Ising model. The study of domains showed that in the transition region there were mostly two large domains, and that as disorder was increased the two large domains remained, but a large number of small domains also opened up. We have also studied the properties of the second largest domain as a function of disorder. We furthermore analysed the effect of disorder on the density of states, and showed a transition from a hard gap at low disorders to a soft gap at higher disorders. At [Formula: see text], we have analysed the soft gap in detail, and found that the density of states deviates slightly ([Formula: see text]) from the linear behaviour in two dimensions. Analysis of local minima shows that the pseudo-ground states have similar structure.

  12. Method and apparatus for enhancing vortex pinning by conformal crystal arrays

    DOEpatents

    Janko, Boldizsar; Reichhardt, Cynthia; Reichhardt, Charles; Ray, Dipanjan

    2015-07-14

    Disclosed is a method and apparatus for strongly enhancing vortex pinning by conformal crystal arrays. The conformal crystal array is constructed by a conformal transformation of a hexagonal lattice, producing a non-uniform structure with a gradient where the local six-fold coordination of the pinning sites is preserved, and with an arching effect. The conformal pinning arrays produce significantly enhanced vortex pinning over a much wider range of field than that found for other vortex pinning geometries with an equivalent number of vortex pinning sites, such as random, square, and triangular.
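As a hedged illustration of the idea (using a generic complex-logarithm map, not necessarily the patented transformation), a conformal map applied to a hexagonal lattice preserves local angles, and hence the six-fold coordination of the pinning sites, while introducing a density gradient:

```python
import numpy as np

# Build a small hexagonal lattice in the upper half-plane, then apply the
# conformal map w = ln(z). Conformal maps are angle-preserving, so each
# site keeps its six-fold neighborhood while the spacing acquires a gradient.
a = 1.0  # lattice constant
pts = []
for j in range(1, 15):                      # rows (y > 0 keeps ln well-defined)
    for i in range(-10, 11):                # columns
        x = i * a + (j % 2) * a / 2.0       # stagger alternate rows
        y = j * a * np.sqrt(3) / 2.0
        pts.append(complex(x, y))
z = np.array(pts)
w = np.log(z)                               # conformally transformed pinning sites
print(len(w))
```

In the patent's setting, sites like `w` would be the locations of artificial pinning centers; the graded density is what broadens the range of magnetic fields over which pinning is enhanced.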

  13. Optimum SNR data compression in hardware using an Eigencoil array.

    PubMed

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based compression after reception lessens computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array achieves the signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction", or MRI data compression, is demonstrated: optimal SNR was obtained using only four channels, and a three-channel Eigencoil achieved superior sum-of-squares SNR over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
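A software analogue of this channel reduction (a sketch assuming SVD-based compression; the authors implement the combination in RF hardware before the receivers):

```python
import numpy as np

# Compress 8 correlated coil channels to 4 virtual channels by projecting
# onto the top singular vectors of the channel data (toy, noise-free data).
rng = np.random.default_rng(1)
n_ch, n_samp = 8, 1000
mix = rng.standard_normal((n_ch, 3))           # 3 underlying signal modes
data = mix @ rng.standard_normal((3, n_samp))  # simulated channel data

U, s, _ = np.linalg.svd(data, full_matrices=False)
virt = U[:, :4].conj().T @ data                # 4 "virtual" channels

retained = (s[:4] ** 2).sum() / (s ** 2).sum() # energy kept after compression
print(round(retained, 6))
```

Because the toy data have only three independent modes, four virtual channels retain essentially all the signal energy, which mirrors the paper's observation that a few eigen-channels can carry nearly optimal SNR.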

  14. Advances in lenticular lens arrays for visual display

    NASA Astrophysics Data System (ADS)

    Johnson, R. Barry; Jacobsen, Gary A.

    2005-08-01

    Lenticular lens arrays are widely used in the printed display industry and in specialized applications of electronic displays. In general, lenticular arrays can create from interlaced printed images such visual effects as 3-D, animation, flips, morph, zoom, or various combinations. The use of these typically cylindrical lens arrays for this purpose began in the late 1920s. The lenses comprise a front surface having a spherical cross-section and a flat rear surface on which the material to be displayed is proximately located. The principal limitation on image quality for current-technology lenticular lenses is spherical aberration. This limitation causes lenticular lens arrays to be generally thick (0.5 mm) and not easily wrapped around such items as cans or bottles. The objectives of this research effort were to develop a realistic analytical model, to significantly improve the image quality, to develop the tooling necessary to fabricate lenticular lens array extrusion cylinders, and to develop enhanced fabrication technology for the extrusion cylinder. It was determined that the most viable cross-sectional shape for the lenticular lenses is elliptical; this shape dramatically improves the image quality. The relationship between the lens radius, conic constant, material refractive index, and thickness will be discussed. A significant challenge was to fabricate a diamond-cutting tool having the proper elliptical shape; both true elliptical and pseudo-elliptical diamond tools were designed and fabricated. The extruded plastic sheets can be quite thin (< 0.25 mm) and, consequently, can be wrapped around cans and the like. Fabrication of the lenticular engraved extrusion cylinder required remarkable development considering the large physical size and weight of the cylinder and the tight mechanical tolerances associated with the lenticular lens molds cut into the cylinder's surface. The development of the cutting tool and the lenticular engraved extrusion cylinder will be presented, in addition to an illustrative comparison of current lenticular technology and the new technology. Three U.S. patents have been issued as a consequence of this research effort.
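The elliptical cross-section can be expressed with the standard conic sag equation from optics; the curvature and conic constant below are hypothetical, chosen only to show the shape difference:

```python
import math

def sag(r, c, k):
    """Conic-section surface sag z(r); curvature c = 1/R, conic constant k.
    k = 0 is a sphere; -1 < k < 0 gives an elliptical cross-section of the
    kind the paper found to suppress spherical aberration."""
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))

R, k = 0.2, -0.6                   # mm; hypothetical lenticule radius and conic
z_sphere = sag(0.1, 1 / R, 0.0)    # spherical profile at half-aperture
z_ellipse = sag(0.1, 1 / R, k)     # elliptical profile at the same point
print(z_sphere, z_ellipse)
```

The elliptical surface sits slightly below the sphere away from the vertex, which is how the aspheric profile trades off against spherical aberration.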

  15. Expansion of CMOS array design techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.; Ramondetta, P.

    1977-01-01

    The important features of the multiport (double entry) automatic placement and routing programs for standard cells are described. Measured performance and predicted performance were compared for seven CMOS/SOS array types and hybrids designed with the high speed CMOS/SOS cell family. The CMOS/SOS standard cell data sheets are listed and described.

  16. Workshop I: Systems/Standards/Arrays

    NASA Technical Reports Server (NTRS)

    Piszczor, Mike; Reed, Brad

    2007-01-01

    Workshop Format: 1) 1:00 - 3:00 to cover various topics as appropriate. 2) At the last SPRAT, a workshop topic was conducted on solar cell and array qualification standards; Brad Reed will present an update on the status of that effort. 3) Second workshop topic: the future of PV research within NASA. 4) Any time remaining, specific topics from participants. 5) Reminder for IAPG members: RECWG today 3:00-5:00 in the Federal Room, 2nd Floor, OAI. A chart is presented showing: Evaluation of Solar Array Technology Readiness Levels.

  17. Random-access technique for modular bathymetry data storage in a continental shelf wave refraction program

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1974-01-01

    A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
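The modular random-access idea can be sketched with fixed-size records and file seeks (the module size and layout here are invented for illustration; the original used special random-access subroutines on a peripheral file):

```python
import os, struct, tempfile

MOD_VALUES = 16                      # depth values per module (illustrative)
REC_SIZE = MOD_VALUES * 4            # bytes per module (float32)

# Write a file of 105 indexed modules, as in the study's regional array.
path = os.path.join(tempfile.mkdtemp(), "bathy.bin")
with open(path, "wb") as f:
    for m in range(105):
        f.write(struct.pack(f"<{MOD_VALUES}f", *([float(m)] * MOD_VALUES)))

def read_module(index):
    """Seek directly to one module instead of loading the whole grid."""
    with open(path, "rb") as f:
        f.seek(index * REC_SIZE)     # non-sequential access
        return struct.unpack(f"<{MOD_VALUES}f", f.read(REC_SIZE))

print(read_module(42)[0])  # -> 42.0
```

Only the modules a refraction ray actually crosses need to reside in memory, which is the source of the 75-percent field-length reduction reported, at the cost of more I/O system calls.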

  18. Lattice Boltzmann simulations for wall-flow dynamics in porous ceramic diesel particulate filters

    NASA Astrophysics Data System (ADS)

    Lee, Da Young; Lee, Gi Wook; Yoon, Kyu; Chun, Byoungjin; Jung, Hyun Wook

    2018-01-01

    Flows through porous filter walls of wall-flow diesel particulate filter are investigated using the lattice Boltzmann method (LBM). The microscopic model of the realistic filter wall is represented by randomly overlapped arrays of solid spheres. The LB simulation results are first validated by comparison to those from previous hydrodynamic theories and constitutive models for flows in porous media with simple regular and random solid-wall configurations. We demonstrate that the newly designed randomly overlapped array structures of porous walls allow reliable and accurate simulations for the porous wall-flow dynamics in a wide range of solid volume fractions from 0.01 to about 0.8, which is beyond the maximum random packing limit of 0.625. The permeable performance of porous media is scrutinized by changing the solid volume fraction and particle Reynolds number using Darcy's law and Forchheimer's extension in the laminar flow region.
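Darcy's law and the Forchheimer extension used to characterize the permeable performance can be written out directly (all values illustrative, not from the simulations):

```python
# Darcy's law relates superficial velocity u to the pressure gradient:
#   dp/dx = (mu / k) * u                        (Darcy, creeping flow)
#   dp/dx = (mu / k) * u + beta * rho * u**2    (Forchheimer extension)
mu, rho = 1.8e-5, 1.2      # gas viscosity (Pa*s) and density (kg/m^3)
k, beta = 1.0e-12, 1.0e7   # permeability (m^2) and Forchheimer coefficient (1/m)
u = 0.05                   # superficial wall-flow velocity (m/s)

grad_darcy = mu / k * u
grad_forch = grad_darcy + beta * rho * u * u
print(grad_darcy, grad_forch)
```

At low particle Reynolds number the quadratic term is a small correction, which is why Darcy's law alone suffices in the creeping-flow regime studied at low solid volume fractions.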

  19. Selecting Random Distributed Elements for HIFU using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng

    2011-09-01

    As an effective and noninvasive therapeutic modality for tumor treatment, high-intensity focused ultrasound (HIFU) has attracted attention from both physicians and patients. New generations of HIFU systems with the ability to electrically steer the HIFU focus using phased array transducers have been under development. The presence of side and grating lobes may cause undesired thermal accumulation at the interface of the coupling medium (i.e. water) and skin, or in the intervening tissue. Although sparse randomly distributed piston elements could reduce the amplitude of grating lobes, there are theoretically no grating lobes with the use of concave elements in the new phased array HIFU. A new HIFU transmission strategy is proposed in this study, firing a number of but not all elements for a certain period and then changing to another group for the next firing sequence. The advantages are: 1) the asymmetric position of active elements may reduce the side lobes, and 2) each element has some resting time during the entire HIFU ablation (up to several hours for some clinical applications) so that the decreasing efficiency of the transducer due to thermal accumulation is minimized. Genetic algorithm was used for selecting randomly distributed elements in a HIFU array. Amplitudes of the first side lobes at the focal plane were used as the fitness value in the optimization. Overall, it is suggested that the proposed new strategy could reduce the side lobe and the consequent side-effects, and the genetic algorithm is effective in selecting those randomly distributed elements in a HIFU array.
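A toy version of the selection problem (illustrative only: the linear-array geometry, fitness function, and GA operators below are simplifications of the paper's concave-element HIFU setup):

```python
import numpy as np

# Choose which 16 of 32 half-wavelength-pitch elements fire so that the
# peak side lobe of the far-field array factor is minimized.
rng = np.random.default_rng(3)
N, K, POP, GENS = 32, 16, 40, 60
u = np.linspace(-1, 1, 512)                             # sin(theta) grid
phase = np.exp(1j * np.pi * np.outer(np.arange(N), u))  # element phase terms

def fitness(mask):
    af = np.abs(mask @ phase)        # array factor magnitude
    af /= af.max()
    main = np.abs(u) < 0.15          # crude main-lobe exclusion zone
    return af[~main].max()           # peak side-lobe level (lower is better)

def random_mask():
    m = np.zeros(N)
    m[rng.choice(N, K, replace=False)] = 1
    return m

pop = [random_mask() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)                       # elitist: best first
    child = pop[0].copy()                       # mutate the best: swap one element
    on, off = np.flatnonzero(child == 1), np.flatnonzero(child == 0)
    child[rng.choice(on)] = 0
    child[rng.choice(off)] = 1
    pop[-1] = child                             # replace the worst
best = min(pop, key=fitness)
print(int(best.sum()), round(fitness(best), 3))
```

The side-lobe level at the focal plane serves as the fitness value, as in the paper; a fuller implementation would add crossover and acoustic field simulation for the concave elements.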

  20. Artemisinin derivatives for treating severe malaria.

    PubMed

    McIntosh, H M; Olliaro, P

    2000-01-01

    Artemisinin derivatives may have advantages over quinoline drugs for treating severe malaria since they are fast acting and effective against quinine resistant malaria parasites. The objective of this review was to assess the effects of artemisinin drugs for severe and complicated falciparum malaria in adults and children. We searched the Cochrane Infectious Diseases Group trials register, Cochrane Controlled Trials Register, Medline, Embase, Science Citation Index, Lilacs, African Index Medicus, conference abstracts and reference lists of articles. We contacted organisations, researchers in the field and drug companies. We included randomised and pseudo-randomised trials comparing artemisinin drugs (rectal, intramuscular or intravenous) with standard treatment, or comparisons between artemisinin derivatives, in adults or children with severe or complicated falciparum malaria. Eligibility, trial quality assessment and data extraction were done independently by two reviewers. Study authors were contacted for additional information. Twenty-three trials are included; allocation concealment was adequate in nine. Sixteen trials compared artemisinin drugs with quinine in 2653 patients. Artemisinin drugs were associated with better survival (mortality odds ratio 0.61, 95% confidence interval 0.46 to 0.82, random effects model). In trials where concealment of allocation was adequate (2261 patients), this was barely statistically significant (odds ratio 0.72, 95% CI 0.54 to 0.96, random effects model). In 1939 patients with cerebral malaria, mortality was also lower with artemisinin drugs overall (odds ratio 0.63, 95% CI 0.44 to 0.88, random effects model). The difference was not significant however when only trials reporting adequate concealment of allocation were analysed (odds ratio 0.78, 95% CI 0.55 to 1.10, random effects model) based on 1607 patients. No difference in neurological sequelae was shown. 
Compared with quinine, artemisinin drugs showed faster parasite clearance from the blood and similar adverse effects. The evidence suggests that artemisinin drugs are no worse than quinine in preventing death in severe or complicated malaria. No artemisinin derivative appears to be better than the others.
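For readers unfamiliar with the metric, a mortality odds ratio and Wald 95% confidence interval are computed from a 2x2 table as follows (the counts below are invented for illustration, not the review's data):

```python
import math

def odds_ratio_ci(a, b, c, d):
    """a,b = deaths/survivors on treatment; c,d = deaths/survivors on control.
    Returns the odds ratio and its Wald 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(50, 450, 75, 425)      # hypothetical counts
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An odds ratio below 1 with an upper confidence limit below 1, as in the pooled result above, indicates a statistically significant survival advantage.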

  1. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    PubMed

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some demanding requirements: high generation rates of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  2. Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces

    NASA Astrophysics Data System (ADS)

    Vacaru, S. I.

    2012-03-01

    We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.

  3. Space solar array reliability: A study and recommendations

    NASA Astrophysics Data System (ADS)

    Brandhorst, Henry W., Jr.; Rodiek, Julie A.

    2008-12-01

    Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaim's Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.

  4. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
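A single-process numpy sketch of the transpose method the patent describes, with an ordinary transpose standing in for the networked all-to-all redistribution:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))

step1 = np.fft.fft(a, axis=1)       # 1-D FFTs along the first dimension
redist = step1.T                    # stands in for the all-to-all exchange
step2 = np.fft.fft(redist, axis=1)  # 1-D FFTs along the second dimension

print(np.allclose(step2.T, np.fft.fft2(a)))
```

In the distributed setting each node holds a slab of rows, so the transpose is exactly the all-to-all communication pattern; randomizing the message order, as claimed, spreads traffic evenly across network links.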

  5. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  6. Automated installation methods for photovoltaic arrays

    NASA Astrophysics Data System (ADS)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reducing these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary designs of hardware for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  7. A fluorometric paper-based sensor array for the discrimination of heavy-metal ions.

    PubMed

    Feng, Liang; Li, Hui; Niu, Li-Ya; Guan, Ying-Shi; Duan, Chun-Feng; Guan, Ya-Feng; Tung, Chen-Ho; Yang, Qing-Zheng

    2013-04-15

    A fluorometric paper-based sensor array has been developed for the sensitive and convenient determination of seven heavy-metal ions at their wastewater discharge standard concentrations. By combining nine cross-reactive BODIPY fluorescent indicators with array-based pattern recognition, we achieved discrimination of the seven heavy-metal ions at their wastewater discharge standard concentrations. After immobilization of the indicators and enrichment of the analytes, identification of the heavy-metal ions was readily acquired using a standard chemometric approach. Clear differentiation among heavy-metal ions as a function of concentration was also achieved, even down to 10⁻⁷ M. A semi-quantitative estimate of the heavy-metal ion concentration was obtained by comparing color changes with a set of known concentrations. The sensor array was tentatively investigated in spiked tap water and sea water, and showed possible feasibility for real sample testing. Copyright © 2013 Elsevier B.V. All rights reserved.
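A minimal stand-in for the chemometric identification step (nearest-fingerprint matching on synthetic nine-indicator response vectors; the actual work used standard chemometric pattern-recognition methods on measured data):

```python
import numpy as np

rng = np.random.default_rng(2)
ions = ["Hg2+", "Pb2+", "Cd2+", "Cu2+", "Zn2+", "Ni2+", "Cr3+"]  # 7 analytes
library = {ion: rng.random(9) for ion in ions}   # synthetic mean response patterns

def identify(response):
    """Classify a 9-indicator fluorescence response by its nearest
    library fingerprint (Euclidean distance)."""
    return min(library, key=lambda ion: np.linalg.norm(response - library[ion]))

noisy = library["Pb2+"] + rng.normal(0, 0.02, 9)  # a noisy repeat measurement
print(identify(noisy))
```

Each analyte produces a characteristic multi-indicator response vector; classification works as long as measurement noise is small compared with the spacing between fingerprints.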

  8. Information content of MOPITT CO profile retrievals: Temporal and geographical variability

    NASA Astrophysics Data System (ADS)

    Deeter, M. N.; Edwards, D. P.; Gille, J. C.; Worden, H. M.

    2015-12-01

    Satellite measurements of tropospheric carbon monoxide (CO) enable a wide array of applications including studies of air quality and pollution transport. The MOPITT (Measurements of Pollution in the Troposphere) instrument on the Earth Observing System Terra platform has been measuring CO concentrations globally since March 2000. As indicated by the Degrees of Freedom for Signal (DFS), the standard metric for trace-gas retrieval information content, MOPITT retrieval performance varies over a wide range. We show that both instrumental and geophysical effects yield significant geographical and temporal variability in MOPITT DFS values. Instrumental radiance uncertainties, which describe random errors (or "noise") in the calibrated radiances, vary over long time scales (e.g., months to years) and vary between the four detector elements of MOPITT's linear detector array. MOPITT retrieval performance depends on several factors including thermal contrast, fine-scale variability of surface properties, and CO loading. The relative importance of these various effects is highly variable, as demonstrated by analyses of monthly mean DFS values for the United States and the Amazon Basin. An understanding of the geographical and temporal variability of MOPITT retrieval performance is potentially valuable to data users seeking to limit the influence of the a priori through data filtering. To illustrate, it is demonstrated that calculated regional-average CO mixing ratios may be improved by excluding observations from a subset of pixels in MOPITT's linear detector array.
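DFS is the trace of the retrieval averaging kernel in the standard optimal-estimation formalism; the toy Jacobian and covariance matrices below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10                                   # retrieval levels
K = rng.standard_normal((4, n)) * 0.3    # toy weighting functions (4 channels)
Sa = np.eye(n)                           # a priori covariance
Se = 0.01 * np.eye(4)                    # radiance noise covariance

# Gain matrix and averaging kernel of the optimal-estimation retrieval.
G = np.linalg.inv(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa)) \
    @ K.T @ np.linalg.inv(Se)
A = G @ K
dfs = np.trace(A)                        # Degrees of Freedom for Signal
print(round(dfs, 3))
```

Larger radiance noise (Se) shrinks the averaging kernel and hence DFS, which is how the detector-dependent radiance uncertainties described above translate into variable retrieval information content.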

  9. Robust PRNG based on homogeneously distributed chaotic dynamics

    NASA Astrophysics Data System (ADS)

    Garasym, Oleg; Lozi, René; Taralova, Ina

    2016-02-01

    This paper is devoted to the design of a new chaotic Pseudo-Random Number Generator (CPRNG). Exploring several topologies of networks of 1-D coupled chaotic maps, we focus first on two-dimensional networks. Two topologically coupled maps are studied: TTL rc non-alternate and TTL SC alternate. The primary idea of the novel maps is an original coupling of the tent and logistic maps that achieves excellent random properties and a homogeneous (uniform) density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography. To this aim, two new nonlinear CPRNGs, MTTL 2 sc and NTTL 2, are proposed. The maps successfully passed numerous statistical, graphical and numerical tests, thanks to the proposed ring coupling and injection mechanisms.
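A generic coupled tent/logistic iteration in the same spirit (a sketch only; these are not the paper's exact TTL/NTTL maps, coupling constants, or injection mechanism):

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def logistic(x):
    """Logistic map at r = 4 (fully chaotic regime)."""
    return 4 * x * (1 - x)

def cprng_bits(n, x=0.1234, y=0.5678, eps=0.3):
    """Iterate two cross-coupled maps and threshold one state into bits."""
    bits = []
    for _ in range(n):
        x, y = ((1 - eps) * tent(x) + eps * logistic(y),
                (1 - eps) * logistic(y) + eps * tent(x))
        bits.append(1 if x > 0.5 else 0)
    return bits

b = cprng_bits(1000)
print(len(b), sum(b) / len(b))
```

The coupling is what flattens the invariant density: each map alone has a strongly non-uniform natural density, while the cross-injection spreads the joint state over the phase plane.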

  10. High-Speed Digital Interferometry

    NASA Technical Reports Server (NTRS)

    De Vine, Glenn; Shaddock, Daniel A.; Ware, Brent; Spero, Robert E.; Wuchenich, Danielle M.; Klipstein, William M.; McKenzie, Kirk

    2012-01-01

    Digitally enhanced heterodyne interferometry (DI) is a laser metrology technique employing pseudo-random noise (PRN) codes phase-modulated onto an optical carrier. Combined with heterodyne interferometry, the PRN code is used to select individual signals, returning the inherent interferometric sensitivity determined by the optical wavelength. The signal isolation arises from the autocorrelation properties of the PRN code, enabling both rejection of spurious signals (e.g., from scattered light) and multiplexing capability using a single metrology system. The minimum separation of optical components is determined by the wavelength of the PRN code.
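The autocorrelation property that enables this signal isolation can be demonstrated with a maximal-length PRN sequence generated by a 7-bit LFSR (x^7 + x^6 + 1 is a known maximal-length feedback polynomial):

```python
def lfsr_mseq(taps=(7, 6), n_bits=7):
    """Fibonacci LFSR producing one full period of an m-sequence."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return out

seq = [1 - 2 * b for b in lfsr_mseq()]   # map {0,1} -> {+1,-1}
N = len(seq)
# Circular autocorrelation: N at zero shift, -1 at every other shift.
acf = [sum(seq[i] * seq[(i + k) % N] for i in range(N)) for k in range(N)]
print(N, acf[0], set(acf[1:]))
```

The sharp correlation peak against a flat -1 floor is what lets DI pick out one optical path's signal while rejecting scattered-light contributions encoded at other code delays.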

  11. Investigating the Quality of Service of Current and Future Tactical Information Exchanges - Net Warrior

    DTIC Science & Technology

    2010-05-01

    as Link-11, Link-16 and VMF. It also includes future systems such as Link-22 (using the typical HF & UHF frequency bands) and technologies that...triangulate and find the precise geolocation of the enemy target. If the target happens to relocate, TTNT is able to update the target with high accuracy...22 operates in either the HF or UHF frequency bands. In each of these frequency bands the system can operate on a single frequency or a pseudo-random

  12. Maximum-Likelihood Estimation for Frequency-Modulated Continuous-Wave Laser Ranging Using Photon-Counting Detectors

    DTIC Science & Technology

    2013-01-01

    are calculated from coherently -detected fields, e.g., coherent Doppler lidar . Our CRB results reveal that the best-case mean-square error scales as 1...1088 (2001). 7. K. Asaka, Y. Hirano, K. Tatsumi, K. Kasahara, and T. Tajime, “A pseudo-random frequency modulation continuous wave coherent lidar using...multiple returns,” IEEE Trans. Pattern Anal. Mach. Intell. 29, 2170–2180 (2007). 11. T. J. Karr, “Atmospheric phase error in coherent laser radar

  13. An Analysis of Two Layers of Encryption to Protect Network Traffic

    DTIC Science & Technology

    2010-06-01

    Published: 06/18/2001 CVSS Severity: 7.5 (HIGH) CVE-2001-1141 Summary: The Pseudo-Random Number Generator (PRNG) in SSLeay and OpenSSL be- fore 0.9.6b allows...x509cert function in KAME Racoon successfully verifies certifi- cates even when OpenSSL validation fails, which could allow remote attackers to...montgomery function in crypto/bn/bn mont.c in OpenSSL 0.9.8e and earlier does not properly perform Montgomery multiplication, which might allow local users to

  14. Laboratory complex for simulation of navigation signals of pseudosatellites

    NASA Astrophysics Data System (ADS)

    Ratushniak, V. N.; Gladyshev, A. B.; Sokolovskiy, A. V.; Mikhov, E. D.

    2018-05-01

    The article considers the organization and structure of, and approaches to forming, navigation signals for pseudosatellites of a short-range navigation system built on National Instruments hardware and software. A software model is presented that generates and controls the pseudo-random sequence of the navigation signal, as well as the format of the transmitted pseudosatellite navigation information. A design variant for the transmitting equipment of the pseudosatellite base stations is also provided.

  15. Airborne Pseudolites in a Global Positioning System (GPS) Degraded Environment

    DTIC Science & Technology

    2011-03-01

    continuously two types of encoded pseudo-random noise (PRN) signals using two center frequencies in the L-band, namely L1 (1575.42 MHz) and L2...Jovanevic, Aleksandar, Nikhil Bhaita, Joseph Noronha, Brijesh Sirpatil, Michael Kirchner, and Deepak Saxena. “Piercing the Veil”. GPS World, 30–37, March...difficulties in receiver design. • Pseudolites can operate either at GPS L1, L2 and L5, or any other available frequency band. Similarly, other parameters to

  16. Effects of Multipath and Oversampling on Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2008-03-01

    for military use. The L2 carrier frequency operates at 1227.6 MHz and transmits only the precise code. Each satellite transmits a unique pseudo-random noise (PRN) code by which it is identified. GPS receivers require a LOS to four satellite signals to accurately estimate a position in three...receiver frequency errors, noise addition, and multipath effects. He also developed four methods for estimating the cross-correlation peak within a sampled

  17. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo-random, normally distributed errors to population values to produce observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
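
    The simulation loop described above can be sketched as follows; the straight-line population model, sample size, and noise level here are invented for illustration, not taken from the report:

```python
# Minimal Monte Carlo sketch (hypothetical population model): pseudo-random
# normal errors are added to known population values, a model is fitted to
# the noisy observations, and the fitted predictions are compared against
# the true population values.
import random

random.seed(42)
true_b0, true_b1 = 2.0, 0.5                 # assumed population model y = b0 + b1*x
xs = [float(i) for i in range(20)]
population = [true_b0 + true_b1 * x for x in xs]

# simulated experiment: observations = population values + N(0, 1) errors
obs = [y + random.gauss(0.0, 1.0) for y in population]

# ordinary least-squares fit of a straight line to the observations
n = len(xs)
mx = sum(xs) / n
my = sum(obs) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, obs)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx
pred = [b0 + b1 * x for x in xs]

# prediction error measured against the *true* population values, as in the study
mse = sum((p - t) ** 2 for p, t in zip(pred, population)) / n
print(round(mse, 4))
```

    The study's decision procedure then repeats this over many replications and candidate term-deletion strategies, keeping the strategy with the smallest average prediction error.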

  18. Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.

    PubMed

    Boyd, Matthew T

    2018-02-01

    Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software PVsyst for each array, and their predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
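
    A performance ratio of the kind quoted above (monthly PRs above 0.75) is conventionally computed as final yield divided by reference yield; a hedged sketch with made-up numbers, not NIST data:

```python
# Performance ratio (PR) as conventionally defined (IEC 61724 style):
# PR = final yield (kWh delivered per kW installed) / reference yield
#      (plane-of-array insolation normalized to 1 kW/m^2 reference irradiance).
def performance_ratio(energy_kwh, dc_rating_kw, plane_insolation_kwh_m2,
                      g_ref_kw_m2=1.0):
    final_yield = energy_kwh / dc_rating_kw               # kWh per kW installed
    reference_yield = plane_insolation_kwh_m2 / g_ref_kw_m2
    return final_yield / reference_yield

# e.g. a 271 kW array delivering 33,000 kWh in a month with 150 kWh/m^2 insolation
pr = performance_ratio(33000, 271, 150)
print(round(pr, 3))
```

    PR normalizes out both array size and available sunlight, which is what makes arrays of different capacity and orientation comparable month to month.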

  19. Combinatorial electrochemical cell array for high throughput screening of micro-fuel-cells and metal/air batteries.

    PubMed

    Jiang, Rongzhong

    2007-07-01

    An electrochemical cell array was designed that contains a common air electrode and 16 microanodes for high throughput screening of both fuel cells (based on polymer electrolyte membrane) and metal/air batteries (based on liquid electrolyte). Electrode materials can easily be coated on the anodes of the electrochemical cell array and screened by switching a graphite probe from one cell to another. The electrochemical cell array was used to study direct methanol fuel cells (DMFCs), including high throughput screening of electrode catalysts and determination of optimum operating conditions. For screening of DMFCs, there is about 6% relative standard deviation (percentage of standard deviation versus mean value) for discharge current from 10 to 20 mA/cm². The electrochemical cell array was also used to study tin/air batteries. The effect of Cu content in the anode electrode on the discharge performance of the tin/air battery was investigated. The relative standard deviations for screening of metal/air battery (based on zinc/air) are 2.4%, 3.6%, and 5.1% for discharge current at 50, 100, and 150 mA/cm², respectively.
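
    The repeatability figures above are relative standard deviations (RSD = standard deviation as a percentage of the mean); a minimal helper with hypothetical cell currents:

```python
# Relative standard deviation (RSD), the screening-repeatability metric used
# in the abstract: 100 * (sample standard deviation) / mean.
def relative_std_percent(values):
    n = len(values)
    mean = sum(values) / n
    # sample standard deviation (n - 1 denominator)
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# hypothetical discharge currents (mA/cm^2) measured across cells of the array
currents = [14.8, 15.2, 15.6, 14.5, 15.9, 15.1, 14.7, 15.3]
print(round(relative_std_percent(currents), 2))
```

    An RSD of a few percent across nominally identical cells, as reported, indicates the array itself contributes little scatter relative to the material differences being screened.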

  20. Ray-tracing in pseudo-complex General Relativity

    NASA Astrophysics Data System (ADS)

    Schönenbach, T.; Caspar, G.; Hess, P. O.; Boller, T.; Müller, A.; Schäfer, M.; Greiner, W.

    2014-07-01

    Motivated by possible observations of the black hole candidate in the centre of our Galaxy and the galaxy M87, ray-tracing methods are applied to both standard General Relativity (GR) and a recently proposed extension, the pseudo-complex GR (pc-GR). The correction terms due to the investigated pc-GR model lead to slower orbital motions close to massive objects. The concept of an innermost stable circular orbit is also modified in the pc-GR model, allowing particles, for most values of the spin parameter a, to get closer to the central object than in GR. Thus, the accretion disc surrounding a massive object is brighter in pc-GR than in GR. Iron Kα emission-line profiles are also calculated, as those are good observables for regions of strong gravity. Differences between the two theories are pointed out.

  1. Fold-Thrust mapping using photogrammetry in Western Champsaur basin, SE France

    NASA Astrophysics Data System (ADS)

    Totake, Y.; Butler, R.; Bond, C. E.

    2016-12-01

    There is an increasing demand for high-resolution geometric data for outcropping geological structures - not only to test models for their formation and evolution but also to create synthetic seismic visualisations for comparison with subsurface data. High-resolution 3D scenes reconstructed by modern photogrammetry offer an efficient toolbox for such work. When integrated with direct field measurements and observations, these products can be used to build geological interpretations and models. Photogrammetric techniques using standard equipment are ideally suited to working in the high mountain terrain that commonly offers the best outcrops, as all equipment is readily portable and, in the absence of cloud-cover, not subject to the meteorological and legal restrictions that can affect some airborne approaches. The workflows and approaches for generating geological models utilising such photogrammetry techniques are the focus of our contribution. Our case study comes from SE France where early Alpine fore-deep sediments have been deformed into arrays of fold-thrust complexes. Over 1500m vertical relief provides excellent outcrop control with surrounding hillsides providing vantage points for ground-based photogrammetry. We collected over 9,400 photographs across the fold-thrust array using a handheld digital camera from 133 ground locations that were individually georeferenced. We processed the photographic images within the software PhotoScan-Pro to build 3D landscape scenes. The built photogrammetric models were then imported into the software Move, along with field measurements, to map faults and sedimentary layers and to produce geological cross sections and 3D geological surfaces.
Polylines of sediment beds and faults traced on our photogrammetry models allow interpretation of a pseudo-3D geometry of the deformation structures, and enable prediction of dips and strikes for inaccessible field areas, to map the complex geometries of the thrust faults and deformed strata in detail. The resultant structural geometry of the thrust zones delivers an exceptional analogue to inaccessible subsurface fold-thrust structures, for which it is often challenging to obtain a clear seismic image.

  2. A simulation study to compare three self-controlled case series approaches: correction for violation of assumption and evaluation of bias.

    PubMed

    Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J

    2013-08-01

    The assumption that the occurrence of an outcome event must not alter the subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of the outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure, with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure-cases-only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence, which can be corrected by the alternative SCCS approaches. In multiple-exposure situations, the pseudo-likelihood approach is optimal; the post-exposure-cases-only approach is limited in handling a second exposure and may introduce additional bias, and thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Simplified biased random walk model for RecA-protein-mediated homology recognition offers rapid and accurate self-assembly of long linear arrays of binding sites

    NASA Astrophysics Data System (ADS)

    Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara

    2013-07-01

    Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites.
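
    As a loose caricature of the mechanism (not the authors' two-conformation free-energy model), pairing by a biased random walk can be illustrated in one dimension:

```python
# Illustrative sketch only: the walker tracks how many contiguous sites are
# paired. Homology biases each step toward extending the pairing
# (p_forward > 1/2), so full pairing is reached quickly; a mismatched
# pairing (p_forward < 1/2) stalls near zero and never completes in time.
import random

def steps_to_full_pairing(n_sites, p_forward, seed, max_steps=100_000):
    """Steps until all n_sites pair, or None if the step budget runs out."""
    rng = random.Random(seed)
    paired = 1
    for step in range(1, max_steps + 1):
        if rng.random() < p_forward:
            paired += 1
        elif paired > 0:
            paired -= 1                       # partial unbinding
        if paired >= n_sites:
            return step
    return None

homologous = steps_to_full_pairing(50, p_forward=0.7, seed=1)
mismatched = steps_to_full_pairing(50, p_forward=0.3, seed=1, max_steps=2000)
print(homologous is not None, mismatched is None)
```

    The bias direction encodes the free-energy difference between matched and mismatched contacts: a small per-site bias is enough to make correct pairing fast and incorrect pairing exponentially unlikely, which is the speed/accuracy point of the abstract.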

  4. Conductivity of Nanowire Arrays under Random and Ordered Orientation Configurations

    PubMed Central

    Jagota, Milind; Tansu, Nelson

    2015-01-01

    A computational model was developed to analyze electrical conductivity of random metal nanowire networks. It was demonstrated for the first time through use of this model that a performance gain in random metal nanowire networks can be achieved by slightly restricting nanowire orientation. It was furthermore shown that heavily ordered configurations do not outperform configurations with some degree of randomness; randomness in the case of metal nanowire orientations acts to increase conductivity. PMID:25976936

  5. From Data to Semantic Information

    NASA Astrophysics Data System (ADS)

    Floridi, Luciano

    2003-06-01

    There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates the important implications of the revised definition for the analysis of the deflationary theories of truth, the standard definition of knowledge and the classic, quantitative theory of semantic information.

  6. Transform-Based Wideband Array Processing

    DTIC Science & Technology

    1992-01-31

    Breusch and Pagan [2], it is possible to test which model, AR or random coefficient, will better fit typical array data. The test indicates that...bearing estimation problems," Proc. IEEE, vol. 70, no. 9, pp. 1018-1028, 1982. [2] T. S. Breusch and A. R. Pagan, "A simple test for het...correlations do not obey an AR relationship across the array; relations in the observations. Through the use of a binary hypothesis test, it is

  7. Controllability in tunable chains of coupled harmonic oscillators

    NASA Astrophysics Data System (ADS)

    Buchmann, L. F.; Mølmer, K.; Petrosyan, D.

    2018-04-01

    We prove that temporal control of the strengths of springs connecting N harmonic oscillators in a chain provides complete access to all Gaussian states of N -1 collective modes. The proof relies on the construction of a suitable basis of cradle modes for the system. An iterative algorithm to reach any desired Gaussian state requires at most 3 N (N -1 )/2 operations. We illustrate this capability by engineering squeezed pseudo-phonon states—highly nonlocal, strongly correlated states that may result from various nonlinear processes. Tunable chains of coupled harmonic oscillators can be implemented by a number of current state-of-the-art experimental platforms, including cold atoms in lattice potentials, arrays of mechanical micro-oscillators, and coupled optical waveguides.

  8. Numerical investigation of field enhancement by metal nano-particles using a hybrid FDTD-PSTD algorithm.

    PubMed

    Pernice, W H; Payne, F P; Gallagher, D F

    2007-09-03

    We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
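
    For context, the finite-difference time-domain half of such a hybrid solver advances the electric and magnetic fields in a leapfrog loop; a minimal 1D vacuum sketch (grid size, Courant number, and source are arbitrary illustrative choices, not the paper's setup):

```python
# Minimal 1D vacuum FDTD core (Yee leapfrog updates), the time-domain
# machinery that a hybrid FDTD-PSTD scheme builds on.
import math

nz, nt = 200, 100
courant = 0.5                                  # S = c*dt/dz, stable for S <= 1
ez = [0.0] * nz                                # electric field on the grid
hy = [0.0] * nz                                # magnetic field on the grid

for t in range(nt):
    for k in range(nz - 1):                    # H update from the curl of E
        hy[k] += courant * (ez[k + 1] - ez[k])
    for k in range(1, nz):                     # E update from the curl of H
        ez[k] += courant * (hy[k] - hy[k - 1])
    ez[nz // 2] += math.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source

peak = max(abs(e) for e in ez)
print(peak > 0.01)
```

    The PSTD variant replaces the nearest-neighbour curl differences with spectral derivatives, which is what lets the hybrid scheme refine the grid around the nano-particles without the staircase artifacts of pure FDTD.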

  9. A normative price for a manufactured product: The SAMICS methodology. Volume 2: Analysis

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    The Solar Array Manufacturing Industry Costing Standards provide standard formats, data, assumptions, and procedures for determining the price a hypothetical solar array manufacturer would have to be able to obtain in the market to realize a specified after-tax rate of return on equity for a specified level of production. The methodology and its theoretical background are presented. The model is sufficiently general to be used in any production-line manufacturing environment. Implementation of this methodology by the Solar Array Manufacturing Industry Simulation computer program is discussed.

  10. SNPchiMp v.3: integrating and standardizing single nucleotide polymorphism data for livestock species.

    PubMed

    Nicolazzi, Ezequiel L; Caprera, Andrea; Nazzicari, Nelson; Cozzi, Paolo; Strozzi, Francesco; Lawley, Cindy; Pirani, Ali; Soans, Chandrasen; Brew, Fiona; Jorjani, Hossein; Evans, Gary; Simpson, Barry; Tosser-Klopp, Gwenola; Brauning, Rudiger; Williams, John L; Stella, Alessandra

    2015-04-10

    In recent years, the use of genomic information in livestock species for genetic improvement, association studies and many other fields has become routine. In order to accommodate different market requirements in terms of genotyping cost, manufacturers of single nucleotide polymorphism (SNP) arrays, private companies and international consortia have developed a large number of arrays with different content and different SNP density. The number of currently available SNP arrays differs among species: ranging from one for goats to more than ten for cattle, and the number of arrays available is increasing rapidly. However, there is limited or no effort to standardize and integrate array-specific (e.g. SNP IDs, allele coding) and species-specific (i.e. past and current assemblies) SNP information. Here we present SNPchiMp v.3, a solution to these issues for the six major livestock species (cow, pig, horse, sheep, goat and chicken). Original data was collected directly from SNP array producers and specific international genome consortia, and stored in a MySQL database. The database was then linked to an open-access web tool and to public databases. SNPchiMp v.3 ensures fast access to the database (retrieving within/across SNP array data) and the possibility of annotating SNP array data in a user-friendly fashion. This platform allows easy integration and standardization, and it is aimed at both industry and research. It also enables users to easily link the information available from the array producer with data in public databases, without the need of additional bioinformatics tools or pipelines. In recognition of the open-access use of Ensembl resources, SNPchiMp v.3 was officially credited as an Ensembl E!mpowered tool. Available at http://bioinformatics.tecnoparco.org/SNPchimp.

  11. Computer Modelling and Simulation of Solar PV Array Characteristics

    NASA Astrophysics Data System (ADS)

    Gautam, Nalin Kumar

    2003-02-01

    The main objective of my PhD research work was to study the behaviour of inter-connected solar photovoltaic (PV) arrays. The approach involved the construction of mathematical models to investigate different types of research problems related to the energy yield, fault tolerance, efficiency and optimal sizing of inter-connected solar PV array systems. My research work can be divided into four different types of research problems: 1. Modeling of inter-connected solar PV array systems to investigate their electrical behavior, 2. Modeling of different inter-connected solar PV array networks to predict their expected operational lifetimes, 3. Modeling solar radiation estimation and its variability, and 4. Modeling of a coupled system to estimate the size of PV array and battery-bank in the stand-alone inter-connected solar PV system where the solar PV system depends on a system providing solar radiant energy. The successful application of mathematics to the above-mentioned problems entailed three phases: 1. The formulation of the problem in a mathematical form using numerical, optimization, probabilistic and statistical methods / techniques, 2. The translation of mathematical models using C++ to simulate them on a computer, and 3. The interpretation of the results to see how closely they correlated with the real data. The array is the most cost-intensive component of the solar PV system. Since the electrical performances as well as life properties of an array are highly sensitive to field conditions, different characteristics of the arrays, such as energy yield, operational lifetime, collector orientation, and optimal sizing were investigated in order to improve their efficiency, fault-tolerance and reliability. Three solar cell interconnection configurations in the array - series-parallel, total-cross-tied, and bridge-linked, were considered. 
The electrical characteristics of these configurations were investigated to find out one that is comparatively less susceptible to the mismatches due to manufacturer's tolerances in cell characteristics, shadowing, soiling and aging of solar cells. The current-voltage curves and the values of energy yield characterized by maximum-power points and fill factors for these arrays were also obtained. Two different mathematical models, one for smaller size arrays and the other for the larger size arrays, were developed. The first model takes account of the partial differential equations with boundary value conditions, whereas the second one involves the simple linear programming concept. Based on the initial information on the values of short-circuit current and open-circuit voltage of thirty-six single-crystalline silicon solar cells provided by a manufacturer, the values of these parameters for up to 14,400 solar cells were generated randomly. Thus, the investigations were done for three different cases of array sizes, i.e., (6 x 6), (36 x 8) and (720 x 20), for each configuration. The operational lifetimes of different interconnected solar PV arrays and the improvement in their life properties through different interconnection and modularized configurations were investigated using a reliability-index model. Under normal conditions, the efficiency of a solar cell degrades in an exponential manner, and its operational life above a lowest admissible efficiency may be considered as the upper bound of its lifetime. Under field conditions, the solar cell may fail any time due to environmental stresses, or it may function up to the end of its expected lifetime. In view of this, the lifetime of a solar cell in an array was represented by an exponentially distributed random variable. At any instant of time t, this random variable was considered to have two states: (i) the cell functioned till time t, or (ii) the cell failed within time t. 
It was considered that the functioning of the solar cell included its operation at an efficiency decaying with time under normal conditions. It was assumed that the lifetime of a solar cell had the lack-of-memory (non-aging) property, which meant that no matter how long (say, t) the cell had been operational, the probability that it would last an additional time Δt was independent of t. The operational life of the solar cell above a lowest admissible efficiency was considered as the upper bound of its expected lifetime. The value of the upper bound on the expected life of solar cell was evaluated using the information provided by the manufacturers of the single-crystalline silicon solar cells. Then on the basis of these lifetimes, the expected operational lifetimes of the array systems were obtained. Since the investigations of the effects of collector orientation on the performance of an array require the continuous values of global solar radiation on a surface, a method to estimate the global solar radiation on a surface (horizontal or tilted) was also proposed. The cloudiness index was defined as the fraction of extraterrestrial radiation that reached the earth's surface when the sky above the location of interest was obscured by the cloud cover. The cloud cover at the location of interest during any time interval of a day was assumed to follow the fuzzy random phenomenon. The cloudiness index, therefore, was considered as a fuzzy random variable that accounted for the cloud cover at the location of interest during any time interval of a day. This variable was assumed to depend on four other fuzzy random variables that, respectively, accounted for the cloud cover corresponding to the 1) type of cloud group, 2) climatic region, 3) season with most of the precipitation, and 4) type of precipitation at the location of interest during any time interval. All possible types of cloud covers were categorized into five types of cloud groups. 
Each cloud group was considered to be a fuzzy subset. In this model, the cloud cover at the location of interest during a time interval was considered to be the clouds that obscure the sky above the location. The cloud covers, with all possible types of clouds having transmissivities corresponding to values in the membership range of a fuzzy subset (i.e., a type of cloud group), were considered to be the membership elements of that fuzzy subset. The transmissivities of different types of cloud covers in a cloud group corresponded to the values in the membership range of that cloud group. Predicate logic (i.e., if---then---, else---, conditions) was used to set the relationship between all the fuzzy random variables. The values of the above-mentioned fuzzy random variables were evaluated to provide the value of cloudiness index for each time interval at the location of interest. For each case of the fuzzy random variable, a heuristic approach was used to identify subjectively the range ([a, b], where a and b were real numbers within [0, 1] such that a

  12. The statistics of laser returns from cube-corner arrays on satellite

    NASA Technical Reports Server (NTRS)

    Lehr, C. G.

    1973-01-01

    A method first presented by Goodman is used to derive an equation for the statistical effects associated with laser returns from satellites having retroreflecting arrays of cube corners. The effect of the distribution on the returns of a satellite-tracking system is illustrated by a computation based on randomly generated numbers.
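
    A toy Monte Carlo in the spirit of Goodman's analysis (all numbers invented): model the return as a coherent sum of phasors with uniformly random phases, one per cube corner, and look at the intensity statistics.

```python
# The return from many cube corners is modelled here as a sum of unit phasors
# with (assumed) uniformly random phases; the received intensity then
# fluctuates shot to shot with an exponential-like distribution whose mean
# equals the number of reflectors.
import cmath
import random

random.seed(0)

def return_intensity(n_corners):
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
                for _ in range(n_corners))
    return abs(field) ** 2

n_corners, trials = 100, 5000
samples = [return_intensity(n_corners) for _ in range(trials)]
mean_i = sum(samples) / trials
print(round(mean_i / n_corners, 2))   # close to 1: mean intensity tracks N
```

    The heavy-tailed shot-to-shot fluctuation, not the mean, is what drives the detection statistics of a satellite-tracking system, which is the point of the abstract.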

  13. ATP-dependent chromatin assembly is functionally distinct from chromatin remodeling

    PubMed Central

    Torigoe, Sharon E; Patel, Ashok; Khuong, Mai T; Bowman, Gregory D; Kadonaga, James T

    2013-01-01

    Chromatin assembly involves the combined action of ATP-dependent motor proteins and histone chaperones. Because motor proteins in chromatin assembly also function as chromatin remodeling factors, we investigated the relationship between ATP-driven chromatin assembly and chromatin remodeling in the generation of periodic nucleosome arrays. We found that chromatin remodeling-defective Chd1 motor proteins are able to catalyze ATP-dependent chromatin assembly. The resulting nucleosomes are not, however, spaced in periodic arrays. Wild-type Chd1, but not chromatin remodeling-defective Chd1, can catalyze the conversion of randomly-distributed nucleosomes into periodic arrays. These results reveal a functional distinction between ATP-dependent nucleosome assembly and chromatin remodeling, and suggest a model for chromatin assembly in which randomly-distributed nucleosomes are formed by the nucleosome assembly function of Chd1, and then regularly-spaced nucleosome arrays are generated by the chromatin remodeling activity of Chd1. These findings uncover an unforeseen level of specificity in the role of motor proteins in chromatin assembly. DOI: http://dx.doi.org/10.7554/eLife.00863.001 PMID:23986862

  14. Comparing the performance of cluster random sampling and integrated threshold mapping for targeting trachoma control, using computer simulation.

    PubMed

    Smith, Jennifer L; Sturrock, Hugh J W; Olives, Casey; Solomon, Anthony W; Brooker, Simon J

    2013-01-01

    Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. 
In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates the use of a simulated approach in addressing operational research questions not only for trachoma but also for other NTDs.
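
    The core of such a simulation can be caricatured in a few lines; every parameter below (village count, Beta clustering, cluster and child counts) is an invented stand-in, not a value from the study:

```python
# Toy version of the simulation logic: build a pseudo gold-standard district
# with village-level clustering of TF prevalence, then estimate prevalence by
# sampling 20 clusters of children, loosely mimicking a CRS-style survey.
import random

random.seed(7)

def simulate_district(n_villages=100, mean_prev=0.15, clustering=10.0):
    """Village prevalences drawn from a Beta distribution to induce clustering."""
    a = mean_prev * clustering
    b = (1.0 - mean_prev) * clustering
    return [random.betavariate(a, b) for _ in range(n_villages)]

def survey_estimate(village_prevs, n_clusters=20, children_per_cluster=50):
    sampled = random.sample(village_prevs, n_clusters)
    cases = total = 0
    for p in sampled:
        for _ in range(children_per_cluster):
            cases += random.random() < p
            total += 1
    return cases / total

district = simulate_district()
true_prev = sum(district) / len(district)    # the pseudo gold standard
est = survey_estimate(district)
print(abs(est - true_prev) < 0.1)
```

    Repeating this over many districts, and altering which children are eligible for sampling, is how a design such as ITM can be compared against CRS for bias and misclassification around treatment thresholds.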

  15. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

    This paper presents an approach for co-state initialization, which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining the optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions from Earth to Mars and from Earth to asteroid Dionysus is compared against three other approaches which, respectively, exploit random initialization of co-states, adjoint-control transformation and a standard genetic algorithm. The results indicate that by using our proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.

  16. [Respondent-Driven Sampling: a new sampling method to study visible and hidden populations].

    PubMed

    Mantecón, Alejandro; Juan, Montse; Calafat, Amador; Becoña, Elisardo; Román, Encarna

    2008-01-01

    The paper introduces a variant of chain-referral sampling: respondent-driven sampling (RDS). This sampling method shows that methods based on network analysis can be combined with the statistical validity of standard probability sampling methods. In this sense, RDS appears to be a mathematical improvement of snowball sampling oriented to the study of hidden populations. However, we test its validity with populations that are not within a sampling frame but can nonetheless be contacted without difficulty. The basics of RDS are explained through our research on young people (aged 14 to 25) who go clubbing, consume alcohol and other drugs, and have sex. Fieldwork was carried out between May and July 2007 in three Spanish regions: Baleares, Galicia and Comunidad Valenciana. The presentation of the study shows the utility of this type of sampling when the population is accessible but lacks a sampling frame. However, the sample obtained is not, in statistical terms, a randomly drawn representative sample of the target population. It must be acknowledged that the final sample is representative of a 'pseudo-population' that approximates the target population but is not identical to it.

  17. Standardized UXO Technology Demonstration Site Blind Grid Scoring Record No. 806 (U.S. Geological Survey, TMGS Magnetometer/Towed Array)

    DTIC Science & Technology

    2007-05-01

    BOX 25046, FEDERAL CENTER, M.S. 964 DENVER, CO 80225-0046 TECHNOLOGY TYPE/PLATFORM: TMGS MAGNETOMETER/TOWED ARRAY PREPARED BY: U.S. ARMY...GEOLOGICAL SURVEY, TMGS MAGNETOMETER/TOWED ARRAY) 8-CO-160-UXO-021 Karwatka, Michael... TMGS Magnetometer/Towed Array, MEC Unclassified Unclassified Unclassified SAR (Page ii Blank) i ACKNOWLEDGMENTS

  18. Non-inflammatory causes of emergency consultation in patients with multiple sclerosis.

    PubMed

    Rodríguez de Antonio, L A; García Castañón, I; Aguilar-Amat Prior, M J; Puertas, I; González Suárez, I; Oreja Guevara, C

    2018-05-26

    To describe non-relapse-related emergency consultations of patients with multiple sclerosis (MS): causes, difficulties in the diagnosis, clinical characteristics, and treatments administered. We performed a retrospective study of patients who attended a multiple sclerosis day hospital due to suspected relapse and received an alternative diagnosis, over a 2-year period. Demographic data, clinical characteristics, final diagnosis, and treatments administered were evaluated. Patients who were initially diagnosed with pseudo-relapse and ultimately diagnosed with true relapse were evaluated specifically. As an exploratory analysis, patients who consulted for non-inflammatory causes were compared with a randomly selected cohort of patients with true relapses who attended the centre in the same period. The study included 50 patients (33 women; mean age 41.4 ± 11.7 years). Four patients (8%) were initially diagnosed with pseudo-relapse and later diagnosed as having a true relapse. Fever and vertigo were the main confounding factors. The non-inflammatory causes of emergency consultation were: neurological, 43.5% (20 patients); infectious, 15.2% (7); psychiatric, 10.9% (5); vertigo, 8.6% (4); trauma, 10.9% (5); and miscellaneous, 10.9% (5). MS-related symptoms constituted the most frequent cause of non-inflammatory emergency consultations. Close follow-up of relapse and pseudo-relapse is necessary to detect incorrect initial diagnoses, avoid unnecessary treatments, and relieve patients' symptoms. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  19. iDHS-EL: identifying DNase I hypersensitive sites by fusing three different modes of pseudo nucleotide composition into an ensemble learning framework.

    PubMed

    Liu, Bin; Long, Ren; Chou, Kuo-Chen

    2016-08-15

    Regulatory DNA elements are associated with DNase I hypersensitive sites (DHSs). Accordingly, identification of DHSs will provide useful insights for in-depth investigation into the function of noncoding genomic regions. In this study, using the strategy of an ensemble learning framework, we proposed a new predictor called iDHS-EL for identifying the location of DHSs in the human genome. It was formed by fusing three individual Random Forest (RF) classifiers into an ensemble predictor. The three RF operators were respectively based on three special modes of the general pseudo nucleotide composition (PseKNC): (i) kmer, (ii) reverse complement kmer and (iii) pseudo dinucleotide composition. It has been demonstrated that the new predictor remarkably outperforms the relevant state-of-the-art methods in both accuracy and stability. For the convenience of most experimental scientists, a web server for iDHS-EL is established at http://bioinformatics.hitsz.edu.cn/iDHS-EL, which is the first web-server predictor ever established for identifying DHSs, and by which users can easily get their desired results without the need to go through the mathematical details. We anticipate that iDHS-EL will become a very useful high-throughput tool for genome analysis. bliu@gordonlifescience.org or bliu@insun.hit.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
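
    Two of the three PseKNC modes named above, the kmer and reverse complement kmer compositions, are easy to make concrete. The sketch below is illustrative only: the function names and the choice k = 2 are our own, and the third mode (pseudo dinucleotide composition, which adds physicochemical correlation terms) is omitted.

```python
from collections import Counter
from itertools import product

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMP)[::-1]

def kmer_features(seq, k=2):
    """Normalized k-mer composition over all 4**k possible k-mers (the 'kmer' mode)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(1, len(seq) - k + 1)
    return [counts["".join(p)] / total for p in product("ACGT", repeat=k)]

def revcomp_kmer_features(seq, k=2):
    """'Reverse complement kmer' mode: a k-mer and its reverse complement
    are collapsed onto one canonical feature, shrinking the feature space."""
    canon = sorted({min(km, revcomp(km))
                    for km in ("".join(p) for p in product("ACGT", repeat=k))})
    counts = Counter(min(seq[i:i + k], revcomp(seq[i:i + k]))
                     for i in range(len(seq) - k + 1))
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in canon]
```

    For k = 2 the kmer mode yields a 16-dimensional vector while the reverse-complement mode yields 10 dimensions (six complementary pairs merged, four palindromic dimers kept); each mode would feed one of the three Random Forest classifiers.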

  20. Optimal Control of Shock Wave Turbulent Boundary Layer Interactions Using Micro-Array Actuation

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Tinapple, Jon; Surber, Lewis

    2006-01-01

    The intent of this study on micro-array flow control is to demonstrate the viability and economy of Response Surface Methodology (RSM) to determine optimal designs of micro-array actuation for controlling the shock wave turbulent boundary layer interactions within supersonic inlets and to compare these concepts to conventional bleed performance. The term micro-array refers to micro-actuator arrays which have heights of 25 to 40 percent of the undisturbed supersonic boundary layer thickness. This study covers optimal control of shock wave turbulent boundary layer interactions using standard micro-vane, tapered micro-vane, and standard micro-ramp arrays at a free stream Mach number of 2.0. The effectiveness of the three micro-array devices was tested using the shock pressure rise induced by a 10° shock generator, which was sufficiently strong as to separate the turbulent supersonic boundary layer. The overall design purpose of the micro-arrays was to alter the properties of the supersonic boundary layer by introducing a cascade of counter-rotating micro-vortices in the near-wall region. In this manner, the impact of the shock wave boundary layer (SWBL) interaction on the main flow field was minimized without boundary-layer bleed.

  1. JPL Large Advanced Antenna Station Array Study

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In accordance with study requirements, two antennas are described: a 30 meter standard antenna and a 34 meter modified antenna, along with a candidate array configuration for each. Modified antenna trade analyses are summarized, risks analyzed, costs presented, and a final antenna array configuration recommendation made.

  2. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
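
    The review's point that an algorithmic pseudo-random sequence is completely determined by its formula and seed can be shown in a few lines. The sketch below uses the well-known Numerical Recipes linear congruential parameters purely as an example generator:

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    The entire 'random' stream is a deterministic function of the seed."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

run1 = list(islice(lcg(42), 5))
run2 = list(islice(lcg(42), 5))
# same seed, bit-identical output: the stream carries no entropy beyond the seed
```

    A physical RNG, by contrast, draws on an unpredictable entropy source (e.g. photon arrival times), so no amount of knowledge of the extraction formula lets one reproduce the stream.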

  3. Catalytic Activities Of [GADV]-Peptides

    NASA Astrophysics Data System (ADS)

    Oba, Takae; Fukushima, Jun; Maruyama, Masako; Iwamoto, Ryoko; Ikehara, Kenji

    2005-10-01

    We have previously postulated a novel hypothesis for the origin of life, assuming that life on the earth originated from a “[GADV]-protein world”, not from the “RNA world” (see Ikehara's review, 2002). The [GADV]-protein world is constituted from peptides and proteins with random sequences of four amino acids (glycine [G], alanine [A], aspartic acid [D] and valine [V]), which accumulated by pseudo-replication of the [GADV]-proteins. To obtain evidence for the hypothesis, we produced [GADV]-peptides by repeated heat-drying of the amino acids for 30 cycles ([GADV]-P30) and examined whether the peptides have some catalytic activities or not. From the results, it was found that the [GADV]-P30 can hydrolyze several kinds of chemical bonds in molecules such as umbelliferyl-β-D-galactoside, glycine-p-nitroanilide and bovine serum albumin. This suggests that [GADV]-P30 could play an important role in the accumulation of [GADV]-proteins through pseudo-replication, leading to the emergence of life. We further show that [GADV]-octapeptides with random sequences, but containing no cyclic compounds such as diketopiperazines, have catalytic activity, hydrolyzing peptide bonds in a natural protein, bovine serum albumin. The catalytic activity of the octapeptides was much higher than that of the [GADV]-P30 produced through repeated heat-drying treatments. These results also support the [GADV]-protein-world hypothesis of the origin of life (see Ikehara's review, 2002). Possible steps for the emergence of life on the primitive earth are presented.

  4. PRM/NIR sensor for brain hematoma detection and oxygenation monitoring

    NASA Astrophysics Data System (ADS)

    Zheng, Liu; Lee, Hyo Sang; Lokos, Sandor; Kim, Jin; Hanley, Daniel F.; Wilson, David A.

    1997-06-01

    The pseudo-random modulation/near-IR sensor (PRM/NIR sensor) is a low-cost portable system designed for time-resolved tissue diagnosis, especially hematoma detection in the emergency care facility. The sensor consists of a personal computer and a hardware unit enclosed in a box of size 37 × 37 × 31 cm³ and of weight less than 10 kg. Two pseudo-random modulated diode lasers emitting at 670 nm and 810 nm are used in the sensor as light sources. The sensor can be operated either in a single-wavelength mode or a true differential mode. Optical fiber bundles are used for convenient light delivery and color filters are used to reject room light. Based on a proprietary resolution-enhancement correlation technique, the system achieves a time resolution better than 40 ps with a PRM modulation speed of 200 MHz and a sampling rate of 1-10 Gs/s. Using the prototype sensor, phantom experiments have been conducted to study the feasibility of the sensor. The brain's optical properties are simulated with solutions of intralipid and ink. Hematomas are simulated with bags of paint and hemoglobin of various sizes, depths, and orientations immersed in the solution. Effects of human skull and hair are studied experimentally. In an animal experiment, the sensor was used to monitor the cerebral oxygenation change due to hypercapnia, hypoxia, and hyperventilation. Good correlations were found between NIR measurement parameters and the physiological changes induced in the animals.

  5. Pseudo-Random Sequence Modifications for Ion Mobility Orthogonal Time of Flight Mass Spectrometry

    PubMed Central

    Clowers, Brian H.; Belov, Mikhail E.; Prior, David C.; Danielson, William F.; Ibrahim, Yehia; Smith, Richard D.

    2008-01-01

    Due to the inherently low duty cycle of ion mobility spectrometry (IMS) experiments that sample from continuous ion sources, a range of experimental advances have been developed to maximize ion utilization efficiency. The use of ion trapping mechanisms prior to the ion mobility drift tube has demonstrated significant gains over discrete sampling from continuous sources; however, these technologies have traditionally relied upon signal averaging to attain analytically relevant signal-to-noise ratios (SNR). Multiplexed (MP) techniques based upon the Hadamard transform offer an alternative experimental approach by which ion utilization efficiency can be elevated to ~50%. Recently, our research group demonstrated a unique multiplexed ion mobility time-of-flight (MP-IMS-TOF) approach that incorporates ion trapping and can extend ion utilization efficiency beyond 50%. However, the spectral reconstruction of the multiplexed signal using this experimental approach requires the use of sample-specific weighing designs. Though general weighing designs have been shown to significantly enhance ion utilization efficiency using this MP technique, such weighing designs cannot be applied to all samples. By modifying both the ion funnel trap and the pseudo-random sequence (PRS) used for the MP experiment we have eliminated the need for complex weighing matrices. For both simple and complex mixtures, SNR enhancements of up to 13 were routinely observed as compared to the SA-IMS-TOF experiment. In addition, this new class of PRS provides a twofold enhancement in ion throughput compared to the traditional HT-IMS experiment. PMID:18311942
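
    The maximal-length sequences conventionally used as the PRS in Hadamard-transform multiplexing can be generated with a linear-feedback shift register. The sketch below is generic (a 4-bit Galois LFSR with an assumed primitive polynomial), not the modified sequence class introduced in this paper:

```python
def m_sequence(nbits, mask, seed=1):
    """One full period of a maximal-length pseudo-random sequence
    (m-sequence) from a Galois LFSR; `mask` encodes the feedback
    polynomial, which must be primitive for maximal length."""
    state, out = seed, []
    for _ in range(2**nbits - 1):        # period of a maximal n-bit LFSR
        bit = state & 1
        out.append(bit)
        state >>= 1
        if bit:
            state ^= mask
    return out

# 4-bit example, polynomial x^4 + x^3 + 1 (mask 0b1100): period 15, and the
# m-sequence balance property of 2**(n-1) = 8 ones per period
prs = m_sequence(4, 0b1100)
```

    The rows of the Hadamard S-matrix used in spectral reconstruction are cyclic shifts of such a sequence, which is what makes the fast inverse transform possible.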

  6. ICP-Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests): Quality Assurance procedure in plant diversity monitoring.

    PubMed

    Allegrini, Maria-Cristina; Canullo, Roberto; Campetella, Giandiego

    2009-04-01

    Knowledge of accuracy and precision rates is particularly important for long-term studies. Vegetation assessments include many sources of error related to overlooking and misidentification, which are usually influenced by factors such as cover-estimate subjectivity, observer-biased species lists and the experience of the botanist. The vegetation assessment protocol adopted in the Italian forest monitoring programme (CONECOFOR) contains a Quality Assurance programme. The paper presents the different phases of QA and identifies the five main critical points of the whole protocol as sources of random or systematic errors. Examples of Measurement Quality Objectives (MQOs) expressed as Data Quality Limits (DQLs) are given for vascular plant cover estimates, in order to establish the reproducibility of the data. Quality control activities were used to determine the "distance" between the surveyor teams and the control team. Selected data were acquired during the training and inter-calibration courses. In particular, an index of average cover by species groups was used to evaluate the random error (CV 4%) as the dispersion around the "true values" of the control team. The systematic error in the evaluation of species composition, caused by overlooking or misidentification of species, was calculated as the pseudo-turnover rate; in detailed species censuses on smaller sampling units the pseudo-turnover always fell below the established 25% threshold, whereas species density scores recorded at community level (100 m² surface) rarely exceeded that limit.
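
    The pseudo-turnover rate used above for the systematic-error check has a simple closed form: PT = (A + B) / (S_a + S_b) × 100, where A and B are the numbers of species recorded by only one of the two teams on the same plot, and S_a, S_b are the teams' species totals. The species lists below are invented for illustration:

```python
def pseudo_turnover(team_a, team_b):
    """Pseudo-turnover rate (%) between two species lists for the same plot.
    Species seen by only one team reflect overlooking or misidentification,
    not real compositional turnover."""
    a, b = set(team_a), set(team_b)
    only_a, only_b = len(a - b), len(b - a)
    return 100.0 * (only_a + only_b) / (len(a) + len(b))

# surveyor team vs control team on one sampling unit (made-up lists)
surveyor = {"Quercus cerris", "Fagus sylvatica", "Rubus hirtus", "Viola alba"}
control  = {"Quercus cerris", "Fagus sylvatica", "Rubus hirtus", "Hedera helix"}
pt = pseudo_turnover(surveyor, control)   # (1 + 1) / (4 + 4) * 100 = 25.0
```

    A plot scoring above the 25% limit would flag the surveyor team for retraining rather than indicate a genuine change in the vegetation.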

  8. A dual model HU conversion from MRI intensity values within and outside of bone segment for MRI-based radiotherapy treatment planning of prostate cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korhonen, Juha, E-mail: juha.p.korhonen@hus.fi; Department of Oncology, Helsinki University Central Hospital, POB-180, 00029 HUS; Kapanen, Mika

    2014-01-15

    Purpose: The lack of electron density information in magnetic resonance images (MRI) poses a major challenge for MRI-based radiotherapy treatment planning (RTP). In this study the authors convert MRI intensity values into Hounsfield units (HUs) in the male pelvis and thus enable accurate MRI-based RTP for prostate cancer patients with varying tissue anatomy and body fat contents. Methods: T1/T2*-weighted MRI intensity values and standard computed tomography (CT) image HUs in the male pelvis were analyzed using image data of 10 prostate cancer patients. The collected data were utilized to generate a dual model HU conversion technique from MRI intensity values of the single image set separately within and outside of contoured pelvic bones. Within the bone segment local MRI intensity values were converted to HUs by applying a second-order polynomial model. This model was tuned for each patient by two patient-specific adjustments: MR signal normalization to correct shifts in absolute intensity level and application of a cutoff value to accurately represent low density bony tissue HUs. For soft tissues, such as fat and muscle, located outside of the bone contours, a threshold-based segmentation method without requirements for any patient-specific adjustments was introduced to convert MRI intensity values into HUs. The dual model HU conversion technique was implemented by constructing pseudo-CT images for 10 other prostate cancer patients. The feasibility of these images for RTP was evaluated by comparing HUs in the generated pseudo-CT images with those in standard CT images, and by determining deviations in MRI-based dose distributions compared to those in CT images with 7-field intensity modulated radiation therapy (IMRT) with the anisotropic analytical algorithm and 360° volumetric-modulated arc therapy (VMAT) with the Voxel Monte Carlo algorithm.
Results: The average HU differences between the constructed pseudo-CT images and standard CT images of each test patient ranged from −2 to 5 HUs and from 22 to 78 HUs in soft and bony tissues, respectively. The average local absolute value differences were 11 HUs in soft tissues and 99 HUs in bones. The planning target volume doses (volumes 95%, 50%, 5%) in the pseudo-CT images were within 0.8% compared to those in CT images in all of the 20 treatment plans. The average deviation was 0.3%. With all the test patients over 94% (IMRT) and 92% (VMAT) of dose points within body (lower than 10% of maximum dose suppressed) passed the 1 mm and 1% 2D gamma index criterion. The statistical tests (t- and F-tests) showed significantly improved (p ≤ 0.05) HU and dose calculation accuracies with the soft tissue conversion method instead of homogeneous representation of these tissues in MRI-based RTP images. Conclusions: This study indicates that it is possible to construct high quality pseudo-CT images by converting the intensity values of a single MRI series into HUs in the male pelvis, and to use these images for accurate MRI-based prostate RTP dose calculations.
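
    The two-branch conversion described above can be sketched per voxel as below. All numbers here (the polynomial coefficients, the fat/muscle intensity threshold, the representative soft-tissue HUs, and the bone cutoff) are invented placeholders, not the paper's fitted, patient-specific values:

```python
def voxel_to_hu(intensity, in_bone, a=-1.2e-5, b=-0.22, c=1500.0,
                fat_threshold=300.0, bone_floor=100.0):
    """Toy dual-model MRI intensity -> HU conversion for a single voxel.
    Inside the bone contour: second-order polynomial on the (already
    normalized) intensity, with a cutoff for low-density bony tissue.
    Outside: threshold-based assignment to representative soft-tissue HUs."""
    if in_bone:
        return max(a * intensity**2 + b * intensity + c, bone_floor)
    # bright voxels in this toy T1/T2*-weighted setting -> fat, else muscle/water
    return -100.0 if intensity > fat_threshold else 40.0

# three voxels: (intensity, inside-bone-contour flag)
pseudo_ct_row = [voxel_to_hu(i, m) for i, m in
                 [(0.0, True), (400.0, False), (100.0, False)]]
```

    Mapping this function over the whole image volume, with the bone branch gated by the contoured bone mask, yields the pseudo-CT used for dose calculation.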

  9. Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.

    PubMed

    Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui

    2017-05-25

    Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architectures. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in an HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabric. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources and thus provides an attractive scheme for the construction of logic-in-memory systems.
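
    The two experimentally demonstrated primitives, NAND and XOR, are indeed sufficient for the theoretically designed 1-bit full adder. A gate-level sketch (plain Python standing in for the RRAM resistance states; the array-level configuration is not modeled):

```python
def NAND(a, b):
    return 1 - (a & b)

def XOR(a, b):
    return a ^ b   # demonstrated directly in the RRAM array

def full_adder(a, b, cin):
    """1-bit full adder built only from the NAND and XOR primitives.
    sum = a XOR b XOR cin; carry = (a AND b) OR ((a XOR b) AND cin),
    with the OR-of-ANDs realized as NAND(NAND(a,b), NAND(a^b, cin))."""
    s1 = XOR(a, b)
    total = XOR(s1, cin)
    carry = NAND(NAND(a, b), NAND(s1, cin))
    return total, carry
```

    In the logic-in-memory setting each gate evaluation corresponds to one sequential operation on the array, which is the temporal-versus-spatial trade-off the abstract refers to.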

  10. The angular distribution of infrared radiances emerging from broken fields of cumulus clouds

    NASA Technical Reports Server (NTRS)

    Naber, P. S.; Weinman, J. A.

    1984-01-01

    Infrared radiances were simultaneously measured from broken cloud fields over the eastern Pacific Ocean by means of the eastern and western geostationary satellites. The measurements were compared with the results of models that characterized the clouds as black circular cylinders disposed randomly on a plane and as black cuboids disposed in regular and in shifted periodic arrays. The data were also compared with the results obtained from a radiative transfer model that considered emission and scattering by a regular array of periodic cuboidal clouds. It was found that the radiances did not depend significantly on the azimuth angle; this suggested that the observed cloud fields were not regular periodic arrays. However, the dependence on zenith angle suggested that the clouds were not disposed randomly either. The implication of these measurements on the understanding of the transfer of infrared radiances through broken cloud fields is considered.

  11. Experimental vibroacoustic testing of plane panels using synthesized random pressure fields.

    PubMed

    Robin, Olivier; Berry, Alain; Moreau, Stéphane

    2014-06-01

    The experimental reproduction of random pressure fields on a plane panel, and of the corresponding induced vibrations, is studied. An open-loop reproduction strategy is proposed that uses the synthetic array concept, in which a small array element is moved to create a large array by post-processing. Three possible approaches are suggested to define the complex amplitudes to be imposed on the reproduction sources distributed on a virtual plane facing the panel to be tested. Using a single acoustic monopole, a scanning laser vibrometer and a baffled simply supported aluminum panel, experimental vibroacoustic indicators such as the Transmission Loss are obtained under Diffuse Acoustic Field and under high-speed subsonic and supersonic Turbulent Boundary Layer excitations. Comparisons with simulation results obtained using a commercial software package show that Transmission Loss estimation is possible under both excitations. Moreover, as a complement to frequency-domain indicators, the vibroacoustic behavior of the panel can be studied in the wavenumber domain.

  12. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small-numbered RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs is perceptible in the images reconstructed with the proposed method. PMID:28657602

  13. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small-numbered RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs is perceptible in the images reconstructed with the proposed method.

  14. Generation of brain pseudo-CTs using an undersampled, single-acquisition UTE-mDixon pulse sequence and unsupervised clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Kuan-Hao; Hu, Lingzhi; Traughber, Melanie

    Purpose: MR-based pseudo-CT has an important role in MR-based radiation therapy planning and PET attenuation correction. The purpose of this study is to establish a clinically feasible approach, including image acquisition, correction, and CT formation, for pseudo-CT generation of the brain using a single-acquisition, undersampled ultrashort echo time (UTE)-mDixon pulse sequence. Methods: Nine patients were recruited for this study. For each patient, a 190-s, undersampled, single-acquisition UTE-mDixon sequence of the brain was acquired (TE = 0.1, 1.5, and 2.8 ms). A novel method of retrospective trajectory correction of the free induction decay (FID) signal was performed based on point-spread functions of three external MR markers. Two-point Dixon images were reconstructed using the first and second echo data (TE = 1.5 and 2.8 ms). R2* images (1/T2*) were then estimated and were used to provide bone information. Three image features, i.e., Dixon-fat, Dixon-water, and R2*, were used for unsupervised clustering. Five tissue clusters, i.e., air, brain, fat, fluid, and bone, were estimated using the fuzzy c-means (FCM) algorithm. A two-step, automatic tissue-assignment approach was proposed and designed according to the prior information of the given feature space. Pseudo-CTs were generated by a voxelwise linear combination of the membership functions of the FCM. A low-dose CT was acquired for each patient and was used as the gold standard for comparison. Results: The contrast and sharpness of the FID images were improved after trajectory correction was applied. The mean of the estimated trajectory delay was 0.774 μs (max: 1.350 μs; min: 0.180 μs). The FCM-estimated centroids of different tissue types showed a distinguishable pattern for different tissues, and significant differences were found between the centroid locations of different tissue types.
Pseudo-CT can provide additional skull detail and has low bias and absolute error of estimated CT numbers of voxels (−22 ± 29 HU and 130 ± 16 HU) when compared to low-dose CT. Conclusions: The MR features generated by the proposed acquisition, correction, and processing methods may provide representative clustering information and could thus be used for clinical pseudo-CT generation.« less
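The final CT-formation step described above, a voxelwise linear combination of the FCM membership functions, can be sketched in a few lines. The per-class HU values below are illustrative assumptions for the five tissue clusters, not the calibrated values from the study:

```python
import numpy as np

# Hypothetical mean CT numbers (HU) for the five tissue classes in the
# order listed in the abstract; the real values would be calibrated
# against the low-dose CT gold standard.
CLASS_HU = np.array([-1000.0, 40.0, -100.0, 10.0, 700.0])  # air, brain, fat, fluid, bone

def pseudo_ct(memberships):
    """Voxelwise linear combination of FCM membership functions.

    memberships: array of shape (n_voxels, 5), rows summing to 1.
    Returns one pseudo-CT number (HU) per voxel.
    """
    memberships = np.asarray(memberships, dtype=float)
    return memberships @ CLASS_HU

# A voxel that is 90% bone / 10% brain maps close to the bone HU value.
u = np.array([[0.0, 0.1, 0.0, 0.0, 0.9]])
print(pseudo_ct(u))  # -> [634.]
```

Because the memberships are soft, partial-volume voxels (e.g. at the skull boundary) receive intermediate HU values rather than a hard class assignment.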

  15. An eco-friendly dyeing of woolen yarn by Terminalia chebula extract with evaluations of kinetic and adsorption characteristics.

    PubMed

    Shabbir, Mohd; Rather, Luqman Jameel; Shahid-Ul-Islam; Bukhari, Mohd Nadeem; Shahid, Mohd; Ali Khan, Mohd; Mohammad, Faqeer

    2016-05-01

In the present study Terminalia chebula was used as an eco-friendly natural colorant for sustainable textile coloration of woolen yarn, with primary emphasis on the thermodynamic and kinetic adsorption aspects of the dyeing process. Polyphenols and ellagitannins are the main coloring components of the dye extract. Assessment of the effect of pH on dye adsorption showed an increase in adsorption capacity with decreasing pH. The effect of temperature on dye adsorption showed 80 °C to be the optimum temperature for wool dyeing with T. chebula dye extract. Two kinetic equations, namely the pseudo first-order and pseudo second-order equations, were employed to investigate the adsorption rates. The pseudo second-order model provided the best fit (R2 = 0.9908) to the experimental data. The equilibrium adsorption data were fitted by the Freundlich and Langmuir isotherm models; the adsorption behavior accorded well (R2 = 0.9937) with the Langmuir isotherm model. A variety of eco-friendly and sustainable shades were developed in combination with small amounts of metallic mordants and assessed in terms of colorimetric (CIEL∗a∗b∗ and K/S) properties measured using a spectrophotometer under a D65 illuminant (10° standard observer). The fastness properties of the dyed woolen yarn against light, washing, and dry and wet rubbing were also evaluated.
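The pseudo-second-order analysis used above is straightforward to reproduce. The sketch below fits the linearized form t/qt = 1/(k2·qe²) + t/qe by linear regression to synthetic data with invented parameter values, not the reported dye results:

```python
import numpy as np

# Synthetic adsorption-kinetics data generated from the pseudo-second-order
# model q_t = (k2*qe^2*t) / (1 + k2*qe*t); qe and k2 are illustrative values,
# not the ones measured for T. chebula dye on wool.
qe_true, k2_true = 25.0, 0.004          # mg/g, g/(mg*min)
t = np.array([5.0, 10, 20, 30, 60, 90, 120, 180])
qt = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)

# Linearized form: t/qt = 1/(k2*qe^2) + t/qe -> slope = 1/qe, intercept = 1/(k2*qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
print(round(qe_fit, 2), round(k2_fit, 4))  # recovers 25.0 and 0.004
```

With real data the R² of this regression (0.9908 in the study) is what discriminates the pseudo-second-order model from the pseudo-first-order alternative.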

  16. An eco-friendly dyeing of woolen yarn by Terminalia chebula extract with evaluations of kinetic and adsorption characteristics

    PubMed Central

    Shabbir, Mohd; Rather, Luqman Jameel; Shahid-ul-Islam; Bukhari, Mohd Nadeem; Shahid, Mohd; Ali Khan, Mohd; Mohammad, Faqeer

    2016-01-01

In the present study Terminalia chebula was used as an eco-friendly natural colorant for sustainable textile coloration of woolen yarn, with primary emphasis on the thermodynamic and kinetic adsorption aspects of the dyeing process. Polyphenols and ellagitannins are the main coloring components of the dye extract. Assessment of the effect of pH on dye adsorption showed an increase in adsorption capacity with decreasing pH. The effect of temperature on dye adsorption showed 80 °C to be the optimum temperature for wool dyeing with T. chebula dye extract. Two kinetic equations, namely the pseudo first-order and pseudo second-order equations, were employed to investigate the adsorption rates. The pseudo second-order model provided the best fit (R2 = 0.9908) to the experimental data. The equilibrium adsorption data were fitted by the Freundlich and Langmuir isotherm models; the adsorption behavior accorded well (R2 = 0.9937) with the Langmuir isotherm model. A variety of eco-friendly and sustainable shades were developed in combination with small amounts of metallic mordants and assessed in terms of colorimetric (CIEL∗a∗b∗ and K/S) properties measured using a spectrophotometer under a D65 illuminant (10° standard observer). The fastness properties of the dyed woolen yarn against light, washing, and dry and wet rubbing were also evaluated. PMID:27222752

  17. Biodiversity mapping in a tropical West African forest with airborne hyperspectral data.

    PubMed

    Vaglio Laurin, Gaia; Cheung-Wai Chan, Jonathan; Chen, Qi; Lindsell, Jeremy A; Coomes, David A; Guerriero, Leila; Del Frate, Fabio; Miglietta, Franco; Valentini, Riccardo

    2014-01-01

Tropical forests are major repositories of biodiversity, but are fast disappearing as land is converted to agriculture. Decision-makers need to know which of the remaining forests to prioritize for conservation, but the only spatial information on forest biodiversity has, until recently, come from a sparse network of ground-based plots. Here we explore whether airborne hyperspectral imagery can be used to predict the alpha diversity of upper canopy trees in a West African forest. The abundances of tree species were recorded from 64 plots (each 1250 m2 in size) within a Sierra Leonean national park, and Shannon-Wiener biodiversity indices were calculated. An airborne spectrometer measured reflectances of 186 bands in the visible and near-infrared spectral range at 1 m2 resolution. The standard deviations of these reflectance values and their first-order derivatives were calculated for each plot from the c. 1250 pixels of hyperspectral information within them. Shannon-Wiener indices were then predicted from these plot-based reflectance statistics using a machine-learning algorithm (Random Forest). The regression model fitted the data well (pseudo-R2 = 84.9%), and we show that standard deviations of green-band reflectances and infra-red region derivatives had the strongest explanatory powers. Our work shows that airborne hyperspectral sensing can be very effective at mapping canopy tree diversity, because its high spatial resolution allows within-plot heterogeneity in reflectance to be characterized, making it an effective tool for monitoring forest biodiversity over large geographic scales.
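A minimal sketch of the modelling step, using scikit-learn's RandomForestRegressor on invented plot-level reflectance statistics (the real study used 186 bands plus first derivatives over 64 plots; the data below are simulated stand-ins):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Toy stand-in for the 64 plots: per-plot standard deviations of band
# reflectances (the paper also used first-order derivative statistics).
n_plots, n_features = 64, 20
X = rng.gamma(shape=2.0, scale=0.05, size=(n_plots, n_features))
# Simulated Shannon-Wiener index loosely driven by two of the features.
y = 2.0 + 5.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 0.05, n_plots)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
y_hat = cross_val_predict(rf, X, y, cv=8)  # out-of-fold predictions

# Pseudo-R^2 of the cross-validated predictions, as in the paper's evaluation.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
pseudo_r2 = 1 - ss_res / ss_tot
print(f"pseudo-R2 = {pseudo_r2:.2f}")
```

Feature importances from the fitted forest (`rf.fit(X, y).feature_importances_`) would play the role of the green-band/infra-red attribution reported in the abstract.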

  18. Biodiversity Mapping in a Tropical West African Forest with Airborne Hyperspectral Data

    PubMed Central

    Vaglio Laurin, Gaia; Chan, Jonathan Cheung-Wai; Chen, Qi; Lindsell, Jeremy A.; Coomes, David A.; Guerriero, Leila; Frate, Fabio Del; Miglietta, Franco; Valentini, Riccardo

    2014-01-01

Tropical forests are major repositories of biodiversity, but are fast disappearing as land is converted to agriculture. Decision-makers need to know which of the remaining forests to prioritize for conservation, but the only spatial information on forest biodiversity has, until recently, come from a sparse network of ground-based plots. Here we explore whether airborne hyperspectral imagery can be used to predict the alpha diversity of upper canopy trees in a West African forest. The abundances of tree species were recorded from 64 plots (each 1250 m2 in size) within a Sierra Leonean national park, and Shannon-Wiener biodiversity indices were calculated. An airborne spectrometer measured reflectances of 186 bands in the visible and near-infrared spectral range at 1 m2 resolution. The standard deviations of these reflectance values and their first-order derivatives were calculated for each plot from the c. 1250 pixels of hyperspectral information within them. Shannon-Wiener indices were then predicted from these plot-based reflectance statistics using a machine-learning algorithm (Random Forest). The regression model fitted the data well (pseudo-R2 = 84.9%), and we show that standard deviations of green-band reflectances and infra-red region derivatives had the strongest explanatory powers. Our work shows that airborne hyperspectral sensing can be very effective at mapping canopy tree diversity, because its high spatial resolution allows within-plot heterogeneity in reflectance to be characterized, making it an effective tool for monitoring forest biodiversity over large geographic scales. PMID:24937407

  19. Solar Wind-Magnetosphere Coupling Influences on Pseudo-Breakup Activity

    NASA Technical Reports Server (NTRS)

    Fillingim, M. O.; Brittnacher, M.; Parks, G. K.; Germany, G. A.; Spann, J. F.

    1998-01-01

Pseudo-breakups are brief, localized auroral arc brightenings which do not lead to a global expansion and are historically observed during the growth phase of substorms. Previous studies have demonstrated that phenomenologically there is very little difference between substorm onsets and pseudo-breakups except for the degree of localization and the absence of a global expansion phase. A key open question is what physical mechanism prevents a pseudo-breakup from expanding globally. Using Polar Ultraviolet Imager (UVI) images, we identify periods of pseudo-breakup activity. For the data analyzed we find that most pseudo-breakups occur near local midnight, between magnetic local times of 21 and 03, at magnetic latitudes near 70 degrees, though this value may change by several degrees. While often discussed in the context of substorm growth phase events, pseudo-breakups are also shown to occur during prolonged, relatively inactive periods. These quiet time pseudo-breakups can occur over a period of several hours without the development of a significant substorm for at least an hour after pseudo-breakup activity stops. In an attempt to understand the cause of quiet time pseudo-breakups, we compute the epsilon parameter as a measure of the efficiency of solar wind-magnetosphere coupling. It is noted that quiet time pseudo-breakups typically occur when epsilon is low, less than about 50 GW. We suggest that quiet time pseudo-breakups are driven by relatively small amounts of energy transferred to the magnetosphere by the solar wind, insufficient to initiate a substorm expansion onset.
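The epsilon parameter referred to above is the Akasofu coupling function, ε = (4π/μ0) v B² sin⁴(θ/2) l0², with θ the IMF clock angle and the customary empirical scale length l0 ≈ 7 Earth radii. A minimal sketch:

```python
import numpy as np

MU0 = 4e-7 * np.pi
L0 = 7 * 6.371e6  # empirical scale length of ~7 Earth radii, in metres

def epsilon_watts(v, b_y, b_z):
    """Akasofu epsilon coupling parameter (W) from solar-wind speed v (m/s)
    and IMF components b_y, b_z (T); theta is the IMF clock angle."""
    b = np.hypot(b_y, b_z)
    theta = np.arctan2(b_y, b_z)
    return (4 * np.pi / MU0) * v * b**2 * np.sin(theta / 2) ** 4 * L0**2

# Typical solar wind (400 km/s, 5 nT, 90-degree clock angle) gives ~50 GW,
# i.e. right at the quiet-time pseudo-breakup threshold quoted above.
print(f"{epsilon_watts(400e3, 5e-9, 0.0) / 1e9:.0f} GW")
```

The sin⁴(θ/2) factor is what makes purely northward IMF (θ = 0) contribute essentially no coupling, consistent with pseudo-breakups clustering at low epsilon.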

  20. Computationally assisted screening and design of cell-interactive peptides by a cell-based assay using peptide arrays and a fuzzy neural network algorithm.

    PubMed

    Kaga, Chiaki; Okochi, Mina; Tomita, Yasuyuki; Kato, Ryuji; Honda, Hiroyuki

    2008-03-01

    We developed a method of effective peptide screening that combines experiments and computational analysis. The method is based on the concept that screening efficiency can be enhanced from even limited data by use of a model derived from computational analysis that serves as a guide to screening and combining the model with subsequent repeated experiments. Here we focus on cell-adhesion peptides as a model application of this peptide-screening strategy. Cell-adhesion peptides were screened by use of a cell-based assay of a peptide array. Starting with the screening data obtained from a limited, random 5-mer library (643 sequences), a rule regarding structural characteristics of cell-adhesion peptides was extracted by fuzzy neural network (FNN) analysis. According to this rule, peptides with unfavored residues in certain positions that led to inefficient binding were eliminated from the random sequences. In the restricted, second random library (273 sequences), the yield of cell-adhesion peptides having an adhesion rate more than 1.5-fold to that of the basal array support was significantly high (31%) compared with the unrestricted random library (20%). In the restricted third library (50 sequences), the yield of cell-adhesion peptides increased to 84%. We conclude that a repeated cycle of experiments screening limited numbers of peptides can be assisted by the rule-extracting feature of FNN.

  1. Experimental study of surface insulated-standard hybrid tungsten planar wire array Z-pinches at "QiangGuang-I" facility

    NASA Astrophysics Data System (ADS)

    Sheng, Liang; Peng, Bodong; Li, Yang; Yuan, Yuan; Li, Mo; Zhang, Mei; Zhao, Chen; Zhao, Jizhen; Wang, Liangping

    2016-01-01

The experimental results of insulated-standard hybrid wire array Z pinches carried out at the "QiangGuang-I" facility at the Northwest Institute of Nuclear Technology are presented and discussed. Surface insulation can have a significant influence on the dynamics and radiation characteristics of the hybrid wire array Z pinches, especially at the early stage (t/timp < 0.6). The expansion of the insulated wires at the ablation stage is suppressed, while the streams stripped from the insulated wires move faster than those from the standard wires. The foot radiation of the X-ray pulse is enhanced by increasing the number of insulated wires: 19.6 GW, 33.6 GW, and 68.6 GW for shots 14037S, 14028H, and 14039I, respectively. The surface insulation also introduces nonhomogeneity along a single wire: the streams move much faster near the electrodes. The colliding boundary of the hybrid wire array Z pinches is biased toward the insulated side by approximately 0.6 mm.

  2. Single-dose infusion ketamine and non-ketamine N-methyl-d-aspartate receptor antagonists for unipolar and bipolar depression: a meta-analysis of efficacy, safety and time trajectories.

    PubMed

    Kishimoto, T; Chawla, J M; Hagi, K; Zarate, C A; Kane, J M; Bauer, M; Correll, C U

    2016-05-01

Ketamine and non-ketamine N-methyl-d-aspartate receptor antagonists (NMDAR antagonists) recently demonstrated antidepressant efficacy for the treatment of refractory depression, but effect sizes, trajectories and possible class effects are unclear. We searched PubMed/PsycINFO/Web of Science/clinicaltrials.gov until 25 August 2015. Parallel-group or cross-over randomized controlled trials (RCTs) comparing single intravenous infusion of ketamine or a non-ketamine NMDAR antagonist v. placebo/pseudo-placebo in patients with major depressive disorder (MDD) and/or bipolar depression (BD) were included in the analyses. Hedges' g and risk ratios and their 95% confidence intervals (CIs) were calculated using a random-effects model. The primary outcome was depressive symptom change. Secondary outcomes included response, remission, all-cause discontinuation and adverse effects. A total of 14 RCTs (nine ketamine studies: n = 234; five non-ketamine NMDAR antagonist studies: n = 354; MDD = 554, BD = 34), lasting 10.0 ± 8.8 days, were meta-analysed. Ketamine reduced depression significantly more than placebo/pseudo-placebo beginning at 40 min, peaking at day 1 (Hedges' g = -1.00, 95% CI -1.28 to -0.73, p < 0.001), and losing superiority by days 10-12. Non-ketamine NMDAR antagonists were superior to placebo only on days 5-8 (Hedges' g = -0.37, 95% CI -0.66 to -0.09, p = 0.01). Compared with placebo/pseudo-placebo, ketamine led to significantly greater response (40 min to day 7) and remission (80 min to days 3-5). Non-ketamine NMDAR antagonists achieved greater response at day 2 and days 3-5. All-cause discontinuation was similar between ketamine (p = 0.34) or non-ketamine NMDAR antagonists (p = 0.94) and placebo. Although some adverse effects were more common with ketamine/NMDAR antagonists than placebo, these were transient and clinically insignificant. 
A single infusion of ketamine, but less so of non-ketamine NMDAR antagonists, has ultra-rapid efficacy for MDD and BD, lasting for up to 1 week. Development of easy-to-administer, repeatedly given NMDAR antagonists without risk of brain toxicity is of critical importance.
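The effect-size machinery used in the record above, Hedges' g pooled under a random-effects model, can be sketched as follows. The three trials below are invented numbers purely to exercise the code, and the pooling uses the common DerSimonian-Laird estimator, which the abstract does not name explicitly:

```python
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Hedges' g: bias-corrected standardized mean difference, with its variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def pool_random_effects(gs, vs):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    gs, vs = np.asarray(gs), np.asarray(vs)
    w = 1 / vs
    q = np.sum(w * (gs - np.sum(w * gs) / np.sum(w)) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)   # between-study variance
    w_star = 1 / (vs + tau2)
    g_bar = np.sum(w_star * gs) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return g_bar, (g_bar - 1.96 * se, g_bar + 1.96 * se)

# Three invented trials (active vs placebo change scores) just to run the code.
gs, vs = zip(*[hedges_g(-12, -5, 8, 8, 15, 15),
               hedges_g(-10, -4, 7, 9, 20, 20),
               hedges_g(-14, -6, 9, 8, 12, 13)])
g_bar, ci = pool_random_effects(gs, vs)
print(f"pooled g = {g_bar:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Negative g favours the active arm here, matching the sign convention of the reported day-1 ketamine estimate (g = -1.00).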

  3. Characterization of network structure in stereoEEG data using consensus-based partial coherence.

    PubMed

    Ter Wal, Marije; Cardellicchio, Pasquale; LoRusso, Giorgio; Pelliccia, Veronica; Avanzini, Pietro; Orban, Guy A; Tiesinga, Paul He

    2018-06-06

Coherence is a widely used measure to determine the frequency-resolved functional connectivity between pairs of recording sites, but this measure is confounded by shared inputs to the pair. To remove shared inputs, the 'partial coherence' can be computed by conditioning the spectral matrices of the pair on all other recorded channels, which involves the calculation of a matrix (pseudo-)inverse. It has so far remained a challenge to use the time-resolved partial coherence to analyze intracranial recordings with a large number of recording sites. For instance, calculating the partial coherence using a pseudo-inverse method produces a high number of false positives when it is applied to a large number of channels. To address this challenge, we developed a new method that randomly aggregated channels into a smaller number of effective channels on which the calculation of partial coherence was based. We obtained a 'consensus' partial coherence (cPCOH) by repeating this approach for several random aggregations of channels (permutations) and only accepting those activations in time and frequency with a high enough consensus. Using model data we show that the cPCOH method effectively filters out the effect of shared inputs and performs substantially better than the pseudo-inverse. We successfully applied the cPCOH procedure to human stereotactic EEG data and demonstrated three key advantages of this method relative to alternative procedures. First, it reduces the number of false positives relative to the pseudo-inverse method. Second, it allows for titration of the amount of false positives relative to the false negatives by adjusting the consensus threshold, thus allowing the data analyst to prioritize one over the other to meet specific analysis demands. Third, it substantially reduced the number of identified interactions compared to coherence, providing a sparser network of connections from which clear spatial patterns emerged. 
These patterns can serve as a starting point of further analyses that provide insight into network dynamics during cognitive processes. These advantages likely generalize to other modalities in which shared inputs introduce confounds, such as electroencephalography (EEG) and magneto-encephalography (MEG). Copyright © 2018. Published by Elsevier Inc.
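A toy sketch of the consensus idea, assuming a single-frequency cross-spectral matrix and a simplified aggregation/voting scheme (the thresholds, group counts, and simulated data below are arbitrary illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def partial_coherence(S, i, j):
    """Partial coherence of channels i, j at one frequency, conditioning on
    all other channels, via the inverse cross-spectral matrix G = S^-1."""
    G = np.linalg.inv(S)
    return abs(G[i, j]) ** 2 / (G[i, i].real * G[j, j].real)

def consensus_pcoh(S, i, j, n_perm=50, n_groups=8, thresh=0.2, vote=0.8):
    """cPCOH-style consensus: channels other than i, j are randomly aggregated
    into n_groups effective channels; the interaction is accepted only if the
    partial coherence exceeds `thresh` in at least `vote` of the permutations."""
    others = [k for k in range(S.shape[0]) if k not in (i, j)]
    hits = 0
    for _ in range(n_perm):
        groups = np.array_split(rng.permutation(others), n_groups)
        # Aggregation matrix: rows = [channel i, channel j, group means of the rest].
        A = np.zeros((2 + n_groups, S.shape[0]))
        A[0, i], A[1, j] = 1.0, 1.0
        for g, idx in enumerate(groups):
            A[2 + g, idx] = 1.0 / len(idx)
        S_small = A @ S @ A.conj().T
        hits += partial_coherence(S_small, 0, 1) > thresh
    return hits / n_perm >= vote

# Simulated Fourier coefficients over 400 trials: every channel receives a
# common drive (the shared-input confound); channels 0 and 1 additionally
# share a private drive (a genuine direct interaction).
n_ch, n_trials = 12, 400
common = rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)
private = rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)
Z = np.empty((n_ch, n_trials), complex)
for k in range(n_ch):
    Z[k] = common + 0.5 * (rng.standard_normal(n_trials)
                           + 1j * rng.standard_normal(n_trials))
Z[0] += private
Z[1] += private
S = Z @ Z.conj().T / n_trials  # sample cross-spectral matrix at one frequency

# Direct coupling survives the conditioning; shared-input coupling does not.
print(consensus_pcoh(S, 0, 1), consensus_pcoh(S, 2, 3))
```

Raising `vote` trades false positives for false negatives, which is the titration property described in the abstract.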

  4. Pseudo color ghost coding imaging with pseudo thermal light

    NASA Astrophysics Data System (ADS)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or the spatial filter.
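The underlying ghost-imaging correlation (for a single wavelength channel) can be sketched as follows; the object, pattern count, and source statistics are illustrative assumptions, with uniform random speckle standing in for the pseudo-thermal source:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy object transmission function (a bright square on a dark background).
obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0

# Speckle patterns play the role of the pseudo-thermal source; in the scheme
# above one such correlation would be computed per wavelength channel.
n_patterns = 20000
patterns = rng.random((n_patterns, 16, 16))
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))  # signal-beam totals

# Second-order correlation <b*I> - <b><I> recovers the object.
g2 = np.tensordot(bucket, patterns, axes=(0, 0)) / n_patterns \
     - bucket.mean() * patterns.mean(axis=0)
g2 /= g2.max()
# The reconstruction is bright inside the square and near zero outside it.
print(g2[7, 7] > 5 * abs(g2[0, 0]))
```

Fusing several such single-wavelength reconstructions into RGB channels is what turns the correlations into a pseudo color image.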

  5. Pseudo-differential CMOS analog front-end circuit for wide-bandwidth optical probe current sensor

    NASA Astrophysics Data System (ADS)

    Uekura, Takaharu; Oyanagi, Kousuke; Sonehara, Makoto; Sato, Toshiro; Miyaji, Kousuke

    2018-04-01

In this paper, we present a pseudo-differential analog front-end (AFE) circuit for a novel optical probe current sensor (OPCS) aimed at high-frequency power electronics. It employs a regulated cascode transimpedance amplifier (RGC-TIA) to achieve a high gain and a large bandwidth without using an extremely high performance operational amplifier. The AFE circuit is designed in a 0.18 µm standard CMOS technology, achieving a high transimpedance gain of 120 dBΩ and a high cutoff frequency of 16 MHz. The measured slew rate is 70 V/µs and the input-referred current noise is 1.02 pA/√Hz. The magnetic resolution and bandwidth of the OPCS are estimated to be 1.29 mTrms and 16 MHz, respectively; the bandwidth is higher than that of previously reported Hall effect current sensors.
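A quick back-of-envelope check of the reported figures, assuming white input noise integrated over an ideal 16 MHz brick-wall bandwidth (a simplification; a first-order roll-off would give a slightly larger noise bandwidth):

```python
import math

gain_db_ohm = 120.0            # reported transimpedance gain, dB-ohm
bandwidth_hz = 16e6            # reported cutoff frequency
i_noise = 1.02e-12             # input-referred current noise, A/sqrt(Hz)

gain_ohm = 10 ** (gain_db_ohm / 20)            # 120 dB-ohm -> 1 Mohm
i_rms = i_noise * math.sqrt(bandwidth_hz)      # noise integrated over bandwidth
v_rms_out = i_rms * gain_ohm                   # equivalent output-referred noise

print(f"gain = {gain_ohm / 1e6:.0f} MOhm, "
      f"input noise = {i_rms * 1e9:.2f} nArms, "
      f"output noise = {v_rms_out * 1e3:.2f} mVrms")
```

So the quoted gain corresponds to 1 MΩ, and the integrated input noise is about 4.1 nArms, which is the figure the 1.29 mTrms magnetic resolution would be derived from via the probe's (unstated) A/T sensitivity.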

  6. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.
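One way to obtain such a Maxwell-polynomial collocation grid is the discretized Stieltjes procedure sketched below, assuming the weight w(x) = x² exp(-x²) on [0, ∞), truncated numerically at x = 10 where the weight is negligible; the collocation nodes are the eigenvalues of the associated Jacobi matrix:

```python
import numpy as np

# Fine Gauss-Legendre quadrature on [0, 10] used to evaluate inner products
# against the Maxwell weight w(x) = x^2 exp(-x^2).
x, wq = np.polynomial.legendre.leggauss(500)
x = 5.0 * (x + 1.0)                 # map [-1, 1] -> [0, 10]
wq = 5.0 * wq * x**2 * np.exp(-x**2)

def maxwell_nodes(n):
    """Collocation nodes = zeros of the degree-n Maxwell polynomial, obtained
    via the Stieltjes three-term recurrence and its Jacobi matrix."""
    a, b = np.zeros(n), np.zeros(n)
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    nrm_prev = 1.0
    for k in range(n):
        nrm = np.sum(wq * p * p)
        a[k] = np.sum(wq * x * p * p) / nrm
        if k > 0:
            b[k] = nrm / nrm_prev
        p_prev, p, nrm_prev = p, (x - a[k]) * p - b[k] * p_prev, nrm
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    return np.sort(np.linalg.eigvalsh(J))

nodes = maxwell_nodes(8)
print(nodes)   # 8 positive speed-grid points, clustered where w(x) is large
```

The nodes all lie strictly inside the support of the weight, so the speed grid never places a point at v = 0, one of the practical attractions of this discretization.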

  7. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
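A minimal sketch of MultSE for the one-sample case, following the sqrt(V/n) definition based on sums of squared dissimilarities, computed here with Bray-Curtis dissimilarities on an invented abundance matrix (the double-resampling uncertainty step is omitted):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)

def mult_se(data, metric="braycurtis"):
    """Pseudo multivariate standard error (MultSE): sqrt(V/n), where the
    pseudo variance V is based on sums of squared inter-sample
    dissimilarities in the chosen dissimilarity space."""
    n = data.shape[0]
    d2 = pdist(data, metric=metric) ** 2
    v = d2.sum() / (n * (n - 1))     # pseudo variance about the centroid
    return np.sqrt(v / n)

# Toy species-abundance matrix: 30 sampling units x 12 "species".
counts = rng.poisson(5.0, size=(30, 12))
# MultSE shrinks as more sampling units are included, which is how it is
# used to judge sample-size adequacy.
print(mult_se(counts[:10]), mult_se(counts[:30]))
```

Plotting MultSE against n and looking for the plateau is the sample-size diagnostic the paper proposes.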

  8. An ultra-weak sector, the strong CP problem and the pseudo-Goldstone dilaton

    DOE PAGES

    Allison, Kyle; Hill, Christopher T.; Ross, Graham G.

    2014-12-29

In the context of a Coleman–Weinberg mechanism for the Higgs boson mass, we address the strong CP problem. We show that a DFSZ-like invisible axion model with a gauge-singlet complex scalar field S, whose couplings to the Standard Model are naturally ultra-weak, can solve the strong CP problem and simultaneously generate acceptable electroweak symmetry breaking. The ultra-weak couplings of the singlet S are associated with underlying approximate shift symmetries that act as custodial symmetries and maintain technical naturalness. The model also contains a very light pseudo-Goldstone dilaton that is consistent with cosmological Polonyi bounds, and the axion can be the dark matter of the universe. As a result, we further outline how a SUSY version of this model, which may be required in the context of Grand Unification, can avoid introducing a hierarchy problem.

  9. An ultra-weak sector, the strong CP problem and the pseudo-Goldstone dilaton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allison, Kyle; Hill, Christopher T.; Ross, Graham G.

In the context of a Coleman–Weinberg mechanism for the Higgs boson mass, we address the strong CP problem. We show that a DFSZ-like invisible axion model with a gauge-singlet complex scalar field S, whose couplings to the Standard Model are naturally ultra-weak, can solve the strong CP problem and simultaneously generate acceptable electroweak symmetry breaking. The ultra-weak couplings of the singlet S are associated with underlying approximate shift symmetries that act as custodial symmetries and maintain technical naturalness. The model also contains a very light pseudo-Goldstone dilaton that is consistent with cosmological Polonyi bounds, and the axion can be the dark matter of the universe. As a result, we further outline how a SUSY version of this model, which may be required in the context of Grand Unification, can avoid introducing a hierarchy problem.

  10. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
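The gist of the approach, sketched on a toy one-dimensional surrogate: a Gaussian position error is propagated through a nonlinear response using a handful of Gauss-Hermite collocation nodes instead of brute-force Monte Carlo. The response function below is an invented stand-in, not (KR-)root-MUSIC itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the DOA estimator: the estimate's error (degrees) as a
# nonlinear function of a single element-position perturbation delta,
# with delta ~ N(0, sigma^2).
sigma = 0.05
f = lambda delta: np.degrees(
    np.arcsin(np.sin(np.radians(30.0)) / (1.0 + delta))) - 30.0

# Stochastic collocation: 9 Gauss-Hermite nodes replace ~1e5 Monte Carlo
# draws, which is the source of the reported order-of-magnitude speedups.
nodes, weights = np.polynomial.hermite_e.hermegauss(9)  # probabilists' Hermite
vals = f(sigma * nodes)
mean_sc = np.sum(weights * vals) / np.sqrt(2 * np.pi)
var_sc = np.sum(weights * vals**2) / np.sqrt(2 * np.pi) - mean_sc**2

# Brute-force Monte Carlo reference.
samples = f(sigma * rng.standard_normal(100_000))
print(mean_sc, samples.mean())   # the two means agree closely
```

In the paper the same idea is applied in the gPC framework to the full array geometry, so the whole error density of the DOA estimate, not just its first two moments, is recovered from the collocation evaluations.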

  11. Methods for validating the presence of and characterizing proteins deposited onto an array

    DOEpatents

    Schabacker, Daniel S.

    2010-09-21

    A method of determining if proteins have been transferred from liquid-phase protein fractions to an array comprising staining the array with a total protein stain and imaging the array, optionally comparing the staining with a standard curve generated by staining known amounts of a known protein on the same or a similar array; a method of characterizing proteins transferred from liquid-phase protein fractions to an array including staining the array with a post-translational modification-specific (PTM-specific) stain and imaging the array and, optionally, after staining the array with a PTM-specific stain and imaging the array, washing the array, re-staining the array with a total protein stain, imaging the array, and comparing the imaging with the PTM-specific stain with the imaging with the total protein stain; stained arrays; and images of stained arrays.

  12. Disorder-induced localization of excitability in an array of coupled lasers

    NASA Astrophysics Data System (ADS)

    Lamperti, M.; Perego, A. M.

    2017-10-01

    We report on the localization of excitability induced by disorder in an array of coupled semiconductor lasers with a saturable absorber. Through numerical simulations we show that the exponential localization of excitable waves occurs if a certain critical amount of randomness is present in the coupling coefficients among the lasers. The results presented in this Rapid Communication demonstrate that disorder can induce localization in lattices of excitable nonlinear oscillators, and can be of interest in the study of photonics-based random networks, neuromorphic systems, and, by analogy, in biology, in particular, in the investigation of the collective dynamics of neuronal cell populations.

  13. Phase transition in nonuniform Josephson arrays: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lozovik, Yu. E.; Pomirchy, L. M.

    1994-01-01

A disordered 2D system with Josephson interactions is considered. The disordered XY-model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants Jij are equal to J with probability p or zero (the bond percolation problem); (2) coupling constants Jij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function, and helicity modulus is analyzed. The phase diagram of the diluted system in the Tc-p plane is obtained.
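A compact Metropolis sketch of disorder type (1), the randomly diluted 2D XY model; the lattice size, temperature, and dilution below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(11)
L, T, p = 16, 0.5, 0.8               # lattice size, temperature, bond dilution

theta = rng.uniform(0, 2 * np.pi, (L, L))    # random initial phases
# Randomly diluted couplings: J_ij = J (=1) with probability p, else 0.
Jx = (rng.random((L, L)) < p).astype(float)  # bonds to the right neighbour
Jy = (rng.random((L, L)) < p).astype(float)  # bonds to the lower neighbour

def energy(th):
    """Total XY energy -sum_ij J_ij cos(th_i - th_j) with periodic boundaries."""
    return -(Jx * np.cos(th - np.roll(th, -1, 1))).sum() \
           - (Jy * np.cos(th - np.roll(th, -1, 0))).sum()

def sweep(th):
    """One Metropolis sweep: L*L single-spin update attempts."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)

        def e_local(a):
            # Energy of the four bonds touching site (i, j) for angle a.
            return -(Jx[i, j] * np.cos(a - th[i, (j + 1) % L])
                     + Jx[i, (j - 1) % L] * np.cos(a - th[i, (j - 1) % L])
                     + Jy[i, j] * np.cos(a - th[(i + 1) % L, j])
                     + Jy[(i - 1) % L, j] * np.cos(a - th[(i - 1) % L, j]))

        new = th[i, j] + rng.uniform(-0.5, 0.5)
        if rng.random() < np.exp(-(e_local(new) - e_local(th[i, j])) / T):
            th[i, j] = new

e0 = energy(theta)
for _ in range(200):
    sweep(theta)
print(energy(theta) < e0)   # equilibration lowers the energy at low T
```

Measuring the helicity modulus and specific heat on such equilibrated configurations, over a grid of (T, p) values, is what yields the Tc-p phase diagram discussed above.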

  14. Magnesium supplementation, metabolic and inflammatory markers, and global genomic and proteomic profiling: a randomized, double-blind, controlled, crossover trial in overweight individuals.

    PubMed

    Chacko, Sara A; Sul, James; Song, Yiqing; Li, Xinmin; LeBlanc, James; You, Yuko; Butch, Anthony; Liu, Simin

    2011-02-01

Dietary magnesium intake has been favorably associated with reduced risk of metabolic outcomes in observational studies; however, few randomized trials have introduced a systems-biology approach to explore molecular mechanisms of pleiotropic metabolic actions of magnesium supplementation. We examined the effects of oral magnesium supplementation on metabolic biomarkers and global genomic and proteomic profiling in overweight individuals. We undertook this randomized, crossover, pilot trial in 14 healthy, overweight volunteers [body mass index (in kg/m2) ≥25] who were randomly assigned to receive magnesium citrate (500 mg elemental Mg/d) or a placebo for 4 wk with a 1-mo washout period. Fasting blood and urine specimens were collected according to standardized protocols. Biochemical assays were conducted on blood specimens. RNA was extracted and subsequently hybridized with the Human Gene ST 1.0 array (Affymetrix, Santa Clara, CA). Urine proteomic profiling was analyzed with the CM10 ProteinChip array (Bio-Rad Laboratories, Hercules, CA). We observed that magnesium treatment significantly decreased fasting C-peptide concentrations (change: -0.4 ng/mL after magnesium treatment compared with +0.05 ng/mL after placebo treatment; P = 0.004) and appeared to decrease fasting insulin concentrations (change: -2.2 μU/mL after magnesium treatment compared with 0.0 μU/mL after placebo treatment; P = 0.25). No consistent patterns were observed across inflammatory biomarkers. Gene expression profiling revealed up-regulation of 24 genes and down-regulation of 36 genes including genes related to metabolic and inflammatory pathways such as C1q and tumor necrosis factor-related protein 9 (C1QTNF9) and pro-platelet basic protein (PPBP). Urine proteomic profiling showed significant differences in the expression amounts of several peptides and proteins after treatment. 
Magnesium supplementation for 4 wk in overweight individuals led to distinct changes in gene expression and proteomic profiling consistent with favorable effects on several metabolic pathways. This trial was registered at clinicaltrials.gov as NCT00737815.

  15. Automated pupil remapping with binary optics

    DOEpatents

    Neal, Daniel R.; Mansell, Justin

    1999-01-01

    Methods and apparatuses for pupil remapping employing non-standard lenslet shapes in arrays; divergence of lenslet focal spots from on-axis arrangements; use of lenslet arrays to resize two-dimensional inputs to the array; and use of lenslet arrays to map an aperture shape to a different detector shape. Applications include wavefront sensing, astronomical applications, optical interconnects, keylocks, and other binary optics and diffractive optics applications.

  16. Construct Validation of the "Supports Intensity Scale-Children" and "Adult" Versions: An Application of a Pseudo Multitrait-Multimethod Approach

    ERIC Educational Resources Information Center

    Seo, Hyojeong; Shogren, Karrie A.; Little, Todd D.; Thompson, James R.; Wehmeyer, Michael L.

    2016-01-01

    This study examined the convergent validity of the "Supports Intensity Scale-Adult Version" (SIS-A; Thompson et al., 2015a) and "Supports Intensity Scale-Children's Version" (SIS-C; Thompson et al., 2016a). Data from SISOnline (n = 129,864) for the SIS-A and from the SIS-C standardization sample (n = 4,015) were used for…

  17. Construct Validation of the "Supports Intensity Scale--Children and Adult Versions": An Application of a Pseudo Multitrait-Multimethod Approach

    ERIC Educational Resources Information Center

    Seo, Hyojeong; Shogren, Karrie A.; Little, Todd D.; Thompson, James R.; Wehmeyer, Michael L.

    2016-01-01

    This study examined the convergent validity of the "Supports Intensity Scale-Adult Version" (SIS-A; Thompson et al., 2015a) and "Supports Intensity Scale-Children's Version" (SIS-C; Thompson et al., 2016a). Data from SISOnline (n = 129,864) for the SIS-A and from the SIS-C standardization sample (n = 4,015) were used for…

  18. Semicustom integrated circuits and the standard transistor array radix (STAR)

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1977-01-01

    The development, application, pros and cons of the semicustom and custom approach to the integration of circuits are described. Improvements in terms of cost, reliability, secrecy, power, and size reduction are examined. Also presented is the standard transistor array radix, a semicustom approach to digital integrated circuits that offers the advantages of both custom and semicustom approaches to integration.

  19. iPhos-PseEvo: Identifying Human Phosphorylated Proteins by Incorporating Evolutionary Information into General PseAAC via Grey System Theory.

    PubMed

    Qiu, Wang-Ren; Sun, Bi-Qian; Xiao, Xuan; Xu, Dong; Chou, Kuo-Chen

    2017-05-01

Protein phosphorylation plays a critical role in the human body by altering a protein's structural conformation, causing it to become activated or deactivated, or modifying its function. Given an uncharacterized protein sequence, can we predict whether it may be phosphorylated? This is no doubt a very meaningful problem for both basic research and drug development. Unfortunately, to the best of our knowledge, no high-throughput bioinformatics tool has so far been developed to address this basic but important problem, owing to its extreme complexity and the lack of sufficient training data. Here we propose a predictor called iPhos-PseEvo by (1) incorporating protein sequence evolutionary information into the general pseudo amino acid composition (PseAAC) via grey system theory, (2) balancing out the skewed training datasets by the asymmetric bootstrap approach, and (3) constructing an ensemble predictor by fusing an array of individual random forest classifiers through a voting system. Rigorous jackknife tests indicate that very promising success rates have been achieved by iPhos-PseEvo even for such a difficult problem. A user-friendly web-server for iPhos-PseEvo has been established at http://www.jci-bioinfo.cn/iPhos-PseEvo, by which users can easily obtain their desired results without needing to go through the complicated mathematical equations involved. It has not escaped our notice that the formulation and approach presented here can be used to analyze many other problems in protein science as well. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
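    The ensemble step in the abstract above (fusing an array of individual classifiers through a voting system) reduces to a simple majority vote over per-classifier labels. A minimal sketch, in which the three classifiers and their label outputs are hypothetical placeholders rather than the iPhos-PseEvo models:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the labels predicted for one sample by a simple majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifier_outputs):
    """classifier_outputs: one label list per individual classifier."""
    return [majority_vote(sample) for sample in zip(*classifier_outputs)]

# Three hypothetical classifiers voting on four samples (1 = phosphorylated).
votes = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
]
print(ensemble_predict(votes))  # [1, 0, 1, 1]
```

With an odd number of voters there are no ties; real ensembles often weight votes by classifier confidence instead.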

  20. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

Pseudo-second-order kinetic expressions of Ho, Sobkowski and Czerwinski, Blanchard et al., and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho shared similar ideas on the pseudo-second-order model but with different assumptions. The best fit of the experimental data to Ho's pseudo-second-order expression by both linear and non-linear regression showed that Ho's model was a better kinetic expression than the other pseudo-second-order kinetic expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from Ho's pseudo-second-order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best-fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
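    The comparison above can be illustrated with Ho's pseudo-second-order rate law, dq/dt = k(qe - q)^2, whose integrated form is q(t) = k*qe^2*t / (1 + k*qe*t) and whose common linearized form is t/q = 1/(k*qe^2) + t/qe. A sketch, assuming hypothetical values of qe and k and noise-free synthetic data:

```python
import numpy as np

def ho_pso(t, qe, k):
    """Ho's pseudo-second-order uptake: q(t) = k*qe^2*t / (1 + k*qe*t)."""
    return (k * qe**2 * t) / (1.0 + k * qe * t)

# Noise-free synthetic kinetic data from hypothetical parameters.
qe_true, k_true = 50.0, 0.002          # mg/g and g/(mg min), illustrative
t = np.array([5.0, 10, 20, 40, 60, 90, 120])
q = ho_pso(t, qe_true, k_true)

# Linear form t/q = 1/(k*qe^2) + t/qe: slope = 1/qe, intercept = 1/(k*qe^2).
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(round(qe_fit, 3), round(k_fit, 5))  # recovers 50.0 and 0.002
```

With noisy data the linearization distorts the error structure (the paper's point), which is why direct non-linear least squares on q(t) recovers the parameters more faithfully.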

  1. Age bimodality in the central region of pseudo-bulges in S0 galaxies

    NASA Astrophysics Data System (ADS)

    Mishra, Preetish K.; Barway, Sudhanshu; Wadadekar, Yogesh

    2017-11-01

We present evidence for a bimodal stellar age distribution of pseudo-bulges of S0 galaxies as probed by the Dn(4000) index. We do not observe any bimodality in the age distribution for pseudo-bulges in spiral galaxies. Our sample is flux limited and contains 2067 S0 and 2630 spiral galaxies drawn from the Sloan Digital Sky Survey. We identify pseudo-bulges in S0 and spiral galaxies based on the position of the bulge on the Kormendy diagram and its central velocity dispersion. Dividing the pseudo-bulges of S0 galaxies into those containing old and young stellar populations, we study the connection between global star formation and pseudo-bulge age on the u - r colour-mass diagram. We find that most old pseudo-bulges are hosted by passive galaxies, while the majority of young bulges are hosted by galaxies that are star forming. Dividing our sample of S0 galaxies into early-type S0s and S0/a galaxies, we find that old pseudo-bulges are mainly hosted by early-type S0 galaxies, while most of the pseudo-bulges in S0/a galaxies are young. We speculate that morphology plays a strong role in quenching star formation in the discs of these S0 galaxies, which stops the growth of pseudo-bulges, giving rise to old pseudo-bulges and the observed age bimodality.
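    The Dn(4000) index used above to probe pseudo-bulge ages is, in the narrow definition of Balogh et al. (1999), the ratio of the mean flux density F_nu in 4000-4100 A to that in 3850-3950 A; older stellar populations show a stronger break and hence a larger Dn(4000). A sketch on a toy step spectrum (the flux values are illustrative only):

```python
import numpy as np

def dn4000(wavelength, f_nu):
    """Narrow 4000-A break index (Balogh et al. 1999): mean F_nu in
    4000-4100 A divided by mean F_nu in 3850-3950 A."""
    red = (wavelength >= 4000) & (wavelength <= 4100)
    blue = (wavelength >= 3850) & (wavelength <= 3950)
    return f_nu[red].mean() / f_nu[blue].mean()

# Toy spectrum with a sharp break at 4000 A (hypothetical values).
wl = np.linspace(3800, 4200, 401)
f = np.where(wl < 4000, 1.0, 1.6)
print(round(dn4000(wl, f), 2))  # 1.6
```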

  2. Construction of the mathematical concept of pseudo thinking students

    NASA Astrophysics Data System (ADS)

    Anggraini, D.; Kusmayadi, T. A.; Pramudya, I.

    2018-05-01

The thinking process begins with the acceptance of information, followed by information processing and retrieval from memory, with structural changes that involve concepts or knowledge. A concept or item of knowledge is constructed individually by each learner. While constructing a mathematical concept, students may experience pseudo thinking: a thinking process that produces an answer to a problem, or a construction of a concept, that is not true. Pseudo thinking can be classified into two forms: true pseudo and false pseudo. Pseudo thinking in students' construction of mathematical concepts should be identified promptly, because the error will affect subsequent construction of mathematical concepts, and correcting the error requires knowledge of its source. Therefore, this article discusses the thinking process involved in constructing mathematical concepts among students who experience pseudo thinking.

  3. Disturbance characteristics of half-selected cells in a cross-point resistive switching memory array

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Li, Haitong; Chen, Hong-Yu; Chen, Bing; Liu, Rui; Huang, Peng; Zhang, Feifei; Jiang, Zizhen; Ye, Hongfei; Gao, Bin; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng; Wong, H.-S. Philip; Yu, Shimeng

    2016-05-01

    Disturbance characteristics of cross-point resistive random access memory (RRAM) arrays are comprehensively studied in this paper. An analytical model is developed to quantify the number of pulses (#Pulse) the cell can bear before disturbance occurs under various sub-switching voltage stresses based on physical understanding. An evaluation methodology is proposed to assess the disturb behavior of half-selected (HS) cells in cross-point RRAM arrays by combining the analytical model and SPICE simulation. The characteristics of cross-point RRAM arrays such as energy consumption, reliable operating cycles and total error bits are evaluated by the methodology. A possible solution to mitigate disturbance is proposed.
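    The half-selected (HS) cells studied above arise from the bias scheme of a cross-point array. Under a common V/2 scheme (assumed here for illustration; the paper's exact scheme may differ), the selected cell sees the full write voltage V, every other cell sharing its word line or bit line sees V/2 and accumulates disturb stress, and the remaining cells see roughly 0 V. A sketch of the cell counts:

```python
def half_selected_counts(n):
    """For one selected cell in an n x n cross-point array under a V/2 bias
    scheme: 1 cell at V, 2*(n-1) half-selected cells at V/2, rest near 0 V."""
    selected = 1
    half_selected = 2 * (n - 1)
    unselected = n * n - selected - half_selected
    return selected, half_selected, unselected

print(half_selected_counts(1024))  # (1, 2046, 1046529)
```

The quadratic growth of the array against the linear growth of the HS population is why disturb stress on HS cells, rather than their count, dominates large-array reliability.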

  4. System and method for cognitive processing for data fusion

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor)

    2012-01-01

    A system and method for cognitive processing of sensor data. A processor array receiving analog sensor data and having programmable interconnects, multiplication weights, and filters provides for adaptive learning in real-time. A static random access memory contains the programmable data for the processor array and the stored data is modified to provide for adaptive learning.

  5. SSU rDNA divergence in planktonic foraminifera: molecular taxonomy and biogeographic implications.

    PubMed

    André, Aurore; Quillévéré, Frédéric; Morard, Raphaël; Ujiié, Yurika; Escarguel, Gilles; de Vargas, Colomban; de Garidel-Thoron, Thibault; Douady, Christophe J

    2014-01-01

    The use of planktonic foraminifera in paleoceanography requires taxonomic consistency and precise assessment of the species biogeography. Yet, ribosomal small subunit (SSUr) DNA analyses have revealed that most of the modern morpho-species of planktonic foraminifera are composed of a complex of several distinct genetic types that may correspond to cryptic or pseudo-cryptic species. These genetic types are usually delimitated using partial sequences located at the 3'end of the SSUrDNA, but typically based on empirical delimitation. Here, we first use patristic genetic distances calculated within and among genetic types of the most common morpho-species to show that intra-type and inter-type genetic distances within morpho-species may significantly overlap, suggesting that genetic types have been sometimes inconsistently defined. We further apply two quantitative and independent methods, ABGD (Automatic Barcode Gap Detection) and GMYC (General Mixed Yule Coalescent) to a dataset of published and newly obtained partial SSU rDNA for a more objective assessment of the species status of these genetic types. Results of these complementary approaches are highly congruent and lead to a molecular taxonomy that ranks 49 genetic types of planktonic foraminifera as genuine (pseudo)cryptic species. Our results advocate for a standardized sequencing procedure allowing homogenous delimitations of (pseudo)cryptic species. On the ground of this revised taxonomic framework, we finally provide an integrative taxonomy synthesizing geographic, ecological and morphological differentiations that can occur among the genuine (pseudo)cryptic species. Due to molecular, environmental or morphological data scarcities, many aspects of our proposed integrative taxonomy are not yet fully resolved. On the other hand, our study opens up the potential for a correct interpretation of environmental sequence datasets.

  6. SSU rDNA Divergence in Planktonic Foraminifera: Molecular Taxonomy and Biogeographic Implications

    PubMed Central

    André, Aurore; Quillévéré, Frédéric; Morard, Raphaël; Ujiié, Yurika; Escarguel, Gilles; de Vargas, Colomban; de Garidel-Thoron, Thibault; Douady, Christophe J.

    2014-01-01

    The use of planktonic foraminifera in paleoceanography requires taxonomic consistency and precise assessment of the species biogeography. Yet, ribosomal small subunit (SSUr) DNA analyses have revealed that most of the modern morpho-species of planktonic foraminifera are composed of a complex of several distinct genetic types that may correspond to cryptic or pseudo-cryptic species. These genetic types are usually delimitated using partial sequences located at the 3′end of the SSUrDNA, but typically based on empirical delimitation. Here, we first use patristic genetic distances calculated within and among genetic types of the most common morpho-species to show that intra-type and inter-type genetic distances within morpho-species may significantly overlap, suggesting that genetic types have been sometimes inconsistently defined. We further apply two quantitative and independent methods, ABGD (Automatic Barcode Gap Detection) and GMYC (General Mixed Yule Coalescent) to a dataset of published and newly obtained partial SSU rDNA for a more objective assessment of the species status of these genetic types. Results of these complementary approaches are highly congruent and lead to a molecular taxonomy that ranks 49 genetic types of planktonic foraminifera as genuine (pseudo)cryptic species. Our results advocate for a standardized sequencing procedure allowing homogenous delimitations of (pseudo)cryptic species. On the ground of this revised taxonomic framework, we finally provide an integrative taxonomy synthesizing geographic, ecological and morphological differentiations that can occur among the genuine (pseudo)cryptic species. Due to molecular, environmental or morphological data scarcities, many aspects of our proposed integrative taxonomy are not yet fully resolved. On the other hand, our study opens up the potential for a correct interpretation of environmental sequence datasets. PMID:25119900

  7. Characterization of silicon-on-insulator wafers

    NASA Astrophysics Data System (ADS)

    Park, Ki Hoon

Silicon-on-insulator (SOI) is attracting growing interest as it is used for advanced complementary metal-oxide-semiconductor (CMOS) devices and as a base substrate for novel devices intended to overcome present obstacles in bulk Si scaling. Furthermore, SOI fabrication technology has improved greatly in recent years, and industry produces high-quality wafers with high yield. This dissertation investigated SOI material properties with simple yet accurate methods. The electrical properties of as-grown wafers, such as electron and hole mobilities, buried oxide (BOX) charges, interface trap densities, and carrier lifetimes, were mainly studied. For this, various electrical measurement techniques were utilized, such as pseudo-metal-oxide-semiconductor field-effect-transistor (pseudo-MOSFET) static current-voltage (I-V) and transient drain current (I-t), Hall effect, and MOS capacitance-voltage/capacitance-time (C-V/C-t). The electrical characterization mainly relies on the pseudo-MOSFET method, which takes advantage of the intrinsic SOI structure. From the static current-voltage and pulsed measurements, carrier mobilities, lifetimes, and interface trap densities were extracted. During the course of this study, a pseudo-MOSFET drain current hysteresis with respect to the gate voltage sweeping direction was discovered, and its cause was revealed through systematic experiments and simulations. In addition to characterization of normal SOI, strain relaxation of strained silicon-on-insulator (sSOI) was also measured. As sSOI takes advantage of wafer bonding in its fabrication process, the tenacity of bonding between the sSOI and the BOX layer was investigated by means of thermal treatment and high-dose energetic gamma-ray irradiation. It was found that the strain did not relax even under processes more severe than standard CMOS processes, such as anneals at temperatures as high as 1350 degrees Celsius.

  8. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction approaches, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. At the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. At the second stage, they were validated against the latest version of the empirical GMPEs, i.e., NGA-West2. The validation results show that the simulations produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  9. Processing grounded-wire TEM signal in time-frequency-pseudo-seismic domain: A new paradigm

    NASA Astrophysics Data System (ADS)

    Khan, M. Y.; Xue, G. Q.; Chen, W.; Huasen, Z.

    2017-12-01

Grounded-wire TEM has received great attention in mineral, hydrocarbon, and hydrogeological investigations over the last several years. Conventionally, TEM soundings have been presented as apparent resistivity curves as a function of time. With the development of sophisticated computational algorithms, it became possible to extract more realistic geoelectric information by applying inversion programs to 1-D and 3-D problems. Here, we analyze grounded-wire TEM data by carrying out analysis in the time, frequency, and pseudo-seismic domains, supported by borehole information. First, H, K, A, and Q type geoelectric models are processed using a proven inversion program (1-D Occam inversion). Second, time-to-frequency transformation is conducted from TEM ρa(t) curves to magnetotelluric MT ρa(f) curves for the same models based on all-time apparent resistivity curves. Third, the 1-D Bostick algorithm is applied to the transformed resistivity. Finally, the diffusive EM field is transformed into a propagating wave field obeying the standard wave equation using a wavelet transformation technique, and a pseudo-seismic section is constructed. The transformed seismic-like wave indicates that reflection and refraction phenomena appear when the EM wave field interacts with geoelectric interfaces at different depth intervals due to contrasts in resistivity. The resolution of the transformed TEM data is significantly improved in comparison to apparent resistivity plots. A case study illustrates the successful hydrogeophysical application of the proposed approach in recovering a water-filled mined-out area in a coal field located in Ye county, Henan province, China. The results support the introduction of pseudo-seismic imaging technology in the short-offset version of TEM, which can also be a useful aid if integrated with the seismic reflection technique to explore possibilities for high-resolution EM imaging in the future.

  10. The need for preoperative baseline arm measurement to accurately quantify breast cancer-related lymphedema.

    PubMed

    Sun, Fangdi; Skolny, Melissa N; Swaroop, Meyha N; Rawal, Bhupendra; Catalano, Paul J; Brunelle, Cheryl L; Miller, Cynthia L; Taghian, Alphonse G

    2016-06-01

Breast cancer-related lymphedema (BCRL) is a feared outcome of breast cancer treatment, yet the push for early screening is hampered by a lack of standardized quantification. We sought to determine the necessity of a preoperative baseline in accounting for temporal changes of upper extremity volume. 1028 women with unilateral breast cancer were prospectively screened for lymphedema by perometry. Thresholds were defined as relative volume change (RVC) ≥10 % for clinically significant lymphedema and ≥5 % when including subclinical lymphedema. The first postoperative measurement (pseudo-baseline) simulated the case of no baseline. McNemar's test and binomial logistic regression models were used to analyze BCRL misdiagnoses. Preoperatively, 28.3 and 2.9 % of patients had arm asymmetry of ≥5 and ≥10 %, respectively. Without a baseline, 41.6 % of patients were underdiagnosed and 40.1 % overdiagnosed at RVC ≥ 5 %, increasing to 50.0 and 54.8 % at RVC ≥ 10 %. Increased pseudo-baseline asymmetry, increased weight change between baselines, hormonal therapy, dominant use of the contralateral arm, and not receiving axillary lymph node dissection (ALND) were associated with increased risk of underdiagnosis at RVC ≥ 5 %; not receiving regional lymph node radiation was significant at RVC ≥ 10 %. Increased pseudo-baseline asymmetry, not receiving ALND, and dominant use of the ipsilateral arm were associated with overdiagnosis at RVC ≥ 5 %; increased pseudo-baseline asymmetry and not receiving ALND were significant at RVC ≥ 10 %. The use of a postoperative proxy even early after treatment results in poor sensitivity for identifying BCRL. Providers with access to patients before surgery should consider the consequent need for a proper baseline, with the specific strategy tailored by institution.
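    The relative volume change (RVC) metric above is commonly computed, following Ancukiewicz et al. (an assumption here, since the abstract does not spell out the formula), as RVC = (A2*U1)/(U2*A1) - 1, where A and U are the affected and unaffected arm volumes and the indices 1 and 2 denote the baseline and follow-up measurements. A sketch with hypothetical perometry volumes:

```python
def relative_volume_change(a1, u1, a2, u2):
    """RVC = (A2*U1)/(U2*A1) - 1; normalizing by the unaffected arm removes
    body-wide changes (e.g. weight) between the two time points."""
    return (a2 * u1) / (u2 * a1) - 1.0

def classify(rvc):
    """Thresholds used in the abstract: >=10% clinical, >=5% subclinical."""
    if rvc >= 0.10:
        return "clinically significant lymphedema"
    if rvc >= 0.05:
        return "subclinical lymphedema"
    return "no lymphedema"

# Hypothetical volumes in mL: equal arms at baseline; at follow-up the
# affected arm is 8% larger relative to the unaffected arm.
rvc = relative_volume_change(a1=2000, u1=2000, a2=2214, u2=2050)
print(round(rvc, 3), classify(rvc))  # 0.08 subclinical lymphedema
```

Replacing the true baseline with a postoperative pseudo-baseline shifts a1 and u1, which is exactly how the misdiagnoses quantified above arise.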

  11. Development and validation of a GC-C-IRMS method for the confirmation analysis of pseudo-endogenous glucocorticoids in doping control.

    PubMed

    de la Torre, Xavier; Curcio, Davide; Colamonici, Cristiana; Molaioni, Francesco; Cilia, Marta; Botrè, Francesco

    2015-01-01

Glucocorticoids are included in section S9 of the World Anti-Doping Agency (WADA) Prohibited List international standard. Some of them are pseudo-endogenous steroids, like cortisol and cortisone, which present the same chemical structure as endogenously produced steroids. We propose an analytical method based on gas chromatography coupled to isotope ratio mass spectrometry (GC-C-IRMS) which allows discrimination between the endogenous and synthetic origin of the urinary metabolites of the pseudo-endogenous glucocorticoids. A preliminary purification of the target compounds (TC) (i.e., cortisol, tetrahydrocortisone (THE), 5α-tetrahydrocortisone (aTHE), tetrahydrocortisol (THF), and 5α-tetrahydrocortisol (aTHF)) by high-performance liquid chromatography (HPLC) allows collection of extracts with adequate purity for the subsequent analysis by IRMS. A population of 40 urine samples was analyzed for the TCs and for the endogenous reference compounds (ERC: i.e., 11-desoxy-tetrahydrocortisol (THS) or pregnanediol). For each sample, the differences between the delta values of the ERCs and TCs (Δδ values) were calculated, and based on these, decision limits for atypical findings are proposed. The limits are below 3 ‰ units except for cortisol. The fitness for purpose of the method has been confirmed by the analysis of urine samples collected from two patients under treatment with 25 mg of cortisone acetate (p.o.). The samples showed Δδ values higher than 3 for at least 24 h following administration, depending on the TC considered. The method can easily be integrated into existing procedures already used for the HPLC purification and IRMS analysis of pseudo-endogenous steroids with androgenic/anabolic activity. Copyright © 2015 John Wiley & Sons, Ltd.
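    The Δδ decision rule described above compares the carbon isotope delta value of an endogenous reference compound with that of a target compound; IRMS delta values are conventionally reported in per-mil (‰). A sketch with hypothetical δ13C values (the 3-unit limit mirrors the abstract; the specific deltas are illustrative):

```python
def delta_diff(delta_erc, delta_tc):
    """Δδ = δ13C(ERC) - δ13C(TC), both in per-mil."""
    return delta_erc - delta_tc

def atypical(dd, limit=3.0):
    """Flag the sample when Δδ exceeds the decision limit: a synthetic
    (13C-depleted) source pulls the TC delta below the ERC delta."""
    return dd > limit

dd = delta_diff(delta_erc=-21.5, delta_tc=-26.0)
print(dd, atypical(dd))  # 4.5 True
```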

  12. Fabrication of polymer micro-lens array with pneumatically diaphragm-driven drop-on-demand inkjet technology.

    PubMed

    Xie, Dan; Zhang, Honghai; Shu, Xiayun; Xiao, Junfeng

    2012-07-02

This paper reports an effective method to fabricate micro-lens arrays from an ultraviolet-curable polymer, using an original pneumatically diaphragm-driven drop-on-demand inkjet system. An array of plano-convex micro-lenses can be formed on a glass substrate due to surface tension and the hydrophobic effect. The micro-lens arrays have a uniform focusing function and smooth, truly planar surfaces. The fabrication process showed good repeatability as well: fifty micro-lenses randomly selected from a 9 × 9 micro-lens array with an average diameter of 333.28 μm showed 1.1% variation. The focal length, surface roughness, and optical properties of the fabricated micro-lenses were also measured, analyzed, and proved satisfactory. The technique shows great potential for fabricating polymer micro-lens arrays with high flexibility, a simple technological process, and low production cost.

  13. Reduced-Drift Virtual Gyro from an Array of Low-Cost Gyros.

    PubMed

    Vaccaro, Richard J; Zaki, Ahmed S

    2017-02-11

    A Kalman filter approach for combining the outputs of an array of high-drift gyros to obtain a virtual lower-drift gyro has been known in the literature for more than a decade. The success of this approach depends on the correlations of the random drift components of the individual gyros. However, no method of estimating these correlations has appeared in the literature. This paper presents an algorithm for obtaining the statistical model for an array of gyros, including the cross-correlations of the individual random drift components. In order to obtain this model, a new statistic, called the "Allan covariance" between two gyros, is introduced. The gyro array model can be used to obtain the Kalman filter-based (KFB) virtual gyro. Instead, we consider a virtual gyro obtained by taking a linear combination of individual gyro outputs. The gyro array model is used to calculate the optimal coefficients, as well as to derive a formula for the drift of the resulting virtual gyro. The drift formula for the optimal linear combination (OLC) virtual gyro is identical to that previously derived for the KFB virtual gyro. Thus, a Kalman filter is not necessary to obtain a minimum drift virtual gyro. The theoretical results of this paper are demonstrated using simulated as well as experimental data. In experimental results with a 28-gyro array, the OLC virtual gyro has a drift spectral density 40 times smaller than that obtained by taking the average of the gyro signals.
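    The optimal linear combination (OLC) above is the classical minimum-variance unbiased combination: given the covariance matrix C of the individual gyro drift rates (whose off-diagonal terms correspond to the paper's Allan covariances), the weights are w = C^-1 1 / (1^T C^-1 1) and the virtual-gyro drift variance is 1 / (1^T C^-1 1). A sketch with a hypothetical 3-gyro covariance matrix:

```python
import numpy as np

# Hypothetical drift-rate covariance matrix for three gyros ((deg/h)^2);
# the off-diagonal entries play the role of the inter-gyro covariances.
C = np.array([
    [1.0, 0.3, 0.2],
    [0.3, 1.5, 0.4],
    [0.2, 0.4, 2.0],
])

# Minimum-variance unbiased weights (they sum to 1 so the scale is preserved).
ones = np.ones(len(C))
cinv_ones = np.linalg.solve(C, ones)
w = cinv_ones / (ones @ cinv_ones)
var_virtual = 1.0 / (ones @ cinv_ones)

print(np.round(w, 3), round(var_virtual, 3))
```

For this matrix the virtual-gyro variance comes out below the best single gyro's variance of 1.0; with positive correlations the gain is smaller than the 1/N of independent gyros, which is why estimating the covariances matters.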

  14. Peritoneal Mesothelioma with Residential Asbestos Exposure. Report of a Case with Long Survival (Seventeen Years) Analyzed by Cgh-Array.

    PubMed

    Serio, Gabriella; Pezzuto, Federica; Marzullo, Andrea; Scattone, Anna; Cavone, Domenica; Punzi, Alessandra; Fortarezza, Francesco; Gentile, Mattia; Buonadonna, Antonia Lucia; Barbareschi, Mattia; Vimercati, Luigi

    2017-08-22

    Malignant mesothelioma is a rare and aggressive tumor with limited therapeutic options. We report a case of a malignant peritoneal mesothelioma (MPM) epithelioid type, with environmental asbestos exposure, in a 36-year-old man, with a long survival (17 years). The patient received standard treatment which included cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC). Molecular analysis with comparative genomic hybridization (CGH)-array was performed on paraffin-embedded tumoral samples. Multiple chromosomal imbalances were detected. The gains were prevalent. Losses at 1q21, 2q11.1→q13, 8p23.1, 9p12→p11, 9q21.33→q33.1, 9q12→q21.33, and 17p12→p11.2 are observed. Chromosome band 3p21 ( BAP1 ), 9p21 ( CDKN2A ) and 22q12 ( NF2 ) are not affected. Conclusions: the defects observed in this case are uncommon in malignant peritoneal mesothelioma. Some chromosomal aberrations that appear to be random here, might actually be relevant events explaining the response to therapy, the long survival and, finally, may be considered useful prognostic factors in peritoneal malignant mesothelioma (PMM).

  15. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, the reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters of the same and different source loadings. An error ≤1 mm was measured in 57% of the tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed, and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. Step sizes ≤1 cm could not be verified by the IMatriXX, as it could not resolve the peaks in the dose profile. PMID:21897562
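    The dwell-position check above separates a systematic component (the mean error), a random component (the spread about the mean), and a worst case. A sketch with hypothetical measured-minus-planned positional errors in mm (not the study's actual 24 measurements):

```python
import numpy as np

# Hypothetical measured-minus-planned dwell-position errors (mm).
errors = np.array([-0.9, 0.4, -1.2, 0.1, -0.6, 1.1, -1.5, 0.2, -0.8, 0.5])

systematic = errors.mean()       # mean offset = systematic component
random_sd = errors.std(ddof=1)   # sample SD about the mean = random component
worst = np.abs(errors).max()     # largest single positional error

print(round(systematic, 2), round(random_sd, 2), round(worst, 2))
```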

  16. Demonstration of Lasercom and Spatial Tracking with a Silicon Geiger-Mode APD Array

    DTIC Science & Technology

    2016-02-26

…standardized pixel mask as described in the previous paragraph disabling 167 of the 1024 detectors in the array; this gives an absolute maximum rate … number of elements in an array-based detector. In this paper, we present the results of photon-counting communication tests based on an arrayed … semiconductor photon-counting detector. The array also has the ability to sense the spatial distribution of the received light, giving it the potential to act…

  17. Mosquito (Diptera: Culicidae) assemblages associated with Nidularium and Vriesea bromeliads in Serra do Mar, Atlantic Forest, Brazil

    PubMed Central

    2012-01-01

Background The most substantial and best preserved area of Atlantic Forest is within the biogeographical sub-region of Serra do Mar. The topographic complexity of the region creates a diverse array of microclimates, which can affect species distribution and diversity inside the forest. Given that Atlantic Forest includes highly heterogeneous environments, a diverse and medically important Culicidae assemblage, and possible species co-occurrence, we evaluated mosquito assemblages from bromeliad phytotelmata in Serra do Mar (southeastern Brazil). Methods Larvae and pupae were collected monthly from Nidularium and Vriesea bromeliads between July 2008 and June 2009. Collection sites were divided into landscape categories (lowland, hillslope and hilltop) based on elevation and slope. Correlations between bromeliad mosquito assemblage and environmental variables were assessed using multivariate redundancy analysis. Differences in species diversity between bromeliads within each category of elevation were explored using the Renyi diversity index. Univariate binary logistic regression analyses were used to assess species co-occurrence. Results A total of 2,024 mosquitoes belonging to 22 species were collected. Landscape categories (pseudo-F = 1.89, p = 0.04), bromeliad water volume (pseudo-F = 2.99, p = 0.03) and bromeliad fullness (pseudo-F = 4.47, p < 0.01) influenced mosquito assemblage structure. The Renyi diversity index shows that the lowland possesses the highest diversity indices. The presence of An. homunculus was associated with Cx. ocellatus, and the presence of An. cruzii was associated with Cx. neglectus, Cx. inimitabilis fuscatus and Cx. worontzowi. Anopheles cruzii and An. homunculus were taken from the same bromeliad; however, the co-occurrence between those two species was not statistically significant. Conclusions One of the main findings of our study was that differences in species among mosquito assemblages were influenced by landscape characteristics. 
The bromeliad factor that influenced mosquito abundance and assemblage structure was fullness. The findings of the current study raise important questions about the role of An. homunculus in the transmission of Plasmodium in Serra do Mar, southeastern Atlantic Forest. PMID:22340486

  18. Mosquito (Diptera: Culicidae) assemblages associated with Nidularium and Vriesea bromeliads in Serra do Mar, Atlantic Forest, Brazil.

    PubMed

    Marques, Tatiani C; Bourke, Brian P; Laporta, Gabriel Z; Sallum, Maria Anice Mureb

    2012-02-16

The most substantial and best preserved area of Atlantic Forest is within the biogeographical sub-region of Serra do Mar. The topographic complexity of the region creates a diverse array of microclimates, which can affect species distribution and diversity inside the forest. Given that Atlantic Forest includes highly heterogeneous environments, a diverse and medically important Culicidae assemblage, and possible species co-occurrence, we evaluated mosquito assemblages from bromeliad phytotelmata in Serra do Mar (southeastern Brazil). Larvae and pupae were collected monthly from Nidularium and Vriesea bromeliads between July 2008 and June 2009. Collection sites were divided into landscape categories (lowland, hillslope and hilltop) based on elevation and slope. Correlations between bromeliad mosquito assemblage and environmental variables were assessed using multivariate redundancy analysis. Differences in species diversity between bromeliads within each category of elevation were explored using the Renyi diversity index. Univariate binary logistic regression analyses were used to assess species co-occurrence. A total of 2,024 mosquitoes belonging to 22 species were collected. Landscape categories (pseudo-F = 1.89, p = 0.04), bromeliad water volume (pseudo-F = 2.99, p = 0.03) and bromeliad fullness (pseudo-F = 4.47, p < 0.01) influenced mosquito assemblage structure. The Renyi diversity index shows that the lowland possesses the highest diversity indices. The presence of An. homunculus was associated with Cx. ocellatus, and the presence of An. cruzii was associated with Cx. neglectus, Cx. inimitabilis fuscatus and Cx. worontzowi. Anopheles cruzii and An. homunculus were taken from the same bromeliad; however, the co-occurrence between those two species was not statistically significant. One of the main findings of our study was that differences in species among mosquito assemblages were influenced by landscape characteristics. 
The bromeliad factor that influenced mosquito abundance and assemblage structure was fullness. The findings of the current study raise important questions about the role of An. homunculus in the transmission of Plasmodium in Serra do Mar, southeastern Atlantic Forest.
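The Rényi comparison in the abstract above can be made concrete: the Rényi entropy of order q for species proportions p_i is H_q = log(Σ p_i^q)/(1 − q), reducing to Shannon entropy as q → 1, and an assemblage whose whole profile lies above another's is unambiguously more diverse. A minimal sketch with illustrative abundance data (not the study's counts):

```python
import math

def renyi_entropy(counts: list[int], q: float) -> float:
    """Rényi entropy of order q for a vector of species abundances."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    if abs(q - 1.0) < 1e-9:                     # Shannon limit as q -> 1
        return -sum(pi * math.log(pi) for pi in p)
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

# Unevenness is penalized more heavily as q grows:
even   = [10, 10, 10, 10]                       # 4 equally abundant species
skewed = [37, 1, 1, 1]                          # same richness, one dominant
profile_even   = [renyi_entropy(even, q)   for q in (0, 0.5, 1, 2)]
profile_skewed = [renyi_entropy(skewed, q) for q in (0, 0.5, 1, 2)]
```

At q = 0 the index reduces to log species richness, so the two illustrative assemblages tie there; as q grows the profile increasingly penalizes unevenness, which is why comparing whole profiles across landscape categories, as the study does, is more informative than any single index.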

  19. Chemical segregation in the young protostars Barnard 1b-N and S. Evidence of pseudo-disk rotation in Barnard 1b-S

    NASA Astrophysics Data System (ADS)

    Fuente, A.; Gerin, M.; Pety, J.; Commerçon, B.; Agúndez, M.; Cernicharo, J.; Marcelino, N.; Roueff, E.; Lis, D. C.; Wootten, H. A.

    2017-10-01

The extremely young Class 0 object B1b-S and the first hydrostatic core (FHSC) candidate, B1b-N, provide a unique opportunity to study the chemical changes produced in the elusive transition from the prestellar core to the protostellar phase. We present 40″ × 70″ images of Barnard 1b in the 13CO (1-0), C18O (1-0), NH2D (1(1,1)a → 1(0,1)s), and SO (3(2) → 2(1)) lines obtained with the NOEMA interferometer. The observed chemical segregation allows us to unveil the physical structure of this young protostellar system down to scales of 500 au. The two protostellar objects are embedded in an elongated condensation with a velocity gradient of 0.2-0.4 m s⁻¹ au⁻¹ in the east-west direction, reminiscent of an axial collapse. The NH2D data reveal cold and dense pseudo-disks (R ≈ 500-1000 au) around each protostar. Moreover, we observe evidence of pseudo-disk rotation around B1b-S. We do not see any signature of the bipolar outflows associated with B1b-N and B1b-S, which were previously detected in H2CO and CH3OH, in any of the imaged species. The non-detection of SO constrains the SO/CH3OH abundance ratio in the high-velocity gas. Based on observations carried out with the IRAM Northern Extended Millimeter Array (NOEMA). IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). The reduced datacube is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/L3

  20. Improved Pseudo-section Representation for CSAMT Data in Geothermal Exploration

    NASA Astrophysics Data System (ADS)

    Grandis, Hendra; Sumintadireja, Prihadi

    2017-04-01

    Controlled-Source Audio-frequency Magnetotellurics (CSAMT) is a frequency domain sounding technique employing typically a grounded electric dipole as the primary electromagnetic (EM) source to infer the subsurface resistivity distribution. The use of an artificial source provides coherent signals with higher signal-to-noise ratio and overcomes the problems with randomness and fluctuation of the natural EM fields used in MT. However, being an extension of MT, the CSAMT data still uses apparent resistivity and phase for data representation. The finite transmitter-receiver distance in CSAMT leads to a somewhat “distorted” response of the subsurface compared to MT data. We propose a simple technique to present CSAMT data as an apparent resistivity pseudo-section with more meaningful information for qualitative interpretation. Tests with synthetic and field CSAMT data showed that the simple technique is valid only for sounding curves exhibiting a transition from high - low - high resistivity (i.e. H-type) prevailing in data from a geothermal prospect. For quantitative interpretation, we recommend the use of the full-solution of CSAMT modelling since our technique is not valid for more general cases.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alba, Paolo; Alberico, Wanda; Bellwied, Rene

    We calculate ratios of higher-order susceptibilities quantifying fluctuations in the number of net-protons and in the net-electric charge using the Hadron Resonance Gas (HRG) model. We take into account the effect of resonance decays, the kinematic acceptance cuts in rapidity, pseudo-rapidity and transverse momentum used in the experimental analysis, as well as a randomization of the isospin of nucleons in the hadronic phase. By comparing these results to the latest experimental data from the STAR Collaboration, we determine the freeze-out conditions from net-electric charge and net-proton distributions and discuss their consistency.

  2. Possibilities and testing of CPRNG in block cipher mode of operation PM-DC-LM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacek, Petr; Jasek, Roman; Malanik, David

    2016-06-08

This paper discusses the chaotic pseudo-random number generator (CPRNG) used in the block cipher mode of operation called PM-DC-LM, one of the possible sub-versions of the general PM mode. The design of PM-DC-LM itself is not discussed here, only the CPRNG as a part of it, because the design is described in other papers. Possible ways to change or improve the CPRNG are mentioned. The final part is devoted to testing of the CPRNG, and some test data are shown.
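The abstract does not specify the CPRNG construction used inside PM-DC-LM. As a hedged illustration of the general idea, a chaotic pseudo-random generator can be built by iterating a chaotic map and harvesting low-order digits of each iterate; the logistic map and the bit-extraction rule below are assumptions for the sketch, not the PM-DC-LM design:

```python
def logistic_cprng(seed: float, n_bytes: int, r: float = 3.99) -> bytes:
    """Generate pseudo-random bytes by iterating the logistic map
    x -> r*x*(1-x) and keeping low-order digits of each iterate."""
    assert 0.0 < seed < 1.0
    x = seed
    out = bytearray()
    for _ in range(n_bytes):
        # burn several iterations to decorrelate successive outputs
        for _ in range(8):
            x = r * x * (1.0 - x)
        out.append(int(x * 256 ** 2) % 256)  # discard the leading digits
    return bytes(out)

stream = logistic_cprng(0.123456789, 16)
```

The stream is fully determined by the seed (the key material), which is what makes such generators attractive inside cipher modes; the quality of the output is exactly what statistical test batteries, like the testing mentioned in the paper, are meant to assess.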

  3. Testing of motor unit synchronization model for localized muscle fatigue.

    PubMed

    Naik, Ganesh R; Kumar, Dinesh K; Yadav, Vivek; Wheeler, Katherine; Arjunan, Sridhar

    2009-01-01

Spectral compression of the surface electromyogram (sEMG) is associated with the onset of localized muscle fatigue. The spectral compression has been explained by motor unit synchronization theory. According to this theory, motor units are pseudo-randomly excited during muscle contraction, and with the onset of muscle fatigue the recruitment pattern changes such that motor unit firings become more synchronized. While this is widely accepted, there is little experimental proof of the phenomenon. This paper uses source dependence measures developed in research related to independent component analysis (ICA) to test this theory.

  4. Autonomous choices among deterministic evolution-laws as source of uncertainty

    NASA Astrophysics Data System (ADS)

    Trujillo, Leonardo; Meyroneinc, Arnaud; Campos, Kilver; Rendón, Otto; Sigalotti, Leonardo Di G.

    2018-03-01

    We provide evidence of an extreme form of sensitivity to initial conditions in a family of one-dimensional self-ruling dynamical systems. We prove that some hyperchaotic sequences are closed-form expressions of the orbits of these pseudo-random dynamical systems. Each chaotic system in this family exhibits a sensitivity to initial conditions that encompasses the sequence of choices of the evolution rule in some collection of maps. This opens a possibility to extend current theories of complex behaviors on the basis of intrinsic uncertainty in deterministic chaos.

  5. Maskless wafer-level microfabrication of optical penetrating neural arrays out of soda-lime glass: Utah Optrode Array.

    PubMed

    Boutte, Ronald W; Blair, Steve

    2016-12-01

Borrowing from the wafer-level fabrication techniques of the Utah Electrode Array, an optical array capable of delivering light for neural optogenetic studies is presented in this paper: the Utah Optrode Array. Utah Optrode Arrays are micromachined out of sheet soda-lime-silica glass using standard backend processes of the semiconductor and microelectronics packaging industries, such as precision diamond grinding and wet etching. 9 × 9 arrays with 1100 μm × 100 μm optrodes and a 500 μm back-plane are repeatably reproduced on 2 in wafers, 169 arrays at a time. This paper describes the steps and some of the common errors of optrode fabrication.

  6. CMB EB and TB cross-spectrum estimation via pseudospectrum techniques

    NASA Astrophysics Data System (ADS)

    Grain, J.; Tristram, M.; Stompor, R.

    2012-10-01

We discuss methods for estimating the EB and TB spectra of cosmic microwave background anisotropy maps covering a limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach, combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles, leads to the smallest error bars for the EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ˜30%. Moreover, for the experimental configurations considered here, the statistical uncertainties, due to sampling variance and instrumental noise, of the pseudo-spectrum estimates are at most a factor of ˜1.4 for the TT, EE, and TE spectra, and a factor of ˜2 for the BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.

  7. A "caliper" type of controlled-source, frequency-domain, electromagnetic sounding method

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Lin, J.; Zhou, F.; Liu, C.; Chen, J.; Xue, K.; Liu, L.; Wu, Y.

    2011-12-01

We developed a special measurement scheme for the controlled-source, frequency-domain electromagnetic sounding method, called the "caliper" scheme, that improves resolution and efficiency. It is based on our array electromagnetic system DPS-I, which consists of 53 channels and can cover a 2500 m survey line in one arrangement. The method is applied in several steps. First, a rough measurement is carried out using a large dynamic range but sparse frequencies. The ratio of adjacent frequencies is set to 2 or 4. The frequency points cover the entire frequency band required by the geological environment and are distributed almost equidistantly on a logarithmic axis. Receiver arrays are arranged along one or more survey lines to measure the amplitude and phase of the electromagnetic field components simultaneously. After all frequency points of the rough measurement have been acquired, the data in each sub-receiver are transmitted to the controller, and the apparent resistivity and phase are calculated quickly in the field. The pseudo-section diagrams of apparent resistivity and phase are then drawn. From the pseudo-section we can roughly locate the anomalous zone and determine the frequency band required for its detailed investigation. Next, a measurement using a high density of frequencies within this band is carried out, which we call the "detailed measurement". The ratio of adjacent frequencies in this pass is m, which lies between 1 and 2; the exact value of m depends on how much detail the user requires. After the detailed measurement is finished, the pseudo-section diagrams of apparent resistivity and phase are drawn in the same way as in the first step. We can then see more detailed information about the anomalous zone and decide whether further measurement is necessary. If so, we can repeat the second step with a smaller m until the resolution is sufficient to distinguish the target. 
Simulations show that a high density of frequencies does help to improve resolution, but the improvement is limited: once the frequency grid is dense enough, adding further frequencies no longer helps. The method not only improves efficiency but also improves the ability to distinguish the anomalous body. Because this measurement mode, consisting of a rough measurement followed by a detailed measurement, resembles measuring a length with a caliper, we call it the "caliper" type. It is accurate and fast. It can be applied not only to frequency-domain sounding, such as controlled-source audio-frequency magnetotellurics (CSAMT), but can also be extended to the spectral induced-polarization method. With this measurement scheme, high resolution and high efficiency can be expected.
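The two-pass scheme described above amounts to sampling geometric frequency grids: a sparse grid with an adjacent-frequency ratio of 2 or 4 over the whole band, then a dense grid with ratio m (1 < m < 2) restricted to the anomalous band. A minimal sketch (the band limits and the choice m = 2^(1/4) are illustrative, not values from the DPS-I system):

```python
def geometric_frequencies(f_min: float, f_max: float, ratio: float) -> list[float]:
    """Frequencies equidistant on a logarithmic axis: f_min, f_min*ratio, ..."""
    assert ratio > 1.0
    freqs = []
    f = f_min
    while f <= f_max * 1.0000001:   # small tolerance for float rounding
        freqs.append(f)
        f *= ratio
    return freqs

# Rough pass: octave spacing (ratio 2) over the whole band
rough = geometric_frequencies(1.0, 8192.0, 2.0)
# Detailed pass: denser grid (m = 2**0.25) over the anomalous band found above
detailed = geometric_frequencies(64.0, 512.0, 2 ** 0.25)
```

Shrinking m refines the grid only inside the suspect band, which is the source of the efficiency gain: the total number of soundings grows slowly while the resolution in the zone of interest increases.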

  8. Automated pupil remapping with binary optics

    DOEpatents

    Neal, D.R.; Mansell, J.

    1999-01-26

    Methods and apparatuses are disclosed for pupil remapping employing non-standard lenslet shapes in arrays; divergence of lenslet focal spots from on-axis arrangements; use of lenslet arrays to resize two-dimensional inputs to the array; and use of lenslet arrays to map an aperture shape to a different detector shape. Applications include wavefront sensing, astronomical applications, optical interconnects, keylocks, and other binary optics and diffractive optics applications. 24 figs.

  9. Randomized Controlled Trial of Polyhexanide/Betaine Gel Versus Silver Sulfadiazine for Partial-Thickness Burn Treatment.

    PubMed

    Wattanaploy, Saruta; Chinaroonchai, Kusuma; Namviriyachote, Nantaporn; Muangman, Pornprom

    2017-03-01

Silver sulfadiazine is commonly used in the treatment of partial-thickness burns, but it sometimes forms pseudo-eschar and delays wound healing. Polyhexanide/betaine gel, a new wound cleansing and moisturizing product, has some advantages in removing biofilm and promotes wound healing. This study was designed to compare the clinical efficacy of polyhexanide/betaine gel with silver sulfadiazine in partial-thickness burn treatment. From September 2013 to May 2015, 46 adult patients with partial-thickness burns ≥10% of total body surface area who were admitted to the Burn Unit of Siriraj Hospital within 48 hours after injury were randomly allocated into 2 groups. One group was treated with polyhexanide/betaine gel, and the other group was treated with silver sulfadiazine. Both groups received daily dressing changes and the same standard care given to patients with burns in this center. Healing times in the polyhexanide/betaine gel group and silver sulfadiazine group were 17.8 ± 2.2 days and 18.8 ± 2.1 days, respectively (P = .13). There were no significant differences in healing times, infection rates, bacterial colonization rates, or treatment cost between the groups. The pain score of the polyhexanide/betaine gel group was significantly lower than that of the silver sulfadiazine group at 4 to 9 days after treatment (P < .001). The satisfaction assessment result of the polyhexanide/betaine gel group was better than that of the silver sulfadiazine group. These data indicate the need for adequately designed studies to elicit the full potential of polyhexanide gel as a wound dressing for partial-thickness burn wounds.

  10. How Well Does Physician Selection of Microbiologic Tests Identify Clostridium difficile and other Pathogens in Paediatric Diarrhoea? Insights Using Multiplex PCR-Based Detection

    PubMed Central

    Stockmann, Chris; Rogatcheva, Margarita; Harrel, Brian; Vaughn, Mike; Crisp, Rob; Poritz, Mark; Thatcher, Stephanie; Korgenski, Ernest K; Barney, Trenda; Daly, Judy; Pavia, Andrew T

    2014-01-01

    The objective of this study was to compare the aetiologic yield of standard of care microbiologic testing ordered by physicians with that of a multiplex PCR platform. Stool specimens obtained from children and young adults with gastrointestinal illness were evaluated by standard laboratory methods and a developmental version of the FilmArray Gastrointestinal Diagnostic System (FilmArray GI Panel), a rapid multiplex PCR platform that detects 23 bacterial, viral, and protozoal agents. Results were classified according to the microbiologic tests requested by the treating physician. A median of 3 (range 1-10) microbiologic tests were performed by the clinical laboratory during 378 unique diarrhoeal episodes. A potential aetiologic agent was identified in 46% of stool specimens by standard laboratory methods and in 65% of specimens tested using the FilmArray GI Panel (P<0.001). For those patients who only had Clostridium difficile testing requested, an alternative pathogen was identified in 29% of cases with the FilmArray GI Panel. Notably, 11 (12%) cases of norovirus were identified among children who only had testing for C. difficile ordered. Among those who had C. difficile testing ordered in combination with other tests, an additional pathogen was identified in 57% of stool specimens with the FilmArray GI Panel. For patients who had no C. difficile testing performed, the FilmArray GI Panel identified a pathogen in 63% of cases, including C. difficile in 8%. Physician-specified laboratory testing may miss important diarrhoeal pathogens. Additionally, standard laboratory testing is likely to underestimate co-infections with multiple infectious diarrhoeagenic agents. PMID:25599941

  11. Developing an Array Binary Code Assessment Rubric for Multiple- Choice Questions Using Item Arrays and Binary-Coded Responses

    ERIC Educational Resources Information Center

    Haro, Elizabeth K.; Haro, Luis S.

    2014-01-01

    The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…

  12. Augmented longitudinal acoustic trap for scalable microparticle enrichment.

    PubMed

    Cui, M; Binkley, M M; Shekhani, H N; Berezin, M Y; Meacham, J M

    2018-05-01

We introduce an acoustic microfluidic device architecture that locally augments the pressure field for separation and enrichment of targeted microparticles in a longitudinal acoustic trap. Pairs of pillar arrays comprise "pseudo walls" that are oriented perpendicular to the inflow direction. Though sample flow is unimpeded, pillar arrays support half-wave resonances that correspond to the array gap width. Positive acoustic contrast particles of supracritical diameter focus to nodal locations of the acoustic field and are held against drag from the bulk fluid motion. Thus, the longitudinal standing bulk acoustic wave (LSBAW) device achieves size-selective and material-specific separation and enrichment of microparticles from a continuous sample flow. A finite element analysis model is used to predict eigenfrequencies of LSBAW architectures with two pillar geometries, slanted and lamellar. Corresponding pressure fields are used to identify longitudinal resonances that are suitable for microparticle enrichment. Optimal operating conditions exhibit maxima in the ratio of acoustic energy density in the LSBAW trap to that in inlet and outlet regions of the microchannel. Model results guide fabrication and experimental evaluation of realized LSBAW assemblies regarding enrichment capability. We demonstrate separation and isolation of 20 μm polystyrene and ∼10 μm antibody-decorated glass beads within both pillar geometries. The results also establish several practical attributes of our approach. The LSBAW device is inherently scalable and enables continuous enrichment at a prescribed location. These features benefit separations applications while also allowing concurrent observation and analysis of trap contents.

  13. COMPUTED TOMOGRAPHIC FEATURES OF INCISOR PSEUDO-ODONTOMAS IN PRAIRIE DOGS (CYNOMYS LUDOVICIANUS).

    PubMed

    Pelizzone, Igor; Di Ianni, Francesco; Volta, Antonella; Gnudi, Giacomo; Manfredi, Sabrina; Bertocchi, Mara; Parmigiani, Enrico

    2017-05-01

    Maxillary incisor pseudo-odontomas are common in pet prairie dogs and can cause progressive respiratory obstruction, while mandibular pseudo-odontomas are rarely clinically significant. The aim of this retrospective cross-sectional study was to describe CT features of maxillary and mandibular incisor pseudo-odontomas vs. normal incisors in a group of pet prairie dogs. All pet prairie dogs with head CT scans acquired during the period of 2013-2015 were included. A veterinary radiologist who was aware of final diagnosis reviewed CT scans and recorded qualitative features of affected and normal incisors. Mean density values for the pulp cavity and palatal and buccal dentin were also recorded. A total of 16 prairie dogs were sampled (12 normal maxillary incisors, 20 confirmed maxillary incisor pseudo-odontomas, 20 normal mandibular incisors, 12 presumed mandibular incisor pseudo-odontomas). Maxillary incisors with confirmed pseudo-odontomas had a significantly hyperattenuating pulp and dentin in the reserve crown and apical zone, when compared to normal maxillary incisors. Pseudo-odontomas appeared as enlargements of the apical zone with a globular/multilobular hyperattenuating mass formation haphazardly arranged, encroaching on midline and growing caudally and ventrally. Presumed mandibular incisor pseudo-odontomas had similar CT characteristics. In 60% of prairie dogs with maxillary incisor pseudo-odontomas, the hard palate was deformed and the mass bulged into the oral cavity causing loss of the palatine bone. The common nasal meatus was partially or totally obliterated in 81.8% of prairie dogs with maxillary pseudo-odontomas. Findings supported the use of CT for characterizing extent of involvement and surgical planning in prairie dogs with pseudo-odontomas. © 2017 American College of Veterinary Radiology.

  14. Writing on wet paper

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Lisonek, Petr; Soukal, David

    2005-03-01

    In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography "writing on wet paper" and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.

  15. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
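The random-number solution proposed in the article can be sketched as follows: columns that are missing completely in a group are filled with standard-normal pseudo-random deviates so that every group presents the same number of manifest variables, after which the model must fix those variables' parameters so the random values cannot affect latent-variable estimates. This is a hedged illustration of the idea, not the authors' code:

```python
import numpy as np

def fill_missing_manifest(data: np.ndarray, missing_cols: list[int],
                          seed: int = 0) -> np.ndarray:
    """Replace completely-missing manifest variables with pseudo-random
    standard-normal deviates so every group has the same variable count.
    The downstream model must fix these columns' loadings and intercepts
    so the random values cannot influence latent-variable estimates."""
    rng = np.random.default_rng(seed)
    filled = data.copy()
    for col in missing_cols:
        filled[:, col] = rng.standard_normal(data.shape[0])
    return filled

# One group observed only 3 of 4 manifest variables; fill the 4th.
group = np.full((100, 4), np.nan)
group[:, :3] = np.random.default_rng(1).normal(size=(100, 3))
complete = fill_missing_manifest(group, missing_cols=[3])
```

The point of the trick is purely structural: software that requires the same number of manifest variables in each group will now accept the data, while the fixed parameters guarantee the artificial column carries no information.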

  16. Optical image encryption using chaos-based compressed sensing and phase-shifting interference in fractional wavelet domain

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua

    2018-02-01

In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve the encryption efficiency, the volume of data of the original image is decreased by compressed sensing. The compacted image is then encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences, generated by a three-dimensional chaos map, are used as the measurement matrix of the compressed sensing step and as the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies the storage and transmission of the keys, but also enhances the nonlinearity of the cryptosystem, helping it resist some common attacks. Further, holograms, obtained by two-step-only quadrature phase-shifting interference, make the cryptosystem immune to noise and occlusion attacks. Compression and encryption are achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.
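The paper's three-dimensional chaos map is not reproduced here; as a hedged illustration of the key-saving idea, even a one-dimensional logistic map can regenerate an entire compressed-sensing measurement matrix from a short key (the seed and map parameter), so only the seed needs to be stored or transmitted. The dimensions below are illustrative:

```python
import numpy as np

def chaotic_sequence(x0: float, n: int, r: float = 3.99) -> np.ndarray:
    """Logistic-map sequence used as a reproducible pseudo-random source."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

# Measurement matrix: reshape a chaotic sequence into M x N and centre it,
# so the whole matrix is regenerable from the seed (x0, r) alone.
N, M = 256, 64                         # signal length, number of measurements
phi = chaotic_sequence(0.3141, M * N).reshape(M, N) - 0.5
signal = np.zeros(N)
signal[[5, 97, 200]] = [1.0, -0.7, 0.4]  # sparse input
y = phi @ signal                         # compressed measurements (length 64)
```

Recovering the signal from y would require a sparse-reconstruction solver; the sketch only shows why chaos-generated matrices shrink the key material compared with storing a full random matrix.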

  17. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process and array topology considerations as well as the required performance, particularly high circuit speed.

  18. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  19. Post-traumatic hepatic artery pseudo-aneurysm combined with subphrenic liver abscess treated with embolization

    PubMed Central

    Sun, Long; Guan, Yong-Song; Wu, Hua; Pan, Wei-Min; Li, Xiao; He, Qing; Liu, Yuan

    2006-01-01

    A 23-year-old man with post-traumatic hepatic artery pseudo-aneurysm and subphrenic liver abscess was admitted. He underwent coil embolization of hepatic artery pseudo-aneurysm. The pseudo-aneurysm was successfully obstructed and subphrenic liver abscess was controlled. Super-selective trans-catheter coil embolization may represent an effective treatment for hepatic artery pseudo-aneurysm combined with subphrenic liver abscess in the absence of other therapeutic alternatives. PMID:16718774

  20. Post-traumatic hepatic artery pseudo-aneurysm combined with subphrenic liver abscess treated with embolization.

    PubMed

    Sun, Long; Guan, Yong-Song; Wu, Hua; Pan, Wei-Min; Li, Xiao; He, Qing; Liu, Yuan

    2006-05-07

    A 23-year-old man with post-traumatic hepatic artery pseudo-aneurysm and subphrenic liver abscess was admitted. He underwent coil embolization of hepatic artery pseudo-aneurysm. The pseudo-aneurysm was successfully obstructed and subphrenic liver abscess was controlled. Super-selective trans-catheter coil embolization may represent an effective treatment for hepatic artery pseudo-aneurysm combined with subphrenic liver abscess in the absence of other therapeutic alternatives.
