Pseudo-Random Number Generator Based on Coupled Map Lattices
NASA Astrophysics Data System (ADS)
Lü, Huaping; Wang, Shihong; Hu, Gang
A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast speed of random number generations. This pseudo-random number generator system can be used as ideal synchronous and self-synchronizing stream cipher systems for secure communications.
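As a rough illustration of the scheme described above, here is a minimal sketch of a one-way coupled logistic-map lattice used as a bit source. The lattice size, coupling strength, transient length and bit-extraction rule are illustrative assumptions, not the authors' parameters, and the conventional post-processing stage the paper combines with the chaotic system is omitted.

```python
import numpy as np

def cml_bits(n_bits, sites=8, eps=0.95, seed=0.123):
    """Sketch of a one-way coupled map lattice as a bit source.

    Each site is driven by its left neighbour,
        x_i(t+1) = (1 - eps) * f(x_i(t)) + eps * f(x_{i-1}(t)),
    with the fully chaotic logistic map f(x) = 4x(1 - x).
    """
    f = lambda x: 4.0 * x * (1.0 - x)
    x = (seed * np.arange(1, sites + 1)) % 1.0   # crude distinct initial values
    for _ in range(500):                         # discard the transient
        x = (1.0 - eps) * f(x) + eps * f(np.roll(x, 1))
    bits = []
    while len(bits) < n_bits:
        x = (1.0 - eps) * f(x) + eps * f(np.roll(x, 1))
        bits.append(int(x[-1] * 2**16) & 1)      # a deep mantissa bit of one site
    return bits

print(''.join(map(str, cml_bits(64))))
```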
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g., in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process that, even in principle, can be described only by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. The generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576
True random numbers from amplified quantum vacuum.
Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V
2011-10-10
Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes, series of numbers that mimic most properties of true random numbers, while quantum random number generators (QRNGs) exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.
Analysis of entropy extraction efficiencies in random number generation systems
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu
2016-05-01
Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
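As a concrete example of the arrival-time extraction schemes reviewed here, the sketch below implements the classic comparison-of-successive-intervals rule on simulated exponential inter-arrival times. The simulation stands in for detector data, and the rule yields unbiased bits for any continuous interval distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated exponential photon inter-arrival times (stand-in for hardware).
intervals = rng.exponential(scale=1.0, size=100_000)

# Pairwise comparison extractor: one bit per pair of intervals; since the
# two intervals are i.i.d. and continuous, P(t1 > t2) = 1/2 exactly.
t1, t2 = intervals[0::2], intervals[1::2]
bits = (t1 > t2)[t1 != t2].astype(int)

print(f"{bits.size} bits, mean = {bits.mean():.4f}")  # ~0.5 if unbiased
```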
Generating random numbers by means of nonlinear dynamic systems
NASA Astrophysics Data System (ADS)
Zang, Jiaqi; Hu, Haojie; Zhong, Juhua; Luo, Duanbin; Fang, Yi
2018-07-01
To introduce the randomness of a physical process to students, a chaotic pendulum experiment was introduced at the undergraduate level in the physics department of East China University of Science and Technology (ECUST). It was shown that chaotic motion could be initiated by adjusting the operation of the chaotic pendulum. Using the angular-displacement data of the chaotic motion, random binary numerical arrays can be generated. To check the randomness of the generated arrays, the NIST Special Publication 800-20 method was adopted. It was found that all the random arrays generated by the chaotic motion passed the validity criteria, and some were even better in quality than pseudo-random numbers generated by a computer. The experiments demonstrate that a chaotic pendulum can serve as an efficient mechanical facility for generating random numbers and can be applied in teaching random motion to students.
Pseudo-random number generator for the Sigma 5 computer
NASA Technical Reports Server (NTRS)
Carroll, S. N.
1983-01-01
A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
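For reference, the linear congruential form with a prime modulus and a primitive-root multiplier looks as follows in a minimal sketch. The constants here are the classic MINSTD parameters (p = 2^31 - 1 is prime, a = 16807 a primitive root mod p), used for illustration; they are not necessarily the constants chosen for the Sigma 5 program.

```python
# Minimal multiplicative (Lehmer) congruential generator sketch.
P, A = 2**31 - 1, 16807

def lehmer(seed, n):
    """Full period p - 1 for any seed in [1, p-1] when A is a primitive root."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % P          # linear congruential step
        out.append(x / P)        # uniform deviate in (0, 1)
    return out

print(lehmer(seed=1, n=3))
```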
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
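Since the chapter's pedagogical thread is Monte Carlo sampling and the central limit theorem, a compact numerical illustration may help. The example below, an invented toy rather than the book's code, estimates pi by plain Monte Carlo and reports the CLT error bar, which shrinks as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pi(n):
    """Estimate pi by sampling the unit square (plain Monte Carlo)."""
    x, y = rng.random(n), rng.random(n)
    hits = (x * x + y * y < 1.0)
    est = 4.0 * hits.mean()
    err = 4.0 * hits.std(ddof=1) / np.sqrt(n)   # CLT error estimate
    return est, err

for n in (10**3, 10**5, 10**7):
    est, err = mc_pi(n)
    print(f"N={n:>8}: pi ~ {est:.5f} +/- {err:.5f}")
```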
Low rank approach to computing first and higher order derivatives using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.
2012-07-01
This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
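A toy numerical sketch of the rank-revealing idea: gradients sampled at random inputs are stacked, and an SVD exposes the low effective rank, after which the leading singular directions could serve as pseudo variables. Finite differences stand in for an AD tool such as OpenAD, and the model and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with many inputs but low-rank input-output sensitivity:
# f(x) = g(W x) with W of rank r << n_in.
n_in, rank = 50, 3
W = rng.standard_normal((rank, n_in))
f = lambda x: np.tanh(W @ x).sum()

def grad_fd(x, h=1e-6):
    """Central-difference gradient (stand-in for an AD tool)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Derivatives at random inputs span the active (low-rank) subspace.
G = np.stack([grad_fd(rng.standard_normal(n_in)) for _ in range(10)])
s = np.linalg.svd(G, compute_uv=False)
eff_rank = int((s / s[0] > 1e-6).sum())
print("leading singular values:", np.round(s[:5], 4), "-> effective rank", eff_rank)
```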
ERIC Educational Resources Information Center
Rinehart, Nicole J.; Bradshaw, John L.; Moss, Simon A.; Brereton, Avril V.; Tonge, Bruce J.
2006-01-01
The repetitive, stereotyped and obsessive behaviours, which are core diagnostic features of autism, are thought to be underpinned by executive dysfunction. This study examined executive impairment in individuals with autism and Asperger's disorder using a verbal equivalent of an established pseudo-random number generating task. Different patterns…
Accelerating Pseudo-Random Number Generator for MCNP on GPU
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu
2010-09-01
Pseudo-random number generators (PRNG) are intensively used in many stochastic algorithms in particle simulations, artificial neural networks and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jumping, and sufficient speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPU) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
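The "flexible jump" requirement is what lets each GPU thread own a well-separated substream. For a linear congruential recurrence this jump can be done in O(log k) time by composing the affine map with itself; the sketch below shows the idea in plain Python with illustrative constants, not MCNP's actual generator parameters.

```python
# Jump-ahead for an LCG x' = (A*x + C) mod M: composing the affine map
# k times by binary exponentiation reaches state x_k in O(log k) steps.
M = 2**63
A, C = 2862933555777941757, 3037000493   # illustrative 63-bit LCG constants

def lcg_jump(x0, k):
    """Return the LCG state k steps ahead of x0 in O(log k) time."""
    a_k, c_k = 1, 0                      # accumulated map: x -> a_k*x + c_k
    a, c = A, C                          # base map, repeatedly squared
    while k:
        if k & 1:
            a_k, c_k = (a_k * a) % M, (c_k * a + c) % M
        a, c = (a * a) % M, (c * a + c) % M
        k >>= 1
    return (a_k * x0 + c_k) % M

x = 12345
slow = x
for _ in range(1000):                    # brute-force check
    slow = (A * slow + C) % M
print(lcg_jump(x, 1000) == slow)         # True
```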
Generation of pseudo-random numbers
NASA Technical Reports Server (NTRS)
Howell, L. W.; Rheinfurth, M. H.
1982-01-01
Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
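Two of the standard textbook methods such a report covers are inverse-transform sampling and the Box-Muller transform. A minimal sketch of both, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
u1, u2 = rng.random(100_000), rng.random(100_000)

# Inverse transform: for an exponential with rate lam, F^{-1}(u) maps
# uniform deviates to the target distribution.
lam = 2.0
exp_samples = -np.log(1.0 - u1) / lam

# Box-Muller: two independent uniforms -> two independent standard normals.
r = np.sqrt(-2.0 * np.log(1.0 - u1))
z0, z1 = r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

print(exp_samples.mean(), z0.std())   # ~1/lam and ~1.0
```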
NASA Astrophysics Data System (ADS)
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
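A toy sketch of the quantum-walk construction underlying such hash functions: each message bit selects one of two coin operators for a step of a discrete-time walk on a cycle, and the final position distribution is quantized into a digest. Coin angles, cycle size and quantization rule are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def qw_hash(message_bits, n_pos=33):
    """Message-controlled discrete-time quantum walk on a cycle."""
    def coin(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, s], [s, -c]])   # real orthogonal coin
    coins = {0: coin(np.pi / 4), 1: coin(np.pi / 3)}

    # State: amplitudes psi[position, coin]; start localized at the centre.
    psi = np.zeros((n_pos, 2), dtype=complex)
    psi[n_pos // 2, 0] = 1.0
    for b in message_bits:
        psi = psi @ coins[b].T                    # bit-selected coin toss
        psi = np.stack([np.roll(psi[:, 0], -1),   # coin state 0 shifts left
                        np.roll(psi[:, 1], +1)],  # coin state 1 shifts right
                       axis=1)
    prob = (np.abs(psi) ** 2).sum(axis=1)         # final position distribution
    return bytes(int(p * 1e8) % 256 for p in prob)  # quantize to a digest

print(qw_hash([1, 0, 1, 1, 0, 0, 1, 0] * 8).hex())
```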
Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J
2004-09-01
We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors, based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8), the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
Extracting random numbers from quantum tunnelling through a single diode.
Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J
2017-12-19
Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
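The abstract notes that the raw output "can be further distilled using randomness extraction algorithms". The classic von Neumann extractor is the simplest example of such distillation; here is a sketch with a simulated biased source standing in for the diode output.

```python
import random

def von_neumann(bits):
    """Classic unbiasing extractor: map 01 -> 0, 10 -> 1, drop 00/11.

    Removes bias from independent but biased raw bits, at the cost of
    throughput (at most 25% of input pairs yield an output bit).
    """
    out = []
    for b1, b2 in zip(bits[0::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

random.seed(0)
raw = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]  # biased source
ext = von_neumann(raw)
print(sum(raw) / len(raw), sum(ext) / len(ext))  # ~0.7 raw -> ~0.5 extracted
```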
Pseudo-random properties of a linear congruential generator investigated by b-adic diaphony
NASA Astrophysics Data System (ADS)
Stoev, Peter; Stoilova, Stanislava
2017-12-01
In this paper we continue the study of the diaphony defined in the b-adic number system and extend it in different directions. We investigate this diaphony as a tool for estimating the pseudo-random properties of some of the most widely used random number generators. This is done by evaluating the distribution of specially constructed two-dimensional nets built from the generated random numbers. The aim is to assess how suitable the generated numbers are for calculations in numerical methods (Monte Carlo, etc.).
Multi-kW coherent combining of fiber lasers seeded with pseudo random phase modulated light
NASA Astrophysics Data System (ADS)
Flores, Angel; Ehrehreich, Thomas; Holten, Roger; Anderson, Brian; Dajani, Iyad
2016-03-01
We report efficient coherent beam combining of five kilowatt-class fiber amplifiers with a diffractive optical element (DOE). Based on a master oscillator power amplifier (MOPA) configuration, the amplifiers were seeded with pseudo-random phase-modulated light. Each non-polarization-maintaining fiber amplifier was optically path-length matched and provides approximately 1.2 kW of near diffraction-limited output power (measured M2<1.1). Since the amplifiers were built with non-polarization-maintaining fiber, a low power sample of each laser was utilized for active linear polarization control. A low power sample of the combined beam after the DOE provided an error signal for active phase locking, which was performed via Locking of Optical Coherence by Single-Detector Electronic-Frequency Tagging (LOCSET). After phase stabilization, the beams were coherently combined via the 1x5 DOE. A total combined output power of 4.9 kW was achieved with 82% combining efficiency and excellent beam quality (M2<1.1). The intrinsic DOE splitter loss was 5%. Additional losses due in part to non-ideal polarization, ASE content, uncorrelated wavefront errors, and misalignment errors contributed to the efficiency reduction.
Novel pseudo-random number generator based on quantum random walks.
Yang, Yu-Guang; Zhao, Qian-Qian
2016-02-04
In this paper, we investigate the potential application of quantum computation for constructing pseudo-random number generators (PRNGs) and further construct a novel PRNG based on quantum random walks (QRWs), a famous quantum computation model. The PRNG merely relies on the equations used in the QRWs, and thus the generation algorithm is simple and the computation speed is fast. The proposed PRNG is subjected to statistical tests such as NIST and successfully passed them. Compared with the representative PRNG based on quantum chaotic maps (QCM), the present QRWs-based PRNG has some advantages such as better statistical complexity and recurrence. For example, the normalized Shannon entropy and the statistical complexity of the QRWs-based PRNG are 0.999699456771172 and 1.799961178212329e-04, respectively, for 8-bit words over a sequence of 16 Mbits. By contrast, the corresponding values of the QCM-based PRNG are 0.999448131481064 and 3.701210794388818e-04, respectively. Thus the statistical complexity and the normalized entropy of the QRWs-based PRNG are closer to 0 and 1, respectively, than those of the QCM-based PRNG when the number of words of the analyzed sequence increases. It provides a new approach to constructing PRNGs and also extends the applications of quantum computation.
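The normalized Shannon entropy quoted above is straightforward to reproduce for any bit stream. The sketch below computes it for 8-bit words; the word length follows the abstract, while the test harness is an illustrative assumption.

```python
import numpy as np

def normalized_entropy(bits, word=8):
    """Shannon entropy of non-overlapping `word`-bit words, divided by the
    maximum log2(2^word) = word, so an ideal generator approaches 1."""
    bits = np.asarray(bits[: len(bits) // word * word]).reshape(-1, word)
    words = bits.dot(1 << np.arange(word)[::-1])      # pack bits into integers
    counts = np.bincount(words, minlength=2**word)
    p = counts[counts > 0] / words.shape[0]
    return -(p * np.log2(p)).sum() / word

rng = np.random.default_rng(0)
print(normalized_entropy(rng.integers(0, 2, 2_000_000)))  # close to 1
```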
Seminar on Understanding Digital Control and Analysis in Vibration Test Systems, part 2
NASA Technical Reports Server (NTRS)
1975-01-01
A number of techniques for dealing with important technical aspects of the random vibration control problem are described. These include the generation of pseudo-random and true random noise, the control spectrum estimation problem, the accuracy/speed tradeoff, and control correction strategies. System hardware, the operator-system interface, safety features, and operational capabilities of sophisticated digital random vibration control systems are also discussed.
Method and apparatus for determining position using global positioning satellites
NASA Technical Reports Server (NTRS)
Ward, John (Inventor); Ward, William S. (Inventor)
1998-01-01
A global positioning satellite receiver having an antenna for receiving an L1 signal from a satellite. The L1 signal is processed by a preamplifier stage including a band pass filter and a low noise amplifier and output as a radio frequency (RF) signal. A mixer receives and de-spreads the RF signal in response to a pseudo-random noise code, i.e., Gold code, generated by an internal pseudo-random noise code generator. A microprocessor enters a code tracking loop, such that during the code tracking loop, it addresses the pseudo-random code generator to cause it to sequentially output pseudo-random codes corresponding to the satellite codes used to spread the L1 signal, until correlation occurs. When an output of the mixer indicates the occurrence of correlation between the RF signal and the generated pseudo-random codes, the microprocessor enters an operational state which slows the receiver code sequence to stay locked with the satellite code sequence. The output of the mixer is provided to a detector which, in turn, controls certain routines of the microprocessor. The microprocessor outputs pseudo range information according to an interrupt routine in response to detection of correlation. The pseudo range information is to be telemetered to a ground station which determines the position of the global positioning satellite receiver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H
Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
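The reason binary pseudo-random patterns work as MTF calibration targets is their flat power spectrum. Below is a sketch using a maximal-length (m-) sequence, whose cyclic power spectrum is flat apart from the DC term; the register length and taps are illustrative choices.

```python
import numpy as np

def mls(taps, n):
    """Maximal-length 0/1 sequence from an n-bit Fibonacci LFSR."""
    state, seq = [1] * n, []
    for _ in range(2**n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]       # XOR the tapped bits
        state = [fb] + state[:-1]
    return np.array(seq)

# A binary pseudo-random grating has an essentially flat power spectrum,
# which is what makes it usable as a broadband calibration target.
s = 2 * mls(taps=(10, 7), n=10) - 1          # +/-1 pattern, length 1023
psd = np.abs(np.fft.rfft(s))**2 / s.size
print(psd[1:].std() / psd[1:].mean())        # tiny relative ripple
```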
Pseudo-random bit generator based on lag time series
NASA Astrophysics Data System (ADS)
García-Martínez, M.; Campos-Cantón, E.
2014-12-01
In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are mapped as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
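A minimal sketch in the spirit described above: two logistic-map series with different parameters, compared across a lag so that xn-versus-xn+1 plots no longer trace the logistic parabola. The parameters, the lag and the comparison bit rule are illustrative assumptions, not the authors' exact construction.

```python
def prbg(n_bits, mu1=3.99, mu2=3.97, x=0.30, y=0.55, lag=3):
    """Lag-based pseudo-random bit generator from two logistic maps."""
    xs, ys, bits = [], [], []
    while len(bits) < n_bits:
        x = mu1 * x * (1.0 - x)          # first chaotic series
        y = mu2 * y * (1.0 - y)          # second chaotic series
        xs.append(x); ys.append(y)
        if len(xs) > lag:
            # Compare a lagged sample of one series against the other.
            bits.append(1 if xs[-1 - lag] > ys[-1] else 0)
    return bits

print(''.join(map(str, prbg(64))))
```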
Pseudo CT estimation from MRI using patch-based random forest
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian
2017-02-01
MR simulators have recently gained popularity because they avoid the unnecessary radiation exposure associated with the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by feature selection to train the random forest. The well-trained random forest is then used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
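A toy sketch of the patch-to-intensity regression step using scikit-learn's random forest. Synthetic 2-D arrays stand in for co-registered MR/CT volumes, and the patch size, forest size and the absence of feature selection are simplifications of the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy stand-ins for aligned MR/CT training volumes (2-D here).
mr = rng.random((64, 64)).astype(np.float32)
ct = (1000 * mr + rng.normal(0, 20, mr.shape)).astype(np.float32)

def patches(img, r=2):
    """Flatten a (2r+1)^2 patch around each interior voxel as its feature."""
    h, w = img.shape
    feats = [img[i - r:i + r + 1, j - r:j + r + 1].ravel()
             for i in range(r, h - r) for j in range(r, w - r)]
    return np.array(feats)

X = patches(mr)                             # patch features from MR
y = ct[2:-2, 2:-2].ravel()                  # CT intensity at the patch centre
rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
pred = rf.predict(X)
print("training RMSE:", np.sqrt(((pred - y) ** 2).mean()))
```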
NASA Astrophysics Data System (ADS)
Matsumoto, Kouhei; Kasuya, Yuki; Yumoto, Mitsuki; Arai, Hideaki; Sato, Takashi; Sakamoto, Shuichi; Ohkawa, Masashi; Ohdaira, Yasuo
2018-02-01
Not so long ago, pseudo-random numbers generated by numerical formulae were considered adequate for encrypting important data files, because of the time needed to decode them. With today's ultra-high-speed processors, however, this is no longer true. So, in order to thwart ever-more advanced attempts to breach a system's protections, cryptologists have devised a method that is considered virtually impossible to decode and uses what is a limitless supply of physical random numbers. This research describes a method whereby a laser diode's frequency noise generates large quantities of physical random numbers. Using two types of photodetectors (APD and PIN-PD), we tested the abilities of two types of lasers (FP-LD and VCSEL) to generate random numbers. In all instances, an etalon served as frequency discriminator, the examination pass rates were determined using the NIST FIPS 140-2 test at each bit, and the random number generation (RNG) speed was noted.
Small private key MQPKS on an embedded microprocessor.
Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon
2014-03-19
Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reducing the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19%, respectively, compared with previous results in CHES2012.
Long period pseudo random number sequence generator
NASA Technical Reports Server (NTRS)
Wang, Charles C. (Inventor)
1989-01-01
A circuit for generating a sequence of pseudo-random numbers (A_K). There is an exponentiator in GF(2^m) for the normal basis representation of elements in a finite field GF(2^m), each represented by m binary digits, having two inputs and an output from which the sequence (A_K) of pseudo-random numbers is taken. One of the two inputs is connected to receive the outputs (E_K) of a maximal length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A_0) in GF(2^m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs, and the delay circuit input is connected to the output of the exponentiator. Whereby after the exponentiator initially receives the primitive element (A_0) in GF(2^m) through the switch, the switch can be switched to cause the exponentiator to receive as its input a delayed output A_(K-1) from the exponentiator, thereby generating (A_K) continuously at the output of the exponentiator. The exponentiator in GF(2^m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to perform the function U_i = A^(2^i) (for n_i = 1) or 1 (for n_i = 0).
Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.
Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos
2015-01-01
The recent advent of high-throughput sequencing machines producing large amounts of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of those and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8x faster than a comparable multi-core CPU version, and about 3x to 5x faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
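A toy FM-index makes the memory-access pattern visible: each character of the pattern costs a C-table and occ-table lookup at an essentially unpredictable row, which is the pseudo-random access the paper mitigates. This sketch builds the index naively (sorting suffixes) for clarity; real implementations use sampled, compressed occ tables.

```python
def fm_index(text):
    """Build a toy FM-index: BWT plus first-column counts and occ table."""
    text += '$'
    sa = sorted(range(len(text)), key=lambda i: text[i:])   # suffix array
    bwt = ''.join(text[i - 1] for i in sa)
    alphabet = sorted(set(bwt))
    C, total = {}, 0                        # chars lexicographically smaller
    for ch in alphabet:
        C[ch] = total
        total += bwt.count(ch)
    occ = {ch: [0] for ch in alphabet}      # occ[ch][i] = count of ch in bwt[:i]
    for b in bwt:
        for ch in alphabet:
            occ[ch].append(occ[ch][-1] + (b == ch))
    return C, occ

def count(pattern, C, occ):
    """Backward search: one C/occ lookup pair per pattern character."""
    lo, hi = 0, len(occ['$']) - 1
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        lo, hi = C[ch] + occ[ch][lo], C[ch] + occ[ch][hi]
        if lo >= hi:
            return 0
    return hi - lo

C, occ = fm_index('ACGTACGTACCA')
print(count('ACG', C, occ))   # 2 occurrences
```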
Computational Models for Belief Revision, Group Decision-Making and Cultural Shifts
2010-10-25
34social" networks; the green numbers are pseudo-trees or artificial (non-social) constructions. The dashed blue line indicates the range of Erdos- Renyi ...non-social networks such as Erdos- Renyi random graphs or the more passive non-cognitive spreading of disease or information flow, As mentioned
Pseudo-Random Sequence Modifications for Ion Mobility Orthogonal Time of Flight Mass Spectrometry
Clowers, Brian H.; Belov, Mikhail E.; Prior, David C.; Danielson, William F.; Ibrahim, Yehia; Smith, Richard D.
2008-01-01
Due to the inherently low duty cycle of ion mobility spectrometry (IMS) experiments that sample from continuous ion sources, a range of experimental advances have been developed to maximize ion utilization efficiency. The use of ion trapping mechanisms prior to the ion mobility drift tube has demonstrated significant gains over discrete sampling from continuous sources; however, these technologies have traditionally relied upon signal averaging to attain analytically relevant signal-to-noise ratios (SNR). Multiplexed (MP) techniques based upon the Hadamard transform offer an alternative experimental approach by which ion utilization efficiency can be elevated to ~50%. Recently, our research group demonstrated a unique multiplexed ion mobility time-of-flight (MP-IMS-TOF) approach that incorporates ion trapping and can extend ion utilization efficiency beyond 50%. However, the spectral reconstruction of the multiplexed signal using this experimental approach requires the use of sample-specific weighing designs. Though general weighing designs have been shown to significantly enhance ion utilization efficiency using this MP technique, such weighing designs cannot be applied to all samples. By modifying both the ion funnel trap and the pseudo-random sequence (PRS) used for the MP experiment, we have eliminated the need for complex weighing matrices. For both simple and complex mixtures, SNR enhancements of up to 13 were routinely observed as compared to the SA-IMS-TOF experiment. In addition, this new class of PRS provides a twofold enhancement in ion throughput compared to the traditional HT-IMS experiment. PMID:18311942
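The underlying multiplexing idea can be sketched numerically: encode with cyclic shifts of a pseudo-random maximal-length sequence, then demultiplex by inverting the resulting S-matrix. Sequence length, taps and the synthetic spectrum are illustrative; the paper's modified PRS class is not reproduced here.

```python
import numpy as np

def m_sequence(taps, n):
    """0/1 maximal-length sequence from an n-bit Fibonacci LFSR."""
    state, seq = [1] * n, []
    for _ in range(2**n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

# S-matrix multiplexing: rows are cyclic shifts of the pseudo-random
# sequence, so about half the gate slots are open on every scan.
prs = m_sequence(taps=(7, 6), n=7)           # length 127
N = len(prs)
S = np.array([np.roll(prs, -k) for k in range(N)], dtype=float)

x = np.zeros(N); x[[20, 75]] = [5.0, 2.0]    # sparse "mobility spectrum"
y = S @ x + np.random.default_rng(0).normal(0, 0.05, N)   # multiplexed data
x_hat = np.linalg.solve(S, y)                # demultiplex (inverse transform)
print(np.round(x_hat[[20, 75]], 2))          # peaks recovered near 5 and 2
```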
Scope of Various Random Number Generators in ant System Approach for TSP
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam Ali
2007-01-01
Several quasi- and pseudo-random number generators are tested on a heuristic based on an ant system approach to the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is mainly to seek an answer to the controversial issue "which generator is the best in terms of quality of the result (accuracy) as well as cost of producing the result (time/computational complexity) in a probabilistic/statistical sense."
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a low response time.
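The key correctness requirement in such parallelizations is that every process draws from a statistically independent stream, which is what libraries like SPRNG and DCMT provide. NumPy's SeedSequence spawning gives the same guarantee and makes for a compact sketch; it is an illustrative stand-in, not the library the paper used.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def worker(seed_seq, n):
    """Each worker gets its own statistically independent substream."""
    rng = np.random.default_rng(seed_seq)
    steps = rng.choice([-1.0, 1.0], size=n)      # toy particle-history work
    return steps.sum()

if __name__ == "__main__":
    root = np.random.SeedSequence(20100501)
    children = root.spawn(8)                     # one substream per process
    with ProcessPoolExecutor(max_workers=8) as ex:
        results = list(ex.map(worker, children, [10**6] * 8))
    print(results)
```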
Improvement of Nonlinearity Correction for BESIII ETOF Upgrade
NASA Astrophysics Data System (ADS)
Sun, Weijia; Cao, Ping; Ji, Xiaolu; Fan, Huanhuan; Dai, Hongliang; Zhang, Jie; Liu, Shubin; An, Qi
2015-08-01
An improved scheme for implementing integral nonlinearity (INL) correction of time measurements in the Beijing Spectrometer III Endcap Time-of-Flight (BESIII ETOF) upgrade system is presented in this paper. During the upgrade, multi-gap resistive plate chambers (MRPC) are introduced as ETOF detectors, which increases the total number of time measurement channels to 1728. The INL correction method adopted in BESIII TOF proved to be of limited use, because the sharply increased number of electronic channels required for reading out the detector strips severely degrades the system configuration efficiency. Furthermore, once installed in the spectrometer, the BESIII TOF electronics do not support online evaluation of the TDCs' nonlinearity. In the proposed method, INL data used for the correction algorithm are automatically imported from a non-volatile read-only memory (ROM) instead of from the data acquisition software. This guarantees the real-time performance and system efficiency of the INL correction, especially for ETOF upgrades with a massive number of channels. In addition, a signal that is not synchronized to the system's 41.65 MHz clock from BEPCII is sent to the front-end electronics (FEE) to simulate pseudo-random test pulses for online nonlinearity evaluation. Test results show that the time-measurement INL errors in one module with 72 channels can be corrected online and in real time.
NASA Astrophysics Data System (ADS)
Siegel, Z.; Siegel, Edward Carl-Ludwig
2011-03-01
RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!
Fast and secure encryption-decryption method based on chaotic dynamics
Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.
1995-01-01
A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo-random integer sequence. A system for accomplishing the invention is also provided.
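A minimal sketch that follows the claimed steps: m chaotic iterates per character, a combined "initial" value, a transformation to a pseudo-random integer, and XOR encryption. The combining and transformation rules here are illustrative choices, not the patented ones.

```python
def chaotic_stream(data: bytes, seeds, mus) -> bytes:
    """Encrypt/decrypt with a keystream from m independent logistic maps."""
    xs = list(seeds)
    out = bytearray()
    for ch in data:
        for i in range(len(xs)):                  # m chaotic iterates
            xs[i] = mus[i] * xs[i] * (1.0 - xs[i])
        initial = sum(xs) % 1.0                   # combined "initial" value
        key_byte = int(initial * 2**16) & 0xFF    # transform to an integer
        out.append(ch ^ key_byte)                 # encrypt (or decrypt)
    return bytes(out)

seeds, mus = (0.31, 0.52, 0.77), (3.99, 3.98, 3.97)
ct = chaotic_stream(b"attack at dawn", seeds, mus)
print(ct.hex(), chaotic_stream(ct, seeds, mus))   # XOR stream is self-inverse
```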
Pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Wu, Shaochuan; Tan, Xuezhi
2007-11-01
By analyzing various address configuration algorithms, this paper provides a new pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks. In PRDAC, the first node that initiates the network randomly chooses a nonlinear shift register that can generate an m-sequence. When another node joins the network, the initial node acts as an IP address configuration server: it computes an IP address from this nonlinear shift register, allocates the address, and tells the generator polynomial of the shift register to the new node. By this means, when further nodes join the network, any node that has already obtained an IP address can act as a server and allocate an address to the newcomer. PRDAC can also efficiently avoid IP conflicts and deal with network partition and merging, as prophet address (PA) allocation and the dynamic configuration and distribution protocol (DCDP) do. Furthermore, PRDAC has lower algorithmic complexity, lower computational complexity and less restrictive assumptions than PA. In addition, PRDAC radically avoids address conflicts and maximizes the utilization rate of IP addresses. Analysis and simulation results show that PRDAC has rapid convergence, low overhead and immunity to topological structure.
Electroacupuncture is not effective in chronic painful neuropathies.
Penza, Paola; Bricchi, Monica; Scola, Amalia; Campanella, Angela; Lauria, Giuseppe
2011-12-01
To investigate the analgesic efficacy of electroacupuncture (EA) in patients with chronic painful neuropathy. Double-blind, placebo-controlled, cross-over study. Inclusion criteria were diagnosis of peripheral neuropathy, neuropathic pain (visual analog scale > 4) for at least 6 months, and stable analgesic medications for at least 3 months. Sixteen patients were randomized into two arms to be treated with EA or pseudo-EA (placebo). The protocol included 6 weeks of treatment, 12 weeks free of treatment, and then a further 6 weeks of treatment. EA or pseudo-EA was performed weekly during each treatment period. The primary outcome was the number of patients treated with EA achieving at least 50% of pain relief at the end of each treatment compared with pain intensity at baseline. Secondary outcomes were modification in patient's global impression of change, depression and anxiety, and quality of life. Eleven patients were randomized to EA and five patients to pseudo-EA as the first treatment. Only one patient per group (EA and pseudo-EA) reported 50% of pain relief at the end of each treatment compared with pain intensity at baseline. Pain intensity did not differ between EA (5.7 ± 2.3 at baseline and 4.97 ± 3.23 after treatment) and pseudo-EA (4.9 ± 1.9 at baseline and 4.18 ± 2.69 after treatment). There was no difference between patients who received EA as the first treatment and patients initially treated with placebo. There was no change in the secondary outcomes. Our results do not support the use of EA in this population of painful neuropathy patients. Further studies in larger groups of patients are warranted to confirm our observation.
Recommendations and illustrations for the evaluation of photonic random number generators
NASA Astrophysics Data System (ADS)
Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi
2017-09-01
The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε,τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
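A simplified, box-counting version of the entropy-rate idea: coarse-grain the signal into bins of width ε and estimate h as the increment of block entropies, h ≈ H(k+1) − H(k), with τ being the sampling interval of the series. The actual Cohen-Procaccia estimator counts ε-neighbourhoods rather than fixed boxes, so this is only a sketch of the quantity's structure.

```python
import numpy as np

def entropy_rate(x, eps, k=4):
    """Crude pattern-entropy rate h = H(k+1) - H(k) after coarse-graining
    the signal into bins of width eps (bits per sample)."""
    symbols = np.floor((x - x.min()) / eps).astype(int)

    def block_entropy(m):
        words = {}
        for i in range(len(symbols) - m + 1):
            w = tuple(symbols[i:i + m])
            words[w] = words.get(w, 0) + 1
        p = np.array(list(words.values()), dtype=float)
        p /= p.sum()
        return -(p * np.log2(p)).sum()

    return block_entropy(k + 1) - block_entropy(k)

rng = np.random.default_rng(0)
print(entropy_rate(rng.random(50_000), eps=0.5))  # ~1 bit/sample (2 bins, iid)
```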
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
Takizawa, Ken; Beaucamp, Anthony
2017-09-18
A new category of circular pseudo-random paths is proposed in order to suppress repetitive patterns and improve surface waviness on ultra-precision polished surfaces. Random paths in prior research had many corners; the resulting deceleration of the polishing tool therefore affected the surface waviness. The new random path suppresses velocity changes of the polishing tool and thus restricts degradation of the surface waviness, making it suitable for applications with stringent mid-spatial-frequency requirements such as photomask blanks for EUV lithography.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm, is investigated. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
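For reference, the Lozi map itself is a two-line iteration; in chaos-driven PSO it simply replaces the uniform random draws in the velocity update. The rescaling to [0, 1) below is an illustrative choice, and a and b are the control parameters being tuned.

```python
def lozi_rand(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Lozi map x' = 1 - a|x| + b*y, y' = x, rescaled to [0, 1)."""
    out = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + b * y, x
        out.append((x + 2.0) / 4.0)     # crude rescale of the attractor range
    return out

r = lozi_rand(6)
print(r)
# PSO velocity update sketch, with r1/r2 drawn from lozi_rand instead of
# a uniform PRNG:  v = w*v + c1*r1*(pbest - p) + c2*r2*(gbest - p)
```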
Multi-beam range imager for autonomous operations
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.
1993-01-01
For space operations from the Space Station Freedom, a real-time range imager will be very valuable for refuelling, docking and space exploration operations. For these applications, as well as many other robotics and remote ranging applications, a small, portable, power-efficient, robust range imager capable of ranging over a few tens of km with 10 cm accuracy is needed. The system developed is based on the well-known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, of the order of a few MHz, to enhance the signal-to-noise ratio and to ease the stringent systems engineering requirements while attaining very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high-precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-concept experiment.
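The PN ranging principle can be sketched in a few lines: correlate the received echo against the transmitted m-sequence and convert the correlation-peak lag to range. The chip rate, code length and noise level are invented for illustration.

```python
import numpy as np

def m_seq(taps, n):
    """+/-1 maximal-length sequence from an n-bit Fibonacci LFSR."""
    state, out = [1] * n, []
    for _ in range(2**n - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out) * 2 - 1

chip_rate = 10e6                            # assumed 10 MHz chip rate
code = m_seq((10, 3), 10)                   # length-1023 PN code
delay = 137                                 # true echo delay, in chips
rng = np.random.default_rng(1)
echo = np.roll(code, delay) + rng.normal(0, 2.0, code.size)   # noisy return

# Cross-correlate against all cyclic shifts; the peak gives the delay.
corr = np.array([np.dot(np.roll(code, k), echo) for k in range(code.size)])
k_hat = int(corr.argmax())
c = 3e8
print(k_hat, "chips ->", c * k_hat / chip_rate / 2, "m")      # range estimate
```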
Random number generators tested on quantum Monte Carlo simulations.
Hongo, Kenta; Maezono, Ryo; Miura, Kenichi
2010-08-01
We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application, ground state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne Twister generator (MT19937) are tested and compared with the RANLUX generator at five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are proven to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, with less computational cost to generate the sequence. We also tested the notorious implementation of the linear congruential generator (LCG), RANDU, for comparison.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the DFT measurement matrix (the FGI algorithm) is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the DFT matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, whereas the PSNR of images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
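A one-dimensional numerical sketch of the core algorithm: illuminate with rows of a DFT measurement matrix and reconstruct with the pseudo-inverse. A complex matrix is used for brevity; physically, cosine patterns with phase shifts realize the same measurements, and the object and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                      # number of object pixels
x = np.zeros(n); x[20:28] = 1.0             # simple 1-D object

# DFT measurement matrix (the preset structured light field).
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
A = np.exp(-2j * np.pi * j * k / n)

y = A @ x + rng.normal(0, 0.01, n)          # bucket-detector readings
x_hat = (np.linalg.pinv(A) @ y).real        # pseudo-inverse reconstruction
print(np.round(x_hat[18:30], 2))            # recovers the block near 1.0
```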
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-06-01
For random interacting Majorana models whose only symmetries are the parity P and the time-reversal symmetry T, various approaches are compared to construct exact even and odd normalized zero modes Γ in finite size, i.e. Hermitian operators that commute with the Hamiltonian, square to the identity, and commute (even) or anticommute (odd) with the parity P. Even normalized zero modes are well known under the name of 'pseudo-spins' in the field of many-body localization, or more precisely 'local integrals of motion' (LIOMs) in the many-body-localized phase, where the pseudo-spins happen to be spatially localized. Odd normalized zero modes are popular under the names of 'Majorana zero modes' or 'strong zero modes'. Explicit examples for small systems are described in detail. Applications to real-space renormalization procedures based on blocks containing an odd number of Majorana fermions are also discussed.
Random network model of electrical conduction in two-phase rock
NASA Astrophysics Data System (ADS)
Fuji-ta, Kiyoshi; Seki, Masayuki; Ichiki, Masahiro
2018-05-01
We developed a cell-type lattice model to clarify the interconnected conductivity mechanism of two-phase rock. We quantified electrical conduction networks in rock and evaluated electrical conductivity models of the two-phase interaction. Considering the existence ratio of conductive and resistive cells in the model, we generated natural matrix cells simulating a natural mineral distribution pattern, using Mersenne Twister random numbers. The most important and prominent feature of the model simulation is a drastic increase in the pseudo-conductivity index for conductor ratio R > 0.22. This index in the model increased from 10^-4 to 10^0 between R = 0.22 and 0.9, a change of four orders of magnitude. We compared our model responses with results from previous model studies. Although the pseudo-conductivity computed by the model differs slightly from that of the previous model, model responses can account for the conductivity change. Our modeling is thus effective for quantitatively estimating the degree of interconnection of rock and minerals.
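The two ingredient steps of such a model, Mersenne Twister cell filling and an interconnection test, can be sketched as below. This toy uses a 2-D four-neighbour lattice, whose spanning threshold (near R = 0.59) differs from the paper's R = 0.22, since the threshold depends on the model's own connectivity rules:

    # Fill an n x n lattice with conductor probability R (MT19937),
    # then flood-fill from the top row to test for a spanning path.
    import numpy as np
    from collections import deque

    def spans(R, n=100, seed=0):
        rng = np.random.Generator(np.random.MT19937(seed))
        grid = rng.random((n, n)) < R          # True = conductive cell
        seen = np.zeros_like(grid, dtype=bool)
        q = deque((0, j) for j in range(n) if grid[0, j])
        for ij in q:
            seen[ij] = True
        while q:
            i, j = q.popleft()
            for a, b in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                if 0 <= a < n and 0 <= b < n and grid[a, b] and not seen[a, b]:
                    seen[a, b] = True
                    q.append((a, b))
        return seen[-1].any()                  # reached the bottom row?

    for R in (0.3, 0.5, 0.6, 0.7):
        print(R, spans(R))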
Characterization of network structure in stereoEEG data using consensus-based partial coherence.
Ter Wal, Marije; Cardellicchio, Pasquale; LoRusso, Giorgio; Pelliccia, Veronica; Avanzini, Pietro; Orban, Guy A; Tiesinga, Paul He
2018-06-06
Coherence is a widely used measure to determine the frequency-resolved functional connectivity between pairs of recording sites, but this measure is confounded by shared inputs to the pair. To remove shared inputs, the 'partial coherence' can be computed by conditioning the spectral matrices of the pair on all other recorded channels, which involves the calculation of a matrix (pseudo-) inverse. It has so far remained a challenge to use the time-resolved partial coherence to analyze intracranial recordings with a large number of recording sites. For instance, calculating the partial coherence using a pseudoinverse method produces a high number of false positives when it is applied to a large number of channels. To address this challenge, we developed a new method that randomly aggregated channels into a smaller number of effective channels on which the calculation of partial coherence was based. We obtained a 'consensus' partial coherence (cPCOH) by repeating this approach for several random aggregations of channels (permutations) and only accepting those activations in time and frequency with a high enough consensus. Using model data we show that the cPCOH method effectively filters out the effect of shared inputs and performs substantially better than the pseudo-inverse. We successfully applied the cPCOH procedure to human stereotactic EEG data and demonstrated three key advantages of this method relative to alternative procedures. First, it reduces the number of false positives relative to the pseudo-inverse method. Second, it allows for titration of the amount of false positives relative to the false negatives by adjusting the consensus threshold, thus allowing the data-analyst to prioritize one over the other to meet specific analysis demands. Third, it substantially reduced the number of identified interactions compared to coherence, providing a sparser network of connections from which clear spatial patterns emerged. These patterns can serve as a starting point of further analyses that provide insight into network dynamics during cognitive processes. These advantages likely generalize to other modalities in which shared inputs introduce confounds, such as electroencephalography (EEG) and magneto-encephalography (MEG). Copyright © 2018. Published by Elsevier Inc.
Jain, Mamta; Kumar, Anil; Choudhary, Rishabh Charan
2017-06-01
In this article, we propose an improved diagonal-queue medical image steganography for the transmission of patients' secret medical data, using a chaotic standard map, a linear feedback shift register, and the Rabin cryptosystem, improving on a previous technique (Jain and Lenka, Brain Informatics 3:39-51, 2016). The proposed algorithm comprises four stages: generation of pseudo-random sequences (by the linear feedback shift register and the standard chaotic map), permutation and XORing using the pseudo-random sequences, encryption using the Rabin cryptosystem, and steganography using the improved diagonal queues. Security analysis has been carried out. Performance is evaluated using MSE, PSNR, and maximum embedding capacity, as well as by histogram analysis of various brain-disease stego and cover images.
Pseudo-random tool paths for CNC sub-aperture polishing and other applications.
Dunn, Christina R; Walker, David D
2008-11-10
In this paper we first contrast classical and CNC polishing techniques in regard to the repetitiveness of the machine motions. We then present a pseudo-random tool path for use with CNC sub-aperture polishing techniques and report polishing results from equivalent random and raster tool-paths. The random tool-path used - the unicursal random tool-path - employs a random seed to generate a pattern which never crosses itself. Because of this property, this tool-path is directly compatible with dwell time maps for corrective polishing. The tool-path can be used to polish any continuous area of any boundary shape, including surfaces with interior perforations.
The Random Telegraph Signal Behavior of Intermittently Stuck Bits in SDRAMs
NASA Astrophysics Data System (ADS)
Chugg, Andrew Michael; Burnell, Andrew J.; Duncan, Peter H.; Parker, Sarah; Ward, Jonathan J.
2009-12-01
This paper reports behavior analogous to the Random Telegraph Signal (RTS) seen in the leakage currents from radiation-induced hot pixels in Charge Coupled Devices (CCDs), but in the context of stuck bits in Synchronous Dynamic Random Access Memories (SDRAMs). Our analysis suggests that pseudo-random sticking and unsticking of the SDRAM bits is due to thermally induced fluctuations in leakage current through displacement damage complexes in depletion regions that were created by high-energy neutron and proton interactions. It is shown that the number of observed stuck bits increases exponentially with temperature, due to the general increase in the leakage currents through the damage centers with temperature. Nevertheless, some stuck bits are seen to pseudo-randomly stick and unstick in the context of a continuously rising trend of temperature, thus demonstrating that their damage centers can exist in multiple widely spaced, discrete levels of leakage current, which is highly consistent with RTS. This implies that these intermittently stuck bits (ISBs) are a displacement damage phenomenon and are unrelated to microdose issues, which is confirmed by the observation that they also occur in unbiased irradiation. Finally, we note that the observed variations in the periodicity of the sticking and unsticking behavior on several timescales are most readily explained by multiple leakage current pathways through displacement damage complexes spontaneously and independently opening and closing under the influence of thermal vibrations.
Yasui, S; Young, L R
1984-01-01
Smooth pursuit and saccadic components of foveal visual tracking as well as more involuntary ocular movements of optokinetic (o.k.n.) and vestibular nystagmus slow phase components were investigated in man, with particular attention given to their possible input-adaptive or predictive behaviour. Each component in question was isolated from the eye movement records through a computer-aided procedure. The frequency response method was used with sinusoidal (predictable) and pseudo-random (unpredictable) stimuli. When the target motion was pseudo-random, the frequency response of pursuit eye movements revealed a large phase lead (up to about 90 degrees) at low stimulus frequencies. It is possible to interpret this result as a predictive effect, even though the stimulation was pseudo-random and thus 'unpredictable'. The pseudo-random-input frequency response intrinsic to the saccadic system was estimated in an indirect way from the pursuit and composite (pursuit + saccade) frequency response data. The result was fitted well by a servo-mechanism model, which has a simple anticipatory mechanism to compensate for the inherent neuromuscular saccadic delay by utilizing the retinal slip velocity signal. The o.k.n. slow phase also exhibited a predictive effect with sinusoidal inputs; however, pseudo-random stimuli did not produce such phase lead as found in the pursuit case. The vestibular nystagmus slow phase showed no noticeable sign of prediction in the frequency range examined (0 to approximately 0.7 Hz), in contrast to the results of the visually driven eye movements (i.e. saccade, pursuit and o.k.n. slow phase) at comparable stimulus frequencies. PMID:6707954
Encryption method based on pseudo random spatial light modulation for single-fibre data transmission
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Zyczkowski, Marek
2017-11-01
Optical cryptosystems can provide encryption and sometimes compression simultaneously. They are increasingly attractive for securing information, especially for image encryption. Our studies have shown that optical cryptosystems can be used to encrypt optical data transmission. We propose and study a new method for securing fibre data communication. The paper presents a method for optical encryption of data transmitted over a single optical fibre. The encryption process relies on pseudo-random spatial light modulation, the combination of two encryption keys, and the compressed-sensing framework. A linear combination of light pulses with pseudo-random patterns provides the required encryption performance. We propose an architecture to transmit the encrypted data through the optical fibre. The paper describes the method and presents the theoretical analysis, the design of a physical model, and experimental results.
Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration
NASA Technical Reports Server (NTRS)
Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2007-01-01
We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive for using a random sequence is to solve real world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
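The recipe itself, consecutive digit blocks of an irrational constant used as uniform variates in a Monte Carlo integral, is compact. A sketch assuming 10-digit blocks and the mpmath package for the digits:

    # Use blocks of pi's decimal digits as uniforms to estimate
    # the integral of 4/(1+x^2) on [0,1], which equals pi itself.
    from mpmath import mp

    mp.dps = 100_020
    digits = str(mp.pi)[2:]                   # fractional digits of pi

    block = 10
    u = [int(digits[i:i+block]) / 10**block
         for i in range(0, 100_000, block)]   # 10,000 pseudo-uniform samples

    est = sum(4.0 / (1.0 + x*x) for x in u) / len(u)
    print(est)                                # close to 3.1416 (MC error ~0.006)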
Laser positioning of four-quadrant detector based on pseudo-random sequence
NASA Astrophysics Data System (ADS)
Tang, Yanqin; Cao, Ercong; Hu, Xiaobo; Gu, Guohua; Qian, Weixian
2016-10-01
The technology of laser positioning based on a four-quadrant detector has a wide scope of study and application. The principle is to capture the projection of the laser spot on the photosensitive surface of the detector and compute the spot coordinates from the detector output signals; the spatial coordinates of the laser spot with respect to the detector system, which reflect the position of the target object, are then calculated. Given the wide application of FPGA technology, and since a pseudo-random sequence has white-noise-like correlation properties, interference and noise in the measurement process have little effect on the correlation peak. To improve the anti-jamming capability of a guided missile during tracking, the emitted laser pulse period is pseudo-randomly encoded within the range 40 ms-65 ms, so that a jammer cannot identify the true laser pulses. Because the receiver knows how to decode the pseudo-random code, the pulse period can be recovered once two consecutive laser pulses are received. In the FPGA hardware implementation, around each expected laser pulse arrival time the receiver opens a gate to capture the position information contained in the true signal. Since the first two consecutive pulses received may have been disturbed, after receiving the first laser pulse the receiver accepts all laser pulses in the following 40 ms-65 ms to acquire the corresponding pseudo-random code.
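The quadrant read-out itself is conventionally done with the sum-difference formula below (a common textbook estimate; the authors' exact calibration may differ):

    # A, B, C, D are quadrant photocurrents, numbered counter-clockwise
    # from the upper-right quadrant.
    def spot_position(A, B, C, D):
        s = A + B + C + D
        x = ((A + D) - (B + C)) / s    # right half minus left half
        y = ((A + B) - (C + D)) / s    # upper half minus lower half
        return x, y

    print(spot_position(0.30, 0.20, 0.20, 0.30))  # spot shifted toward +x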
Lensless digital holography with diffuse illumination through a pseudo-random phase mask.
Bernet, Stefan; Harm, Walter; Jesacher, Alexander; Ritsch-Marte, Monika
2011-12-05
Microscopic imaging with a setup consisting of a pseudo-random phase mask and an open CMOS camera, without an imaging objective, is demonstrated. The pseudo-random phase mask acts as a diffuser for an incoming laser beam, scattering a speckle pattern onto a CMOS chip, which is recorded once as a reference. A sample which is afterwards inserted somewhere in the optical beam path changes the speckle pattern. A single (non-iterative) image processing step, comparing the modified speckle pattern with the previously recorded one, generates a sharp image of the sample. After a first calibration the method works in real time and allows quantitative imaging of complex (amplitude and phase) samples in an extended three-dimensional volume. Since no lenses are used, the method is free from lens aberrations. Compared to standard inline holography, the diffuse sample illumination improves the axial sectioning capability by increasing the effective numerical aperture in the illumination path, and it suppresses the undesired so-called twin images. For demonstration, a high-resolution spatial light modulator (SLM) is programmed to act as the pseudo-random phase mask. We show experimental results, imaging microscopic biological samples, e.g. insects, within an extended volume at a distance of 15 cm with a transverse and longitudinal resolution of about 60 μm and 400 μm, respectively.
Testability Design Rating System: Testability Handbook. Volume 1
1992-02-01
Smart BIT (reference: RADC-TR-85-198): "Smart" BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory...
NASA Astrophysics Data System (ADS)
Chen, Guohai; Meng, Zeng; Yang, Dixiong
2018-01-01
This paper develops an efficient method, termed PE-PIM, to compute the exact nonstationary responses of a pavement structure, modeled as a rectangular thin plate resting on a bi-parametric Pasternak elastic foundation subjected to stochastic moving loads with constant acceleration. Firstly, analytical power spectral density (PSD) functions of the random responses of the thin plate are derived by integrating the pseudo excitation method (PEM) with Duhamel's integral. Based on the PEM, a new equivalent von Mises stress (NEVMS) is proposed, whose PSD function contains all cross-PSD functions between stress components. Then, the PE-PIM, which combines the PEM with the precise integration method (PIM), is presented to efficiently obtain the stochastic responses of the plate by replacing Duhamel's integral with the PIM. Moreover, semi-analytical Monte Carlo simulation is employed to verify the computational results of the developed PE-PIM. Finally, numerical examples demonstrate the high accuracy and efficiency of the PE-PIM for nonstationary random vibration analysis. The effects of the velocity and acceleration of the moving load, the boundary conditions of the plate, and the foundation stiffness on the deflection and NEVMS responses are scrutinized.
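The pseudo excitation idea at the heart of PE-PIM is simple to state in the stationary single-degree-of-freedom case (the paper's plate and moving-load setting is far richer); a sketch with assumed parameters:

    # PEM: replace a random input of PSD S(w) by the deterministic
    # pseudo-excitation sqrt(S) e^{iwt}; the response PSD is |H sqrt(S)|^2.
    import numpy as np

    m, c, k = 1.0, 0.4, 100.0              # mass, damping, stiffness (assumed)
    omega = np.linspace(0.1, 40, 2000)
    S_in = np.full_like(omega, 0.05)       # white input PSD (assumed)

    H = 1.0 / (k - m*omega**2 + 1j*c*omega)
    S_out = np.abs(H * np.sqrt(S_in))**2   # no ensemble averaging needed

    print(omega[np.argmax(S_out)])         # peak near sqrt(k/m) = 10 rad/s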
Ring correlations in random networks.
Sadjadi, Mahdi; Thorpe, M F
2016-12-01
We examine the correlations between rings in random network glasses in two dimensions as a function of their separation. Initially, we use the topological separation (measured by the number of intervening rings), but this leads to pseudo-long-range correlations due to a lack of topological charge neutrality in the shells surrounding a central ring. This effect is associated with the noncircular nature of the shells. It is, therefore, necessary to use the geometrical distance between ring centers. Hence we find a generalization of the Aboav-Weaire law out to larger distances, with the correlations between rings decaying away when two rings are more than about three rings apart.
Kuz'min, A A; Meshkovskiĭ, D V; Filist, S A
2008-01-01
Problems in the engineering and algorithm development of magnetic therapy apparatuses with a pseudo-random radiation spectrum in the audio range, for the treatment of prostatitis and gynaecological disorders, are considered. A typical design based on a PIC 16F microcontroller is suggested. It includes a keyboard, LCD indicator, audio amplifier, inducer, and software units. The problem of pseudo-random signal generation within the audio range is considered. A series of rectangular pulses is generated over intervals of random length on the basis of a three-component random vector. This series provides the required spectral characteristics of the therapeutic magnetic field and their adaptation to the therapeutic conditions and individual features of the patient.
An inductor-based converter with EMI reduction for low-voltage thermoelectric energy harvesting
NASA Astrophysics Data System (ADS)
Wang, Chuang; Zhao, Kai; Li, Zunchao
2017-07-01
This paper presents a self-powered inductor-based converter which harvests thermoelectric energy and boosts an extremely low voltage to a typical voltage level for supplying body sensor nodes. Electromagnetic interference (EMI) of the converter is reduced by spreading the spectrum of the fundamental frequency and harmonics via pseudo-random modulation, obtained by combining a linear feedback shift register and a digitally controlled oscillator. In addition, energy extraction near the maximum power point (MPP) and reduced power dissipation are employed to improve the power efficiency. The presented inductor-based converter is designed and verified in a CSMC CMOS 0.18-µm 1P6M process. The results reveal that it achieves high efficiency and EMI reduction at the same time.
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although it pays a price in the loss of some estimation efficiency. However, the method offers an alternative approach when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a method to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Absolute GPS Positioning Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ramillien, G.
A new inverse approach for recovering the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10^-4 m^2, corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
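A toy version of the GA inversion conveys the idea; the satellite geometry, noise, and GA operators below are assumptions (receiver clock bias is ignored for brevity), with the population size echoing the quoted N = 1000:

    # Genetic-algorithm position fix from four ranges.
    import numpy as np

    rng = np.random.default_rng(3)
    sats = np.array([[15e6, 10e6, 20e6], [-12e6, 14e6, 21e6],
                     [8e6, -16e6, 19e6], [-9e6, -11e6, 23e6]])  # metres (made up)
    truth = np.array([1.2e6, -0.8e6, 0.3e6])
    rho = np.linalg.norm(sats - truth, axis=1) + 5.0 * rng.standard_normal(4)

    def cost(pop):  # mean squared range residual, m^2
        r = np.linalg.norm(sats[None, :, :] - pop[:, None, :], axis=2)
        return np.mean((r - rho)**2, axis=1)

    pop = rng.uniform(-3e6, 3e6, size=(1000, 3))        # N = 1000 individuals
    for gen in range(200):
        elite = pop[np.argsort(cost(pop))[:300]]        # selection
        i, j = rng.integers(0, 300, (2, 1000))
        w = rng.random((1000, 1))
        pop = w * elite[i] + (1 - w) * elite[j]         # blend crossover
        mut = rng.random(1000) < 0.35                   # mutation rate ~ Pm
        pop[mut] += rng.normal(0, 1e4, (mut.sum(), 3))

    best = pop[np.argmin(cost(pop))]
    print(np.round(best - truth, 1))                    # error per axis, m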
Noise generator for tinnitus treatment based on look-up tables
NASA Astrophysics Data System (ADS)
Uriz, Alejandro J.; Agüero, Pablo; Tulli, Juan C.; Castiñeira Moreira, Jorge; González, Esteban; Hidalgo, Roberto; Casadei, Manuel
2016-04-01
Treatment of tinnitus by means of masking sounds can significantly improve the quality of life of individuals who suffer from that condition. In view of this, it is possible to develop noise synthesizers based on random number generators in the digital signal processors (DSPs) used in almost all digital hearing aid devices. DSP architectures have limitations for implementing a pseudo-random number generator, so the noise statistics may fall short of expectations. In this paper, a technique to generate additive white Gaussian noise (AWGN) or other types of filtered noise using coefficients stored in the program memory of the DSP is proposed. An implementation of the technique is carried out on a dsPIC from Microchip®. Objective experiments and experimental measurements are performed to analyze the proposed technique.
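In essence, the look-up-table approach replays precomputed Gaussian samples in pseudo-random order; the sketch below stands in for the dsPIC firmware, with the table size and the (classic) LCG constants as assumptions:

    import numpy as np

    TABLE_SIZE = 1024
    table = np.random.default_rng(7).standard_normal(TABLE_SIZE)  # offline step

    state = 12345
    def next_sample():
        # cheap integer LCG picks the next table entry
        global state
        state = (1103515245 * state + 12345) % 2**31
        return table[state % TABLE_SIZE]

    noise = np.array([next_sample() for _ in range(4096)])
    print(noise.mean(), noise.std())   # roughly 0 and 1

Filtered (non-white) noise would then be obtained by running such samples through the desired masking filter.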
Hybrid spread spectrum radio system
Smith, Stephen F.; Dress, William B.
2010-02-02
Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.
Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity
2007-09-01
transmits a 32,767 bit pseudo-random "short" code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two "sample windows," both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the...technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1).
A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking
NASA Technical Reports Server (NTRS)
Schmickley, Ronald D.; Bailey, David H.
1987-01-01
A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating point operation rates were measured for each under a variety of system load configurations. These were compared with similar flop measurements taken on the same system before installation of the pseudo-banking. A useful memory access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V) and either memory-intensive or register-intensive, giving 4 categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier of these 4 categories, a scatter plot of efficiency gain vs. Mflops roughly illustrates the improvement in floating point processing speed due to pseudo-banking. On the Cray-2 system tested this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were made for V-register codes, which was to be expected.
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
NASA Astrophysics Data System (ADS)
Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.
2017-11-01
Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.
An Analysis of Two Layers of Encryption to Protect Network Traffic
2010-06-01
Published: 06/18/2001 CVSS Severity: 7.5 (HIGH) CVE-2001-1141 Summary: The Pseudo-Random Number Generator (PRNG) in SSLeay and OpenSSL before 0.9.6b allows...x509cert function in KAME Racoon successfully verifies certificates even when OpenSSL validation fails, which could allow remote attackers to...montgomery function in crypto/bn/bn_mont.c in OpenSSL 0.9.8e and earlier does not properly perform Montgomery multiplication, which might allow local users to
Faster Bit-Parallel Algorithms for Unordered Pseudo-tree Matching and Tree Homeomorphism
NASA Astrophysics Data System (ADS)
Kaneta, Yusaku; Arimura, Hiroki
In this paper, we consider the unordered pseudo-tree matching problem, which is a problem of, given two unordered labeled trees P and T, finding all occurrences of P in T via such many-one embeddings that preserve node labels and the parent-child relationship. This problem is closely related to the tree pattern matching problem for XPath queries with the child axis only. If m > w, we present an efficient algorithm that solves the problem in O(nm log(w)/w) time using O(hm/w + m log(w)/w) space and O(m log(w)) preprocessing on a unit-cost arithmetic RAM model with addition, where m is the number of nodes in P, n is the number of nodes in T, h is the height of T, and w is the word length. We also discuss a modification of our algorithm for the unordered tree homeomorphism problem, which corresponds to a tree pattern matching problem for XPath queries with the descendant axis only.
Efficient and robust quantum random number generation by photon number detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, M. J.; Thomas, O. (Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE)
2015-08-17
We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate that we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99% corresponding to 1.97 bits per detected photon number, yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
NASA Astrophysics Data System (ADS)
Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.
2016-02-01
The double random phase encryption (DRPE) method is a well-known all-optical architecture which has many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The compression and encryption technique consists in using an innovative randomized arithmetic coder (RAC) that can compress the DRPE output planes well and at the same time enhance the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique has the capability to process video content and to be standard-compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system addresses the drawbacks of the DRPE method: the cryptographic properties of DRPE have been enhanced, while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
NASA Astrophysics Data System (ADS)
Bonzom, Valentin
2016-07-01
We review an approach, called random tensor models, which aims at studying discrete (pseudo-)manifolds in dimension d ≥ 2. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles, which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum field theory.
NASA Technical Reports Server (NTRS)
1972-01-01
System studies, equipment simulation, hardware development and flight tests which were conducted during the development of aircraft collision hazard warning system are discussed. The system uses a cooperative, continuous wave Doppler radar principle with pseudo-random frequency modulation. The report presents a description of the system operation and deals at length with the use of pseudo-random coding techniques. In addition, the use of mathematical modeling and computer simulation to determine the alarm statistics and system saturation characteristics in terminal area traffic of variable density is discussed.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Stratt, Richard M.
2018-05-01
Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate the tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses and the random Poisson-distributed pulse timing of radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital value to a digital-to-analog converter (DAC), and the pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
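The signal the patent describes, Poisson-timed tailpulses with exponential decay and pileup, can be sketched directly; the sample rate, event rate, decay constant, and amplitude range below are assumed values:

    import numpy as np

    rng = np.random.default_rng(11)
    fs, rate, tau = 1e6, 2e4, 50e-6     # Hz, events/s, decay constant (assumed)
    n = 20000                           # 20 ms of samples
    decay = np.exp(-1.0 / (fs * tau))

    out = np.zeros(n)
    level = 0.0
    t = rng.exponential(1 / rate)       # first Poisson arrival
    next_idx = int(t * fs)
    for i in range(n):
        level *= decay                  # exponential tail
        while next_idx <= i:            # add any pulses arriving now
            level += rng.uniform(0.5, 1.5)      # pseudo-random amplitude
            t += rng.exponential(1 / rate)
            next_idx = int(t * fs)
        out[i] = level
    print(out.max())                    # pileup pushes peaks past one amplitude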
Simultaneous DMSP, all-sky camera, and IMAGE FUV observations of the brightening arc at a substorm pseudo-breakup
Yago, K.; Shiokawa, K.; Yumoto, K.
2007-03-01
...particles, field-aligned currents, and plasma convection associated with...
Koschate, J; Drescher, U; Thieschäfer, L; Heine, O; Baum, K; Hoffmann, U
2016-12-01
This study aims to compare cardiorespiratory kinetics as a response to a standardised work rate protocol with pseudo-random binary sequences between cycling and walking in young healthy subjects. Muscular and pulmonary oxygen uptake (V̇O2) kinetics as well as heart rate kinetics were expected to be similar for walking and cycling. Cardiac data and V̇O2 of 23 healthy young subjects were measured in response to pseudo-random binary sequences. Kinetics were assessed applying time series analysis. Higher maxima of cross-correlation functions between work rate and the respective parameter indicate faster kinetics responses. Muscular V̇O2 kinetics were estimated from heart rate and pulmonary V̇O2 using a circulatory model. Muscular (walking vs. cycling [mean±SD in arbitrary units]: 0.40±0.08 vs. 0.41±0.08) and pulmonary V̇O2 kinetics (0.35±0.06 vs. 0.35±0.06) were not different, although the time courses of the cross-correlation functions of pulmonary V̇O2 showed unexpected biphasic responses. Heart rate kinetics (0.50±0.14 vs. 0.40±0.14; P=0.017) were faster for walking. Regarding the biphasic cross-correlation functions of pulmonary V̇O2 during walking, the assessment of muscular V̇O2 kinetics via pseudo-random binary sequences requires a circulatory model to account for cardio-dynamic distortions. Faster heart rate kinetics for walking should be considered when comparing results from cycle and treadmill ergometry. © Georg Thieme Verlag KG Stuttgart · New York.
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad
The implementation of Monte Carlo simulation on CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, the combination of a middle-square method and a chaotic map along with the Xorshift PRNG has been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commonly available PPRNGs, such as those of MATLAB and FORTRAN and the Park-Miller algorithm, through specific standard tests. The results of this comparison show that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
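Of the quoted ingredients, the xorshift generator is compact enough to show in full; below is Marsaglia's plain xorshift32 on the CPU, a stand-in for (not a copy of) the paper's combined middle-square/chaotic-map CUDA kernels:

    # xorshift32 with Marsaglia's shift triplet (13, 17, 5)
    def xorshift32(state):
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        return state & 0xFFFFFFFF

    s = 2463534242                      # non-zero seed
    for _ in range(5):
        s = xorshift32(s)
        print(s / 2**32)                # uniform-ish floats in [0, 1)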
Measurement time and statistics for a noise thermometer with a synthetic-noise reference
NASA Astrophysics Data System (ADS)
White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.
2008-08-01
This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.
Robust PRNG based on homogeneously distributed chaotic dynamics
NASA Astrophysics Data System (ADS)
Garasym, Oleg; Lozi, René; Taralova, Ina
2016-02-01
This paper is devoted to the design of a new chaotic Pseudo Random Number Generator (CPRNG). Exploring several topologies of networks of 1-D coupled chaotic maps, we focus first on two-dimensional networks. Two topologically coupled maps are studied: TTL_RC non-alternate and TTL_SC alternate. The primary idea of the novel maps is based on an original coupling of the tent and logistic maps to achieve excellent random properties and a homogeneous (uniform) density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography. To this aim, two new nonlinear CPRNGs, MTTL^2_sc and NTTL^2, are proposed. The maps successfully passed numerous statistical, graphical and numerical tests, thanks to the proposed ring-coupling and injection mechanisms.
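The flavour of tent-logistic coupling can be conveyed with a two-map ring; the update rule and coupling strength below are illustrative assumptions, not the paper's exact MTTL/NTTL equations:

    # Ring-coupled tent and logistic maps; the cross-injection also prevents
    # the floating-point collapse to 0 that a bare tent map suffers.
    import numpy as np

    def tent(x):     return 2*x if x < 0.5 else 2*(1 - x)
    def logistic(x): return 4*x*(1 - x)

    eps = 0.05                          # coupling strength (assumed)
    x, y = 0.3, 0.7
    samples = []
    for _ in range(10000):
        x, y = ((1 - eps)*tent(x) + eps*logistic(y),
                (1 - eps)*logistic(y) + eps*tent(x))
        samples.append(x)

    print(np.histogram(samples, bins=10, range=(0, 1))[0])  # density check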
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alba, Paolo; Alberico, Wanda; Bellwied, Rene
We calculate ratios of higher-order susceptibilities quantifying fluctuations in the number of net-protons and in the net-electric charge using the Hadron Resonance Gas (HRG) model. We take into account the effect of resonance decays, the kinematic acceptance cuts in rapidity, pseudo-rapidity and transverse momentum used in the experimental analysis, as well as a randomization of the isospin of nucleons in the hadronic phase. By comparing these results to the latest experimental data from the STAR Collaboration, we determine the freeze-out conditions from net-electric charge and net-proton distributions and discuss their consistency.
Possibilities and testing of CPRNG in block cipher mode of operation PM-DC-LM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacek, Petr; Jasek, Roman; Malanik, David
2016-06-08
This paper discusses the chaotic pseudo-random number generator (CPRNG) used in the block cipher mode of operation called PM-DC-LM. PM-DC-LM is one possible subversion of the general PM mode. The design of PM-DC-LM itself is not discussed here, only the CPRNG as a part of it, since the design is described in other papers. Possibilities for changing or improving the CPRNG are mentioned. The final part is devoted to testing of the CPRNG, and some test data are shown.
1993-03-01
representation is needed to characterize such signatures. The pseudo Wigner-Ville distribution is ideally suited for portraying non-stationary signals in the...features jointly in time and frequency. Subject terms: pseudo Wigner-Ville distribution, analytic signal, Hilbert transform.
NASA Astrophysics Data System (ADS)
Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua
2018-02-01
In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve the encryption efficiency, the volume of data of the original image is decreased by compressed sensing. Then the compacted image is encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences, generated by a three-dimensional chaos map, are used as the measurement matrix of the compressed sensing and the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies the keys for storage and transmission, but also enhances the nonlinearity of our cryptosystem to resist some common attacks. Further, holograms, obtained by two-step-only quadrature phase-shifting interference, make our cryptosystem immune to noise and occlusion attacks. Compression and encryption are achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.
A high-speed on-chip pseudo-random binary sequence generator for multi-tone phase calibration
NASA Astrophysics Data System (ADS)
Gommé, Liesbeth; Vandersteen, Gerd; Rolain, Yves
2011-07-01
An on-chip reference generator is conceived by adopting the technique of decimating a pseudo-random binary sequence (PRBS) signal into parallel sequences. This is of great benefit when high-speed generation of PRBS and PRBS-derived signals is the objective. The design is implemented in standard CMOS logic available in commercial libraries to provide the logic functions of the generator. The design allows the user to select the periodicity of the PRBS and the PRBS-derived signals. The characterization of the on-chip generator demonstrates its performance and reveals promising specifications.
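The decimation the generator relies on is a polyphase split of the serial PRBS; a sketch using the common telecom PRBS7 polynomial (an assumption, since the chip's polynomial is selectable):

    # PRBS7 (x^7 + x^6 + 1) from a Fibonacci LFSR, split into 4 lanes
    def prbs7(nbits):
        state = 0x7F
        out = []
        for _ in range(nbits):
            bit = ((state >> 6) ^ (state >> 5)) & 1   # taps 7 and 6
            out.append(state & 1)
            state = ((state << 1) | bit) & 0x7F
        return out

    seq = prbs7(127)                        # one full period
    lanes = [seq[k::4] for k in range(4)]   # decimation by 4
    print(len(seq), [len(l) for l in lanes])

Each lane can then be generated at one quarter of the serial rate, which is what makes high-speed on-chip generation practical.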
Method of multiplexed analysis using ion mobility spectrometer
Belov, Mikhail E [Richland, WA; Smith, Richard D [Richland, WA
2009-06-02
A method for analyzing analytes from a sample introduced into a Spectrometer by generating a pseudo-random sequence of modulation bins, organizing each modulation bin as a series of submodulation bins, thereby forming an extended pseudo-random sequence of submodulation bins, releasing the analytes in a series of analyte packets into a Spectrometer, thereby generating an unknown original ion signal vector, detecting the analytes at a detector, and characterizing the sample using a plurality of analyte signal subvectors. The method is advantageously applied to an Ion Mobility Spectrometer, and an Ion Mobility Spectrometer interfaced with a Time of Flight Mass Spectrometer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.
2011-03-14
A modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays [Proc. SPIE 7077-7 (2007), Opt. Eng. 47, 073602 (2008)] has been proven to be an effective MTF calibration method for a number of interferometric microscopes and a scatterometer [Nucl. Instr. and Meth. A616, 172 (2010)]. Here we report on a further expansion of the application range of the method. We describe the MTF calibration of a 6 inch phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to a BPR sequence. The investigations confirm the universal character of the method, which makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang
2010-08-16
Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information-theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.
NASA Astrophysics Data System (ADS)
Yamamoto, Shuu'ichirou; Shuto, Yusuke; Sugahara, Satoshi
2013-07-01
We computationally analyzed the performance and power-gating (PG) capability of a new nonvolatile delay flip-flop (NV-DFF) based on the pseudo-spin-MOSFET (PS-MOSFET) architecture using spin-transfer-torque magnetic tunnel junctions (STT-MTJs). High-performance, energy-efficient PG operation of the NV-DFF can be achieved owing to its cell structure employing PS-MOSFETs, which can electrically separate the STT-MTJs from the ordinary DFF part of the NV-DFF. This separation also makes it possible to set the break-even time (BET) of the NV-DFF by the size of the PS-MOSFETs without degrading the performance of normal DFF operation. The effect of the area occupation ratio of the NV-DFFs in a CMOS logic system on the BET was also analyzed. Although the optimized BET varied depending on the area occupation ratio, energy-efficient fine-grained PG with a sub-microsecond BET was revealed to be achievable. We also proposed microprocessors and system-on-chip (SoC) devices using nonvolatile hierarchical-memory systems wherein NV-DFF and nonvolatile static random access memory (NV-SRAM) circuits are used as fundamental building blocks. Contribution to the Topical Issue "International Semiconductor Conference Dresden-Grenoble - ISCDG 2012", Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
Foldover effect and energy output from a nonlinear pseudo-maglev harvester
NASA Astrophysics Data System (ADS)
Kecik, Krzysztof; Mitura, Andrzej; Warminski, Jerzy; Lenci, Stefano
2018-01-01
Dynamics analysis and energy harvesting of a nonlinear magnetic pseudo-levitation (pseudo-maglev) harvester under harmonic excitation are presented in this paper. For selected parameters, the system has two possible stable solutions with different corresponding energy outputs. The main goal is to analyse the influence of the resistance load on the multi-stability zones and on the energy recovery, which can help tune the system to improve its energy harvesting efficiency.
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Silcox, Richard (Technical Monitor)
2001-01-01
A location and positioning system was developed and implemented in the anechoic chamber of the Structural Acoustics Loads and Transmission (SALT) facility to accurately determine the coordinates of points in three-dimensional space. Transfer functions were measured between a shaker source at two different panel locations and the vibrational response distributed over the panel surface using a scanning laser vibrometer. The binaural simulation test matrix included test runs for several locations of the measuring microphones, various attitudes of the mannequin, two locations of the shaker excitation and three different shaker inputs including pulse, broadband random, and pseudo-random. Transfer functions, auto spectra, and coherence functions were acquired for the pseudo-random excitation. Time histories were acquired for the pulse and broadband random input to the shaker. The tests were repeated with a reflective surface installed. Binary data files were converted to universal format and archived on compact disk.
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, the probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
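A minimal binomial leap for a single channel shows the key property, that the sampled reaction count can never exceed the molecules available; the rate constant, stepsize, and initial counts are assumed:

    # Binomial tau-leap for A -> B: K ~ Binomial(A, c*tau) keeps K <= A,
    # so the A count can never go negative, unlike an uncapped Poisson leap.
    import numpy as np

    rng = np.random.default_rng(5)
    A, B = 100, 0
    c, tau = 0.3, 0.5

    while A > 0:
        K = rng.binomial(A, min(1.0, c * tau))
        A -= K
        B += K
    print(A, B)   # 0, 100: every molecule converted, none went negative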
1992-06-01
AD-A256 202. Naval Postgraduate School, Monterey, California. Thesis: An Energy Analysis of the Pseudo Wigner-Ville Distribution in Support of Machinery Monitoring and Diagnostics. Keywords: machinery monitoring, transient, pseudo Wigner-Ville distribution, machinery diagnostics.
NASA Astrophysics Data System (ADS)
Debenjak, Andrej; Boškoski, Pavle; Musizza, Bojan; Petrovčič, Janko; Juričić, Đani
2014-05-01
This paper proposes an approach to the estimation of PEM fuel cell impedance that uses a pseudo-random binary sequence as the perturbation signal and a continuous wavelet transform with a Morlet mother wavelet. With this approach, the impedance characteristic in the frequency band from 0.1 Hz to 500 Hz is identified in 60 seconds, approximately five times faster than with the conventional single-sine approach. The approach was experimentally evaluated on a single PEM fuel cell of a larger fuel cell stack; the quality of the results remains at the same level as the single-sine approach.
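A rough illustration of the measurement principle under stated assumptions (toy one-arc impedance, invented sampling rate and amplitudes, a shorter band than the paper's): excite with a pseudo-random binary current, compute Morlet continuous wavelet transforms of excitation and response, and take the averaged coefficient ratio per scale as the impedance. This is a sketch of the general PRBS/CWT idea, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve, morlet2

fs, dur = 2000.0, 10.0                      # sampling rate and record (assumed)
t = np.arange(0.0, dur, 1.0 / fs)

rng = np.random.default_rng(0)
chips = rng.integers(0, 2, len(t) // 2) * 2 - 1
i_pert = 0.1 * np.repeat(chips, 2)          # +/-0.1 A PRBS current (amplitude assumed)

# Toy cell impedance: ohmic resistance plus one R||C arc (values invented)
R0, R1, C1 = 0.01, 0.02, 1.0
f_axis = np.fft.rfftfreq(len(t), 1.0 / fs)
Z_true = R0 + R1 / (1.0 + 2j * np.pi * f_axis * R1 * C1)
v_resp = np.fft.irfft(np.fft.rfft(i_pert) * Z_true, n=len(t))

def morlet_cwt(x, freqs, fs, w0=6.0):
    """CWT rows at the requested analysis frequencies (same kernel for V and I,
    so any normalization cancels in the impedance ratio)."""
    rows = []
    for f0 in freqs:
        s = w0 * fs / (2.0 * np.pi * f0)    # Morlet scale for frequency f0
        wav = morlet2(int(min(10 * s, len(x))), s, w=w0)
        rows.append(fftconvolve(x, wav, mode="same"))
    return np.array(rows)

freqs = np.logspace(0, np.log10(500), 20)   # 1-500 Hz (short toy record)
W_i = morlet_cwt(i_pert, freqs, fs)
W_v = morlet_cwt(v_resp, freqs, fs)
Z_est = (W_v * W_i.conj()).mean(axis=1) / (np.abs(W_i) ** 2).mean(axis=1)
print(np.abs(Z_est).round(4))               # tracks |Z_true| across the band
```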
Local Risk-Minimization for Defaultable Claims with Recovery Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it
We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ ∧ T], where T denotes the fixed time horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies) and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.
NASA Astrophysics Data System (ADS)
Zarycz, M. Natalia C.; Provasi, Patricio F.; Sauer, Stephan P. A.
2015-12-01
It is investigated whether the number of excited (pseudo)states can be truncated in the sum-over-states expression for indirect spin-spin coupling constants (SSCCs), which is used in the Contributions from Localized Orbitals within the Polarization Propagator Approach and Inner Projections of the Polarization Propagator (IPPP-CLOPPA) approach to analyzing SSCCs in terms of localized orbitals. As a test set we have studied nine simple compounds: CH4, NH3, H2O, SiH4, PH3, SH2, C2H2, C2H4, and C2H6. The excited (pseudo)states were obtained from time-dependent density functional theory (TD-DFT) calculations with the B3LYP exchange-correlation functional and the specialized core-property basis set aug-cc-pVTZ-J. We investigated both how the calculated coupling constants depend on the number of (pseudo)states included in the summation and whether the summation can be truncated in a systematic way at a smaller number of states and extrapolated to the total number of (pseudo)states for the given one-electron basis set. We find that this is possible and that for some of the couplings it is sufficient to include only about 30% of the excited (pseudo)states.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ are reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
Freeze-out conditions from net-proton and net-charge fluctuations at RHIC
Alba, Paolo; Alberico, Wanda; Bellwied, Rene; ...
2014-09-26
We calculate ratios of higher-order susceptibilities quantifying fluctuations in the number of net-protons and in the net-electric charge using the Hadron Resonance Gas (HRG) model. We take into account the effect of resonance decays, the kinematic acceptance cuts in rapidity, pseudo-rapidity and transverse momentum used in the experimental analysis, as well as a randomization of the isospin of nucleons in the hadronic phase. By comparing these results to the latest experimental data from the STAR Collaboration, we determine the freeze-out conditions from net-electric charge and net-proton distributions and discuss their consistency.
Solar Wind-Magnetosphere Coupling Influences on Pseudo-Breakup Activity
NASA Technical Reports Server (NTRS)
Fillingim, M. O.; Brittnacher, M.; Parks, G. K.; Germany, G. A.; Spann, J. F.
1998-01-01
Pseudo-breakups are brief, localized auroral arc brightenings which do not lead to a global expansion and are historically observed during the growth phase of substorms. Previous studies have demonstrated that phenomenologically there is very little difference between substorm onsets and pseudo-breakups except for the degree of localization and the absence of a global expansion phase. A key open question is what physical mechanism prevents a pseudo-breakup from expanding globally. Using Polar Ultraviolet Imager (UVI) images, we identify periods of pseudo-breakup activity. For the data analyzed, we find that most pseudo-breakups occur near local midnight, between magnetic local times of 21 and 03, at magnetic latitudes near 70 degrees, though this value may change by several degrees. While often discussed in the context of substorm growth phase events, pseudo-breakups are also shown to occur during prolonged, relatively inactive periods. These quiet time pseudo-breakups can occur over a period of several hours without the development of a significant substorm for at least an hour after pseudo-breakup activity stops. In an attempt to understand the cause of quiet time pseudo-breakups, we compute the epsilon parameter as a measure of the efficiency of solar wind-magnetosphere coupling. We note that quiet time pseudo-breakups typically occur when epsilon is low, less than about 50 GW. We suggest that quiet time pseudo-breakups are driven by relatively small amounts of energy transferred to the magnetosphere by the solar wind, insufficient to initiate a substorm expansion onset.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
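The algorithm itself is not spelled out in the abstract; the following is a standard exact construction consistent with its description: Box-Muller converts uniform variates into independent standard normals, and a linear transform imposes the chosen means, standard deviations, and correlation.

```python
import numpy as np

def bivariate_normal_pairs(n, mu1, mu2, s1, s2, rho, seed=0):
    """Exact generation of n correlated normal pairs from uniform variates."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n), rng.random(n)
    r = np.sqrt(-2.0 * np.log(1.0 - u1))       # Box-Muller radius (1-u1 avoids log 0)
    z1 = r * np.cos(2.0 * np.pi * u2)          # two independent standard normals
    z2 = r * np.sin(2.0 * np.pi * u2)
    x = mu1 + s1 * z1                          # impose mean and spread
    y = mu2 + s2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)  # exact correlation rho
    return x, y

x, y = bivariate_normal_pairs(100_000, 1.0, -2.0, 2.0, 0.5, rho=0.7)
print(np.corrcoef(x, y)[0, 1])                 # ~0.7 up to sampling error
```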
Study on a novel laser target detection system based on software radio technique
NASA Astrophysics Data System (ADS)
Song, Song; Deng, Jia-hao; Wang, Xue-tian; Gao, Zhen; Sun, Ji; Sun, Zhi-hui
2008-12-01
This paper presents the application of the software radio technique to a laser target detection system with pseudo-random code modulation. Based on the theory of software radio, the basic framework of the system, the hardware platform, and the implementation of the software system are detailed. The block diagram of the system, the DSP circuit, the block diagram of the pseudo-random code generator, and the software flow diagram of the signal processing are also designed. Experimental results show that the application of the software radio technique provides a novel way to realize the modularization, miniaturization, and intelligence of the laser target detection system, and makes upgrading and improving the system simpler, more convenient, and cheaper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, V.V.; Conley, R.; Anderson, E.H.
Verification of the reliability of metrology data from high-quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification of optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer. Here we describe the details of the development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V; Anderson, Erik H.; Barber, Samuel K.
2010-07-26
A modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] has been proven to be an effective MTF calibration method for a number of interferometric microscopes and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we report on a significant expansion of the application range of the method. We describe the MTF calibration of a 6 inch phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to a BPR sequence. The investigations confirm the universal character of the method, which makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
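For readers unfamiliar with why binary pseudo-random patterns suit MTF calibration: a maximum-length sequence (one common BPR construction) has a flat power spectrum, so the instrument's spectral response can be read directly from the measured PSD of its image. A small sketch:

```python
import numpy as np
from scipy.signal import max_len_seq

seq = max_len_seq(10)[0].astype(float) * 2 - 1   # 1023-element m-sequence, values +/-1

# The periodic autocorrelation of an m-sequence is (nearly) a delta, so its
# power spectrum is flat: every spatial frequency is excited equally, and the
# measured PSD of an image of such a grating reveals the instrument's MTF.
psd = np.abs(np.fft.rfft(seq)) ** 2 / len(seq)
print(psd[1:].std() / psd[1:].mean())            # ~0: white away from the DC bin
```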
Rouppe van der Voort, J N; van Eck, H J; van Zandvoort, P M; Overmars, H; Helder, J; Bakker, J
1999-07-01
A mapping strategy is described for the construction of a linkage map of a non-inbred species in which individual offspring genotypes are not amenable to marker analysis. After one extra generation of random mating, the segregating progeny was propagated, and bulked populations of offspring were analyzed. Although the resulting population structure is different from that of commonly used mapping populations, we show that the maximum likelihood formula for a normal F2 is applicable for the estimation of recombination. This "pseudo-F2" mapping strategy, in combination with the development of an AFLP assay for single cysts, facilitated the construction of a linkage map for the potato cyst nematode Globodera rostochiensis. Using 12 pre-selected AFLP primer combinations, a total of 66 segregating markers were identified, 62 of which were mapped to nine linkage groups. These 62 AFLP markers are randomly distributed and cover about 65% of the genome. An estimate of the physical size of the Globodera genome was obtained from comparisons of the number of AFLP fragments obtained with the values for Caenorhabditis elegans. The methodology presented here resulted in the first genomic map for a cyst nematode. The low value of the kilobase/centimorgan (kb/cM) ratio for the Globodera genome will facilitate map-based cloning of genes that mediate the interaction between the nematode and its host plant.
Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm
NASA Astrophysics Data System (ADS)
Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui
2017-05-01
The economic cost and the filter efficiency are taken as the targets for optimizing the parameters of the passive filter. A method combining the pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted in this paper. In the early stages the pseudo-parallel genetic algorithm is introduced to increase population diversity, and in the late stages the adaptive genetic algorithm is used to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved so that it changes adaptively with population diversity. Simulation results show that a filter designed by the proposed method has a better filtering effect at lower economic cost, and can be used in engineering.
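A toy sketch of the pseudo-parallel (island) GA with diversity-driven migration; the objective, island sizes, and the exact migration rule are all invented for illustration (the paper's actual objective scores filter cost and harmonic attenuation):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(pop):
    # Stand-in objective (sphere function); the real one would score filter
    # cost and harmonic attenuation for each candidate parameter vector.
    return -np.sum(pop**2, axis=1)

def evolve(pop, mut=0.1):
    """One generation: tournament selection, uniform crossover, Gaussian mutation."""
    n, d = pop.shape
    f = fitness(pop)
    i, j = rng.integers(0, n, (2, n))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])  # tournaments
    mates = parents[rng.permutation(n)]
    mask = rng.random((n, d)) < 0.5                             # uniform crossover
    return np.where(mask, parents, mates) + rng.normal(0, mut, (n, d))

# Four islands evolved in one process ("pseudo-parallel"); migration volume
# adapts to diversity: the more the islands collapse, the more migrants move.
islands = [rng.normal(0, 2, (30, 4)) for _ in range(4)]
for gen in range(100):
    islands = [evolve(p) for p in islands]
    diversity = np.mean([p.std() for p in islands])
    migrants = max(1, int(round(3 / (1 + diversity))))
    for k, p in enumerate(islands):
        src = islands[(k + 1) % len(islands)]                   # ring topology
        best = src[np.argsort(fitness(src))[-migrants:]]
        p[np.argsort(fitness(p))[:migrants]] = best             # replace the worst

print(max(fitness(p).max() for p in islands))                   # near 0 at the optimum
```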
Huang, Yu-An; You, Zhu-Hong; Chen, Xing
2018-01-01
Drug-Target Interactions (DTI) play a crucial role in discovering new drug candidates and finding new proteins to target for drug development. Although the number of detected DTI obtained by high-throughput techniques has been increasing, the number of known DTI is still limited. On the other hand, the experimental methods for detecting the interactions among drugs and proteins are costly and inefficient. Therefore, computational approaches for predicting DTI have drawn increasing attention in recent years. In this paper, we report a novel computational model for predicting DTI using an extremely randomized trees model and protein amino acid information. More specifically, the protein sequence is represented as a Pseudo Substitution Matrix Representation (Pseudo-SMR) descriptor in which the influence of biological evolutionary information is retained. For the representation of drug molecules, a novel fingerprint feature vector is utilized to describe the substructure information. Then the DTI pair is characterized by concatenating the two vector spaces of protein sequence and drug substructure. Finally, the proposed method is explored for predicting DTI on four benchmark datasets: Enzyme, Ion Channel, GPCRs and Nuclear Receptor. The experimental results demonstrate that this method achieves promising prediction accuracies of 89.85%, 87.87%, 82.99% and 81.67%, respectively. For further evaluation, we compared the performance of the Extremely Randomized Trees model with that of the state-of-the-art Support Vector Machine classifier, compared the proposed model with existing computational models, and confirmed 15 potential drug-target interactions by searching existing databases. The experimental results show that the proposed method is feasible and promising for predicting drug-target interactions for new drug candidate screening based on sizeable features. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
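A pipeline-shape sketch using scikit-learn's ExtraTreesClassifier; the Pseudo-SMR descriptor and the fingerprint are stand-in random arrays with invented dimensions, since the abstract does not fix them:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs = 600

# Stand-ins for the real descriptors: a Pseudo-SMR protein vector and a binary
# drug substructure fingerprint (dimensions invented; the paper's differ).
protein_pseudo_smr = rng.normal(size=(n_pairs, 400))
drug_fingerprint = rng.integers(0, 2, (n_pairs, 256))
X = np.hstack([protein_pseudo_smr, drug_fingerprint])  # concatenated pair features
y = rng.integers(0, 2, n_pairs)                        # 1 = interacting pair (random here)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())         # ~0.5 on these random labels
```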
Yang, Hao-Chung; Cannata, Jonathan; Williams, Jay; Shung, K. Kirk
2013-01-01
The goal of this research was to develop a novel diced 1–3 piezocomposite geometry to reduce pulse–echo ring down and acoustic crosstalk between high-frequency ultrasonic array elements. Two PZT-5H-based 1–3 composites (10 and 15 MHz) of different pillar geometries [square (SQ), 45° triangle (TR), and pseudo-random (PR)] were fabricated and then made into single-element ultrasound transducers. The measured pulse–echo waveforms and their envelopes indicate that the PR composites had the shortest −20-dB pulse length and highest sensitivity among the composites evaluated. Using these composites, 15-MHz array subapertures with a 0.95λ pitch were fabricated to assess the acoustic crosstalk between array elements. The combined electrical and acoustical crosstalk between the nearest array elements of the PR array sub-apertures (−31.8 dB at 15 MHz) was 6.5 and 2.2 dB lower than those of the SQ and the TR array subapertures, respectively. These results demonstrate that the 1–3 piezocomposite with the pseudo-random pillars may be a better choice for fabricating enhanced high-frequency linear-array ultrasound transducers; especially when mechanical dicing is used. PMID:23143580
NASA Astrophysics Data System (ADS)
Chen, Hsin-Han; Hsieh, Chih-Cheng
2013-09-01
This paper presents a readout integrated circuit (ROIC) with an inverter-based capacitive trans-impedance amplifier (CTIA) and a pseudo-multiple sampling technique for an infrared focal plane array (IRFPA). The proposed inverter-based CTIA with a coupling capacitor [1], which executes an auto-zeroing technique to cancel out the offset voltage variation caused by process variation, is used to replace the differential amplifier in a conventional CTIA. The tunable detector bias is applied from a global external bias before exposure. This scheme not only retains a stable detector bias voltage and signal injection efficiency, but also reduces the pixel area. The pseudo-multiple sampling technique [2] is adopted to reduce the temporal noise of the readout circuit. Its noise reduction performance is comparable to the conventional multiple sampling operation, without the longer readout time proportional to the number of samples. A CMOS image sensor chip with a 55×65 pixel array has been fabricated in 0.18 um CMOS technology. It achieves a 12 um × 12 um pixel size, a frame rate of 72 fps, a power per pixel of 0.66 uW/pixel, and a readout temporal noise of 1.06 mVrms (with 16 times pseudo-multiple sampling).
Degeneracy of energy levels of pseudo-Gaussian oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iacob, Theodor-Felix; Iacob, Felix, E-mail: felix@physics.uvt.ro; Lute, Marina
2015-12-07
We study the main spectral properties of the isotropic radial pseudo-Gaussian oscillators, focusing on the degeneracy of the energy levels with respect to the orbital angular momentum quantum number. In a previous work [6] we have shown that the pseudo-Gaussian oscillators belong to the class of quasi-exactly solvable models and an exact solution has been found.
Complex Fuzzy Set-Valued Complex Fuzzy Measures and Their Properties
Ma, Shengquan; Li, Shenggang
2014-01-01
Let F*(K) be the set of all fuzzy complex numbers. In this paper some classical and measure-theoretical notions are extended to the case of complex fuzzy sets. They are fuzzy complex number-valued distance on F*(K), fuzzy complex number-valued measure on F*(K), and some related notions, such as null-additivity, pseudo-null-additivity, null-subtraction, pseudo-null-subtraction, autocontinuity from above, autocontinuity from below, and autocontinuity of the defined fuzzy complex number-valued measures. Properties of fuzzy complex number-valued measures are studied in detail. PMID:25093202
Estimating three-dimensional energy transfer in isotropic turbulence
NASA Technical Reports Server (NTRS)
Li, K. S.; Helland, K. N.; Rosenblatt, M.
1980-01-01
To obtain an estimate of the spectral transfer function that indicates the rate of decay of energy, an x-wire probe was set at a fixed position, and two single wire probes were set at a number of locations in the same plane perpendicular to the mean flow in the wind tunnel. The locations of the single wire probes are determined by pseudo-random numbers (Monte Carlo). Second order spectra and cross spectra are estimated. The assumption of isotropy relative to second order spectra is examined. Third order spectra are also estimated corresponding to the positions specified. A Monte Carlo Fourier transformation of the downstream bispectra corresponding to integration across the plane perpendicular to the flow is carried out assuming isotropy. Further integration is carried out over spherical energy shells.
Precise Orbit Determination Of Low Earth Satellites At AIUB Using GPS And SLR Data
NASA Astrophysics Data System (ADS)
Jaggi, A.; Bock, H.; Thaller, D.; Sosnica, K.; Meyer, U.; Baumann, C.; Dach, R.
2013-12-01
An ever increasing number of low Earth orbiting (LEO) satellites is, or will be, equipped with retro-reflectors for Satellite Laser Ranging (SLR) and on-board receivers to collect observations from Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the Russian GLONASS and the European Galileo systems in the future. At the Astronomical Institute of the University of Bern (AIUB) LEO precise orbit determination (POD) using either GPS or SLR data is performed for a wide range of applications for satellites at different altitudes. For this purpose the classical numerical integration techniques, as also used for dynamic orbit determination of satellites at high altitudes, are extended by pseudo-stochastic orbit modeling techniques to efficiently cope with potential force model deficiencies for satellites at low altitudes. Accuracies of better than 2 cm may be achieved by pseudo-stochastic orbit modeling for satellites at very low altitudes such as for the GPS-based POD of the Gravity field and steady-state Ocean Circulation Explorer (GOCE).
Specialized Computer Systems for Environment Visualization
NASA Astrophysics Data System (ADS)
Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.
2018-06-01
The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware-software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of the parallel GPU system for 3D pseudo stereo image/video synthesis are proposed. After analysing the feasibility of realizing each stage on a parallel GPU architecture, 3D pseudo stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo stereo imaging to the architecture of GPU systems is efficient. It also accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.
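The branch elimination mentioned above is usually achieved with the branch-free slab test for ray/AABB intersection; a minimal sketch of the standard formulation (not necessarily the authors' exact code):

```python
import numpy as np

def ray_aabb(origin, inv_dir, box_min, box_max):
    """Branch-free slab test: a ray hits the box iff the per-axis entry/exit
    intervals overlap in front of the ray origin. min/max replace the per-axis
    branches, which is what makes the test GPU-friendly."""
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))   # latest entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit across the three slabs
    return t_far >= max(t_near, 0.0)

origin = np.array([0.0, 0.0, -5.0])
direction = np.array([0.2, 0.1, 1.0])
direction /= np.linalg.norm(direction)
print(ray_aabb(origin, 1.0 / direction, -np.ones(3), np.ones(3)))  # True
```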
Wideband propagation measurements at 30.3 GHz through a pecan orchard in Texas
NASA Astrophysics Data System (ADS)
Papazian, Peter B.; Jones, David L.; Espeland, Richard H.
1992-09-01
Wideband propagation measurements were made in a pecan orchard in Texas during April and August of 1990 to examine the propagation characteristics of millimeter-wave signals through vegetation. Measurements were made on tree-obstructed paths with and without leaves. The study presents narrowband attenuation data at 9.6 and 28.8 GHz as well as wideband impulse response measurements at 30.3 GHz. The wideband probe (Violette et al., 1983) provides the amplitude and delay of reflected and scattered signals and the bit-error rate. This is accomplished using a 500 Mbit/s pseudo-random code to BPSK-modulate a 28.8 GHz carrier. The channel impulse response is then extracted by cross-correlating the received pseudo-random sequence with a locally generated replica.
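How a pseudo-random probe yields the channel impulse response: cross-correlating the received waveform with a local replica collapses the code to a near-impulse autocorrelation, exposing each path's delay and amplitude. A toy baseband sketch with an invented two-path channel (a plain random ±1 code stands in for the probe's m-sequence):

```python
import numpy as np

rng = np.random.default_rng(7)
code = rng.integers(0, 2, 511) * 2 - 1        # +/-1 probe code (m-sequence in practice)

# Invented two-path channel: direct ray plus an echo at 25 chips, 0.35 amplitude
h = np.zeros(40)
h[0], h[25] = 1.0, 0.35
rx = np.convolve(np.tile(code, 4), h)         # received waveform (noise-free toy)

# Sliding correlation against the local replica recovers the impulse response
corr = np.correlate(rx, code, mode="valid") / len(code)
h_est = corr[len(code):len(code) + len(h)]    # one full code period into the record
print(h_est[0].round(2), h_est[25].round(2))  # approx. 1.0 and 0.35 (plus code self-noise)
```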
Thermodynamics of the pseudo-knot in helix 18 of 16S ribosomal RNA.
Wojciechowska, Monika; Dudek, Marta; Trylska, Joanna
2018-04-01
A fragment of E. coli 16S rRNA formed by nucleotides 500 to 545 is termed helix 18. Nucleotides 505-507 and 524-526 form a pseudo-knot and its distortion affects ribosome function. Helix 18 isolated from the ribosome context is thus an interesting fragment to investigate the structural properties and folding of RNA with pseudo-knots. With all-atom molecular dynamics simulations, spectroscopic and gel electrophoresis experiments, we investigated thermodynamics of helix 18, with a focus on its pseudo-knot. In solution studies at ambient conditions we observed dimerization of helix 18. We proposed that the loop, containing nucleotides forming the pseudo-knot, interacts with another monomer of helix 18. The native dimer is difficult to break but introducing mutations in the pseudo-knot indeed assured a monomeric form of helix 18. Molecular dynamics simulations at 310 K confirmed the stability of the pseudo-knot but at elevated temperatures this pseudo-knot was the first part of helix 18 to lose the hydrogen bond pattern. To further determine helix 18 stability, we analyzed the interactions of helix 18 with short oligomers complementary to a nucleotide stretch containing the pseudo-knot. The formation of higher-order structures by helix 18 impacts hybridization efficiency of peptide nucleic acid and 2'-O methyl RNA oligomers. © 2018 Wiley Periodicals, Inc.
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems
NASA Technical Reports Server (NTRS)
Baz, A.; Poh, S.
1987-01-01
A comparative study is presented of three active control algorithms which have proven successful in controlling the vibrations of large flexible systems: the Independent Modal Space Control (IMSC), the Pseudo-Inverse (PI), and the Modified Independent Modal Space Control (MIMSC). Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time-sharing strategy. Such a strategy favors the MIMSC over the IMSC method, which requires as many actuators as controlled modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, Erik; Trolinger, James D.; Lacey, Ian
This work reports on the development of a binary pseudo-random test sample optimized to calibrate the MTF of optical microscopes. The sample consists of a number of 1-D and 2-D patterns, with different minimum sizes of spatial artifacts from 300 nm to 2 microns. We describe the mathematical background, fabrication process, data acquisition and analysis procedure to return spatial frequency based instrument calibration. We show that the developed samples satisfy the characteristics of a test standard: functionality, ease of specification and fabrication, reproducibility, and low sensitivity to manufacturing error. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Cerasoli, Francesco; Iannella, Mattia; D'Alessandro, Paola; Biondi, Maurizio
2017-01-01
Boosted Regression Trees (BRT) is one of the modelling techniques most recently applied to biodiversity conservation, and it can be implemented with presence-only data through the generation of artificial absences (pseudo-absences). In this paper, three pseudo-absence generation techniques are compared, namely generation within a target-group background (TGB), testing both the weighted (WTGB) and unweighted (UTGB) schemes, and generation at random (RDM), evaluating their performance and applicability in distribution modelling and species conservation. The choice of the target group fell on amphibians, because of their rapid decline worldwide and the frequent lack of guidelines for conservation strategies and regional-scale planning, which instead could be provided through an appropriate implementation of SDMs. Bufo bufo, Salamandrina perspicillata and Triturus carnifex were considered as target species, in order to perform our analysis with species having different ecological and distributional characteristics. The study area is the "Gran Sasso-Monti della Laga" National Park, which hosts 15 Natura 2000 sites and represents one of the most important biodiversity hotspots in Europe. Our results show that model calibration improves when using the target-group based pseudo-absences compared to the random ones, especially when applying the WTGB. By contrast, model discrimination did not vary significantly or consistently among the three approaches for the three target species. Both WTGB and RDM clearly isolate the highly contributing variables, supplying many relevant indications for species conservation actions. Moreover, the assessment of pairwise variable interactions and their three-dimensional visualization further increase the amount of useful information for protected areas' managers. Finally, we suggest the use of RDM as an admissible alternative when it is not possible to identify a suitable set of species as a representative target group from which the pseudo-absences can be generated.
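A compact sketch of the two generation schemes compared above, random (RDM) and target-group background (TGB), on an invented toy landscape; the weighted variant would additionally weight candidate cells by target-group record density:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented toy landscape: candidate cells, presences of the focal species, and
# records of the whole target group (here, all surveyed amphibian cells).
cells = rng.uniform(0, 100, (5000, 2))
focal_idx = rng.choice(5000, 80, replace=False)
target_group_idx = rng.choice(5000, 600, replace=False)

def pseudo_absences_rdm(n):
    """RDM: background points drawn anywhere in the study area."""
    return cells[rng.choice(len(cells), n, replace=False)]

def pseudo_absences_tgb(n):
    """Unweighted TGB: draw only from target-group cells lacking the focal
    species, so pseudo-absences inherit the survey effort (sampling bias)."""
    candidates = np.setdiff1d(target_group_idx, focal_idx)
    return cells[rng.choice(candidates, n, replace=False)]

print(pseudo_absences_rdm(100).shape, pseudo_absences_tgb(100).shape)
```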
Habitat classification modeling with incomplete data: Pushing the habitat envelope
Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.
2007-01-01
Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species, or species feature's (e.g., nest) observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D2adj), and cross-validation, and were also assessed for ecological relevance. For all cases, habitat-envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Yu, Yuan; Tong, Qi; Li, Zhongxia; Tian, Jinhai; Wang, Yizhi; Su, Feng; Wang, Yongsheng; Liu, Jun; Zhang, Yong
2014-02-01
PhiC31 integrase-mediated gene delivery has been extensively used in gene therapy and animal transgenesis. However, random integration events are observed in phiC31-mediated integration in different types of mammalian cells; as a result, the efficiency of pseudo attP site integration and the evaluation of site-specific integration are compromised. To improve this system, we used an attB-TK fusion gene as a negative selection marker, thereby eliminating random integration during phiC31-mediated transfection. We also excised the selection system and the plasmid bacterial backbone by using two other site-specific recombinases, Cre and Dre. Thus, we generated clean transgenic bovine fetal fibroblast cells free of selectable markers and plasmid bacterial backbone. These clean cells were used as nuclear donors for somatic cell nuclear transfer (SCNT), and the resulting SCNT embryos showed developmental competence similar to that of embryos derived from non-transgenic cells. Therefore, the present gene delivery system facilitated the development of gene therapy and agricultural biotechnology.
Credit Risk Evaluation of Power Market Players with Random Forest
NASA Astrophysics Data System (ADS)
Umezawa, Yasushi; Mori, Hiroyuki
A new method is proposed for credit risk evaluation in a power market. Credit risk evaluation measures the bankruptcy risk of a company. Power system liberalization has resulted in a new environment that puts emphasis on profit maximization and risk minimization. There is a high probability that electricity transactions cause risk between companies, so power market players are concerned with risk minimization. As a management strategy, a risk index is requested to evaluate the worth of a business partner. This paper proposes a new method for evaluating credit risk with Random Forest (RF), which performs ensemble learning over decision trees. RF is an efficient data mining technique for clustering data and extracting relationships between input and output data. In addition, a method of generating pseudo-measurements is proposed to improve the performance of RF. The proposed method is successfully applied to real financial data of energy utilities in the power market. A comparison is made between the proposed and the conventional methods.
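A schematic sketch in scikit-learn terms: the financial features, labels, and the pseudo-measurement rule (noise-jittered copies of real samples) are all assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

# Invented financial ratios (e.g., liquidity, leverage, margin); label 1 marks
# the scarce bankruptcy cases that make the training set unbalanced.
X = rng.normal(size=(200, 3))
y = (X[:, 1] - X[:, 0] + 0.5 * rng.normal(size=200) > 1.5).astype(int)

# Pseudo-measurements (assumed scheme): noise-jittered copies of real samples
# enlarge the scarce data set while keeping the original labels.
X_aug = np.vstack([X, X + rng.normal(0, 0.05, X.shape)])
y_aug = np.hstack([y, y])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y_aug)
print(clf.predict_proba(X[:3])[:, 1])   # bankruptcy-risk scores in [0, 1]
```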
Effect of increasing disorder on domains of the 2d Coulomb glass.
Bhandari, Preeti; Malik, Vikas
2017-12-06
We have studied a two-dimensional lattice model of the Coulomb glass for a wide range of disorders at [Formula: see text]. The system was first annealed using Monte Carlo simulation. Further minimization of the total energy of the system was done using an algorithm developed by Baranovskii et al., followed by cluster flipping to obtain the pseudo-ground states. We have shown that the energy required to create a domain of linear size L in d dimensions is proportional to [Formula: see text]. Using the Imry-Ma arguments given for the random field Ising model, one gets the critical dimension [Formula: see text] for the Coulomb glass. The investigation of domains in the transition region shows a discontinuity in the staggered magnetization, which is an indication of a first-order type transition from the charge-ordered phase to the disordered phase. The structure and nature of the random field fluctuations of the second largest domain in the Coulomb glass are inconsistent with the assumptions of Imry and Ma, as was also reported for the random field Ising model. The study of domains showed that in the transition region there were mostly two large domains, and that as disorder was increased the two large domains remained, but a large number of small domains also opened up. We have also studied the properties of the second largest domain as a function of disorder. We furthermore analysed the effect of disorder on the density of states, and showed a transition from a hard gap at low disorder to a soft gap at higher disorder. At [Formula: see text], we have analysed the soft gap in detail, and found that the density of states deviates slightly ([Formula: see text]) from the linear behaviour in two dimensions. Analysis of the local minima shows that the pseudo-ground states have a similar structure.
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2013-01-01
The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to a source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of the on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica.
Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
Pseudo-color coding method for high-dynamic single-polarization SAR images
NASA Astrophysics Data System (ADS)
Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi
2018-04-01
A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation and intensity) color space, the method carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously in order to avoid loss of details and to improve object identifiability. It is a highly efficient global algorithm.
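An illustrative sketch of joint tone mapping and pseudo-color coding, using HSV as a stand-in for the paper's HSI space; the log compression and the hue/saturation assignments are assumed choices, not the published mapping:

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def pseudo_color_sar(img16):
    """Tone-map a 16-bit SAR amplitude image and pseudo-color it via HSV."""
    x = np.log1p(img16.astype(np.float64))            # log compression (tone mapping)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalize to [0, 1]
    h = 0.66 * (1.0 - x)                              # hue: blue (weak) to red (strong)
    s = np.full_like(x, 0.8)                          # fixed saturation (assumed)
    return hsv_to_rgb(np.dstack([h, s, x]))           # value channel = mapped intensity

img = np.random.default_rng(0).gamma(1.0, 2000.0, (64, 64)).astype(np.uint16)
print(pseudo_color_sar(img).shape)                    # (64, 64, 3), RGB in [0, 1]
```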
Do pharmacy staff recommend evidenced-based smoking cessation products? A pseudo patron study.
Chiang, P P C; Chapman, S
2006-06-01
To determine whether pharmacy staff recommend evidence-based smoking cessation aids, a pseudo patron visited 50 randomly selected Sydney pharmacies and enquired about the 'best' way to quit smoking and about the efficacy of a non-evidence-based cessation product, NicoBloc. Nicotine replacement therapy was universally stocked and was the first product recommended by 90% of pharmacies. After prompting, 60% of pharmacies either also recommended NicoBloc or deferred to 'customer choice'; about 34% disparaged the product. Evidence-based smoking cessation advice in Sydney pharmacies is fragile and may be compromised by commercial concerns. Smokers should be provided with independent point-of-sale summaries of evidence of cessation product effectiveness and warned about unsubstantiated claims.
NASA Astrophysics Data System (ADS)
Zeng, Zhi-Ping; Zhao, Yan-Gang; Xu, Wen-Tao; Yu, Zhi-Wu; Chen, Ling-Kun; Lou, Ping
2015-04-01
The frequent use of bridges in high-speed railway lines greatly increases the probability that trains are running on bridges when earthquakes occur. This paper investigates the random vibrations of a high-speed train traversing a slab track on a continuous girder bridge subjected to track irregularities and traveling seismic waves by the pseudo-excitation method (PEM). To derive the equations of motion of the train-slab track-bridge interaction system, the multibody dynamics and finite element method models are used for the train and the track and bridge, respectively. By assuming track irregularities to be fully coherent random excitations with time lags between different wheels and seismic accelerations to be uniformly modulated, non-stationary random excitations with time lags between different foundations, the random load vectors of the equations of motion are transformed into a series of deterministic pseudo-excitations based on PEM and the wheel-rail contact relationship. A computer code is developed to obtain the time-dependent random responses of the entire system. As a case study, the random vibration characteristics of an ICE-3 high-speed train traversing a seven-span continuous girder bridge simultaneously excited by track irregularities and traveling seismic waves are analyzed. The influence of train speed and seismic wave propagation velocity on the random vibration characteristics of the bridge and train are discussed.
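The core of the pseudo-excitation method, reduced to a single-degree-of-freedom oscillator with assumed parameters (the paper applies it to the full train-track-bridge system): each stationary random load with PSD S_xx(ω) is replaced by the deterministic pseudo-excitation sqrt(S_xx)·e^{iωt}, and the response PSD is the squared modulus of the deterministic pseudo-response.

```python
import numpy as np

def response_psd_pem(omega, S_xx, m=1.0, c=0.8, k=100.0):
    """PEM for the SDOF oscillator m*x'' + c*x' + k*x = f(t).

    The stationary random load with PSD S_xx(omega) is replaced by the
    deterministic pseudo-excitation sqrt(S_xx)*exp(i*omega*t); the steady
    pseudo-response is H(omega)*sqrt(S_xx)*exp(i*omega*t), and the response
    PSD is its squared modulus, |H|^2 * S_xx.
    """
    H = 1.0 / (k - m * omega**2 + 1j * c * omega)  # receptance FRF
    return np.abs(H * np.sqrt(S_xx)) ** 2

omega = np.linspace(0.1, 30.0, 500)
S_flat = np.ones_like(omega)                 # white-noise excitation (assumed)
S_yy = response_psd_pem(omega, S_flat)
print(S_yy.max(), np.sum(S_yy) * (omega[1] - omega[0]))  # peak PSD; response variance
```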
High-speed imaging using CMOS image sensor with quasi pixel-wise exposure
NASA Astrophysics Data System (ADS)
Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.
2017-02-01
Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or by an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the amount of exposure by row for each 8x8 pixel block. This CMOS sensor is not fully controllable pixel-by-pixel, having line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
Analysis on pseudo excitation of random vibration for structure of time flight counter
NASA Astrophysics Data System (ADS)
Wu, Qiong; Li, Dapeng
2015-03-01
Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The Pseudo Excitation Method (PEM) is an effective method for the calculation of random vibration. Because of the complicated and coupled random vibration during rocket or shuttle launch, a new staged white-noise mathematical model is deduced according to the practical launch environment. This model is applied with PEM to the specific structure of a Time of Flight Counter (ToFC). The power spectral density responses and the relevant dynamic characteristic parameters of the ToFC are obtained in terms of the flight acceptance test level. Considering the stiffness of the fixture structure, random vibration experiments are conducted in three directions for comparison with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without damage, and the key dynamical parameters of the ToFC are obtained. The revised PEM agrees with the random vibration experiments in dynamical parameters and responses, as the comparative results prove; the maximum error is within 9%. The reasons for the errors are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures during rocket or shuttle launch.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of radon exposure to cancer.
Self-balanced real-time photonic scheme for ultrafast random number generation
NASA Astrophysics Data System (ADS)
Li, Pu; Guo, Ya; Guo, Yanqiang; Fan, Yuanlong; Guo, Xiaomin; Liu, Xianglian; Shore, K. Alan; Dubrova, Elena; Xu, Bingjie; Wang, Yuncai; Wang, Anbang
2018-06-01
We propose a real-time self-balanced photonic method for extracting ultrafast random numbers from broadband randomness sources. In place of electronic analog-to-digital converters (ADCs), the balanced photo-detection technology is used to directly quantize optically sampled chaotic pulses into a continuous random number stream. Benefitting from ultrafast photo-detection, our method can efficiently eliminate the generation rate bottleneck from electronic ADCs which are required in nearly all the available fast physical random number generators. A proof-of-principle experiment demonstrates that using our approach 10 Gb/s real-time and statistically unbiased random numbers are successfully extracted from a bandwidth-enhanced chaotic source. The generation rate achieved experimentally here is being limited by the bandwidth of the chaotic source. The method described has the potential to attain a real-time rate of 100 Gb/s.
NASA Astrophysics Data System (ADS)
Song, S. G.
2016-12-01
Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction approaches, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data on the SCEC BBP, Ver 16.5. The validation was performed in two stages. At the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. At the second stage, they were validated against the latest version of the empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed.
An adjoint-based framework for maximizing mixing in binary fluids
NASA Astrophysics Data System (ADS)
Eggl, Maximilian; Schmid, Peter
2017-11-01
Mixing in the inertial but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, not least, production costs. In this project, we address mixing efficiency in the above-mentioned regime (Reynolds number Re = 1000, Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented with scalar transport and adjoint equations. Mixing is accomplished by moving stirrers, which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing the shape and rotational speed of the stirrers will be demonstrated.
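The quantity being minimized, the variance of the passive scalar, is simple to state. A minimal sketch (assuming a uniform grid and a 50/50 binary mixture, both illustrative) computes it as a mixedness measure; the adjoint machinery described in the abstract is what drives it toward zero:

```python
import numpy as np

def scalar_variance(theta):
    """Variance of a passive scalar field on a uniform grid; perfect mixing
    drives this to zero, so it serves as the cost in the adjoint loop."""
    return np.mean((theta - theta.mean()) ** 2)

# Fully unmixed initial condition: left half dyed (1), right half clear (0).
n = 256
theta0 = np.zeros((n, n))
theta0[:, : n // 2] = 1.0
print(scalar_variance(theta0))  # 0.25, the worst case for a 50/50 mixture
```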
Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.
Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei
2017-04-01
There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.
"Pseudo" conditions in dermatology: need to know both real and unreal.
Kudur, Mohan H; Hulmani, Manjunath
2012-01-01
There are any number of names and terminologies in dermatology. The real and unreal names lead to a lot of confusion among residents and practitioners of dermatology. The word 'pseudo' means 'unreal', 'false' or 'fake', and it has deep roots in dermatology, making it a herculean task to differentiate and understand the real conditions/diseases/signs in dermatology. We have made an attempt to list and describe the pseudo and associated real conditions in dermatology.
Flexible method for monitoring fuel cell voltage
Mowery, Kenneth D.; Ripley, Eugene V.
2002-01-01
A method for equalizing the measured voltage of each cluster in a fuel cell stack, in which at least one cluster has a different number of cells than the identical number in the remaining clusters, by creating a pseudo voltage for the differently sized cluster. The average cell voltage of all cells in the fuel cell stack is calculated and multiplied by a constant equal to the difference between the number of cells in the identical clusters and the number of cells in the differently sized cluster. The resulting product is added to the actual voltage measured across the differently sized cluster to create a pseudo voltage which is equivalent, in cell number, to the other identical clusters.
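The arithmetic in the abstract maps directly to code. A minimal sketch (cluster layout and voltage values are invented for illustration):

```python
def pseudo_voltage(cluster_voltages, cells_per_cluster, short_idx):
    """Per the patent abstract: stack-wide average cell voltage, times the
    number of cells the short cluster is missing, added to that cluster's
    measured voltage, yields a reading comparable to the full clusters."""
    avg_cell_v = sum(cluster_voltages) / sum(cells_per_cluster)
    full_count = max(cells_per_cluster)   # cells in the identical clusters
    missing = full_count - cells_per_cluster[short_idx]
    return cluster_voltages[short_idx] + avg_cell_v * missing

# Three clusters of 10 cells and one of 8, each cell near 0.7 V.
v = [7.0, 7.1, 6.9, 5.6]
print(pseudo_voltage(v, [10, 10, 10, 8], short_idx=3))  # ~7.0 V equivalent
```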
Selective photon counter for digital x-ray mammography tomosynthesis
NASA Astrophysics Data System (ADS)
Goldan, Amir H.; Karim, Karim S.; Rowlands, J. A.
2006-03-01
Photon counting is an emerging detection technique that is promising for mammography tomosynthesis imagers. In photon counting systems, the value of each image pixel is equal to the number of photons that interact with the detector. In this research, we introduce the design and implementation of a novel, low-noise selective photon counting pixel for digital mammography tomosynthesis in crystalline silicon CMOS (complementary metal oxide semiconductor) 0.18 micron technology. The design comprises a low-noise charge amplifier (CA), two low-offset voltage comparators, a decision-making unit (DMU), a mode selector, and a pseudo-random counter. Theoretical calculations and simulation results for the linearity, gain, and noise of the photon counting pixel are presented.
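The abstract does not spell out the pseudo-random counter's construction; linear-feedback shift registers (LFSRs) are a common choice in pixel counters because they need far fewer gates than a binary ripple counter. A hedged sketch with an illustrative 8-bit tap set:

```python
def lfsr_counter(width=8, taps=(8, 6, 5, 4), state=1, steps=10):
    """Fibonacci-style LFSR used as a compact pseudo-random counter: it steps
    through 2**width - 1 distinct non-zero states in a fixed pseudo-random
    order. The tap positions here are illustrative, not from the paper."""
    seq = []
    mask = (1 << width) - 1
    for _ in range(steps):
        fb = 0
        for t in taps:                    # feedback = XOR of the tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & mask
        seq.append(state)
    return seq

print(lfsr_counter())
```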
B-737 Linear Autoland Simulink Model
NASA Technical Reports Server (NTRS)
Belcastro, Celeste (Technical Monitor); Hogge, Edward F.
2004-01-01
The Linear Autoland Simulink model was created to be a modular test environment for testing control system components in commercial aircraft. The input variables, physical laws, and reference frames used are summarized. The state space theory underlying the model is surveyed and the location of the control actuators described. The equations used to realize the Dryden gust model to simulate winds and gusts are derived. A description of the pseudo-random number generation method used in the wind gust model is included. The longitudinal autopilot, lateral autopilot, automatic throttle autopilot, engine model and automatic trim devices are considered as subsystems. The experience of converting the Airlabs FORTRAN aircraft control system simulation to a graphical simulation tool (Matlab/Simulink) is described.
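The abstract does not state which pseudo-random method the gust model uses; a common pattern for driving Dryden shaping filters is a seeded uniform generator turned into Gaussian white noise, e.g. via the Box-Muller transform. A minimal sketch under that assumption:

```python
import math
import random

def gaussian_white_noise(n, seed=42):
    """Seeded uniform PRNG -> Gaussian white noise via Box-Muller; such a
    stream is the usual input to Dryden gust shaping filters. Whether the
    report used this exact transform is an assumption."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        u1 = 1.0 - rng.random()           # keep u1 > 0 so log() is defined
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:n]

print(gaussian_white_noise(4))
```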
Demonstration of Johnson noise thermometry with all-superconducting quantum voltage noise source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp; Urano, Chiharu; Maezawa, Masaaki
We present a Johnson noise thermometry (JNT) system based on an integrated quantum voltage noise source (IQVNS) that has been fully implemented using superconducting circuit technology. To enable precise measurement of Boltzmann's constant, an IQVNS chip was designed to produce intrinsically calculable pseudo-white noise to calibrate the JNT system. On-chip real-time generation of pseudo-random codes via simple circuits produced pseudo-voltage noise with a harmonic tone interval of less than 1 Hz, one order of magnitude finer than that of conventional quantum voltage noise sources. We estimated a value for Boltzmann's constant experimentally by performing JNT measurements at the temperature of the triple point of water using the IQVNS chip.
Scleroderma and pseudo-scleroderma: uncommon presentations.
Haustein, Uwe-Frithjof
2005-01-01
Scleroderma is characterized by major clinical symptoms, but a number of unrelated diseases may mimic these features more or less completely. Even scleroderma itself sometimes presents in an unusual manner. This article deals with uncommon presentations of true scleroderma and its variants and of pseudo-scleroderma diseases.
Aerodynamics of the pseudo-glottis.
Kotby, M N; Hegazi, M A; Kamal, I; Gamal El Dien, N; Nassar, J
2009-01-01
The aim of this work is to study the hitherto unclear aerodynamic parameters of the pseudo-glottis following total laryngectomy. These parameters include airflow rate, sub-pseudo-glottic pressure (SubPsG), efficiency and resistance, as well as sound pressure level (SPL). Eighteen male patients who had undergone total laryngectomy, with an age range of 54 to 72 years, were investigated in this study. All tested patients were fluent esophageal 'voice' speakers utilizing a tracheo-esophageal prosthesis. The airflow rate, SubPsG and SPL were measured. The results showed that the mean value of the airflow rate was 53 ml/s, the SubPsG pressure was 13 cm H(2)O, while the SPL was 66 dB. The normative data obtained from the true glottis in healthy age-matched subjects are 89 ml/s, 7.9 cm H(2)O and 70 dB, respectively. Other aerodynamic indices were calculated and compared to the data obtained from the true glottis. Such a comparison of the pseudo-glottic aerodynamic data to the data of the true glottis gives an insight into the mechanism of action of the pseudo-glottis. The data obtained suggest possible clinical applications in pseudo-voice training. Copyright 2009 S. Karger AG, Basel.
A Pseudo-Vertical Equilibrium Model for Slow Gravity Drainage Dynamics
NASA Astrophysics Data System (ADS)
Becker, Beatrix; Guo, Bo; Bandilla, Karl; Celia, Michael A.; Flemisch, Bernd; Helmig, Rainer
2017-12-01
Vertical equilibrium (VE) models are computationally efficient and have been widely used for modeling fluid migration in the subsurface. However, they rely on the assumption of instant gravity segregation of the two fluid phases, which may not be valid, especially for systems that have very slow drainage at low wetting phase saturations. In these cases, the time scale for the wetting phase to reach vertical equilibrium can be several orders of magnitude larger than the time scale of interest, rendering conventional VE models unsuitable. Here we present a pseudo-VE model that relaxes the assumption of instant segregation of the two fluid phases by applying a pseudo-residual saturation inside the plume of the injected fluid that declines over time due to slow vertical drainage. This pseudo-VE model is cast in a multiscale framework for vertically integrated models, with the vertical drainage solved as a fine-scale problem. Two types of fine-scale models are developed for the vertical drainage, which lead to two pseudo-VE models. Comparisons with a conventional VE model and a full multidimensional model show that the pseudo-VE models have much wider applicability than the conventional VE model while maintaining its computational benefit.
Autonomous Byte Stream Randomizer
NASA Technical Reports Server (NTRS)
Paloulian, George K.; Woo, Simon S.; Chow, Edward T.
2013-01-01
Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, data transmission has to be efficient, without redundant or extraneous metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability, possessing the ability to generate the same cryptographically secure sequence on different machines and time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
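The shuffle itself is standard; a minimal sketch of the seeded in-place Fisher-Yates step and its inverse (random.Random is a stand-in here — the software described uses a cryptographically secure generator, and the buffer contents are invented):

```python
import random

def seeded_shuffle(data: bytearray, seed: int) -> None:
    """In-place Fisher-Yates shuffle driven by a seeded PRNG."""
    rng = random.Random(seed)
    for i in range(len(data) - 1, 0, -1):
        j = rng.randrange(i + 1)          # unbiased index in [0, i]
        data[i], data[j] = data[j], data[i]

def seeded_unshuffle(data: bytearray, seed: int) -> None:
    """Undo the shuffle by replaying the same swap sequence in reverse."""
    rng = random.Random(seed)
    swaps = [(i, rng.randrange(i + 1)) for i in range(len(data) - 1, 0, -1)]
    for i, j in reversed(swaps):
        data[i], data[j] = data[j], data[i]

buf = bytearray(b"example payload")
seeded_shuffle(buf, seed=2024)
print(bytes(buf))                          # deconstructed byte stream
seeded_unshuffle(buf, seed=2024)
print(bytes(buf))                          # original bytes restored
```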
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as the numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RB's) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRP's), which allow unbounded rollback to be avoided while maintaining process autonomy, are proposed. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RB's, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRP's are used were estimated.
NASA Astrophysics Data System (ADS)
Ragupathy, S.; Raghu, K.; Prabu, P.
2015-03-01
Titanium dioxide (TiO2) nanoparticles and TiO2 loaded on cashew nut shell activated carbon (TiO2/CNSAC) were synthesized using the sol-gel method, and their application to the removal of BG and MB dyes under sunlight radiation was investigated. The synthesized photocatalysts were characterized by X-ray diffraction analysis (XRD), Fourier transform infrared spectroscopy (FT-IR), UV-Vis diffuse reflectance spectroscopy (DRS) and scanning electron microscopy (SEM) with energy dispersive X-ray analysis (EDX). Experimental parameters such as the amount of catalyst and the contact time for efficient degradation of the BG and MB dyes were examined in this study. Activity measurements performed under solar irradiation have shown good results for the photodegradation of BG and MB in aqueous solution. It was concluded that the higher photocatalytic activity of TiO2/CNSAC was due to parameters such as the band gap, number of hydroxyl groups, surface area and porosity of the catalyst. The kinetic data were also described by the pseudo-first-order and pseudo-second-order kinetic models.
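The two kinetic models named at the end of the abstract are the standard adsorption-kinetics forms; in their usual linearized notation (where q_t and q_e are the amounts adsorbed at time t and at equilibrium, and k_1, k_2 are rate constants):

```latex
% Pseudo-first-order (Lagergren) model:
\ln(q_e - q_t) = \ln q_e - k_1 t
% Pseudo-second-order model:
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
```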
Absolute Paleointensity Estimates using Combined Shaw and Pseudo-Thellier Experimental Protocols
NASA Astrophysics Data System (ADS)
Foucher, M. S.; Smirnov, A. V.
2016-12-01
Data on the long-term evolution of Earth's magnetic field intensity have great potential to advance our understanding of many aspects of the Earth's evolution. However, paleointensity determination is one of the most challenging aspects of paleomagnetic research, so the quantity and quality of existing paleointensity data remain limited, especially for older epochs. While the Thellier double-heating method remains the most commonly used paleointensity technique, its applicability is limited for many rocks that undergo magneto-mineralogical alteration during the successive heating steps required by the method. In order to reduce the probability of alteration, several alternative methods that involve a limited number of heating steps, or none at all, have been proposed. However, continued efforts are needed to better understand the physical foundations and relative efficiency of reduced/non-heating methods in recovering the true paleofield strength and to better constrain their calibration factors. We will present the results of our investigation of synthetic and natural magnetite-bearing samples using a combination of the LTD-DHT Shaw and pseudo-Thellier experimental protocols for absolute paleointensity estimation.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is the one that exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those resulting from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; it ran from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
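For contrast with the patented method, the pseudo-inverse baseline it is compared against is easy to sketch (the effectiveness matrix and limits below are invented): solve B u = m_des in the minimum-norm sense and clip to the effector limits, which is exactly where attainable moments get lost.

```python
import numpy as np

def pinv_allocate(B, m_des, u_min, u_max):
    """Minimum-norm (pseudo-inverse) control allocation: distribute the
    desired moments m_des over redundant effectors, then saturate at the
    physical limits. Clipping is what makes this baseline sub-optimal."""
    u = np.linalg.pinv(B) @ m_des
    return np.clip(u, u_min, u_max)

B = np.array([[1.0, 0.5, 0.0],    # illustrative 3-moment x 3-effector matrix
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0]])
print(pinv_allocate(B, np.array([0.4, -0.2, 0.1]), -0.5, 0.5))
```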
Thinking Process of Pseudo Construction in Mathematics Concepts
ERIC Educational Resources Information Center
Subanji; Nusantara, Toto
2016-01-01
This article aims at studying pseudo construction of student thinking in mathematical concepts, integer number operation, algebraic forms, area concepts, and triangle concepts. 391 junior high school students from four districts of East Java Province Indonesia were taken as the subjects. Data were collected by means of distributing the main…
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of un-correctable errors were calculated for each data set before testing.
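The error-sequence generation described (Poisson-distributed gaps between error events, Gaussian burst lengths) can be sketched directly; all parameter values below are illustrative, not from the report:

```python
import random

def inject_errors(data: bytes, rate=0.001, burst_mean=4.0, burst_sd=1.5, seed=7) -> bytes:
    """Flip bits in bursts: exponential (Poisson-process) gaps between error
    events, burst lengths drawn from a Gaussian and floored at one bit."""
    rng = random.Random(seed)
    bits = bytearray(data)
    nbits = len(bits) * 8
    pos = int(rng.expovariate(rate))           # gap to the first error event
    while pos < nbits:
        burst = max(1, round(rng.gauss(burst_mean, burst_sd)))
        for b in range(pos, min(pos + burst, nbits)):
            bits[b // 8] ^= 1 << (b % 8)       # flip one bit
        pos += burst + int(rng.expovariate(rate))
    return bytes(bits)

corrupted = inject_errors(b"\x00" * 256)
print(sum(bin(x).count("1") for x in corrupted), "bits flipped")
```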
Cooperative multi-user detection and ranging based on pseudo-random codes
NASA Astrophysics Data System (ADS)
Morhart, C.; Biebl, E. M.
2009-05-01
We present an improved approach for a round-trip time-of-flight distance measurement system. The system is intended for use in a cooperative localisation system for automotive applications. Therefore, it is designed to address a large number of communication partners per measurement cycle. By using coded signals in a time division multiple access scheme, we can detect a large number of pedestrian sensors with just one car sensor. We achieve this by using very short transmit bursts in combination with a real-time correlation algorithm. Furthermore, the correlation approach offers real-time data concerning the time of arrival that can serve as a trigger impulse for other communication systems. The distance accuracy of the correlation result was further increased by adding a Fourier interpolation filter. The system performance was checked with a prototype at 2.4 GHz. We reached a distance measurement accuracy of 12 cm at ranges up to 450 m.
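A minimal sketch of the correlation step (sample rate, code length and delay are invented, and the Fourier interpolation refinement mentioned above is omitted): cross-correlate the received burst with the known pseudo-random code and convert the peak lag to distance.

```python
import numpy as np

C = 3.0e8                                     # speed of light, m/s

def range_from_correlation(rx, code, fs):
    """Round-trip-time-of-flight ranging: the lag of the correlation peak
    between received signal and known code gives the round-trip delay in
    samples; halve it for the one-way distance. fs is the sample rate."""
    corr = np.correlate(rx, code, mode="full")
    lag = int(corr.argmax()) - (len(code) - 1)
    return 0.5 * C * lag / fs

fs = 100e6                                    # 100 MHz sampling (illustrative)
code = np.sign(np.random.default_rng(1).standard_normal(127))
rx = np.concatenate([np.zeros(300), code, np.zeros(64)])  # 3 us round trip
print(range_from_correlation(rx, code, fs))   # ~450 m
```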
Flying insect detection and classification with inexpensive sensors.
Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn
2014-10-15
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows us to efficiently learn classification models that are very robust to over-fitting; and that a general classification framework allows us to easily incorporate an arbitrary number of features. We demonstrate the findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
NASA Astrophysics Data System (ADS)
Bisadi, Zahra; Acerbi, Fabio; Fontana, Giorgio; Zorzi, Nicola; Piemonte, Claudio; Pucker, Georg; Pavesi, Lorenzo
2018-02-01
A small-sized photonic quantum random number generator, easy to implement in small electronic devices for secure data encryption and other applications, is in high demand nowadays. Here, we propose a compact configuration in which a silicon-nanocrystal large-area light-emitting device (LED) is coupled to a silicon photomultiplier to generate random numbers. The random number generation methodology is based on the photon arrival time and is robust against the non-idealities of the detector and the source of quantum entropy. The raw data show a high quality of randomness and pass all the statistical tests in the National Institute of Standards and Technology (NIST) suite without a post-processing algorithm. The highest bit rate is 0.5 Mbps with an efficiency of 4 bits per detected photon.
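The abstract does not give the exact bit-extraction rule; a well-known arrival-time scheme that is likewise robust to detector non-idealities compares successive inter-arrival intervals, so the sketch below should be read as an assumed stand-in rather than the paper's method:

```python
import random

def bits_from_arrivals(timestamps):
    """Compare disjoint pairs of inter-arrival gaps: shorter-first -> 0,
    longer-first -> 1, ties discarded. Slow drifts in detection rate cancel
    because each bit only compares two neighboring intervals."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bits = []
    for t1, t2 in zip(gaps[::2], gaps[1::2]):
        if t1 != t2:
            bits.append(0 if t1 < t2 else 1)
    return bits

rng = random.Random(3)
t, times = 0.0, []
for _ in range(10_000):                        # Poissonian photon stream
    t += rng.expovariate(1.0)
    times.append(t)
bits = bits_from_arrivals(times)
print(len(bits), sum(bits) / len(bits))        # bias near 0.5
```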
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
An algorithm for the solution of dynamic linear programs
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1989-01-01
The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense-matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor-update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation scheme.
Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators
NASA Astrophysics Data System (ADS)
Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.
2018-05-01
Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.
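Read-out of such a device reduces to thresholding a time-resolved resistance trace; in the toy sketch below the telegraph-noise trace and resistance values are invented stand-ins for the measured device:

```python
import random

def bits_from_resistance(trace, r_threshold):
    """One bit per read interval: high-resistance (antiparallel) state -> 1,
    low-resistance (parallel) state -> 0."""
    return [1 if r > r_threshold else 0 for r in trace]

rng = random.Random(5)
state, trace = 0, []
for _ in range(1000):                  # random telegraph switching, p = 0.5
    if rng.random() < 0.5:
        state ^= 1
    trace.append(2200 if state else 1800)   # illustrative resistances, ohms
bits = bits_from_resistance(trace, 2000)
print(sum(bits) / len(bits))           # fraction of 1s, tunable by field
```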
NASA Astrophysics Data System (ADS)
Pradhan, Abanti; Kumari, Sony; Dash, Saktisradha; Prasad Biswal, Durga; Kishore Dash, Aditya; Panigrahi, Kishore C. S.
2017-08-01
As an important component of ecosystems, mosses have a strong influence on the cycling of water, energy and nutrients. Given their sensitivity to environmental change, mosses can be used as bioindicators of water quality, air pollution, metal accumulation and climate change. In the present study, the growth, differentiation and heavy metal (Hg) absorption of two moss species, Physcomitrella patens and Funaria hygrometrica, were studied in solid cultures under laboratory conditions. It was observed that the number of gametophores developed from a single inoculated gametophore after 45 days of growth of F. hygrometrica was 11±2.0 in the control, whereas it decreased at higher concentrations, to 4±1.5 under 1 ppm mercury treatment. P. patens showed a similar trend. The heavy metal uptake of both moss species was studied. It was observed that the Hg content in pseudo-leaves of P. patens ranged from 0.98 ppm to 2.76 ppm under different Hg treatments (0.1-1 ppm), whereas in F. hygrometrica it ranged from 0.78 ppm to 2.43 ppm under the same treatment conditions. Comparing the Hg content in the pseudo-leaves and rhizoids of P. patens and F. hygrometrica, it was observed that the Hg content was elevated by about 60-64% in rhizoids relative to pseudo-leaves at the 0.1 ppm treatment level, whereas it was elevated by almost up to 50% at the other treatment levels.
Beam combining and SBS suppression in white noise and pseudo-random modulated amplifiers
NASA Astrophysics Data System (ADS)
Anderson, Brian; Flores, Angel; Holten, Roger; Ehrenreich, Thomas; Dajani, Iyad
2015-03-01
White noise phase modulation (WNS) and pseudo-random binary sequence phase modulation (PRBS) are effective techniques for mitigation of nonlinear effects such as stimulated Brillouin scattering (SBS); thereby paving the way for higher power narrow linewidth fiber amplifiers. However, detailed studies comparing both coherent beam combination and the SBS suppression of these phase modulation schemes have not been reported. In this study an active fiber cutback experiment is performed comparing the enhancement factor of a PRBS and WNS broadened seed as a function of linewidth and fiber length. Furthermore, two WNS and PRBS modulated fiber lasers are coherently combined to measure and compare the fringe visibility and coherence length as a function of optical path length difference. Notably, the discrete frequency comb of PRBS modulation provides a beam combining re-coherence effect where the lasers periodically come back into phase. Significantly, this may reduce path length matching complexity in coherently combined fiber laser systems.
Pseudo-random generator based on Chinese Remainder Theorem
NASA Astrophysics Data System (ADS)
Bajard, Jean Claude; Hördegen, Heinrich
2009-08-01
Pseudo-Random Generators (PRG) are fundamental in cryptography. They are used at different levels in cipher protocols. They need to satisfy certain properties to qualify as robust. The NIST proposes some criteria and a test suite which gives information on the behavior of the PRG. In this work, we present a PRG constructed from the conversion between different residue systems for representing the elements of GF(2)[X]. In this approach, we use some pairs of co-prime polynomials of degree k and a state vector of 2k bits. The algebraic properties are broken by using different independent pairs during the process. Since this method is reversible, we can also use it as a symmetric crypto-system. We evaluate the cost of such a system, taking into account that some operations are commonly implemented on crypto-processors. We give the results of the different NIST tests and we explain this choice compared to others found in the literature. We describe the behavior of this PRG and explain how the different rounds are chained to ensure secure randomness.
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) at the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, initially introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, particularly focusing on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
Achak, M; Hafidi, A; Ouazzani, N; Sayadi, S; Mandi, L
2009-07-15
The aim of this work is to determine the potential of banana peel as a biosorbent for removing phenolic compounds from olive mill wastewaters. The effects of adsorbent dosage, pH and contact time were investigated. The results showed that increasing the banana peel dosage from 10 to 30 g/L significantly increased the phenolic compound adsorption rates from 60 to 88%. Increasing the pH above neutrality resulted in an increase in the phenolic compound adsorption capacity. The adsorption process was fast, reaching equilibrium within a 3-h contact time. The Freundlich and Langmuir adsorption models were used for the mathematical description of the adsorption equilibrium, and it was found that the experimental data fitted both models very well. Batch adsorption models based on the assumptions of pseudo-first-order, pseudo-second-order and intraparticle diffusion mechanisms showed that the kinetic data follow the pseudo-second-order model more closely than the pseudo-first-order and intraparticle diffusion models. Desorption studies showed that a low pH value was efficient for the desorption of phenolic compounds. These results clearly indicate the efficiency of banana peel as a low-cost solution for olive mill wastewater treatment and give some preliminary elements for the comprehension of the interactions between banana peel as a bioadsorbent and the highly polluting compounds from the olive oil industry.
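The two equilibrium models used above are the standard isotherms; with q_e the equilibrium uptake, C_e the equilibrium concentration, and the remaining symbols fitting constants:

```latex
% Langmuir isotherm (monolayer, homogeneous sites):
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
% Freundlich isotherm (empirical, heterogeneous surfaces):
q_e = K_F \, C_e^{1/n}
```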
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Tayeb, A., E-mail: ahmed.khalil@ejust.edu.eg; El-Shazly, A. H.; Elkady, M. F.
In this article, a dual pin-to-plate high-voltage corona discharge system is introduced to study experimentally the effects of gap distance, contact time, pin and plate materials, ground plate thickness and conductivity on the Acid Blue 25 dye color removal efficiency from polluted water. A study of the optimum air gap distance between the dual pins and the surface of the Acid Blue 25 dye solution is carried out using a 3D-EM simulator to find the maximum electric field intensity at the tip of both pins. The outcomes show that the best gap for corona discharge is approximately 5 mm for a 15-kV source. This separation is kept constant during the study of the other factors. In addition, an investigation of the essential reactive species responsible for oxidation of the dye organic compounds (O3 in air discharge, O3 in water, and H2O2) during the experimental time is conducted. Three different materials, stainless steel, copper and aluminum, are used for the pins and plate. The maximum color removal efficiencies of Acid Blue 25 dye are 99.03, 82.04, and 90.78% after a treatment time of 15 min for stainless steel, copper, and aluminum, respectively. Measurement results for the impact of the thickness of an aluminum ground plate on color removal show efficiencies of 86.3, 90.78, and 98.06% after a treatment time of 15 min for thicknesses of 2, 0.5, and 0.1 mm, respectively. Increasing the solution conductivity leads to a reduction in decolorization efficiency. A kinetic model is used to describe the performance of the corona discharge system. Pseudo-zero-order, pseudo-first-order, and pseudo-second-order reaction kinetics are used to investigate the decolorization of Acid Blue 25 dye. The degradation of Acid Blue 25 dye follows pseudo-first-order kinetics in the dye concentration.
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, their projection skills were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
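As a hedged illustration of the weighted-averaging idea (the actual WEA_RAC weights also fold in correlation, and all numbers below are invented), weights inversely proportional to each member's RMSE already capture the key difference from equal weighting:

```python
import numpy as np

def inverse_rmse_weights(rmse):
    """Skill-based member weights, normalized to sum to one."""
    w = 1.0 / np.asarray(rmse)
    return w / w.sum()

members = np.array([[14.2, 15.1, 13.8],   # 3 members x 3 time steps (toy)
                    [15.0, 15.6, 14.4],
                    [13.5, 14.8, 13.1]])
rmse = np.array([0.8, 0.5, 1.2])          # each member's error vs. obs
w = inverse_rmse_weights(rmse)
print(w, members.T @ w)                   # weights, weighted projection
```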
Li, Yihe; Li, Bofeng; Gao, Yang
2015-01-01
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
A Pumping Algorithm for Ergodic Stochastic Mean Payoff Games with Perfect Information
NASA Astrophysics Data System (ADS)
Boros, Endre; Elbassioni, Khaled; Gurvich, Vladimir; Makino, Kazuhisa
In this paper, we consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V = V_B ∪ V_W ∪ V_R, E), with local rewards r: E → R, and three types of vertices: black V_B, white V_W, and random V_R. The game is played by two players, White and Black: When the play is at a white (black) vertex v, White (Black) selects an outgoing arc (v,u). When the play is at a random vertex v, a vertex u is picked with the given probability p(v,u). In all cases, Black pays White the value r(v,u). The play continues forever, and White aims to maximize (Black aims to minimize) the limiting mean (that is, average) payoff. It was recently shown in [7] that BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games (SSG's), stochastic parity games, and Markov decision processes. In this paper, we give a new algorithm for solving BWR-games in the ergodic case, that is, when the optimal values do not depend on the initial position. Our algorithm solves a BWR-game by reducing it, using a potential transformation, to a canonical form in which the optimal strategies of both players and the value for every initial position are obvious, since a locally optimal move in it is optimal in the whole game. We show that this algorithm is pseudo-polynomial when the number of random nodes is constant. We also provide an almost matching lower bound on its running time, and show that this bound holds for a wider class of algorithms. Let us add that the general (non-ergodic) case is at least as hard as SSG's, for which no pseudo-polynomial algorithm is known.
Sex difference in human fingertip recognition of micron-level randomness as unpleasant.
Nakatani, M; Kawasoe, T; Denda, M
2011-08-01
We investigated sex difference in evaluation, using the human fingertip, of the tactile impressions of three different micron-scale patterns laser-engraved on plastic plates. There were two ordered (periodical) patterns consisting of ripples on a scale of a few micrometres and one pseudo-random (non-periodical) pattern; these patterns were considered to mimic the surface geometry of healthy and damaged human hair, respectively. In the first experiment, 10 women and 10 men ran a fingertip over each surface and determined which of the three plates felt most unpleasant. All 10 female participants reported the random pattern, but not the ordered patterns, as unpleasant, whereas the majority of the male participants did not. In the second experiment, 9 of 10 female participants continued to report the pseudo-random pattern as unpleasant even after their fingertip had been coated with a collodion membrane. In the third experiment, participants were asked to evaluate the magnitude of the tactile impression for each pattern. The results again indicated that female participants tend to report a greater magnitude of unpleasantness than male participants. Our findings indicate that the female participants could readily detect microgeometric surface characteristics and that they evaluated the random pattern as more unpleasant. Possible physical and perceptual mechanisms involved are discussed. © 2011 The Authors. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Research on allocation efficiency of the daisy chain allocation algorithm
NASA Astrophysics Data System (ADS)
Shi, Jingping; Zhang, Weiguo
2013-03-01
With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to solve directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
The elimination of a class of pseudo echoes by an improved T/R switch technique
NASA Technical Reports Server (NTRS)
Green, J. L.; Ecklund, W. L.
1986-01-01
An annoying class of pseudo echoes that evidently occur occasionally in a number of ST (stratosphere troposphere) radars is described. The origin of these signals is located in the output circuitry of the radar transmitter. Two methods for the elimination of these radar echoes are suggested and briefly described.
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
DNA based random key generation and management for OTP encryption.
Zhang, Yunpeng; Liu, Xin; Sun, Manhui
2017-09-01
One-time pad (OTP) is a key-generation principle applied in stream ciphering that offers total privacy. The OTP encryption scheme has proved to be unbreakable in theory, but difficult to realize in practical applications. Because OTP encryption specifically requires absolute randomness of the key, its development has suffered from dense constraints. DNA cryptography is a new and promising technology in the field of information security. The storage capability of DNA chromosomes can be used to build one-time pad structures with pseudo-random number generation and indexing in order to encrypt plaintext messages. In this paper, we present a feasible solution to the OTP symmetric key generation and transmission problem with DNA at the molecular level. Through recombinant DNA technology, by using restriction enzymes known only to the sender and receiver to combine the secret key represented by a DNA sequence with the T vector, we generate the DNA bio-hiding secret key and then place the recombinant plasmid in implanted bacteria for secure key transmission. The designed bio experiments and simulation results show that the security of key transmission is further improved and the environmental requirements of key transmission are reduced. Analysis has demonstrated that the proposed DNA-based random key generation and management solutions are marked by high security and usability. Published by Elsevier B.V.
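As background, the OTP principle itself (not the paper's DNA-based scheme) can be shown in a few lines: XOR the plaintext with a truly random key of the same length; decryption is the identical operation.

```python
# Minimal illustration of the OTP principle: XOR with a random key of
# equal length. Decryption applies the same operation with the same key.
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "OTP requires a key as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # must be random, secret, used once
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message
```

Security rests entirely on the key being truly random, kept secret, as long as the message, and never reused, which is exactly why key generation and transmission are the hard part the paper addresses.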
Generalized Smooth Transition Map Between Tent and Logistic Maps
NASA Astrophysics Data System (ADS)
Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.
There is a continuous demand for novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map which is a general form for one-dimensional discrete-time maps employing the power function, with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not sufficient. The proposed generalization also covers maps whose iterative relations are not based on polynomials, i.e. those with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map, which produces intermediate responses as the parameters vary from the values corresponding to the tent map to those corresponding to the logistic map. We study the properties of the proposed map including the graph of the map equation, the general bifurcation diagram and its key points, output sequences, and the maximum Lyapunov exponent. We present further explorations such as the effects of scaling, the system response with respect to the new parameters, and operating ranges other than the transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys, which enhances its sensitivity and other cryptographic properties.
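The abstract does not reproduce the map equation, so the sketch below only iterates the two classical special cases, the tent map x_{n+1} = μ min(x_n, 1 - x_n) and the logistic map x_{n+1} = r x_n (1 - x_n), together with a hypothetical convex blend standing in for a transition between them; the blend is an illustration, not the paper's generalized map.

```python
# Iterate the tent and logistic maps, plus a hypothetical blend `alpha`
# interpolating between them (alpha=0 -> tent, alpha=1 -> logistic).
import numpy as np

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def tent(x, mu=2.0):
    return mu * np.minimum(x, 1.0 - x)

def blended(x, alpha):
    """Convex blend of the two maps; an illustrative stand-in only."""
    return (1.0 - alpha) * tent(x) + alpha * logistic(x)

def orbit(f, x0=0.3141, n=10, **kw):
    xs = [x0]
    for _ in range(n):
        xs.append(float(f(xs[-1], **kw)))
    return xs

print(orbit(tent))
print(orbit(logistic))
print(orbit(blended, alpha=0.5))
```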
Tuning the Quantum Efficiency of Random Lasers - Intrinsic Stokes-Shift and Gain
Lubatsch, Andreas; Frank, Regine
2015-01-01
We report the theoretical analysis for tuning the quantum efficiency of solid state random lasers. Vollhardt-Wölfle theory of photonic transport in disordered non-conserving and open random media, is coupled to lasing dynamics and solved positionally dependent. The interplay of non-linearity and homogeneous non-radiative frequency conversion by means of a Stokes-shift leads to a reduction of the quantum efficiency of the random laser. At the threshold a strong decrease of the spot-size in the stationary state is found due to the increase of non-radiative losses. The coherently emitted photon number per unit of modal surface is also strongly reduced. This result allows for the conclusion that Stokes-shifts are not sufficient to explain confined and extended mode regimes. PMID:26593237
Citric acid modified kenaf core fibres for removal of methylene blue from aqueous solution.
Sajab, Mohd Shaiful; Chia, Chin Hua; Zakaria, Sarani; Jani, Saad Mohd; Ayob, Mohd Khan; Chee, Kah Leong; Khiew, Poi Sim; Chiu, Wee Siong
2011-08-01
Chemically modified kenaf core fibres were prepared via esterification in the presence of citric acid (CA). Adsorption kinetics and isotherm studies were carried out under different conditions to examine the adsorption efficiency of CA-treated kenaf core fibres towards methylene blue (MB). The adsorption capacity of the kenaf core fibres increased significantly after the citric acid treatment. The values of the correlation coefficients indicated that the Langmuir isotherm fitted the experimental data better than the Freundlich isotherm. The maximum adsorption capacity of the CA-treated kenaf core fibres was found to be 131.6 mg/g at 60°C. Three kinetic models (pseudo-first-order, pseudo-second-order and intraparticle diffusion) were employed to describe the adsorption mechanism. The kinetic data were found to fit the pseudo-second-order model better than the pseudo-first-order model. The adsorption of MB onto the CA-treated kenaf core fibres was spontaneous and endothermic. Copyright © 2011 Elsevier Ltd. All rights reserved.
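For reference, the pseudo-second-order model is commonly written q_t = k q_e² t / (1 + k q_e t), and fitting it is a one-line least-squares problem; the data below are synthetic placeholders, not the paper's measurements.

```python
# Fit the standard pseudo-second-order kinetic model to adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    # q_t = k*qe^2*t / (1 + k*qe*t); qe = equilibrium capacity, k = rate const.
    return (k * qe**2 * t) / (1.0 + k * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)      # min (synthetic)
qt = np.array([40, 65, 90, 110, 118, 124, 127], dtype=float)  # mg/g (synthetic)

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[130.0, 1e-3])
print(f"qe = {qe_fit:.1f} mg/g, k = {k_fit:.2e} g/(mg*min)")
```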
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padilla, J. L., E-mail: jose.padilladelatorre@epfl.ch; Alper, C.; Ionescu, A. M.
2015-06-29
We investigate the effect of pseudo-bilayer configurations at low operating voltages (≤0.5 V) in the heterogate germanium electron-hole bilayer tunnel field-effect transistor (HG-EHBTFET), compared to the traditional bilayer structures of EHBTFETs arising from semiclassical simulations, where the inversion layers for electrons and holes featured very symmetric profiles with similar concentration levels at the ON-state. Pseudo-bilayer layouts are attained by inducing a certain asymmetry between the top and the bottom gates so that, even though the hole inversion layer is formed at the bottom of the channel, the top gate voltage remains below the value required to trigger the formation of the inversion layer for electrons. Resulting benefits from this setup are improved electrostatic control of the channel, enhanced gate-to-gate efficiency, and higher I_ON levels. Furthermore, pseudo-bilayer configurations alleviate the difficulties derived from confining very high opposite carrier concentrations in very thin structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vay, Jean-Luc, E-mail: jlvay@lbl.gov; Haber, Irving; Godfrey, Brendan B.
Pseudo-spectral electromagnetic solvers (i.e. solvers representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization owing to the global communications associated with global FFTs on the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations, and the finite speed of light to limit the communication of data to guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
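The key property exploited by such solvers can be sketched in one dimension: in Fourier space each mode is advanced analytically over a finite time step. Here the 1D advection equation u_t + c u_x = 0 stands in for Maxwell's equations; this is an illustrative sketch, not the authors' solver.

```python
# Pseudo-spectral time stepping: each Fourier mode acquires the exact
# phase exp(-i*k*c*dt) per step, so there is no numerical dispersion.
import numpy as np

n, L, c, dt = 256, 2 * np.pi, 1.0, 0.1
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers

u = np.exp(-((x - np.pi) ** 2))              # initial pulse
uk = np.fft.fft(u)
for _ in range(100):                          # advance 100 steps analytically
    uk *= np.exp(-1j * k * c * dt)
u_new = np.real(np.fft.ifft(uk))
# The pulse has translated by c*t (wrapped periodically), undistorted.
```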
Multiscale System for Environmentally-Driven Infectious Disease with Threshold Control Strategy
NASA Astrophysics Data System (ADS)
Sun, Xiaodan; Xiao, Yanni
A multiscale system for environmentally-driven infectious disease is proposed, in which control measures at three different scales are implemented when the number of infected hosts exceeds a certain threshold. Our coupled model successfully describes the feedback mechanisms of between-host dynamics on within-host dynamics by letting a variable on one scale guide the enhancement of interventions on the other scales. The modeling approach provides a novel idea of how to link large-scale dynamics to small-scale dynamics. The dynamic behaviors of the multiscale system on two time-scales, i.e. the fast system and the slow system, are investigated. The slow system is further simplified to a two-dimensional Filippov system. For the Filippov system, we study the dynamics of its two subsystems (i.e. the free system and the control system), the sliding mode dynamics, the boundary equilibrium bifurcations, and the global behaviors. We prove that both subsystems may undergo backward bifurcations and that the sliding domain exists. Meanwhile, depending on the threshold value and other parameter values, the pseudo-equilibrium may exist and be globally stable; the pseudo-equilibrium, the disease-free equilibrium and the real equilibrium may be tri-stable; the pseudo-equilibrium and the real equilibrium may be bi-stable; or the pseudo-equilibrium and the disease-free equilibrium may be bi-stable. The global stability of the pseudo-equilibrium reveals that we may maintain the number of infected hosts at a previously given value. Moreover, the bi-stability and tri-stability indicate that whether the number of infected individuals tends to zero, a previously given value, or some other positive value depends on the parameter values and the initial state of the system. These results highlight the challenges in the control of environmentally-driven infectious disease.
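A minimal sketch of the threshold-control idea, reduced to a single-scale SIR model with an assumed control strength eps (the paper's multiscale and sliding-mode analysis is far richer):

```python
# Threshold-triggered control on an SIR model: extra interventions reduce
# the transmission rate only while infections exceed a threshold Ic.
beta, gamma, eps, Ic = 0.4, 0.1, 0.5, 0.05   # eps, Ic: assumed values
S, I = 0.99, 0.01
dt, T = 0.01, 200.0

for step in range(int(T / dt)):
    b = beta * (1.0 - eps) if I > Ic else beta   # Filippov-type switching
    dS = -b * S * I
    dI = b * S * I - gamma * I
    S += dt * dS                                  # forward-Euler update
    I += dt * dI

print(f"final infected fraction: {I:.4f}")
```

Near the threshold Ic, trajectories of the full Filippov system can slide along the switching surface, which is where the pseudo-equilibrium discussed above lives.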
Wei, Jiangyong; Hu, Xiaohua; Zou, Xiufen; Tian, Tianhai
2017-12-28
Recent advances in omics technologies have raised great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information. This work designs an integrated approach to reverse-engineer gene networks regulating early blood development based on single-cell experimental observations. The wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations. The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes, and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine networks with more regulatory relations, and numerical results show that more regulations may exist. We test the possibility of auto-regulation, but numerical simulations do not support positive auto-regulation. In addition, robustness is used as an important additional criterion to select candidate networks. The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.
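The smoothing step can be sketched with a Gaussian process regression over pseudo-time, here on synthetic data; the real pipeline would use wanderlust-derived pseudo-time and measured single-cell expression values.

```python
# Smooth a noisy pseudo-time expression profile with GP regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 80)[:, None]                 # pseudo-time (synthetic)
expr = np.tanh(6 * (t.ravel() - 0.5)) + rng.normal(0, 0.3, 80)  # noisy gene

kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, expr)
smooth = gp.predict(t)                             # smoothed trajectory
```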
On the usability and security of pseudo-signatures
NASA Astrophysics Data System (ADS)
Chen, Jin; Lopresti, Daniel
2010-01-01
Handwriting has been proposed as a possible biometric for a number of years. However, recent work has shown that handwritten passphrases are vulnerable to both human-based and machine-based forgeries. Pseudo-signatures, as an alternative, are designed to thwart such attacks while still being easy for users to create, remember, and reproduce. In this paper, we briefly review the concept of pseudo-signatures, then describe an evaluation framework that considers aspects of both usability and security. We present results from preliminary experiments that examine user choice in creating pseudo-signatures and discuss the implications when sketching is used for generating cryptographic keys.
Local Random Quantum Circuits are Approximate Polynomial-Designs
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Harrow, Aram W.; Horodecki, Michał
2016-09-01
We prove that local random quantum circuits acting on n qubits composed of O(t^10 n^2) many nearest neighbor two-qubit gates form an approximate unitary t-design. Previously it was unknown whether random quantum circuits were a t-design for any t > 3. The proof is based on an interplay of techniques from quantum many-body theory, representation theory, and the theory of Markov chains. In particular we employ a result of Nachtergaele for lower bounding the spectral gap of frustration-free quantum local Hamiltonians; a quasi-orthogonality property of permutation matrices; a result of Oliveira which extends to the unitary group the path-coupling method for bounding the mixing time of random walks; and a result of Bourgain and Gamburd showing that dense subgroups of the special unitary group, composed of elements with algebraic entries, are ∞-copy tensor-product expanders. We also consider pseudo-randomness properties of local random quantum circuits of small depth and prove that circuits of depth O(t^10 n) constitute a quantum t-copy tensor-product expander. The proof also rests on techniques from quantum many-body theory, in particular on the detectability lemma of Aharonov, Arad, Landau, and Vazirani. We give applications of the results to cryptography, equilibration of closed quantum dynamics, and the generation of topological order. In particular we show the following pseudo-randomness property of generic quantum circuits: Almost every circuit U of size O(n^k) on n qubits cannot be distinguished from a Haar uniform unitary by circuits of size O(n^((k-9)/11)) that are given oracle access to U.
Region-Based Prediction for Image Compression in the Cloud.
Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine
2018-04-01
Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as high efficiency video coding.
Hughes, Karen; Bellis, Mark A; Leckenby, Nicola; Quigg, Zara; Hardcastle, Katherine; Sharples, Olivia; Llewellyn, David J
2014-05-01
By measuring alcohol retailers' propensity to illegally sell alcohol to young people who appear highly intoxicated, we examine whether UK legislation is effective at preventing health harms resulting from drunk individuals continuing to access alcohol. 73 randomly selected pubs, bars and nightclubs in a city in North West England were subjected to an alcohol purchase test by pseudo-drunk actors. Observers recorded venue characteristics to identify poorly managed and problematic (PMP) bars. 83.6% of purchase attempts resulted in a sale of alcohol to a pseudo-intoxicated actor. Alcohol sales increased with the number of PMP markers bars had, yet even in those with no markers, 66.7% of purchase attempts resulted in a sale. Bar servers often recognised signs of drunkenness in actors, but still served them. In 18% of alcohol sales, servers attempted to up-sell by suggesting actors purchase double rather than single vodkas. UK law preventing sales of alcohol to drunks is routinely broken in nightlife environments, yet prosecutions are rare. Nightlife drunkenness places enormous burdens on health and health services. Preventing alcohol sales to drunks should be a public health priority, while policy failures on issues, such as alcohol pricing, are revisited.
Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.
Grossi, Giuliano
2009-08-01
Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, thus avoiding oscillatory behavior or asymptotically unstable convergence. The presence of stochastic dynamics helps prevent the network from falling into shallow local minima of the energy function, i.e., minima far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating this process. The model uses pseudo-Boolean functions both to express problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, i.e., functions easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model, characterizing its state-space distribution at thermal equilibrium in terms of a Markov chain, and give evidence of its ability to find high-quality solutions on benchmarks and randomly generated instances of two specific problems taken from computational graph theory.
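A minimal sketch of stochastic binary dynamics maximizing a quadratic pseudo-Boolean function f(x) = xᵀWx + bᵀx with Glauber-style probabilistic flips; the paper's penalty shaping and learned energy function are omitted, and W, b here are random placeholders.

```python
# Stochastic single-bit updates for a quadratic pseudo-Boolean objective.
import numpy as np

rng = np.random.default_rng(1)
n = 30
W = rng.normal(0, 1, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(0, 1, n)
x = rng.integers(0, 2, n).astype(float)

def f(x):
    return x @ W @ x + b @ x

T = 2.0
for sweep in range(2000):
    i = rng.integers(n)
    gain = (1 - 2 * x[i]) * (2 * W[i] @ x + b[i])   # change in f if bit i flips
    z = np.clip(gain / T, -60.0, 60.0)              # avoid overflow in exp
    if rng.random() < 1.0 / (1.0 + np.exp(-z)):     # Glauber acceptance
        x[i] = 1 - x[i]
    T = max(0.01, T * 0.999)                        # gentle annealing

print("objective value:", f(x))
```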
Reynolds number effects on mixing due to topological chaos.
Smith, Spencer A; Warrier, Sangeeta
2016-03-01
Topological chaos has emerged as a powerful tool to investigate fluid mixing. While this theory can guarantee a lower bound on the stretching rate of certain material lines, it does not indicate what fraction of the fluid actually participates in this minimally mandated mixing. Indeed, the area in which effective mixing takes place depends on physical parameters such as the Reynolds number. To help clarify this dependency, we numerically simulate the effects of a batch stirring device on a 2D incompressible Newtonian fluid in the laminar regime. In particular, we calculate the finite time Lyapunov exponent (FTLE) field for three different stirring protocols, one topologically complex (pseudo-Anosov) and two simple (finite-order), over a range of viscosities. After extracting appropriate measures indicative of both the amount of mixing and the area of effective mixing from the FTLE field, we see a clearly defined Reynolds number range in which the relative efficacy of the pseudo-Anosov protocol over the finite-order protocols justifies the application of topological chaos. More unexpectedly, we see that while the measures of effective mixing area increase with increasing Reynolds number for the finite-order protocols, they actually exhibit non-monotonic behavior for the pseudo-Anosov protocol.
A pseudo-thermodynamic description of dispersion for nanocomposites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Yan; Beaucage, Gregory; Vogtt, Karsten
Dispersion in polymer nanocomposites is determined by the kinetics of mixing and chemical affinity. Compounds like reinforcing filler/elastomer blends display some similarity to colloidal solutions in that the filler particles are close to randomly dispersed through processing. It is attractive to apply a pseudo-thermodynamic approach taking advantage of this analogy between the kinetics of mixing for polymer compounds and thermally driven dispersion for colloids. In order to demonstrate this pseudo-thermodynamic approach, two polybutadienes and one polyisoprene were milled with three carbon blacks and two silicas. These samples were examined using small-angle x-ray scattering as a function of filler concentration to determine a pseudo-second order virial coefficient, A2, which is used as an indicator of the compatibility of the filler and polymer. It is found that A2 follows the expected behavior with lower values for smaller primary particles, indicating that smaller particles are less compatible and more difficult to mix. The measured values of A2 can be used to specify repulsive interaction potentials for coarse grain DPD simulations of filler/elastomer systems. In addition, new methods to quantify the filler percolation threshold and filler mesh size as a function of filler concentration are obtained. Moreover, the results represent a new approach to understanding and predicting compatibility in polymer nanocomposites based on a pseudo-thermodynamic approach.
Chan, Wai Sze; Williams, Jacob; Dautovich, Natalie D.; McNamara, Joseph P.H.; Stripling, Ashley; Dzierzewski, Joseph M.; Berry, Richard B.; McCoy, Karin J.M.; McCrae, Christina S.
2017-01-01
Study Objectives: Sleep variability is a clinically significant variable in understanding and treating insomnia in older adults. The current study examined changes in sleep variability in the course of brief behavioral therapy for insomnia (BBT-I) in older adults who had chronic insomnia. Additionally, the current study examined the mediating mechanisms underlying reductions of sleep variability and the moderating effects of baseline sleep variability on treatment responsiveness. Methods: Sixty-two elderly participants were randomly assigned to either BBT-I or self-monitoring and attention control (SMAC). Sleep was assessed by sleep diaries and actigraphy from baseline to posttreatment and at 3-month follow-up. Mixed models were used to examine changes in sleep variability (within-person standard deviations of weekly sleep parameters) and the hypothesized mediation and moderation effects. Results: Variabilities in sleep diary-assessed sleep onset latency (SOL) and actigraphy-assessed total sleep time (TST) significantly decreased in BBT-I compared to SMAC (Pseudo R2 = .12, .27; P = .018, .008). These effects were mediated by reductions in bedtime and wake time variability and time in bed. Significant time × group × baseline sleep variability interactions on sleep outcomes indicated that participants who had higher baseline sleep variability were more responsive to BBT-I; their actigraphy-assessed TST, SOL, and sleep efficiency improved to a greater degree (Pseudo R2 = .15 to .66; P < .001 to .044). Conclusions: BBT-I is effective in reducing sleep variability in older adults who have chronic insomnia. Increased consistency in bedtime and wake time and decreased time in bed mediate reductions of sleep variability. Baseline sleep variability may serve as a marker of high treatment responsiveness to BBT-I. Clinical Trial Registration: ClinicalTrials.gov, Identifier: NCT02967185 Citation: Chan WS, Williams J, Dautovich ND, McNamara JP, Stripling A, Dzierzewski JM, Berry RB, McCoy KJ, McCrae CS. Night-to-night sleep variability in older adults with chronic insomnia: mediators and moderators in a randomized controlled trial of brief behavioral therapy (BBT-I). J Clin Sleep Med. 2017;13(11):1243–1254. PMID:28992829
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2017-01-01
The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
Ragupathy, S; Raghu, K; Prabu, P
2015-03-05
Synthesis of titanium dioxide (TiO2) nanoparticles and TiO2-loaded cashew nut shell activated carbon (TiO2/CNSAC) was undertaken using a sol-gel method, and their application to the removal of BG and MB dyes under sunlight radiation has been investigated. The synthesized photocatalysts were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), UV-Vis diffuse reflectance spectroscopy (DRS) and scanning electron microscopy (SEM) with energy dispersive X-ray analysis (EDX). Various experimental parameters, such as the amount of catalyst and the contact time for efficient degradation of BG and MB, were examined in this study. Activity measurements performed under solar irradiation have shown good results for the photodegradation of BG and MB in aqueous solution. It was concluded that the higher photocatalytic activity of TiO2/CNSAC was due to parameters such as band gap, number of hydroxyl groups, surface area and porosity of the catalyst. The kinetic data were also described by the pseudo-first-order and pseudo-second-order kinetic models. Copyright © 2014 Elsevier B.V. All rights reserved.
Coruh, Semra; Ergun, Osman Nuri
2010-01-15
Increasing amounts of residues and waste materials from industrial activities in different processes have become an increasingly urgent problem for the future. The release of large quantities of heavy metals into the environment has resulted in a number of environmental problems. The present study investigated the safe disposal of zinc leach residue waste using industrial residues such as fly ash, phosphogypsum and red mud. In the study, the leachability of heavy metals from the zinc leach residue was evaluated by the mine water leaching procedure (MWLP) and the toxicity characteristic leaching procedure (TCLP). Zinc removal from the leachate was studied using fly ash, phosphogypsum and red mud, and the adsorption capacities and adsorption efficiencies were determined. The adsorption rate data were analyzed according to the pseudo-second-order, Elovich and intra-particle diffusion kinetic models; the pseudo-second-order model gave the best fit to the experimental data. The results show that addition of fly ash, phosphogypsum and red mud to the zinc leach residue drastically reduces the heavy metal content in the leachate, and these materials could be used as liner materials.
NASA Technical Reports Server (NTRS)
Kaljurand, M.; Valentin, J. R.; Shao, M.
1996-01-01
Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived from the feedback shift register algorithm (i.e., pseudo-random binary sequences, PRBS) and uniform random binary sequences (URBS). These two sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that when PRBS is used the S/N of the correlogram is much higher than that obtained with URBS.
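A PRBS can be generated with a few lines of code; the sketch below implements the standard maximal-length 7-bit register with taps at positions 7 and 6 (period 2^7 - 1 = 127), which is one common choice rather than necessarily the one used in the paper.

```python
# PRBS7 generator: linear feedback shift register with taps (7, 6).
def prbs7(seed=0x01):
    state = seed & 0x7F
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1   # XOR of taps 7 and 6
        state = ((state << 1) | bit) & 0x7F       # shift in the feedback bit
        yield bit

gen = prbs7()
seq = [next(gen) for _ in range(127)]
print(sum(seq))   # a full period contains 64 ones and 63 zeros
```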
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, J; Su, K; Department of Radiology, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, Ohio
Purpose: Accurate and robust photon attenuation derived from MR is essential for PET/MR and MR-based radiation treatment planning applications. Although the fuzzy C-means (FCM) algorithm has been applied for pseudo-CT generation, the input feature combination and the number of clusters have not been optimized. This study aims to optimize both for clinically practical pseudo-CT generation. Methods: Nine volunteers were recruited. A 190-second, single-acquisition UTE-mDixon scan with 25% (angular) sampling and 3D radial readout was performed to acquire three primitive MR features at TEs of 0.1, 1.5, and 2.8 ms: the free-induction-decay (FID), the first and the second echo images. Three derived images, Dixon-fat and Dixon-water generated by two-point Dixon water/fat separation, and an R2* (1/T2*) map, were also created. To identify informative inputs for generating a pseudo-CT image volume, all 63 combinations, choosing one to six of the feature images, were used as inputs to FCM for pseudo-CT generation. Further, the number of clusters was varied from four to seven to find the optimal approach. Mean prediction deviation (MPD), mean absolute prediction deviation (MAPD), and correlation coefficient (R) of the different combinations were compared for feature selection. Results: Among the 63 feature combinations, the four that resulted in the best MAPD and R were further compared along with the set containing all six features. The results suggested that R2* and Dixon-water are the most informative features. Further, including FID also improved the performance of pseudo-CT generation. Consequently, the set containing FID, Dixon-water, and R2* resulted in the most accurate, robust pseudo-CT when the number of clusters was five (5C). The clusters were interpreted as air, fat, bone, brain, and fluid. The six-cluster result additionally included bone marrow. Conclusion: The results suggested that FID, Dixon-water, and R2* are the most important features. The findings can be used to facilitate pseudo-CT generation by unsupervised clustering. Please note that the project was completed with partial funding from the Ohio Department of Development grant TECH 11-063 and a sponsored research agreement with Philips Healthcare that is managed by Case Western Reserve University. As noted in the affiliations, some of the authors are Philips employees.
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Hengartner, N. W.; Redner, S.; Vazquez, F.
2013-05-01
We study the effects of randomness on competitions based on an elementary random process in which there is a finite probability that a weaker team upsets a stronger team. We apply this model to sports leagues and sports tournaments, and compare the theoretical results with empirical data. Our model shows that single-elimination tournaments are efficient but unfair: the number of games is proportional to the number of teams N, but the probability that the weakest team wins decays only algebraically with N. In contrast, leagues, where every team plays every other team, are fair but inefficient: the top √N teams remain in contention for the championship, while the probability that the weakest team becomes champion is exponentially small. We also propose a gradual elimination schedule that consists of a preliminary round and a championship round. Initially, teams play a small number of preliminary games, and subsequently, a few teams qualify for the championship round. This algorithm is fair and efficient: the best team wins with a high probability and the number of games scales as N^(9/5), whereas traditional leagues require N^3 games to fairly determine a champion.
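The elementary random process is easy to simulate; the sketch below estimates the probability that the weakest of N teams wins a single-elimination bracket when any weaker team upsets a stronger one with an assumed fixed probability q.

```python
# Monte Carlo estimate of the weakest team's championship probability
# in a single-elimination tournament (N must be a power of two here).
import random

def single_elimination(n_teams, q=0.3, trials=20000):
    """Probability that the weakest team (highest seed number) wins."""
    wins = 0
    for _ in range(trials):
        teams = list(range(n_teams))          # 0 = strongest team
        random.shuffle(teams)                 # random bracket placement
        while len(teams) > 1:
            nxt = []
            for a, b in zip(teams[::2], teams[1::2]):
                strong, weak = (a, b) if a < b else (b, a)
                nxt.append(weak if random.random() < q else strong)
            teams = nxt
        wins += teams[0] == n_teams - 1
    return wins / trials

print(single_elimination(16))
```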
Shulkind, Gal; Nazarathy, Moshe
2012-12-17
We present an efficient method for system identification (nonlinear channel estimation) of the third-order nonlinear Volterra Series Transfer Function (VSTF) characterizing the four-wave-mixing nonlinear process over a coherent OFDM fiber link. Despite the seemingly large number of degrees of freedom in the VSTF (cubic in the number of frequency points), we identified a compressed VSTF representation which does not entail loss of information. Additional, slightly lossy compression may be obtained by discarding very low power VSTF coefficients associated with regions of destructive interference in the FWM phased-array effect. Based on this two-stage compressed VSTF representation, we develop a robust and efficient algorithm for nonlinear system identification (optical performance monitoring), estimating the VSTF by transmitting an extended training sequence over the OFDM link and performing just a matrix-vector multiplication at the receiver by a pseudo-inverse matrix which is pre-evaluated offline. For 512 (1024) frequency samples per channel, the VSTF measurement takes less than 1 (10) msec to complete, with a computational complexity of one real-valued multiply-add operation per time sample. Relative to a naïve exhaustive three-tone test, our algorithm is far more tolerant of ASE additive noise and its acquisition time is orders of magnitude faster.
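The run-time estimation step the abstract describes reduces to multiplying the received training samples by a pseudo-inverse that was computed offline. The sketch below shows this for a generic linear model; in the actual scheme the columns of the training matrix would encode the (compressed) third-order tone products of the VSTF.

```python
# Least-squares channel identification with a pre-evaluated pseudo-inverse.
import numpy as np

rng = np.random.default_rng(0)
m, p = 400, 32                      # training samples, coefficients
A = rng.normal(size=(m, p)) + 1j * rng.normal(size=(m, p))  # known training
h_true = rng.normal(size=p) + 1j * rng.normal(size=p)
y = A @ h_true + 0.05 * (rng.normal(size=m) + 1j * rng.normal(size=m))

A_pinv = np.linalg.pinv(A)          # pre-evaluated offline, once
h_est = A_pinv @ y                  # one matrix-vector multiply at run time
print(np.linalg.norm(h_est - h_true) / np.linalg.norm(h_true))
```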
O'Dywer, Lian; Littlewood, Simon J; Rahman, Shahla; Spencer, R James; Barber, Sophy K; Russell, Joanne S
2016-01-01
To use a two-arm parallel trial to compare treatment efficiency between a self-ligating and a conventional preadjusted edgewise appliance system. A prospective multi-center randomized controlled clinical trial was conducted in three hospital orthodontic departments. Subjects were randomly allocated to receive treatment with either a self-ligating (3M SmartClip) or conventional (3M Victory) preadjusted edgewise appliance bracket system using a computer-generated random sequence concealed in opaque envelopes, with stratification for operator and center. Two operators followed a standardized protocol regarding bracket bonding procedure and archwire sequence. Efficiency of each ligation system was assessed by comparing the duration of treatment (months), total number of appointments (scheduled and emergency visits), and number of bracket bond failures. One hundred thirty-eight subjects (mean age 14 years 11 months) were enrolled in the study, of which 135 subjects (97.8%) completed treatment. The mean treatment time and number of visits were 25.12 months and 19.97 visits in the SmartClip group and 25.80 months and 20.37 visits in the Victory group. The overall bond failure rate was 6.6% for the SmartClip and 7.2% for Victory, with a similar debond distribution between the two appliances. No significant differences were found between the bracket systems in any of the outcome measures. No serious harm was observed from either bracket system. There was no clinically significant difference in treatment efficiency between treatment with a self-ligating bracket system and a conventional ligation system.
Araújo, Carolina S.; Souza, Givago S.; Gomes, Bruno D.; Silveira, Luiz Carlos L.
2013-01-01
The contributions of contrast detection mechanisms to the visual cortical evoked potential (VECP) have been investigated by studying the contrast-response and spatial frequency-response functions. Previously, the use of m-sequences for stimulus control has been almost restricted to multifocal electrophysiology stimulation, which in some aspects differs substantially from conventional VECPs. Single stimulation with spatial contrast temporally controlled by m-sequences has not been extensively tested or compared to multifocal techniques. Our purpose was to evaluate the influence of the spatial frequency and contrast of sinusoidal gratings on the VECP elicited by pseudo-random stimulation. Nine normal subjects were stimulated by achromatic sinusoidal gratings driven by a pseudo-random binary m-sequence at seven spatial frequencies (0.4-10 cpd) and three stimulus sizes (4°, 8°, and 16° of visual angle). At 8° subtense, six contrast levels were used (3.12-99%). The first order kernel (K1) did not provide a consistent measurable signal across the spatial frequencies and contrasts that were tested (the signal was very small or absent), while the first (K2.1) and second (K2.2) slices of the second order kernel exhibited reliable responses over the stimulus range. The main differences between results obtained with K2.1 and K2.2 were in the contrast gain as measured in the amplitude versus contrast and amplitude versus spatial frequency functions. The results indicated that K2.1 was dominated by the M-pathway, although for some stimulus conditions some P-pathway contribution could be found, while the second slice reflected the P-pathway contribution. The present work extended previous findings on the visual pathways' contribution to the VECP elicited by pseudo-random stimulation to a wider range of spatial frequencies. PMID:23940546
Cadmium telluride nanoparticles loaded on activated carbon as adsorbent for removal of sunset yellow
NASA Astrophysics Data System (ADS)
Ghaedi, M.; Hekmati Jah, A.; Khodadoust, S.; Sahraei, R.; Daneshfar, A.; Mihandoost, A.; Purkait, M. K.
2012-05-01
Adsorption is a promising technique for the decolorization of effluents from textile dyeing industries, but its application is limited by the high amounts of adsorbent required. The objective of this study was to assess the potential of cadmium telluride nanoparticles loaded onto activated carbon (CdTN-AC) for the removal of sunset yellow (SY) dye from aqueous solution. Adsorption studies were conducted in batch mode, varying solution pH, contact time, initial dye concentration, CdTN-AC dose, and temperature. In order to investigate the efficiency of SY adsorption on CdTN-AC, pseudo-first-order, pseudo-second-order, Elovich, and intra-particle diffusion kinetic models were studied. It was observed that the pseudo-second-order kinetic model fits better than the other kinetic models, with a good correlation coefficient. Equilibrium data were fitted to the Langmuir model. Thermodynamic parameters such as enthalpy, entropy, activation energy, and sticking probability were also calculated. It was found that the sorption of SY onto CdTN-AC was spontaneous and endothermic in nature. The proposed adsorbent is applicable for SY removal from real effluents, including pea-shooter, orange drink and jelly banana wastes, with efficiencies greater than 97%.
The structure factor of primes
NASA Astrophysics Data System (ADS)
Zhang, G.; Martelli, F.; Torquato, S.
2018-03-01
Although the prime numbers are deterministic, they can be viewed, by some measures, as pseudo-random numbers. In this article, we numerically study the pair statistics of the primes using statistical-mechanical methods, particularly the structure factor S(k) in an interval M ≤ p ≤ M + L with M large and L/M smaller than unity. We show that the structure factor of the prime-number configurations in such intervals exhibits well-defined Bragg-like peaks along with a small 'diffuse' contribution. This indicates that primes are appreciably more correlated and ordered than previously thought. Our numerical results definitively suggest an explicit formula for the locations and heights of the peaks. This formula predicts infinitely many peaks in any non-zero interval, similar to the behavior of quasicrystals. However, primes differ from quasicrystals in that the ratio between the locations of any two predicted peaks is rational. We also show numerically that the diffuse part decays slowly as M and L increase. This suggests that the diffuse part vanishes in an appropriate infinite-system-size limit.
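Assuming the standard point-configuration definition S(k) = |Σ_j exp(-i k p_j)|² / N, the Bragg-like peaks can be probed directly; sympy's primerange supplies the primes:

```python
# Structure factor of the primes in [M, M+L], evaluated on a grid of k.
import numpy as np
from sympy import primerange

M, L = 10**6, 10**5
primes = np.array(list(primerange(M, M + L)), dtype=float)
N = len(primes)

ks = np.linspace(0.01, 1.0, 2000)
S = np.array([abs(np.exp(-1j * k * primes).sum()) ** 2 / N for k in ks])
print(ks[S.argmax()], S.max())      # sharp Bragg-like peaks stand out
```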
Pseudo Random Stimulus Response of Combustion Systems.
1980-01-01
is also applicable to the coalescence/dispersion (C/D) micromixing model. In the C/D model, micromixing is simulated by considering the reacting...the turbulent fluctuations on the local heat release rate. Thus the micromixing 'noise' measurements will not be valid; however, deductions
Variable word length encoder reduces TV bandwidth requirements
NASA Technical Reports Server (NTRS)
Sivertson, W. E., Jr.
1965-01-01
Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.
Tseng, Wen-Hung; Huang, Yi-Jiun; Gotoh, Tadahiro; Hobiger, Thomas; Fujieda, Miho; Aida, Masanori; Li, Tingyu; Lin, Shinn-Yan; Lin, Huang-Tien; Feng, Kai-Ming
2012-03-01
Two-way satellite time and frequency transfer (TWSTFT) is one of the main techniques used to compare atomic time scales over long distances. To both improve the precision of TWSTFT and decrease the satellite link fee, a new software-defined modem with dual pseudo-random noise (DPN) codes has been developed. In this paper, we demonstrate the first international DPN-based TWSTFT experiment over a period of 6 months. The results of DPN exhibit excellent performance, which is competitive with the Global Positioning System (GPS) precise point positioning (PPP) technique in the short-term and consistent with the conventional TWSTFT in the long-term. Time deviations of less than 75 ps are achieved for averaging times from 1 s to 1 d. Moreover, the DPN data has less diurnal variation than that of the conventional TWSTFT. Because the DPN-based system has advantages of higher precision and lower bandwidth cost, it is one of the most promising methods to improve international time-transfer links.
Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.
Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B
2018-05-15
In this study, we explore the feasibility of a novel framework for MR-based attenuation correction (MRAC) for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo-CT image from ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a single short acquisition (35 s). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional-random-field-based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo-CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01. In PET quantification, the proposed MRAC method produced relative PET errors of less than 1% within most brain regions. The proposed MRAC method, utilizing deep learning with transfer learning and an efficient dRHE acquisition, enables reliable PET quantification with accurate and rapid pseudo-CT generation. This article is protected by copyright. All rights reserved.
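The evaluation metric is standard; a minimal sketch of the Dice similarity coefficient for binary label masks (variable names here are illustrative):

```python
# Dice similarity coefficient between predicted and reference masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# e.g. dice(bone_pred, bone_ct) for the bone label of one subject
```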
Efficient quantum pseudorandomness with simple graph states
NASA Astrophysics Data System (ADS)
Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian
2018-02-01
Measurement-based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement-based quantum computing, closely related to the brickwork state.
Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow
NASA Astrophysics Data System (ADS)
Ulerich, Rhys; Moser, Robert D.
2012-11-01
To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls have been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on the bulk velocity and the sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high-quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to those of three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
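A simplified sketch of the LULHS idea: draw Latin-hypercube-stratified standard normal samples for each grid cell, then impose spatial correlation by multiplying with the lower-triangular factor of the target covariance matrix. The exponential covariance model below is an assumption for illustration, not the study's site model.

```python
# LHS margins + Cholesky (LU-type) factor => correlated random fields.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_grid, n_real = 50, 100             # grid cells, realizations

# Assumed exponential covariance between grid cells.
x = np.arange(n_grid, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
Lc = np.linalg.cholesky(C)

# Latin hypercube margins: one stratified uniform per cell and realization
# (rng.permuted needs NumPy >= 1.20).
u = (rng.permuted(np.tile(np.arange(n_real), (n_grid, 1)), axis=1)
     + rng.random((n_grid, n_real))) / n_real
z = norm.ppf(u)                      # stratified standard normals
fields = Lc @ z                      # each column is one correlated field
```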
Efficient Ab initio Modeling of Random Multicomponent Alloys
Jiang, Chao; Uberuaga, Blas P.
2016-03-08
We present in this Letter a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.
Jalas, S.; Dornmair, I.; Lehe, R.; ...
2017-03-20
Particle in Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR), resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. In this paper, we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.
Lexical orthography acquisition: Is handwriting better than spelling aloud?
Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane
2014-01-01
Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after control of the encoding-retrieval match effect. This later finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task. PMID:24575058
The Spontaneous Ray Log: A New Aid for Constructing Pseudo-Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Quadir, Adnan; Lewis, Charles; Rau, Ruey-Juin
2018-02-01
Conventional synthetic seismograms for hydrocarbon exploration combine the sonic and density logs, whereas pseudo-synthetic seismograms are constructed with a density log plus a resistivity, neutron, gamma ray, or, rarely, a spontaneous potential log. Herein, we introduce a new technique for constructing a pseudo-synthetic seismogram by combining the gamma ray (GR) and self-potential (SP) logs to produce the spontaneous ray (SR) log. Three wells, each of which consisted of more than 1000 m of carbonates, sandstones, and shales, were investigated; each well was divided into 12 groups based on formation tops, and the Pearson product-moment correlation coefficient (PCC) was calculated for each group from each of the GR, SP, and SR logs. The highest-PCC log curves for each group were then combined to produce a single log whose values were cross-plotted against the reference well's sonic ITT values to determine a linear transform for producing a pseudo-sonic (PS) log and, ultimately, a pseudo-synthetic seismogram. The Nash-Sutcliffe efficiency (NSE) values for the pseudo-sonic logs of the three wells ranged from 78% to 83%, within the acceptable range. The technique was tested on three wells, one of which was used as a blind test well, with satisfactory results. The PCC value between the composite PS (SR) log with low-density correction and the conventional sonic (CS) log was 86%. Because of the common occurrence of spontaneous potential and gamma ray logs in many of the hydrocarbon basins of the world, this inexpensive and straightforward technique could hold significant promise in areas in need of alternative ways to create pseudo-synthetic seismograms for seismic reflection interpretation.
Pseudo-polar drive patterns for brain electrical impedance tomography.
Shi, Xuetao; Dong, Xiuzhen; Shuai, Wanjun; You, Fusheng; Fu, Feng; Liu, Ruigang
2006-11-01
Brain electrical impedance tomography (EIT) is a difficult task because brain tissues are enclosed by the high-resistance skull and the low-resistance cerebrospinal fluid (CSF), which makes internal resistivity information more difficult to extract. In order to find a single-source drive pattern better suited to brain EIT, we built a realistic experimental setting that simulates a head with the resistivities of the scalp, skull, CSF and brain, and used it to compare the performance of the adjacent, cross, polar and pseudo-polar drive patterns in terms of boundary voltage dynamic range, number of independent measurements, total boundary voltage change and anti-noise performance. The results demonstrate that the pseudo-polar drive pattern is optimal in all aspects except the dynamic range. The polar and cross drive patterns come next, and the adjacent drive pattern is the worst. Therefore, the pseudo-polar drive pattern should be chosen for brain EIT.
NASA Astrophysics Data System (ADS)
Lou, Benyong; Perumalla, Sathyanarayana Reddy; Sun, Changquan Calvin
2015-11-01
Using three carboxylic acids, we show that the COOH⋯COO- synthon is robust for directing the cocrystallization between a carboxylic acid and a carboxylate of either the same or a chemically different molecule to form a CAB or pseudo CAB cocrystal, respectively. For a given carboxylic acid and a counterion, only one salt could be prepared. However, one additional CAB cocrystal and two pseudo CAB cocrystals could be prepared based on the COOH⋯COO- synthon. The same synthon has the potential to enable the preparation of additional molecular pseudo CAB cocrystals using other chemically distinct carboxylic acids. This significantly increased number of solid forms highlights the value of charge-assisted synthons, such as COOH⋯COO-, in crystal engineering for expanding the range of material properties of a given molecule for optimum performance in product design.
Reynolds number effects on mixing due to topological chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Spencer A.; Warrier, Sangeeta
2016-03-15
Topological chaos has emerged as a powerful tool to investigate fluid mixing. While this theory can guarantee a lower bound on the stretching rate of certain material lines, it does not indicate what fraction of the fluid actually participates in this minimally mandated mixing. Indeed, the area in which effective mixing takes place depends on physical parameters such as the Reynolds number. To help clarify this dependency, we numerically simulate the effects of a batch stirring device on a 2D incompressible Newtonian fluid in the laminar regime. In particular, we calculate the finite time Lyapunov exponent (FTLE) field for three different stirring protocols, one topologically complex (pseudo-Anosov) and two simple (finite-order), over a range of viscosities. After extracting appropriate measures indicative of both the amount of mixing and the area of effective mixing from the FTLE field, we see a clearly defined Reynolds number range in which the relative efficacy of the pseudo-Anosov protocol over the finite-order protocols justifies the application of topological chaos. More unexpectedly, we see that while the measures of effective mixing area increase with increasing Reynolds number for the finite-order protocols, they actually exhibit non-monotonic behavior for the pseudo-Anosov protocol.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate; the resulting distributions pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator feeding a faster algorithmic random number generator, or create a buffer.
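The pipeline described above is easy to prototype. The following Python sketch (our illustration, not the authors' code; the count series is synthetic) detrends a slowly varying count record with a smoothing spline, standardizes the residual, checks normality, and maps it to uniform variates via the probability integral transform:

```python
# Illustrative sketch (not the authors' code): extract a stochastic component
# from a slowly varying count-rate series by spline detrending, standardize it,
# and map it to uniforms by the probability integral transform.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm, normaltest

rng = np.random.default_rng(0)
t = np.arange(1440.0)                    # hypothetical one-minute counts, one day
counts = 1000 + 5 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 3, t.size)

trend = UnivariateSpline(t, counts, s=len(t) * 9)(t)  # smoothing spline fit
resid = counts - trend                                 # stochastic component
z = (resid - resid.mean()) / resid.std()               # zero mean, unit variance

print("normality p-value:", normaltest(z).pvalue)      # sanity check
u = norm.cdf(z)                                        # approximately Uniform(0, 1)
```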
NASA Technical Reports Server (NTRS)
Anderle, R. J.
1978-01-01
It is shown that pseudo-range measurements to four GPS satellites based on correlation of the pseudo random code transmissions from the satellites can be used to determine the relative position of ground stations which are separated by several hundred kilometers to a precision at the centimeter level. Carrier signal measurements during the course of passage of satellites over a pair of stations also yield centimeter precision in the relative position, but oscillator instabilities limit the accuracy. The accuracy of solutions based on either type of data is limited by unmodeled tropospheric refraction effects which would reach 5 centimeters at low elevation angles for widely separated stations.
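For illustration, here is a minimal sketch of the pseudo-range geometry this abstract relies on: a Gauss-Newton solve for receiver position and clock bias from four pseudo-ranges. The satellite coordinates and the "true" state are invented for the example, and no tropospheric model is included.

```python
# Hedged sketch of single-point positioning from four pseudo-ranges.
import numpy as np

sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])          # made-up satellite positions
p_true, b_true = np.array([1113e3, 2223e3, 3334e3]), 85.0   # bias in metres
rho = np.linalg.norm(sats - p_true, axis=1) + b_true         # noiseless pseudo-ranges

x = np.array([1e6, 1e6, 1e6, 0.0])            # rough initial guess [x, y, z, bias]
for _ in range(10):
    d = np.linalg.norm(sats - x[:3], axis=1)
    r = rho - (d + x[3])                      # measurement residuals
    J = np.hstack([(sats - x[:3]) / d[:, None],   # d r / d position
                   -np.ones((4, 1))])             # d r / d clock bias
    x -= np.linalg.lstsq(J, r, rcond=None)[0]     # Gauss-Newton step

print(x[:3] - p_true, x[3] - b_true)          # ~zero at convergence
```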
PseudoBase: a database with RNA pseudoknots.
van Batenburg, F H; Gultyaev, A P; Pleij, C W; Ng, J; Oliehoek, J
2000-01-01
PseudoBase is a database containing structural, functional and sequence data related to RNA pseudoknots. It can be reached at http://wwwbio.LeidenUniv.nl/~Batenburg/PKB.html. This page will direct the user to a retrieval page from where a particular pseudoknot can be chosen, to a submission page which enables the user to add pseudoknot information to the database, or to an informative page that elaborates on the various aspects of the database. For each pseudoknot, 12 items are stored, e.g. the nucleotides of the region that contains the pseudoknot, the stem positions of the pseudoknot, the EMBL accession number of the sequence that contains this pseudoknot and the support that can be given regarding the reliability of the pseudoknot. Access is via a small number of steps, using 16 different categories. The development process was done by applying the evolutionary methodology for software development rather than by applying the methodology of the classical waterfall model or the more modern spiral model.
Random numbers certified by Bell's theorem.
Pironio, S; Acín, A; Massar, S; de la Giroday, A Boyer; Matsukevich, D N; Maunz, P; Olmschenk, S; Hayes, D; Luo, L; Manning, T A; Monroe, C
2010-04-15
Randomness is a fundamental feature of nature and a valuable resource for applications ranging from cryptography and gambling to numerical simulation of physical and biological systems. Random numbers, however, are difficult to characterize mathematically, and their generation must rely on an unpredictable physical process. Inaccuracies in the theoretical modelling of such processes or failures of the devices, possibly due to adversarial attacks, limit the reliability of random number generators in ways that are difficult to control and detect. Here, inspired by earlier work on non-locality-based and device-independent quantum information processing, we show that the non-local correlations of entangled quantum particles can be used to certify the presence of genuine randomness. It is thereby possible to design a cryptographically secure random number generator that does not require any assumption about the internal working of the device. Such a strong form of randomness generation is impossible classically and possible in quantum systems only if certified by a Bell inequality violation. We carry out a proof-of-concept demonstration of this proposal in a system of two entangled atoms separated by approximately one metre. The observed Bell inequality violation, featuring near perfect detection efficiency, guarantees that 42 new random numbers are generated with 99 per cent confidence. Our results lay the groundwork for future device-independent quantum information experiments and for addressing fundamental issues raised by the intrinsic randomness of quantum theory.
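As a worked illustration of the certification arithmetic, the sketch below estimates the CHSH value S from measured correlators and evaluates a min-entropy bound of the form f(S) = 1 - log2(1 + sqrt(2 - S²/4)), which is the bound used in this line of work; treat both the bound form and the correlator values as assumptions of the example, not a reproduction of the paper's analysis.

```python
# Hedged sketch: CHSH value from correlators and a min-entropy bound per bit.
import math

def chsh(E):
    """E maps setting pairs (x, y) to correlators <a_x b_y> in [-1, 1]."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def min_entropy_per_bit(S):
    if S <= 2.0:                 # no Bell violation -> nothing certified
        return 0.0
    return 1.0 - math.log2(1.0 + math.sqrt(2.0 - S * S / 4.0))

E = {(0, 0): 0.72, (0, 1): 0.70, (1, 0): 0.71, (1, 1): -0.69}   # made-up data
S = chsh(E)                      # here S ~ 2.82 > 2, a violation
print(S, min_entropy_per_bit(S))  # ~0.85 certified random bits per output bit
```

The bound behaves sensibly at the endpoints: at the classical limit S = 2 it certifies zero bits, and at the Tsirelson bound S = 2√2 it certifies one full bit per outcome.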
Wavelet subspace decomposition of thermal infrared images for defect detection in artworks
NASA Astrophysics Data System (ADS)
Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.
2016-07-01
The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. Classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks: it involves heating the artwork and recording its thermal response with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed which favors identification of the otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the selection of the subspace level; a novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory-based sample as well as real artworks, and a new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm.
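A minimal sketch of the wavelet subspace idea follows (assuming the PyWavelets library; this is an illustration of the general technique, not the authors' pipeline): decompose a thermal image, drop the coarse approximation subspace, and reconstruct to enhance weak, localized defects.

```python
# Illustration: detail-subspace reconstruction of a (stand-in) thermogram.
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a thermal image

coeffs = pywt.wavedec2(img, wavelet='db2', level=3)
coeffs[0] = np.zeros_like(coeffs[0])           # suppress the smooth background
detail = pywt.waverec2(coeffs, wavelet='db2')  # detail-subspace reconstruction

# A defect map could then be thresholded from the detail magnitude, e.g.:
mask = np.abs(detail) > 3 * detail.std()
```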
NASA Astrophysics Data System (ADS)
Hu, Yun-peng; Chen, Lei; Huang, Jian-yu
2017-08-01
The US Lincoln Laboratory proved that space-based visible (SBV) observation is efficient for observing space objects, especially geosynchronous orbit (GEO) objects, and SBV observation has since played an important role in space surveillance. In this paper, a novel space-based observation mode is designed to observe all GEO objects in a relatively short time. A low Earth orbit (LEO) satellite, especially a dawn-dusk sun-synchronous orbit satellite, is useful for space-based observation; the observation mode for GEO objects is therefore based on a dawn-dusk sun-synchronous orbit satellite. Analysis of the evolution of GEO objects shows that the Pinch Point (PP) regions proposed by the US Lincoln Laboratory are spreading. As the PP regions widen in the future, many strategies based on them may no longer be efficient. Hence, a space-based observation strategy for GEO objects should cover the whole GEO belt as far as possible. The pseudo-fixed latitude observation mode proposed in this paper is based on the characteristics of the GEO belt. Unlike classical space-based observation modes, the pseudo-fixed latitude observation mode uses only one-dimensional attitude adjustment of the observation satellite, making it more reliable and simpler in engineering than the gazing observation mode, which requires two-dimensional attitude adjustment. It includes two types of attitude adjustment, daily and continuous, and therefore has two characteristics: within a day, the latitude of the observation region is fixed and the scanning region is approximately a rectangle, while over the long term the latitude of the observation region centre changes each day according to a daily strategy. The capabilities of a pseudo-fixed latitude observation instrument in a 98° dawn-dusk sun-synchronous orbit are discussed. Most GEO objects can be visited every day, and almost all GEO objects can be visited within two days throughout the year using a sensor with a 20°×2° field of view (FOV). The seasonal drops in coverage, caused by the characteristics of the GEO belt and the influence of Earth's shadow at the two equinoxes, are overcome under the pseudo-fixed latitude observation mode.
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, showing that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret; it connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that the relative efficiency defined here is greater than the relative efficiency in the literature under some conditions, and might be smaller under other conditions, in which case it underestimates the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
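For intuition, here is a back-of-envelope sketch of the sizing logic. It uses the common design-effect approximation DEFF = 1 + ((1 + CV²)·m − 1)·ρ for variable cluster sizes, an approximation from the wider cluster-trial literature that is related to, but not identical to, the noncentrality-based measure defined in this article.

```python
# Hedged sketch: clusters per arm under a CV-adjusted design effect.
import math
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm n for a two-sample z/t comparison, equal variances."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((za + zb) * sd / delta) ** 2

def clusters_per_arm(delta, sd, m_bar, cv, rho, **kw):
    deff = 1 + ((1 + cv ** 2) * m_bar - 1) * rho   # variable-size design effect
    n = n_individual(delta, sd, **kw) * deff
    return math.ceil(n / m_bar)

# Effect 0.5 SD, mean cluster size 20, size CV 0.4, ICC 0.05 -> 7 clusters/arm
print(clusters_per_arm(delta=0.5, sd=1.0, m_bar=20, cv=0.4, rho=0.05))
```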
Doczekalska, Beata; Kuśmierek, Krzysztof; Świątkowski, Andrzej; Bartkowiak, Monika
2018-05-04
Adsorption of 2,4-dichlorophenoxyacetic acid (2,4-D) and 4-chloro-2-methylphenoxyacetic acid (MCPA) from aqueous solution onto activated carbons derived from various lignocellulosic materials, including willow, miscanthus, flax, and hemp shives, was investigated. The adsorption kinetic data were analyzed using two kinetic models, the pseudo-first-order and pseudo-second-order equations; the adsorption kinetics of both herbicides were better represented by the pseudo-second-order model. The adsorption isotherms of 2,4-D and MCPA on the activated carbons were analyzed using the Freundlich and Langmuir isotherm models, and the equilibrium data followed the Langmuir isotherm. The effect of pH on the adsorption was also studied. The results showed that activated carbons prepared from lignocellulosic materials are efficient adsorbents for the removal of 2,4-D and MCPA from aqueous solutions.
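A hedged sketch of the kind of kinetic fitting referred to above, using the integrated pseudo-second-order form q(t) = k₂qₑ²t/(1 + k₂qₑt); the uptake data points are invented for illustration.

```python
# Illustration: nonlinear fit of the pseudo-second-order kinetic model.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    # integrated pseudo-second-order model
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 120, 240], float)   # contact time, min
q = np.array([21, 34, 48, 61, 68, 74, 77], float)    # uptake, mg/g (hypothetical)

(qe, k2), _ = curve_fit(pso, t, q, p0=(q.max(), 1e-3))
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.2e} g/(mg*min)")
```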
Kinetics and Equilibrium of Fe3+ Ions Adsorption on Carbon Nanofibers
NASA Astrophysics Data System (ADS)
Alimin; Agusu, La; Ahmad, L. O.; Kadidae, L. O.; Ramadhan, L.; Nurdin, M.; Isdayanti, N.; Asria; Aprilia M, P.; Hasrudin
2018-05-01
Generally, the interaction between metal ions and an adsorbent is governed by many factors, including the concentration of metal ions, interaction time and solution pH. In this work, we applied liquid phase adsorption to study the interaction between Fe3+ ions and carbon nanofibers (CNFs) irradiated by ultrasonic waves. Kinetic and isotherm models of Fe3+ ion adsorption were investigated by varying contact time and pH. We found that Fe3+ ions were efficiently adsorbed on CNFs within 0.5 h at an acidic pH of around 5. To identify the best-fitting isotherm model, the Langmuir and Freundlich isotherms were used; the adsorption equilibrium of Fe3+ ions on CNFs tends to follow the Langmuir model. The adsorption kinetics of Fe3+ ions on CNFs were investigated using both pseudo-first-order and pseudo-second-order models, and the kinetics agreed well with the pseudo-second-order model.
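A companion sketch for the equilibrium analysis: fitting the Langmuir isotherm qe = qm·KL·Ce/(1 + KL·Ce) and the Freundlich isotherm qe = KF·Ce^(1/n) to invented equilibrium data and comparing R².

```python
# Illustration: Langmuir vs Freundlich isotherm fits on hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2, 5, 10, 20, 40, 80], float)     # equilibrium conc., mg/L
qe = np.array([8, 15, 22, 28, 32, 34], float)    # adsorbed amount, mg/g

langmuir = lambda c, qm, KL: qm * KL * c / (1 + KL * c)
freundlich = lambda c, KF, n: KF * c ** (1 / n)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(qe.max(), 0.1))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(1.0, 2.0))

for name, f in [("Langmuir", lambda c: langmuir(c, qm, KL)),
                ("Freundlich", lambda c: freundlich(c, KF, n))]:
    ss_res = np.sum((qe - f(Ce)) ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    print(name, "R^2 =", 1 - ss_res / ss_tot)    # higher R^2 = better fit
```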
Sensor-Only System Identification for Structural Health Monitoring of Advanced Aircraft
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Bernstein, Dennis S.
2012-01-01
Environmental conditions, cyclic loading, and aging contribute to structural wear and degradation, and thus potentially catastrophic events. The challenge for health monitoring technology is to determine incipient changes accurately and efficiently. This project addresses this challenge by developing health monitoring techniques that depend only on sensor measurements. Since actively controlled excitation is not needed, sensor-to-sensor identification (S2SID) provides an in-flight diagnostic tool that exploits ambient excitation to provide advance warning of significant changes. S2SID can subsequently be followed up by ground testing to localize and quantify structural changes. The conceptual foundation of S2SID is the notion of a pseudo-transfer function, where one sensor is viewed as the pseudo-input and another is viewed as the pseudo-output. This approach is less restrictive than transmissibility identification and operational modal analysis since no assumption is made about the locations of the sensors relative to the excitation.
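A minimal sketch of one way to form such a pseudo-transfer function under ambient excitation (our illustration of the general idea, not the project's algorithm): treat sensor x as pseudo-input and sensor y as pseudo-output and form the H1 estimator H(f) = Pxy(f)/Pxx(f) from Welch spectra.

```python
# Illustration: H1 pseudo-transfer function between two sensor channels.
import numpy as np
from scipy import signal

fs = 1000.0
rng = np.random.default_rng(1)
x = rng.normal(size=60_000)                 # ambient response at sensor 1
b, a = signal.butter(2, 0.1)                # stand-in structural path
y = signal.lfilter(b, a, x) + 0.05 * rng.normal(size=x.size)  # sensor 2

f, Pxy = signal.csd(x, y, fs=fs, nperseg=2048)   # cross spectrum
_, Pxx = signal.welch(x, fs=fs, nperseg=2048)    # input auto spectrum
H = Pxy / Pxx                                    # pseudo-transfer function

# Monitoring could then track changes in |H| (or its poles) between flights.
```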
Adsorption of 2,4-Dichlorophenoxyacetic Acid from an Aqueous Solution on Fly Ash.
Kuśmierek, Krzysztof; Świątkowski, Andrzej
2016-03-01
The adsorption of 2,4-dichlorophenoxyacetic acid (2,4-D) on fly ash was studied. The effects of adsorbent dose, contact time, pH, ionic strength, and temperature on the adsorption were investigated. Adsorption kinetic data were analyzed using pseudo-first and pseudo-second order models, and the results showed that the adsorption kinetics were better represented by the pseudo-second order model. Adsorption isotherms of 2,4-D on fly ash were analyzed using the Freundlich and Langmuir models. Thermodynamic parameters (ΔG°, ΔH°, and ΔS°) indicated that the adsorption process was spontaneous (negative ΔG°) and endothermic (positive ΔH°). Results showed that fly ash is an efficient, low-cost adsorbent for removal of 2,4-D from water.
Challenges at Petascale for Pseudo-Spectral Methods on Spheres (A Last Hurrah?)
NASA Technical Reports Server (NTRS)
Clune, Thomas
2011-01-01
Conclusions: a) Proper software abstractions should enable rapid exploration of platform-specific optimizations and tradeoffs. b) Pseudo-spectral methods are marginally viable for at least some classes of petascale problems; a GPU-based machine with good bisection bandwidth would be best. c) Scalability at exascale is possible, but the necessary resolution will make the algorithm prohibitively expensive. Efficient implementations of realistic global transposes are intricate and tedious in MPI; pseudo-spectral methods at petascale require exploration of a variety of strategies for spreading local and remote communications. PGAS allows far simpler implementation and thus rapid exploration of variants.
Pseudo-updated constrained solution algorithm for nonlinear heat conduction
NASA Technical Reports Server (NTRS)
Tovichakchaikul, S.; Padovan, J.
1983-01-01
This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.
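The pseudo-update idea can be prototyped with an off-the-shelf Broyden-family solver. The sketch below (an illustration under simplified assumptions, not the paper's algorithm) solves steady 1-D nonlinear heat conduction with temperature-dependent conductivity k(T) = k0·(1 + beta·T) and fixed end temperatures.

```python
# Illustration: quasi-Newton (Broyden) solve of a nonlinear heat-balance residual.
import numpy as np
from scipy.optimize import broyden1

n, L = 51, 1.0
h = L / (n - 1)
k0, beta = 1.0, 0.5          # k(T) = k0 * (1 + beta * T), assumed for the demo
T_left, T_right = 0.0, 1.0   # fixed-temperature boundary conditions

def residual(T_in):
    T = np.concatenate(([T_left], T_in, [T_right]))
    k = k0 * (1 + beta * T)
    k_face = 0.5 * (k[:-1] + k[1:])       # conductivity at cell faces
    flux = k_face * np.diff(T) / h
    return np.diff(flux)                   # interior heat balance = 0 at solution

T0 = np.linspace(T_left, T_right, n)[1:-1]   # initial guess (interior nodes)
T_sol = broyden1(residual, T0, f_tol=1e-8)   # Broyden's "good" update
```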
Hotta, Kinya; Ranganathan, Soumya; Liu, Ruchuan; Wu, Fei; Machiyama, Hiroaki; Gao, Rong; Hirata, Hiroaki; Soni, Neelesh; Ohe, Takashi; Hogue, Christopher W V; Madhusudhan, M S; Sawada, Yasuhiro
2014-04-01
Mechanical stretch-induced tyrosine phosphorylation in the proline-rich 306-residue substrate domain (CasSD) of p130Cas (or BCAR1) has eluded an experimentally validated structural understanding. Cellular p130Cas tyrosine phosphorylation is shown to function in areas without internal actomyosin contractility, sensing force at the leading edge of cell migration. Circular dichroism shows CasSD is intrinsically disordered with dominant polyproline type II conformations. Strongly conserved in placental mammals, the proline-rich sequence exhibits a pseudo-repeat unit with variation hotspots 2-9 residues before substrate tyrosine residues. Atomic-force microscopy pulling experiments show CasSD requires minimal extension force and exhibits infrequent, random regions of weak stability. Proteolysis, light scattering and ultracentrifugation results show that a monomeric intrinsically disordered form persists for CasSD in solution with an expanded hydrodynamic radius. All-atom 3D conformer sampling with the TraDES package yields ensembles in agreement with experiment when coil-biased sampling is used, matching the experimental radius of gyration. Increasing β-sampling propensities increases the number of prolate conformers. Combining the results, we conclude that CasSD has no stable compact structure and is unlikely to efficiently autoinhibit phosphorylation. Taking into consideration the structural propensity of CasSD and the fact that it is known to bind to LIM domains, we propose a model of how CasSD and the LIM domain family of transcription factor proteins may function together to regulate phosphorylation of CasSD and effect mechanosensing.
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
Vorontsova, Natalia V; Rozenberg, Valeria I; Sergeeva, Elena V; Vorontsov, Evgenii V; Starikova, Zoya A; Lyssenko, Konstantin A; Hopf, Henning
2008-01-01
The possible number of chiral and achiral tetrasubstituted [2.2]paracyclophanes possessing different types of symmetry (C(2), C(i), C(s), C(2v), C(2h)) is evaluated and a unified independent trivial naming descriptor system is introduced. The reactivity and regioselectivity of the electrophilic substitution of the chiral pseudo-meta- and achiral pseudo-para-disubstituted [2.2]paracyclophanes are investigated in an approach suggested to be general for the synthesis of bis-bifunctional [2.2]paracyclophanes. The mono- and diacylation of chiral pseudo-meta-dihydroxy[2.2]paracyclophane 14 with acetyl chloride occur ortho-regioselectively to produce tri- 22, 23 and symmetrically 21 tetrasubstituted acyl derivatives. The same reaction with benzoyl chloride is neither regio- nor chemoselective, and gives rise to a mixture of ortho-/para-, mono-/diacylated compounds 27-31. The double acylation of pseudo-meta-dimethoxy[2.2]paracyclophane 18 is completely para-regioselective. Electrophilic substitution of pseudo-meta-bis(methoxycarbonyl)[2.2]paracyclophane 20 regioselectively generates the pseudo-gem-substitution pattern. Formylation of this substrate produces the monocarbonyl derivatives 35 only, whereas the Fe-catalyzed bromination may be directed towards mono- 36 or disubstitution 37 products chemoselectively by varying the reaction conditions. The diacylation and dibromination reactions of the respective achiral diphenol 12 and bis(methoxycarbonyl) 40 derivatives of the pseudo-para-structure retain the regioselectivities which are characteristic for their pseudo-meta-regioisomers. Imino ligands 26, 25, and 39, which were obtained from monoacyl- 22 and diacyldihydroxy[2.2]paracyclophanes 21, 38, are tested as chiral ligands in stereoselective Et(2)Zn addition to benzaldehyde, producing 1-phenylpropanol with ee values up to 76%.
Gradient-free MCMC methods for dynamic causal modelling
Sengupta, Biswa; Friston, Karl J.; Penny, Will D.
2015-03-14
Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with the random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density, albeit at an almost 1000% increase in computational time in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
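A toy illustration of the comparison metric, independent samples per unit time, using a random-walk Metropolis sampler on a one-dimensional target; the effective sample size (ESS) is approximated from the lag-1 autocorrelation, a crude stand-in for the full calculation used in such comparisons.

```python
# Illustration: ESS per second for a random-walk Metropolis chain.
import time
import numpy as np

rng = np.random.default_rng(2)
log_target = lambda x: -0.5 * x * x              # standard normal target

def rw_metropolis(n, step):
    x, out = 0.0, np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()           # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                             # accept
        out[i] = x
    return out

t0 = time.perf_counter()
chain = rw_metropolis(50_000, step=2.4)
elapsed = time.perf_counter() - t0

c = chain - chain.mean()
rho1 = (c[:-1] @ c[1:]) / (c @ c)                # lag-1 autocorrelation
ess = len(chain) * (1 - rho1) / (1 + rho1)       # AR(1)-style ESS approximation
print(f"ESS/s ~ {ess / elapsed:.0f}")
```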
Linear game non-contextuality and Bell inequalities—a graph-theoretic approach
NASA Astrophysics Data System (ADS)
Rosicka, M.; Ramanathan, R.; Gnaciński, P.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Severini, S.
2016-04-01
We study the classical and quantum values of a class of one- and two-party unique games, that generalizes the well-known XOR games to the case of non-binary outcomes. In the bipartite case the generalized XOR (XOR-d) games we study are a subclass of the well-known linear games. We introduce a ‘constraint graph’ associated to such a game, with the constraints defining the game represented by an edge-coloring of the graph. We use the graph-theoretic characterization to relate the task of finding equivalent games to the notion of signed graphs and switching equivalence from graph theory. We relate the problem of computing the classical value of single-party anti-correlation XOR games to finding the edge bipartization number of a graph, which is known to be MaxSNP hard, and connect the computation of the classical value of XOR-d games to the identification of specific cycles in the graph. We construct an orthogonality graph of the game from the constraint graph and study its Lovász theta number as a general upper bound on the quantum value even in the case of single-party contextual XOR-d games. XOR-d games possess appealing properties for use in device-independent applications such as randomness of the local correlated outcomes in the optimal quantum strategy. We study the possibility of obtaining quantum algebraic violation of these games, and show that no finite XOR-d game possesses the property of pseudo-telepathy leaving the frequently used chained Bell inequalities as the natural candidates for such applications. We also show this lack of pseudo-telepathy for multi-party XOR-type inequalities involving two-body correlation functions.
NASA Astrophysics Data System (ADS)
Wang, Yi; Trouvé, Arnaud
2004-09-01
A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma2, where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985 Pressure gradient scaling method for fluid flow with nearly uniform pressure J. Comput. Phys. 58 361-76). It achieves gains in computational efficiencies similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented into a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.
NASA Astrophysics Data System (ADS)
Cortesi, A. B.; Smith, B. L.; Yadigaroglu, G.; Banerjee, S.
1999-01-01
The direct numerical simulation (DNS) of a temporally-growing mixing layer has been carried out, for a variety of initial conditions at various Richardson and Prandtl numbers, by means of a pseudo-spectral technique; the main objective being to elucidate how the entrainment and mixing processes in mixing-layer turbulence are altered under the combined influence of stable stratification and thermal conductivity. Stratification is seen to significantly modify the way by which entrainment and mixing occur by introducing highly-localized, convective instabilities, which in turn cause a substantially different three-dimensionalization of the flow compared to the unstratified situation. Fluid which was able to cross the braid region mainly undisturbed (unmixed) in the unstratified case, pumped by the action of rib pairs and giving rise to well-formed mushroom structures, is not available with stratified flow. This is because of the large number of ribs which efficiently mix the fluid crossing the braid region. More efficient entrainment and mixing has been noticed for high Prandtl number computations, where vorticity is significantly reinforced by the baroclinic torque. In liquid sodium, however, for which the Prandtl number is very low, the generation of vorticity is very effectively suppressed by the large thermal conduction, since only small temperature gradients, and thus negligible baroclinic vorticity reinforcement, are then available to counterbalance the effects of buoyancy. This is then reflected in less efficient entrainment and mixing. The influence of the stratification and the thermal conductivity can also be clearly identified from the calculated entrainment coefficients and turbulent Prandtl numbers, which were seen to accurately match experimental data. The turbulent Prandtl number increases rapidly with increasing stratification in liquid sodium, whereas for air and water the stratification effect is less significant. A general law for the entrainment coefficient as a function of the Richardson and Prandtl numbers is proposed, and critically assessed against experimental data.
Removal of Cu(II) from leachate using natural zeolite as a landfill liner material.
Turan, N Gamze; Ergun, Osman Nuri
2009-08-15
All hazardous waste disposal facilities require composite liner systems to act as a barrier against the migration of contaminated leachate into the subsurface environment. The removal of copper(II) from leachate was studied using natural zeolite. A series of laboratory tests on bentonite-amended natural zeolite was conducted, with copper flotation waste used as the hazardous waste. The adsorption capacities and sorption efficiencies were determined; sorption efficiency increased with increasing natural zeolite ratio. The pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion kinetic models were used to describe the kinetic data and estimate the rate constants, and the pseudo-second-order model best described the adsorption kinetic data. The results indicated that natural zeolite shows excellent adsorptive characteristics for the removal of copper(II) from leachate and could serve as a very good liner material due to its high uptake capacity and abundant availability.
High hardness and superlative oxidation resistance in a pseudo-icosahedral Cr-Al binary
NASA Astrophysics Data System (ADS)
Simonson, J. W.; Rosa, R.; Antonacci, A. K.; He, H.; Bender, A. D.; Pabla, J.; Adrip, W.; McNally, D. E.; Zebro, A.; Kamenov, P.; Geschwind, G.; Ghose, S.; Dooryhee, E.; Ibrahim, A.; Aronson, M. C.
Improving the efficiency of fossil fuel plants is a practical option for decreasing carbon dioxide emissions from electrical power generation. Present limits on the operating temperatures of exposed steel components, however, restrict steam temperatures and therefore energy efficiency. Even as a new generation of creep-resistant, high strength steels retain long term structural stability to temperatures as high as ~ 973 K, the low Cr-content of these alloys hinders their oxidation resistance, necessitating the development of new corrosion resistant coatings. We report here the nearly ideal properties of potential coating material Cr55Al229, which exhibits high hardness at room temperature as well as low thermal conductivity and superlative oxidation resistance at 973 K, with an oxidation rate at least three times smaller than those of benchmark materials. These properties originate from a pseudo-icosahedral crystal structure, suggesting new criteria for future research.
Chang, Sun-Il; Yoon, Euisik
2009-01-01
We report an energy efficient pseudo open-loop amplifier with programmable band-pass filter developed for neural interface systems. The proposed amplifier consumes 400 nA from a 2.5 V power supply. The measured thermal noise level is 85 nV/√Hz and the input-referred noise is 1.69 μVrms from 0.3 Hz to 1 kHz. The amplifier has a noise efficiency factor of 2.43, the lowest among the differential topologies reported to date to our knowledge. By programming the switched-capacitor frequency and bias current, we could control the bandwidth of the preamplifier from 138 mHz to 2.2 kHz to meet various application requirements. The entire preamplifier including band-pass filters has been realized in a small area of 0.043 mm² using a 0.25 μm CMOS technology.
NASA Astrophysics Data System (ADS)
Zamri, Mohd Faiz Muaz Ahmad; Kamaruddin, Mohamad Anuar; Yusoff, Mohd Suffian; Aziz, Hamidi Abdul; Foo, Keng Yuen
2017-05-01
This study was carried out to investigate the treatability of an ion exchange resin (Indion MB 6 SR) for the removal of chromium(VI), aluminium(III), zinc(II), copper(II), iron(II), phosphate (PO4)3-, chemical oxygen demand (COD), ammonia nitrogen (NH3-N) and colour from semi-aerobic stabilized leachate by batch tests. A range of ion exchange resin dosages was tested for the removal efficiency of the leachate parameters. It was observed that the equilibrium data were best represented by the Langmuir model for metal ions, while the Freundlich model was the best fit for COD, NH3-N and colour. The intra-particle diffusion, pseudo-first-order and pseudo-second-order kinetic models correlated well with the experimental data. The findings revealed that these models can describe the ion exchange kinetic behaviour efficiently, suggesting a promising outlook for future research in this field.
NASA Astrophysics Data System (ADS)
Sen, O.; Gaul, N. J.; Davis, S.; Choi, K. K.; Jacobs, G.; Udaykumar, H. S.
2018-05-01
Macroscale models of shock-particle interactions require closure terms for unresolved solid-fluid momentum and energy transfer. These comprise the effects of mean as well as fluctuating fluid-phase velocity fields in the particle cloud. Mean drag and Reynolds stress equivalent terms (also known as pseudo-turbulent terms) appear in the macroscale equations. Closure laws for the pseudo-turbulent terms are constructed in this work from ensembles of high-fidelity mesoscale simulations. The computations are performed over a wide range of Mach numbers (M) and particle volume fractions (φ) and are used to explicitly compute the pseudo-turbulent stresses from the Favre average of the velocity fluctuations in the flow field. The computed stresses are then used as inputs to a Modified Bayesian Kriging method to generate surrogate models. The surrogates can be used as closure models for the pseudo-turbulent terms in macroscale computations of shock-particle interactions. It is found that the kinetic energy associated with the velocity fluctuations is comparable to that of the mean flow, especially for increasing M and φ. This work is a first attempt to quantify and evaluate the effect of velocity fluctuations for problems of shock-particle interactions.
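As a sketch of the surrogate-construction step, the example below substitutes an ordinary Gaussian-process regressor for the Modified Bayesian Kriging method (an assumption of the illustration), mapping (M, φ) design points to a pseudo-turbulent stress component; the training values are invented placeholders for mesoscale-simulation outputs.

```python
# Illustration: GP surrogate over (Mach number, volume fraction) inputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

X = np.array([[1.2, 0.05], [1.2, 0.20], [2.0, 0.05],
              [2.0, 0.20], [3.0, 0.10], [3.0, 0.30]])   # (M, phi) design points
y = np.array([0.02, 0.08, 0.05, 0.15, 0.12, 0.30])      # stress (placeholder)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.1]),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(np.array([[2.5, 0.15]]), return_std=True)
print(mean, std)      # closure value plus an uncertainty estimate
```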
NASA Astrophysics Data System (ADS)
Sen, O.; Gaul, N. J.; Davis, S.; Choi, K. K.; Jacobs, G.; Udaykumar, H. S.
2018-02-01
Macroscale models of shock-particle interactions require closure terms for unresolved solid-fluid momentum and energy transfer. These comprise the effects of mean as well as fluctuating fluid-phase velocity fields in the particle cloud. Mean drag and Reynolds stress equivalent terms (also known as pseudo-turbulent terms) appear in the macroscale equations. Closure laws for the pseudo-turbulent terms are constructed in this work from ensembles of high-fidelity mesoscale simulations. The computations are performed over a wide range of Mach numbers (M) and particle volume fractions (φ) and are used to explicitly compute the pseudo-turbulent stresses from the Favre average of the velocity fluctuations in the flow field. The computed stresses are then used as inputs to a Modified Bayesian Kriging method to generate surrogate models. The surrogates can be used as closure models for the pseudo-turbulent terms in macroscale computations of shock-particle interactions. It is found that the kinetic energy associated with the velocity fluctuations is comparable to that of the mean flow, especially for increasing M and φ. This work is a first attempt to quantify and evaluate the effect of velocity fluctuations for problems of shock-particle interactions.
Rabaey, David; Lens, Frederic; Huysmans, Suzy; Smets, Erik; Jansen, Steven
2008-11-01
Recent micromorphological observations of angiosperm pit membranes have extended the number and range of taxa with pseudo-tori in tracheary elements. This study investigates at ultrastructural level (TEM) the development of pseudo-tori in the unrelated Malus yunnanensis, Ligustrum vulgare, Pittosporum tenuifolium, and Vaccinium myrtillus in order to determine whether these plasmodesmata associated thickenings have a similar developmental pattern across flowering plants. At early ontogenetic stages, the formation of a primary thickening was observed, resulting from swelling of the pit membrane in fibre-tracheids and vessel elements. Since plasmodesmata appear to be frequently, but not always, associated with these primary pit membrane thickenings, it remains unclear which ultrastructural characteristics control the formation of pseudo-tori. At a very late stage during xylem differentiation, a secondary thickening is deposited on the primary pit membrane thickening. Plasmodesmata are always associated with pseudo-tori at these final developmental stages. After autolysis, the secondary thickening becomes electron-dense and persistent, while the primary thickening turns transparent and partially or entirely dissolves. The developmental patterns observed in the species studied are similar and agree with former ontogenetic studies in Rosaceae, suggesting that pseudo-tori might be homologous features across angiosperms.
TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis
Ji, Zhicheng; Ji, Hongkai
2016-01-01
When analyzing single-cell RNA-seq data, constructing a pseudo-temporal path to order cells based on the gradual transition of their transcriptomes is a useful way to study gene expression dynamics in a heterogeneous cell population. Currently, a limited number of computational tools are available for this task, and quantitative methods for comparing different tools are lacking. Tools for Single Cell Analysis (TSCAN) is a software tool developed to better support in silico pseudo-Time reconstruction in Single-Cell RNA-seq ANalysis. TSCAN uses a cluster-based minimum spanning tree (MST) approach to order cells. Cells are first grouped into clusters and an MST is then constructed to connect cluster centers. Pseudo-time is obtained by projecting each cell onto the tree, and the ordered sequence of cells can be used to study dynamic changes of gene expression along the pseudo-time. Clustering cells before MST construction reduces the complexity of the tree space. This often leads to improved cell ordering. It also allows users to conveniently adjust the ordering based on prior knowledge. TSCAN has a graphical user interface (GUI) to support data visualization and user interaction. Furthermore, quantitative measures are developed to objectively evaluate and compare different pseudo-time reconstruction methods. TSCAN is available at https://github.com/zji90/TSCAN and as a Bioconductor package. PMID:27179027
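A compact sketch of the cluster-then-MST ordering idea follows (an illustration of the approach, not the TSCAN package itself, which is an R/Bioconductor tool):

```python
# Illustration: cluster cells, then build an MST over cluster centers.
import numpy as np
from sklearn.cluster import KMeans
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

cells = np.random.rand(500, 2)            # e.g. first two PCs of expression data

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(cells)
centers = km.cluster_centers_

# An MST over cluster centers reduces the tree space versus a cell-level MST.
mst = minimum_spanning_tree(cdist(centers, centers)).toarray()

# Pseudo-time then follows from walking the tree from a chosen root cluster
# and projecting each cell onto the edge joining its cluster to the next.
edges = np.argwhere(mst > 0)
print(edges)                              # tree backbone used for cell ordering
```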
TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis.
Ji, Zhicheng; Ji, Hongkai
2016-07-27
When analyzing single-cell RNA-seq data, constructing a pseudo-temporal path to order cells based on the gradual transition of their transcriptomes is a useful way to study gene expression dynamics in a heterogeneous cell population. Currently, a limited number of computational tools are available for this task, and quantitative methods for comparing different tools are lacking. Tools for Single Cell Analysis (TSCAN) is a software tool developed to better support in silico pseudo-Time reconstruction in Single-Cell RNA-seq ANalysis. TSCAN uses a cluster-based minimum spanning tree (MST) approach to order cells. Cells are first grouped into clusters and an MST is then constructed to connect cluster centers. Pseudo-time is obtained by projecting each cell onto the tree, and the ordered sequence of cells can be used to study dynamic changes of gene expression along the pseudo-time. Clustering cells before MST construction reduces the complexity of the tree space. This often leads to improved cell ordering. It also allows users to conveniently adjust the ordering based on prior knowledge. TSCAN has a graphical user interface (GUI) to support data visualization and user interaction. Furthermore, quantitative measures are developed to objectively evaluate and compare different pseudo-time reconstruction methods. TSCAN is available at https://github.com/zji90/TSCAN and as a Bioconductor package.
Least squares deconvolution for leak detection with a pseudo random binary sequence excitation
NASA Astrophysics Data System (ADS)
Nguyen, Si Tran Nguyen; Gong, Jinzhe; Lambert, Martin F.; Zecchin, Aaron C.; Simpson, Angus R.
2018-01-01
Leak detection and localisation is critical for water distribution system pipelines. This paper examines the use of the time-domain impulse response function (IRF) for leak detection and localisation in a pressurised water pipeline with a pseudo random binary sequence (PRBS) signal excitation. Compared to the conventional step wave generated using a single fast operation of a valve closure, a PRBS signal offers advantageous correlation properties, in that the signal has very low autocorrelation for lags different from zero and low cross correlation with other signals including noise and other interference. These properties result in a significant improvement in the IRF signal to noise ratio (SNR), leading to more accurate leak localisation. In this paper, the estimation of the system IRF is formulated as an optimisation problem in which the l2 norm of the IRF is minimised to suppress the impact of noise and interference sources. Both numerical and experimental data are used to verify the proposed technique. The resultant estimated IRF provides not only accurate leak location estimation, but also good sensitivity to small leak sizes due to the improved SNR.
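A minimal sketch of the estimation step described above: excite a synthetic system with a maximum-length PRBS and recover the impulse response by l2-regularized (ridge) least squares, which suppresses noise amplification; the "pipeline" here is a stand-in exponential IRF, not a hydraulic model.

```python
# Illustration: ridge-regularized IRF estimation from a PRBS experiment.
import numpy as np
from scipy.signal import max_len_seq, lfilter
from scipy.linalg import toeplitz

u = 2.0 * max_len_seq(10)[0] - 1.0               # +/-1 PRBS, length 1023
h_true = np.exp(-np.arange(50) / 8.0)            # stand-in impulse response
y = lfilter(h_true, 1.0, u) + 0.05 * np.random.default_rng(3).normal(size=u.size)

m = 50                                           # IRF length to estimate
A = toeplitz(u, np.r_[u[0], np.zeros(m - 1)])    # convolution matrix (u * h)
lam = 1.0                                        # l2 penalty weight
h_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
print(np.abs(h_hat - h_true).max())              # small estimation error
```

The low autocorrelation of the PRBS makes AᵀA nearly proportional to the identity, which is what gives the estimate its favorable SNR.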
Study of pseudo noise CW diode laser for ranging applications
NASA Technical Reports Server (NTRS)
Lee, Hyo S.; Ramaswami, Ravi
1992-01-01
A new Pseudo Random Noise (PN) modulated CW diode laser radar system is being developed for real-time ranging of targets at both close and large distances (greater than 10 km) to satisfy a wide range of applications, from robotics to future space applications. Results from computer modeling and statistical analysis, along with some preliminary data obtained from a prototype system, are presented. The received signal is averaged for a short time to recover the target response function. It is found that even with uncooperative targets, based on the design parameters used (200-mW laser and 20-cm receiver), accurate ranging is possible up to about 15 km, beyond which the signal-to-noise ratio (SNR) becomes too small for real-time analog detection.
Hypothesis testing in clinical trials.
Green, S B
2000-08-01
In designing and analyzing any clinical trial, two issues related to patient heterogeneity must be considered: (1) the effect of chance and (2) the effect of bias. These issues are addressed by enrolling adequate numbers of patients in the study and using randomization for treatment assignment. An "intention-to-treat" analysis of outcome data includes all individuals randomized and counted in the group to which they are randomized. There is an increased risk of spurious results with a greater number of subgroup analyses, particularly when these analyses are data derived. Factorial designs are sometimes appropriate and can lead to efficiencies by addressing more than one comparison of interventions in a single trial.
Efficiency and robustness of different bus network designs
NASA Astrophysics Data System (ADS)
Pang, John Zhen Fu; Bin Othman, Nasri; Ng, Keng Meng; Monterola, Christopher
2015-07-01
We compare the efficiency and robustness of four transport networks that could be formed as a result of deliberate city planning. The networks are constructed based on their spatial resemblance to the cities of Manhattan (lattice), Sudan (random), Beijing (single-blob) and Greater Cairo (dual-blob). For a given type, a genetic algorithm is employed to obtain an optimized set of bus routes. We then simulate how commuters travel using Yen's algorithm for the k shortest paths on an adjacency matrix. The cost of traveling, such as walking between stations, is captured by varying the weighted sums of matrices. We also consider the number of transfers a posteriori by looking at the computed shortest paths. With consideration of distances via the radius of gyration, travel redundancies and the number of bus transfers, our simulations indicate that the random and dual-blob networks are more efficient than the single-blob and lattice networks. Moreover, the dual-blob type is least robust when node removals are targeted but is most resilient when node failures are random. This work hopes to guide and provide technical perspectives on how the geospatial distribution of a city limits the optimality of transport designs.
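For reference, networkx's shortest_simple_paths implements Yen's algorithm; the sketch below lists the k shortest simple paths on a toy weighted graph standing in for a bus network.

```python
# Illustration: k shortest simple paths (Yen's algorithm) on a toy network.
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 2), ("B", "C", 2), ("A", "C", 5),
                           ("B", "D", 3), ("C", "D", 1)])

k = 3
for path in itertools.islice(nx.shortest_simple_paths(G, "A", "D",
                                                      weight="weight"), k):
    cost = nx.path_weight(G, path, weight="weight")
    print(path, cost)        # transfers could be counted per path here
```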
The evaluation of interstitial Cajal cells distribution in non-tumoral colon disorders.
Becheanu, G; Manuc, M; Dumbravă, Mona; Herlea, V; Hortopan, Monica; Costache, Mariana
2008-01-01
Interstitial cells of Cajal (ICC) are pacemakers that generate the electric waves recorded from the gut and are important for intestinal motility. The aim of the study was to evaluate the distribution of interstitial cells of Cajal in colon specimens from patients with idiopathic chronic pseudo-obstruction and other non-tumoral colon disorders, compared with samples from normal colon. The distribution pattern of ICC in the normal and pathological human colon was evaluated by immunohistochemistry using antibodies against CD117, CD34, and S-100. In two cases of chronic idiopathic intestinal pseudo-obstruction we found a diffuse or focal reduction in the number of Cajal cells, with the loss of immunoreactivity for CD117 correlated with loss of immunoreactivity for the CD34 marker. Our study revealed that the number of interstitial cells of Cajal also decreases in colonic diverticular disease and Crohn's disease (p<0.05), whereas the number of enteric neurones appears to be normal. These findings might explain some of the large bowel motor abnormalities known to occur in these disorders. Interstitial Cajal cells may play an important role in pathogenesis, and staining for CD117 on transmural intestinal surgical biopsies could allow a more extensive diagnosis in the evaluation of chronic intestinal pseudo-obstruction.
Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.
Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2016-01-01
This paper proposes a pseudo-haptic feedback method conveying simulated soft-surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft-surface deformation and control of the indenter avatar speed, to convey stiffness information about the simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison was conducted against a tablet computer incorporating vibration feedback, with regard to (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard-nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction, and multi-point pseudo-haptic feedback performs similarly well to the vibration-based feedback method on both performance measures. This proves that the proposed method can be used to convey detailed haptic information for virtual environmental tasks, even subtle ones, using either a computer mouse or a pressure-sensitive device as an input device. This pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occurring, for example, in ever more realistic video games with increasing emphasis on interaction with the physical environment, and in minimally invasive surgery in the form of soft-tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, on either desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
Gradient-free MCMC methods for dynamic causal modelling.
Sengupta, Biswa; Friston, Karl J; Penny, Will D
2015-05-15
In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
Convergence Acceleration of a Navier-Stokes Solver for Efficient Static Aeroelastic Computations
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Guruswamy, Guru P.
1995-01-01
New capabilities have been developed for a Navier-Stokes solver to perform steady-state simulations more efficiently. The flow solver for solving the Navier-Stokes equations is based on a combination of the lower-upper factored symmetric Gauss-Seidel implicit method and the modified Harten-Lax-van Leer-Einfeldt upwind scheme. A numerically stable and efficient pseudo-time-marching method is also developed for computing steady flows over flexible wings. Results are demonstrated for transonic flows over rigid and flexible wings.
Effect of Na+ impregnated activated carbon on the adsorption of NH4(+)-N from aqueous solution.
Shi, Mo; Wang, Zhengfang; Zheng, Zheng
2013-08-01
Two kinds of activated carbons modified by Na+ impregnation after pre-treatments involving oxidation by nitric acid or acidification by hydrochloric acid (denoted as AC/N-Na and AC/HCl-Na, respectively) were used as adsorbents to remove NH4(+)-N. The surface features of the samples were investigated by BET, SEM, XRD and FT-IR. The adsorption experiments were conducted under equilibrium and kinetic conditions. Influencing factors such as initial solution pH and initial concentration were investigated, and a possible mechanism was proposed. Results showed that optimal NH4(+)-N removal efficiency was achieved at neutral pH for the modified ACs. The Langmuir isotherm equation provided a better fit than other models for the equilibrium study. The adsorption kinetics followed both the pseudo second-order kinetic model and the intra-particle diffusion model. Chemical surface analysis indicated that Na+ ions form ionic bonds with available surface functional groups created by pre-treatment, especially oxidation by nitric acid, thus increasing the removal efficiency of the modified ACs for NH4(+)-N. Na(+)-impregnated ACs had a higher capability for removing NH4(+)-N than unmodified AC, possibly resulting from higher numbers of surface functional groups and better intra-particle diffusion. The good fit of the Langmuir isotherm to the data indicated monolayer NH4(+)-N adsorption on the active homogeneous sites within the adsorbents. The applicability of the pseudo second-order and intra-particle kinetic models revealed the complex nature of the adsorption mechanism. The intra-particle diffusion model revealed that the adsorption process consisted not only of surface adsorption but also intra-particle diffusion.
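As an illustration of the kinetic analysis mentioned above, the pseudo second-order model q_t = k2*qe^2*t / (1 + k2*qe*t) can be fitted by nonlinear least squares. The sketch below uses made-up uptake data, not the NH4(+)-N measurements of the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def pso(t, qe, k2):
        # pseudo second-order uptake: q_t = k2*qe^2*t / (1 + k2*qe*t)
        return (k2 * qe**2 * t) / (1 + k2 * qe * t)

    t = np.array([5, 10, 20, 40, 60, 120], float)   # contact time, min
    q = np.array([2.1, 3.4, 4.6, 5.5, 5.9, 6.3])    # uptake, mg/g (hypothetical)

    (qe, k2), _ = curve_fit(pso, t, q, p0=(q.max(), 0.01))
    print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")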
NASA Astrophysics Data System (ADS)
Zhang, Xinyue; Zhang, Qisheng; Wang, Meng; Kong, Qiang; Zhang, Shengquan; He, Ruihao; Liu, Shenghui; Li, Shuhan; Yuan, Zhenzhong
2017-11-01
Due to the pressing demand for metallic ore exploration technology in China, several new technologies are being employed in the relevant exploration instruments. In addition to possessing the high resolution of the traditional transient electromagnetic method, high-efficiency measurements, and a short measurement time, multichannel transient electromagnetic method (MTEM) technology can sensitively determine the characteristics of a low-resistivity geologic body without being affected by the terrain. Moreover, MTEM technology addresses the critical interference problem existing in electrical exploration technology. This study develops a full-waveform voltage and current recording device for MTEM transmitters. After continuous acquisition and storage of the large pseudo-random current signals emitted by the MTEM transmitter, these signals are convolved with the signals collected by the receiver to obtain the earth's impulse response. In this paper, the overall design of the full-waveform recording apparatus, including the hardware and upper-computer software designs, the software interface display, and the results of the field test, is discussed in detail.
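The processing step described above can be illustrated with synthetic data: because the autocorrelation of a pseudo-random bipolar sequence is close to a delta function, cross-correlating the recorded source current with the receiver signal approximately recovers the impulse response. A minimal sketch, with a toy two-spike earth response in place of real field data:

    import numpy as np

    rng = np.random.default_rng(1)
    src = rng.choice([-1.0, 1.0], size=4096)            # pseudo-random source current
    earth = np.zeros(64)
    earth[5], earth[30] = 1.0, 0.4                      # toy impulse response
    rx = np.convolve(src, earth)[:len(src)]             # receiver recording

    # cross-correlation approximates the impulse response
    lag0 = len(src) - 1
    est = np.correlate(rx, src, mode='full')[lag0:lag0 + 64] / len(src)
    print(np.round(est[:8], 2))                         # spike near index 5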
Curvelet-based compressive sensing for InSAR raw data
NASA Astrophysics Data System (ADS)
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne radar from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable for this framework, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original signal can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and subsequently the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were assessed in terms of sparsity analysis to provide efficient compression and recovery quality appropriate for InSAR applications, thereby demonstrating the feasibility of compressive sensing for this application.
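The IST reconstruction named above follows a simple recipe: a gradient step on the data misfit followed by soft thresholding of the coefficients. A minimal sketch on a generic sparse vector and a random measurement matrix (sizes and threshold are illustrative; the actual system operates on curvelet subbands of SAR raw data):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
    y = A @ x_true

    def ist(A, y, lam=0.02, n_iter=500):
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2)**2                 # Lipschitz constant of the gradient
        for _ in range(n_iter):
            z = x + A.T @ (y - A @ x) / L           # gradient step on the misfit
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    print("max recovery error:", np.abs(ist(A, y) - x_true).max())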
Estimation of perfusion properties with MR Fingerprinting Arterial Spin Labeling.
Wright, Katherine L; Jiang, Yun; Ma, Dan; Noll, Douglas C; Griswold, Mark A; Gulani, Vikas; Hernandez-Garcia, Luis
2018-03-12
In this study, the acquisition of ASL data and quantification of multiple hemodynamic parameters were explored using a Magnetic Resonance Fingerprinting (MRF) approach. A pseudo-continuous ASL labeling scheme was used with pseudo-randomized timings to acquire the MRF ASL data in a 2.5 min acquisition. A large dictionary of MRF ASL signals was generated by combining a wide range of physical and hemodynamic properties with the pseudo-random MRF ASL sequence and a two-compartment model. The acquired signals were matched to the dictionary to provide simultaneous quantification of cerebral blood flow, tissue time-to-peak, cerebral blood volume, arterial time-to-peak, B1, and T1. A study in seven healthy volunteers resulted in the following values across the population in grey matter (mean ± standard deviation): cerebral blood flow of 69.1 ± 6.1 ml/min/100 g, arterial time-to-peak of 1.5 ± 0.1 s, tissue time-to-peak of 1.5 ± 0.1 s, T1 of 1634 ms, and cerebral blood volume of 0.0048 ± 0.0005. The CBF measurements were compared to standard pCASL CBF estimates using a one-compartment model, and a Bland-Altman analysis showed good agreement with a minor bias. Repeatability was tested in five volunteers in the same exam session, and no statistical difference was seen. In addition to this validation, the MRF ASL acquisition's sensitivity to the physical and physiological parameters of interest was studied numerically. Copyright © 2018 Elsevier Inc. All rights reserved.
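Dictionary matching of the kind used above reduces to finding the dictionary atom with the largest normalized inner product with the measured signal. A minimal sketch with random stand-in signals (a real implementation would simulate each atom from the two-compartment model over the pseudo-random timing schedule):

    import numpy as np

    rng = np.random.default_rng(0)
    n_t, n_atoms = 150, 1000
    D = rng.standard_normal((n_atoms, n_t))         # dictionary: one signal per parameter set
    cbf = rng.uniform(20, 120, size=n_atoms)        # parameter value attached to each atom
    D_unit = D / np.linalg.norm(D, axis=1, keepdims=True)

    signal = D[137] + 0.1 * rng.standard_normal(n_t)   # noisy measured signal
    best = np.argmax(D_unit @ (signal / np.linalg.norm(signal)))
    print("matched atom:", best, "-> CBF estimate:", round(cbf[best], 1))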
NASA Astrophysics Data System (ADS)
Vatutin, Eduard
2017-12-01
The article analyzes the effectiveness of heuristic methods with limited depth-first search techniques for obtaining decisions in the test problem of finding the shortest path in a graph. It briefly describes the group of methods based on limiting the number of branches of the combinatorial search tree and the depth of the analyzed subtree. A methodology for comparing experimental data to estimate solution quality is considered, based on computational experiments with samples of graphs of pseudo-random structure and selected numbers of vertices and arcs, using the BOINC platform. The experimental results identify the areas of preferable usage for the selected subset of heuristic methods, depending on the size of the problem and the power of the constraints. It is shown that the considered pair of methods is ineffective for the selected problem and significantly inferior, in solution quality, to the ant colony optimization method and its modification with combinatorial returns.
Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.
Will, Sebastian; Jabbari, Hosna
2016-01-01
RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible as a trivial extension of the method for simple energy models. We then present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold, which guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs (≥2500 bases) from the RNA STRAND database. The presented technique is generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.
The Effects of Low Income Housing Tax Credit Developments on Neighborhoods.
Baum-Snow, Nathaniel; Marion, Justin
2009-06-01
This paper evaluates the impacts of new housing developments funded with the Low Income Housing Tax Credit (LIHTC), the largest federal project-based housing program in the U.S., on the neighborhoods in which they are built. A discontinuity in the formula determining the magnitude of tax credits as a function of neighborhood characteristics generates pseudo-random assignment of the number of low income housing units built in similar sets of census tracts. Tracts where projects are awarded 30 percent higher tax credits receive approximately six more low income housing units on a base of seven units per tract. These additional new low income developments cause homeowner turnover to rise, raise property values in declining areas, and reduce incomes in gentrifying areas in neighborhoods near the 30th percentile of the income distribution. LIHTC units significantly crowd out nearby new rental construction in gentrifying areas but do not displace new construction in stable or declining areas.
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach to the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports the theoretical study. To estimate the unknown parameters from data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are compared with true, noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
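The pseudo-random search used for parameter estimation can be sketched in a few lines: candidate parameters are drawn at random and the one minimizing the misfit to the data is kept. The toy model below is a discrete logistic growth map, not the Leslie-Gower system itself, and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(r, x0=0.1, n=50):
        x = np.empty(n); x[0] = x0
        for i in range(1, n):                       # discrete logistic growth
            x[i] = x[i-1] + r * x[i-1] * (1 - x[i-1])
        return x

    data = model(0.42) + 0.01 * rng.standard_normal(50)   # noisy "observations"

    best_r, best_err = None, np.inf
    for _ in range(5000):                           # pseudo-random search
        r = rng.uniform(0.0, 1.0)
        err = np.sum((model(r) - data)**2)
        if err < best_err:
            best_r, best_err = r, err
    print(f"estimated r = {best_r:.3f}")            # close to the true 0.42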
NASA Astrophysics Data System (ADS)
Rediske, Nicole M.
The objective of this research was to characterize natural carbon fibers from coconut husks, both bare and impregnated with metallic nanoparticles, in removing cadmium from aqueous media. The adsorbent load, kinetics, isotherm parameters, removal efficiencies, desorption capacity and possible contaminant removal mechanisms were evaluated. It was found that the fibers treated with metallic nanoparticles performed better than the bare fibers in removing cadmium from water. The ideal conditions were found to be neutral pH with low initial cadmium concentrations. Through the kinetic analyses, the adsorption process was first thought to be pseudo first order, with two separate adsorption mechanisms apparent. Upon further analysis, it was seen that the first mechanism does not follow the pseudo first order kinetics model. An increase in calcium and magnesium concentrations was observed as the cadmium concentrations decreased; this increase corresponds with the first mechanism, suggesting that the cadmium removal in the first mechanism is due to ion exchange. The second mechanism's rate constant was consistently lower than the first mechanism's rate constant by an order of magnitude. This led to the hypothesis that the second mechanism is controlled by van der Waals forces, specifically ion-induced dipole interactions, and physical adsorption. It was also found that the cadmium does not effectively desorb from the spent fibers in DI water. Keywords: Adsorption; kinetics; pseudo first order; cadmium; metallic nanoparticles; natural fibers; removal efficiencies; ion exchange.
NASA Astrophysics Data System (ADS)
Ghaedi, Mehrorang; Tavallali, Hossein; Sharifi, Mahdi; Kokhdan, Syamak Nasiri; Asghari, Alireza
2012-02-01
In this research, the potential applicability of activated carbons prepared from Myrtus communis (AC-MC) and pomegranate (AC-PG) as useful adsorbents for the removal of Congo red (CR) from aqueous solutions in batch mode was investigated. The effects of pH, contact time, agitation time and amount of adsorbent on the removal percentage of Congo red on both adsorbents were examined. Increasing the pH up to 6 for AC-MC and 7 for AC-PG increased the adsorption percentage (capacity), and equilibrium was reached within 30 min of contact time. Fitting the experimental data to conventional isotherm models (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich) showed that the data fitted very well to the Freundlich isotherm for AC-MC and the Langmuir isotherm for AC-PG. Fitting the data to different kinetic models (pseudo first-order, pseudo second-order, Elovich and intraparticle diffusion) showed the applicability of the pseudo second-order model, with involvement of intraparticle diffusion, for interpreting the experimental data for both adsorbents. The adsorption capacities of AC-PG and AC-MC for the removal of CR were found to be 19.231 and 10 mg g(-1), respectively. These results clearly indicate the efficiency of these materials as low-cost adsorbents for the treatment of wastewater containing CR.
NASA Astrophysics Data System (ADS)
Ghaedi, Mehrorang; Khajesharifi, Habibollah; Hemmati Yadkuri, Amin; Roosta, Mostafa; Sahraei, Reza; Daneshfar, Ali
2012-02-01
In the present research, cadmium hydroxide nanowire loaded on activated carbon (Cd(OH)2-NW-AC) was synthesized and characterized. This new adsorbent was applied for the removal of Bromocresol Green (BCG) molecules from aqueous solutions. The influence of effective variables such as solution pH, contact time, initial BCG concentration, amount of Cd(OH)2-NW-AC and temperature on the adsorption efficiency of BCG in a batch system was examined. During all experiments, BCG contents were determined by UV-Vis spectrophotometer. Fitting the experimental data to different kinetic models, including pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models, showed the suitability of the pseudo-second-order kinetic model for interpreting the experimental data. Equilibrium isotherm studies were examined by application of different conventional models such as the Langmuir, Freundlich and Tempkin models. Based on the R2 value as criterion, the adsorption data were well fitted by the Langmuir model with a maximum adsorption capacity of 108.7 mg g(-1). Thermodynamic parameters (Gibbs free energy, entropy and enthalpy) of adsorption were calculated according to the general procedure to obtain information about the ongoing adsorption process. The highly negative value of the Gibbs free energy and the positive value of the enthalpy show the feasibility and endothermic nature of the adsorption process.
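For reference, the Langmuir fit reported in these adsorption studies amounts to a two-parameter nonlinear regression of q_e = qmax*KL*Ce / (1 + KL*Ce). A minimal sketch with invented equilibrium data (not the BCG or CR measurements above):

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qmax, KL):
        # q_e = qmax*KL*Ce / (1 + KL*Ce)
        return qmax * KL * Ce / (1 + KL * Ce)

    Ce = np.array([2, 5, 10, 25, 50, 100], float)     # equilibrium conc., mg/L
    qe = np.array([21, 43, 65, 90, 101, 106], float)  # uptake, mg/g (hypothetical)

    (qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(qe.max(), 0.05))
    print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")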
NASA Astrophysics Data System (ADS)
Thangavel, Sakthivel; Thangavel, Srinivas; Raghavan, Nivea; Alagu, Raja; Venugopal, Gunasekaran
2017-11-01
The use of two-dimensional nanomaterials as co-catalysts in the photodegradation of toxic compounds under light irradiation is an attractive ecofriendly process. In this study, we prepared a novel MoS2/Ag2WO4 nanohybrid via a one-step hydrothermal approach, and its photocatalytic properties were investigated through the degradation of methyl orange under simulated irradiation. The nanohybrid exhibits enhanced efficiency in dye degradation compared to the bare Ag2WO4 nanorods, as confirmed by UV-visible spectra and total organic carbon removal analysis. The pseudo-first order rate constant of the nanohybrid is nearly 1.8-fold higher than that of the bare Ag2WO4 nanorods. With the aid of classical radical quenching and photoluminescence spectral analysis, a reasonable mechanism has been derived for how the addition of MoS2 enhances the photocatalytic efficiency: MoS2 prevents photocorrosion of Ag2WO4 and also diminishes photogenerated electron-hole recombination. Our findings could provide new insights into the mechanism of the MoS2/Ag2WO4 nanohybrid as an efficient photocatalyst suitable for waste-water treatment and remedial applications.
On the design of random metasurface based devices.
Dupré, Matthieu; Hsu, Liyi; Kanté, Boubacar
2018-05-08
Metasurfaces are generally designed by placing scatterers in periodic or pseudo-periodic grids. We propose and discuss design rules for functional metasurfaces with randomly placed anisotropic elements that randomly sample a well-defined phase function. By analyzing the focusing performance of random metasurface lenses as a function of their density and the density of the phase-maps used to design them, we find that the performance of 1D metasurfaces is mostly governed by their density, while 2D metasurfaces depend strongly on both the density and the near-field coupling configuration of the surface. The proposed approach is used to design all-polarization random metalenses at near-infrared frequencies. Challenges, as well as opportunities, of random metasurfaces compared to periodic ones are discussed. Our results pave the way to new approaches in the design of nanophotonic structures and devices from lenses to solar energy concentrators.
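The design rule described above, randomly placed elements sampling a well-defined phase function, can be sketched directly. Below, random scatterer positions sample a hyperbolic lens phase; the wavelength, focal length, aperture, and density are illustrative placeholders, not the values of the paper.

    import numpy as np

    lam, f, n_scatter = 1.0e-6, 50e-6, 2000         # wavelength, focal length (m); illustrative
    k = 2 * np.pi / lam
    rng = np.random.default_rng(0)

    xy = rng.uniform(-25e-6, 25e-6, size=(n_scatter, 2))  # random positions on the aperture
    r = np.hypot(xy[:, 0], xy[:, 1])

    phase = -k * (np.sqrt(r**2 + f**2) - f)         # hyperbolic lens phase profile
    phase = np.mod(phase, 2 * np.pi)                # each scatterer realizes its local phase
    print(np.round(phase[:5], 3))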
Effective potential of the three-dimensional Ising model: The pseudo-ɛ expansion study
NASA Astrophysics Data System (ADS)
Sokolov, A. I.; Kudlis, A.; Nikitina, M. A.
2017-08-01
The ratios R_{2k} of renormalized coupling constants g_{2k} that enter the effective potential and the small-field equation of state acquire universal values at criticality. They are calculated for the three-dimensional scalar λϕ^4 field theory (3D Ising model) within the pseudo-ɛ expansion approach. Pseudo-ɛ expansions for the critical values of g_6, g_8, g_{10}, R_6 = g_6/g_4^2, R_8 = g_8/g_4^3 and R_{10} = g_{10}/g_4^4, originating from the five-loop renormalization group (RG) series, are derived. The pseudo-ɛ expansions for the sextic coupling have rapidly diminishing coefficients, so addressing Padé approximants yields proper numerical results. Use of Padé-Borel-Leroy and conformal mapping resummation techniques further improves the accuracy, leading to the values R_6* = 1.6488 and R_6* = 1.6490, which are in brilliant agreement with the result of advanced lattice calculations. For the octic coupling, the numerical structure of the pseudo-ɛ expansions is less favorable. Nevertheless, conformal-Borel resummation gives R_8* = 0.868, a number close to the lattice estimate R_8* = 0.871 and compatible with the result of the 3D RG analysis, R_8* = 0.857. Pseudo-ɛ expansions for R_{10}* and g_{10}* are also found to have much smaller coefficients than those of the original RG series. They remain, however, fast growing and large enough to prevent fair numerical estimates.
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on the model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
NASA Astrophysics Data System (ADS)
Indhumathi, Ponnuswamy; Sathiyaraj, Subbaiyan; Koelmel, Jeremy P.; Shoba, Srinivasan U.; Jayabalakrishnan, Chinnasamy; Saravanabhavan, Munusamy
2018-05-01
The ability of the green microalga Chlorella vulgaris to biosorb Cu(II) ions from an aqueous solution was studied. The biosorption process was affected by the solution pH, contact time, temperature and initial Cu(II) concentration. Experimental data were analyzed in terms of pseudo-first order, pseudo-second order and intra-particle diffusion models. Results showed that the sorption of Cu(II) ions followed pseudo-second order kinetics. The sorption data of Cu(II) ions were fitted to the Langmuir, Freundlich, Redlich-Peterson and Temkin isotherms. The thermodynamic study shows the Cu(II) biosorption was exothermic in nature. The Cu(II) ions were recovered effectively from Chlorella vulgaris biomass using 0.1 M H2SO4, with up to 90.3% recovery, allowing for recycling of the Cu. Green algae from freshwater bodies showed significant potential for Cu(II) removal and recovery from industrial wastewater.
NASA Technical Reports Server (NTRS)
Grillo, Vince
2017-01-01
The objective of this presentation is to give a brief overview of the theory behind the Damage Based Analysis (DBA) method, an overview of its derivation, and a practical application of the theory using the Python computer language. The theory and derivation use both acceleration and pseudo velocity methods to derive a series of equations for processing by Python. We take the results, compare both acceleration and pseudo velocity methods, and discuss implementation of the Python functions. We also discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system, in the form of both amplitude and duration, during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a maximax approach.
NASA Technical Reports Server (NTRS)
Grillo, Vince
2016-01-01
The objective of this presentation is to give a brief overview of the theory behind the Damage Based Analysis (DBA) method, an overview of its derivation, and a practical application of the theory using the Python computer language. The theory and derivation use both acceleration and pseudo velocity methods to derive a series of equations for processing by Python. We take the results, compare both acceleration and pseudo velocity methods, and discuss implementation of the Python functions. We also discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system, in the form of both amplitude and duration, during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a maximax approach.
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation in unsupervised scenarios. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework with two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
Pulse homodyne field disturbance sensor
McEwan, Thomas E.
1997-01-01
A field disturbance sensor operates with relatively low power, provides an adjustable operating range, is not hypersensitive at close range, allows co-location of multiple sensors, and is inexpensive to manufacture. The sensor includes a transmitter that transmits a sequence of transmitted bursts of electromagnetic energy. The transmitter frequency is modulated at an intermediate frequency. The sequence of bursts has a burst repetition rate, and each burst has a burst width and comprises a number of cycles at a transmitter frequency. The sensor includes a receiver which receives electromagnetic energy at the transmitter frequency, and includes a mixer which mixes a transmitted burst with reflections of the same transmitted burst to produce an intermediate frequency signal. Circuitry, responsive to the intermediate frequency signal, indicates disturbances in the sensor field. Because the mixer mixes the transmitted burst with reflections of the transmitted burst, the burst width defines the sensor range. The burst repetition rate is randomly or pseudo-randomly modulated so that bursts in the sequence of bursts have a phase which varies. A second range-defining mode transmits two radio frequency bursts, where the time spacing between the bursts defines the maximum range divided by two.
Pulse homodyne field disturbance sensor
McEwan, T.E.
1997-10-28
A field disturbance sensor operates with relatively low power, provides an adjustable operating range, is not hypersensitive at close range, allows co-location of multiple sensors, and is inexpensive to manufacture. The sensor includes a transmitter that transmits a sequence of transmitted bursts of electromagnetic energy. The transmitter frequency is modulated at an intermediate frequency. The sequence of bursts has a burst repetition rate, and each burst has a burst width and comprises a number of cycles at a transmitter frequency. The sensor includes a receiver which receives electromagnetic energy at the transmitter frequency, and includes a mixer which mixes a transmitted burst with reflections of the same transmitted burst to produce an intermediate frequency signal. Circuitry, responsive to the intermediate frequency signal, indicates disturbances in the sensor field. Because the mixer mixes the transmitted burst with reflections of the transmitted burst, the burst width defines the sensor range. The burst repetition rate is randomly or pseudo-randomly modulated so that bursts in the sequence of bursts have a phase which varies. A second range-defining mode transmits two radio frequency bursts, where the time spacing between the bursts defines the maximum range divided by two. 12 figs.
Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.
Rao, Ying; Wang, Yanghua
2017-08-17
In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations: the first few iterations use an invariant encoding, while the remainder use random re-encoding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
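The restart schedule can be illustrated on a toy encoded least-squares problem: each segment draws a fresh random encoding, forms one super shot, and runs a few L-BFGS iterations from the current model. For brevity, this sketch re-draws the encoding every segment rather than following the invariant-then-random schedule within segments described above, and all sizes are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_shots, n_par = 32, 20
    A = rng.standard_normal((n_shots, 50, n_par))   # one linear "shot" operator each
    m_true = rng.standard_normal(n_par)
    d = np.einsum('sij,j->si', A, m_true)           # per-shot data

    def encoded_misfit(m, w):
        # w: random +/-1 weights summing all shots into one super shot
        r = np.einsum('s,sij,j->i', w, A, m) - np.einsum('s,si->i', w, d)
        return 0.5 * r @ r

    m = np.zeros(n_par)
    for segment in range(10):                       # restart L-BFGS each segment
        w = rng.choice([-1.0, 1.0], size=n_shots)   # fresh encoding
        res = minimize(encoded_misfit, m, args=(w,),
                       method='L-BFGS-B', options={'maxiter': 15})
        m = res.x
    print("model error:", np.linalg.norm(m - m_true))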
NASA Astrophysics Data System (ADS)
Bhattacharyya, Swarnapratim; Haiduc, Maria; Neagu, Alina Tania; Firu, Elena
2014-07-01
We have presented a systematic study of two-particle rapidity correlations in terms of the dynamical fluctuation observable σ_c^2 in forward-backward pseudo-rapidity windows, by analyzing the experimental data of 16O-AgBr interactions at 4.5 A GeV/c, 22Ne-AgBr interactions at 4.1 A GeV/c, and 28Si-AgBr and 32S-AgBr interactions at 4.5 A GeV/c. The experimental results have been compared with the results obtained from the analysis of an event sample simulated by generating random numbers (MC-RAND) and also with events generated by the UrQMD and AMPT models. Our study confirms the presence of strong short-range correlations among the produced particles in the forward and backward pseudo-rapidity regions. The analysis of the simple Monte Carlo-simulated (MC-RAND) events signifies that the observed correlations are not due to mere statistics; such correlations can be attributed to the presence of dynamical fluctuations during the production of charged pions. Comparisons of the experimental results with the results obtained from analyzing the UrQMD data sample indicate that the UrQMD model cannot reproduce the experimental findings. The AMPT model also cannot explain the experimental results satisfactorily. Comparisons of our experimental results with the results obtained from the analysis of higher energy emulsion data and with the results of the RHIC data have also been presented.
Ammonia removal in electrochemical oxidation: mechanism and pseudo-kinetics.
Li, Liang; Liu, Yan
2009-01-30
This paper investigated the mechanism and pseudo-kinetics of ammonia removal by electrochemical oxidation with a RuO(2)/Ti anode using batch tests. The results show that the ammonia oxidation rates resulting from direct oxidation at the electrode-liquid interface of the anode by stepwise dehydrogenation, and from indirect oxidation by hydroxyl radicals, were so slow that their contribution to ammonia removal was negligible in the presence of Cl(-). The oxidation rates of ammonia ranged from 1.0 to 12.3 mg N L(-1) h(-1) and the removal efficiency reached nearly 100%, primarily due to indirect oxidation by HOCl, following pseudo zero-order kinetics in electrochemical oxidation with Cl(-). About 88% of the ammonia was removed from the solution; the removed ammonia was subsequently found as N(2) in the produced gas. The rate at which Cl(-) lost electrons at the anode was a major factor in the overall ammonia oxidation. Current density and Cl(-) concentration affected the pseudo zero-order rate constant, expressed as k = 0.0024[Cl(-)] x j. The ammonia was reduced to less than 0.5 mg N L(-1) after 2 h of electrochemical oxidation of the effluent from aerobic or anaerobic reactors treating municipal wastewater. This result is in line with strict discharge requirements.
Pseudo-circulator implemented as a multimode fiber coupler
NASA Astrophysics Data System (ADS)
Bulota, F.; Bélanger, P.; Leduc, M.; Boudoux, C.; Godbout, N.
2016-03-01
We present a linear all-fiber device exhibiting the functionality of a circulator, albeit for multimode fibers. We define a pseudo-circulator as a linear three-port component that transfers most of a multimode light signal from Port 1 to Port 2, and from Port 2 to Port 3. Unlike a traditional circulator, which depends on a nonlinear phenomenon to achieve non-reciprocal behavior, our device is a linear component that seemingly breaks the principle of reciprocity by exploiting the variations of etendue of the multimode fibers in the coupler. The pseudo-circulator is implemented as a 2x2 asymmetric multimode fiber coupler, fabricated using the fusion-tapering technique. The coupler is asymmetric in its transverse fused section: the two multimode fibers differ in area, thus favoring the transfer of light from the smaller to the bigger fiber. The desired difference in area is obtained by tapering one of the fibers before the fusion process. Using this technique, we have successfully fabricated a pseudo-circulator surpassing a 50/50 beam-splitter in efficiency. Across the visible and near-IR spectrum, the transmission ratio exceeds 77% from Port 1 to Port 2, and 80% from Port 2 to Port 3. The excess loss is less than 0.5 dB, regardless of the entry port.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
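The sketching step itself is a one-line matrix product: a short random matrix compresses the observations while approximately preserving the least-squares solution. A minimal numpy illustration (the actual implementation is in Julia within MADS, and the dimensions here are toy-sized):

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_par, n_sketch = 10_000, 50, 300
    H = rng.standard_normal((n_obs, n_par))         # forward operator
    x_true = rng.standard_normal(n_par)
    y = H @ x_true + 0.01 * rng.standard_normal(n_obs)

    S = rng.standard_normal((n_sketch, n_obs)) / np.sqrt(n_sketch)  # sketching matrix
    x_full, *_ = np.linalg.lstsq(H, y, rcond=None)          # full-data solution
    x_red, *_ = np.linalg.lstsq(S @ H, S @ y, rcond=None)   # sketched solution
    print("solution difference:", np.linalg.norm(x_full - x_red))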
Kilowatt high-efficiency narrow-linewidth monolithic fiber amplifier operating at 1034 nm
NASA Astrophysics Data System (ADS)
Naderi, Nader A.; Flores, Angel; Anderson, Brian M.; Rowland, Ken; Dajani, Iyad
2016-03-01
Power scaling investigation of a narrow-linewidth, Ytterbium-doped all-fiber amplifier operating at 1034 nm is presented. Nonlinear stimulated Brillouin scattering (SBS) effects were suppressed through the utilization of an external phase modulation technique. Here, the power amplifier was seeded with a spectrally broadened master oscillator, and the results were compared using both pseudo-random bit sequence (PRBS) and white noise source (WNS) phase modulation formats. By utilizing an optical band-pass filter as well as optimizing the length of fiber used in the pre-amplifier stages, we were able to appreciably suppress unwanted amplified spontaneous emission (ASE). Notably, through PRBS phase modulation, a greater than two-fold enhancement in threshold power was achieved compared to the WNS-modulated case. Consequently, by further optimizing both the power amplifier length and the PRBS pattern at a clock rate of 3.5 GHz, we demonstrated 1 kilowatt of power with a slope efficiency of 81% and an overall ASE content of less than 1%. Beam quality measurements at 1 kilowatt showed near diffraction-limited operation (M^2 < 1.2) with no sign of modal instability. To the best of our knowledge, the power scaling results achieved in this work represent the highest power reported for a spectrally narrow all-fiber amplifier operating below 1040 nm in Yb-doped silica-based fiber.
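The PRBS drive referred to above is conventionally produced by a linear-feedback shift register. Below is a minimal software sketch of a PRBS7 generator (polynomial x^7 + x^6 + 1, period 127); the experiment itself used multi-GHz PRBS hardware, and its pattern length is not specified here.

    def prbs7(seed=0x7F, n_bits=127):
        # 7-bit LFSR with taps at bits 7 and 6 (x^7 + x^6 + 1)
        state, out = seed & 0x7F, []
        for _ in range(n_bits):
            bit = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | bit) & 0x7F
            out.append(bit)
        return out

    seq = prbs7()
    print(sum(seq), "ones in", len(seq), "bits")    # 64 ones, 63 zeros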
Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation
NASA Astrophysics Data System (ADS)
Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim
2018-05-01
A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches; it solves nonlinear PDE systems for unsteady flows and dynamic aero-elastic problems over a given time interval, where the flows or elastic motions can be periodic, non-periodic, or periodic with an unknown frequency. The method uses Chebyshev polynomials of the first kind as basis functions and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly by a conformal mapping function for improved numerical stability. The contributions of the method are several. It can be an order of magnitude more efficient than conventional finite-difference-based, time-accurate computation, depending on the complexity of the solutions and the number of collocation points. The method reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for validation, and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two examples of limit cycle oscillation are tested: a nonlinear, one degree-of-freedom mass-spring-damper system and a two degrees-of-freedom oscillating airfoil under pitch and plunge motions. Results show good agreement with those of conventional time-accurate simulations and wind tunnel experiments.
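The collocation-point redistribution can be made concrete: standard Chebyshev-Gauss-Lobatto (CGL) points cluster near the interval ends, and a conformal map spaces them more evenly. The sketch below uses a Kosloff-Tal-Ezer-type arcsine map as an example; the mapping parameter is illustrative, and the paper's exact mapping function may differ.

    import numpy as np

    N = 16
    j = np.arange(N + 1)
    x_cgl = np.cos(np.pi * j / N)                   # CGL points, clustered near +/-1

    alpha = 0.99                                    # illustrative mapping parameter
    x_mapped = np.arcsin(alpha * x_cgl) / np.arcsin(alpha)

    # boundary spacing grows from O(1/N^2) toward O(1/N) after mapping
    print(np.round(np.abs(np.diff(x_cgl))[:3], 4))
    print(np.round(np.abs(np.diff(x_mapped))[:3], 4))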
Phuphanich, Surasak; Yu, John; Bannykh, Serguei; Zhu, Jay-Jiguang
2014-01-01
BACKGROUND: Previous reports of pseudo-progression in patients with brain tumors after therapeutic vaccines in pediatric and adult glioma (Pollack, JCO online June 2, 2014; Okada, JCO Jan 20, 2011; 29: 330-336) demonstrated that RANO criteria for tumor progression may not be adequate for immunotherapy trials. Similar observations were also seen with other checkpoint inhibitors in melanoma and NSCLC. METHODS: We identified 2 patients who developed tumor progression by RANO criteria and underwent surgery following enrollment in a phase 2 randomized trial of ICT-107 (an autologous vaccine consisting of patient dendritic cells pulsed with peptides from AIM-2, TRP-2, HER2/neu, IL-13Ra2, gp100 and MAGE1) after radiation and temozolomide (TMZ). RESULTS: The first case is a 69-year-old Chinese male who underwent a first surgery with gross total resection of a right occipital GBM on 10/26/2011. He subsequently received 19 cycles of TMZ and 9 vaccines/placebo. MRI from 7/2/2013 showed enhancement surrounding the surgical cavity. After a second surgery, pathology showed only rare residual tumor cells with macrophages and CD8-positive cells. He continued on the vaccine program, and MRI showed further progression with finger-like extension into the parietal lobe 4 months later. A third surgery also showed extensive reactive changes with no active tumor cells. In the second case, a 62-year-old male who underwent a first surgery of the right temporal lobe on 7/11/2011 developed 2 areas of enhancement after 6 cycles of TMZ and 7 vaccines/placebo on 4/18/2012. At a second surgery, pathology showed reactive gliosis without active tumor. The subject continued in the trial. CONCLUSION: Pseudo-progression was confirmed by pathology in these 2 patients at 20 and 9 months, delayed compared with the pseudo-progression observed in patients treated with concurrent XRT/TMZ (3-6 months). Future iRANO criteria development is essential for immunotherapy trials. Accurately identifying and managing such patients is necessary to avoid premature termination of therapy.
Applying Neural Networks in Optical Communication Systems: Possible Pitfalls
NASA Astrophysics Data System (ADS)
Eriksson, Tobias A.; Bulow, Henning; Leven, Andreas
2017-12-01
We investigate the risk of overestimating the performance gain when applying neural-network-based receivers in systems with pseudo-random bit sequences or with limited memory depths, which result in repeated short patterns. We show that with such sequences, a large artificial gain can be obtained which comes from pattern prediction rather than from predicting or compensating for the studied channel or phenomena.
Test surfaces useful for calibration of surface profilometers
Yashchuk, Valeriy V; McKinney, Wayne R; Takacs, Peter Z
2013-12-31
The present invention provides test surfaces and methods for the calibration of surface profilometers, including interferometric and atomic force microscopes. Calibration is performed using a specially designed test surface, the Binary Pseudo-random (BPR) grating (array). Utilizing the BPR grating (array) to measure the power spectral density (PSD) spectrum, the profilometer is calibrated by determining the instrumental modulation transfer function.
A cryptographic hash function based on chaotic network automata
NASA Astrophysics Data System (ADS)
Machicao, Jeaneth; Bruno, Odemir M.
2017-12-01
Chaos theory has been used to develop several cryptographic methods relying on the pseudo-random properties extracted from simple nonlinear systems such as cellular automata (CA). Cryptographic hash functions (CHF) are commonly used to check data integrity. A CHF "compresses" an arbitrarily long message (input) into a much smaller representation called a hash value or message digest (output), designed to prevent recovery of the original message from the hash value. This paper proposes a chaos-based CHF inspired by an encryption method based on the chaotic CA rule B1357-S2468. We propose a hybrid model that combines CA and networks, called chaotic network automata (CNA), whose chaotic spatio-temporal outputs are used to compute a hash value. Following the Merkle-Damgård model of construction, a portion of the message is entered as the initial condition of the network automata, and the remaining parts of the message are iteratively entered to perturb the system. The chaotic network automaton shuffles the message using flexible control parameters, so that the generated hash value is highly sensitive to the message. As demonstrated in our experiments, the proposed model has excellent pseudo-randomness and sensitivity properties with acceptable performance when compared to conventional hash functions.
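The Merkle-Damgård-style absorption described above can be mimicked with a toy compression step. The sketch below iterates a rule-30-like update on a 64-bit ring in place of the paper's B1357-S2468 network automaton; it is purely illustrative and has no cryptographic strength.

    def ca_step(state):
        # rule-30-like chaotic update on a 64-bit circular lattice
        mask = 2**64 - 1
        left = ((state << 1) | (state >> 63)) & mask
        right = ((state >> 1) | (state << 63)) & mask
        return left ^ (state | right)

    def toy_hash(message: bytes, rounds=32):
        state = 0x5A5A5A5A5A5A5A5A                  # fixed initial condition
        for byte in message:                        # absorb one byte at a time
            state ^= byte
            for _ in range(rounds):                 # iterate the automaton
                state = ca_step(state)
        return state

    print(hex(toy_hash(b"hello")))
    print(hex(toy_hash(b"hellp")))                  # one-bit change, different digest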
Adsorption and removal of clofibric acid and diclofenac from water with MIEX resin.
Lu, Xian; Shao, Yisheng; Gao, Naiyun; Chen, Juxiang; Zhang, Yansen; Wang, Qiongfang; Lu, Yuqi
2016-10-01
This study demonstrates the use of MIEX resin as an efficient adsorbent for the removal of clofibric acid (CA) and diclofenac (DCF). The adsorption performance for CA and DCF was investigated in batch mode in single-component and bi-component adsorption systems. Various factors influencing the adsorption of CA and DCF, including initial concentration, contact time, adsorbent dosage, initial solution pH, agitation speed, natural organic matter and coexisting anions, were studied. The Langmuir model describes CA adsorption well in the single-component system, while the Freundlich model gives a better fit in the bi-component system. DCF adsorption is well fitted by the Freundlich model in both systems. Thermodynamic analyses show that the adsorption of CA and DCF is an endothermic (ΔH(o) > 0), entropy-driven (ΔS(o) > 0) process, with more randomness in the DCF adsorption process. The values of the Gibbs free energy indicate that the adsorption of DCF is spontaneous (ΔG(o) < 0) but that of CA is nonspontaneous (ΔG(o) > 0). The kinetic data suggest that the adsorption of CA and DCF follows the pseudo-first-order model in both systems and that intra-particle diffusion is not the only rate-limiting step. The adsorption process is controlled simultaneously by external mass transfer and surface diffusion, according to the surface-diffusion-modified Biot number (Bis) ranging from 1.06 to 26.15. Moreover, possible removal mechanisms for CA and DCF are proposed based on the ion exchange stoichiometry. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wei, Gao-Feng; Dong, Shi-Hai
2010-11-01
By applying a Pekeris-type approximation to the pseudo-centrifugal term, we study the pseudospin symmetry of a Dirac nucleon subjected to scalar and vector modified Rosen-Morse (MRM) potentials. A complicated quartic energy equation and spinor wave functions with arbitrary spin-orbit coupling quantum number k are presented. The pseudospin degeneracy is checked numerically. Pseudospin symmetry is discussed theoretically and numerically in the limit case α → 0. It is found that the relativistic MRM potential cannot trap a Dirac nucleon in this limit.
Cooperative pulses for pseudo-pure state preparation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Daxiu; Chang, Yan; Yang, Xiaodong
2014-06-16
Using an extended version of the optimal-control-based gradient ascent pulse engineering algorithm, cooperative (COOP) pulses are designed for multi-scan experiments to prepare pseudo-pure states in quantum computation. COOP pulses can cancel undesired signal contributions, complementing and generalizing phase cycles. They also provide more flexibility and, in particular, eliminate the need to select specific individual target states, achieving fidelities at the theoretical limit by flexibly choosing an appropriate number of scans and pulse durations. The COOP approach is experimentally demonstrated for three-qubit and four-qubit systems.
Background sampling and transferability of species distribution model ensembles under climate change
NASA Astrophysics Data System (ADS)
Iturbide, Maialen; Bedia, Joaquín; Gutiérrez, José Manuel
2018-07-01
Species Distribution Models (SDMs) constitute an important tool to assist decision-making in environmental conservation and planning. A popular application of these models is the projection of species distributions under climate change conditions. Yet a range of methodological SDM factors still limits the transferability of these models, contributing significantly to the overall uncertainty of the resulting projections. An important source of uncertainty, often neglected in climate change studies, comes from the use of background data (a.k.a. pseudo-absences) for model calibration. Here, we study the sensitivity to pseudo-absence sampling as a determinant factor for SDM stability and transferability under climate change conditions, focusing on European-wide projections of Quercus robur as an illustrative case study. We explore the uncertainty in future projections derived from ten pseudo-absence realizations and three popular SDMs (GLM, Random Forest and MARS). The contribution of the pseudo-absence realization to the uncertainty was higher in peripheral regions and clearly differed among the tested SDMs over the whole study domain, with MARS the most sensitive - projections differing by up to 40% across realizations - and GLM the most stable. As a result, we conclude that parsimonious SDMs are preferable in this context, avoiding complex methods (such as MARS) which may exhibit poor model transferability. Accounting for this new source of SDM-dependent uncertainty is crucial when forming multi-model ensembles to undertake climate change projections.
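Generating multiple pseudo-absence realizations, as done above, is straightforward: background points are drawn repeatedly from cells without recorded presences, and the SDM is refitted per realization. A minimal sketch on a toy grid (the study's actual sampling used real occurrence data and environmental layers):

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 10_000
    presence = np.zeros(n_cells, bool)
    presence[rng.choice(n_cells, 300, replace=False)] = True
    background_pool = np.flatnonzero(~presence)     # candidate pseudo-absence cells

    realizations = [rng.choice(background_pool, size=300, replace=False)
                    for _ in range(10)]             # ten pseudo-absence sets
    print(len(realizations), "realizations of", len(realizations[0]), "points")

Refitting the model on each realization and comparing the resulting projections then quantifies the pseudo-absence contribution to the ensemble uncertainty.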
Investigating expectation effects using multiple physiological measures
Siller, Alexander; Ambach, Wolfgang; Vaitl, Dieter
2015-01-01
The study aimed at experimentally investigating whether the human body can anticipate future events under improved methodological conditions. Previous studies have reported contradictory results for the phenomenon typically called presentiment. If the positive findings are accurate, they call into doubt our views about human perception, and if they are inaccurate, a plausible conventional explanation might be based on the experimental design of the previous studies, in which expectation due to item sequences was misinterpreted as presentiment. To address these points, we opted to collect several physiological variables, to test different randomization types, and to manipulate subjective significance individually. For the latter, we combined a mock crime scenario, in which participants had to steal specific items, with a concealed information test (CIT), in which the participants had to conceal their knowledge when interrogated about items they had stolen or not stolen. We measured electrodermal activity, respiration, finger pulse, heart rate (HR), and reaction times. The participants (n = 154) were assigned randomly to four different groups. Items presented in the CIT were either drawn with replacement (full) or without replacement (pseudo) and were either presented category-wise (cat) or regardless of categories (nocat). To understand how these item sequences influence expectation and modulate physiological reactions, we compared the groups with respect to effect sizes for stolen vs. not-stolen items. Group pseudo_cat yielded the highest effect sizes, and pseudo_nocat yielded the lowest. We could not find any evidence of presentiment but did find evidence of physiological correlates of expectation. Because the design differs fundamentally from those of previous studies, these findings do not allow for conclusions on the question of whether the expectation bias has been confounded with presentiment. PMID:26500600
Simons, C J P; Hartmann, J A; Kramer, I; Menne-Lothmann, C; Höhn, P; van Bemmel, A L; Myin-Germeys, I; Delespaul, P; van Os, J; Wichers, M
2015-11-01
Interventions based on the experience sampling method (ESM) are ideally suited to provide insight into personal, contextualized affective patterns in the flow of daily life. Recently, we showed that an ESM-intervention focusing on positive affect was associated with a decrease in symptoms in patients with depression. The aim of the present study was to examine whether ESM-intervention increased patient empowerment. Depressed out-patients (n=102) receiving psychopharmacological treatment had participated in a randomized controlled trial with three arms: (i) an experimental group receiving six weeks of ESM self-monitoring combined with weekly feedback sessions, (ii) a pseudo-experimental group participating in six weeks of ESM self-monitoring without feedback, and (iii) a control group (treatment as usual only). Patients were recruited in the Netherlands between January 2010 and February 2012. Self-report empowerment scores were obtained pre- and post-intervention. There was an effect of group×assessment period, indicating that the experimental (B=7.26, P=0.061, d=0.44, statistically imprecise) and pseudo-experimental group (B=11.19, P=0.003, d=0.76) increased more in reported empowerment compared to the control group. In the pseudo-experimental group, 29% of the participants showed a statistically reliable increase in empowerment score and 0% a reliable decrease, compared to 17% reliable increase and 21% reliable decrease in the control group. The experimental group showed 19% reliable increase and 4% reliable decrease. These findings tentatively suggest that self-monitoring to complement standard antidepressant treatment may increase patients' feelings of empowerment. Further research is necessary to investigate long-term empowering effects of self-monitoring in combination with person-tailored feedback. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Syngeneic AAV pseudo-particles potentiate gene transduction of AAV vectors
USDA-ARS?s Scientific Manuscript database
Gene delivery vectors based on adeno-associated virus (AAV) have emerged as a safe and efficient therapeutic platform for numerous diseases. Excess empty particles are generated as impurities during AAV vector production, but their effects on the clinical outcome of AAV gene therapy are unclear. Here,...
Secure self-calibrating quantum random-bit generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiorentino, M.; Santori, C.; Spillane, S. M.
2007-03-15
Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
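The proper orthogonal decomposition step named in the abstract can be sketched in a few lines: collect parameter-dependent snapshots, take an SVD, and truncate by the energy of the singular values (the snapshot matrix below is synthetic; in the paper it would hold mixed GMsFE basis functions at the greedily selected samples):

```python
# Sketch: POD extraction of a low-dimensional, parameter-independent basis.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 3, 400)
snapshots = np.column_stack([                      # one column per training parameter
    np.sin((1 + 0.2 * rng.random()) * x) + 0.01 * rng.normal(size=x.size)
    for _ in range(60)
])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1       # keep 99.99% of the energy
reduced_basis = U[:, :r]                           # spans the low-dimensional space
print(f"kept {r} of {snapshots.shape[1]} modes")
```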
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Basak, Jyotirmoy; Maitra, Subhamoy
2018-04-01
In the device-independent (DI) paradigm, the trust assumptions on the devices are removed and a CHSH test is performed to check the functionality of the devices toward certifying the security of the protocol. Existing DI protocols consider an infinite number of samples from a theoretical point of view, though this is not practically implementable. For a finite sample analysis of the existing DI protocols, we may also consider strategies for checking device independence other than the CHSH test. In this direction, here we present a comparative analysis between the CHSH test and the three-party pseudo-telepathy game for the quantum private query protocol in the DI paradigm that appeared in Maitra et al. (Phys Rev A 95:042344, 2017) very recently.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
Features of Ignition and Stable Combustion in Supersonic Combustor
NASA Astrophysics Data System (ADS)
Goldfeld, M.; Starov, A.; Timofeev, K.
2009-01-01
This paper describes the results of experimental investigations of a supersonic combustor with entrance Mach numbers from 2 to 4, static pressures from 0.8 to 2.5 bar, and total temperatures from 2000 K to 3000 K. Hydrogen and kerosene were used as fuels. The conditions under which self-ignition and intensive combustion of the fuel occur were identified. The position of the ignition region in the channel was determined, and features of flame propagation in the channel are presented. The possibility of ensuring efficient combustion of hydrogen and kerosene at high supersonic flow velocity at the combustor entrance, without special throttling and/or pseudo-shock introduction, was demonstrated. The applicability of existing criterion-based descriptions of self-ignition and flame-extinction conditions is analyzed in order to generalize the experimental results.
Teng, Sing Tung; Tan, Suh Nih; Lim, Hong Chang; Dao, Viet Ha; Bates, Stephen S; Leaw, Chui Pin
2016-12-01
Forty-eight isolates of Pseudo-nitzschia species were established from the Miri coast of Sarawak (Malaysian Borneo) and underwent TEM observation and molecular characterization. Ten species were found: P. abrensis, P. batesiana, P. fukuyoi, P. kodamae, P. lundholmiae, P. multistriata, P. pungens, P. subfraudulenta, as well as two additional new morphotypes, herein designated as P. bipertita sp. nov. and P. limii sp. nov. This is the first report of P. abrensis, P. batesiana, P. kodamae, P. fukuyoi, and P. lundholmiae in coastal waters of Malaysian Borneo. Pseudo-nitzschia bipertita differs from its congeners by the number of sectors that divide the poroids, densities of band striae, and its cingular band structure. Pseudo-nitzschia limii, a pseudo-cryptic species in the P. pseudodelicatissima complex sensu lato, is distinct by having wider proximal and distal mantles, a higher number of striae, and greater poroid height in the striae of the valvocopula. The species were further supported by the phylogenetic reconstructions of the nuclear-encoded large subunit ribosomal gene and the second internal transcribed spacer. Phylogenetically, P. bipertita clustered with its sister taxa (P. subpacifica + P. heimii); P. limii appears as a sister taxon to P. kodamae and P. hasleana in the ITS2 tree. Pairwise comparison of ITS2 transcripts with its closest relatives revealed the presence of both hemi- and compensatory base changes. Toxicity analysis showed detectable levels of domoic acid in P. abrensis, P. batesiana, P. lundholmiae, and P. subfraudulenta, but both new species tested below the detection limit. © 2016 Phycological Society of America.
Schäfer, Sarah K; Ihmig, Frank R; Lara H, Karen A; Neurohr, Frank; Kiefer, Stephan; Staginnus, Marlene; Lass-Hennemann, Johanna; Michael, Tanja
2018-03-16
Specific phobias are among the most common anxiety disorders. Exposure therapy is the treatment of choice for specific phobias. However, not all patients respond equally well to it. Hence, current research focuses on therapeutic add-ons to increase and consolidate the effects of exposure therapy. One potential therapeutic add-on is biofeedback to increase heart rate variability (HRV). A recent meta-analysis shows beneficial effects of HRV biofeedback interventions on stress and anxiety symptoms. Therefore, the purpose of the current trial is to evaluate the effects of HRV biofeedback, which is practiced before and utilized during exposure, in spider-fearful individuals. Further, this trial is the first to differentiate between the effects of an HRV biofeedback intervention and those of a low-load working memory (WM) task. Eighty spider-fearful individuals participate in the study. All participants receive a training session in which they practice two tasks (HRV biofeedback and a motor pseudo-biofeedback task, or two motor pseudo-biofeedback tasks). Afterwards, they practice both tasks at home for 6 days. One week later, during the exposure session, they watch 16 1-min spider video clips. Participants are divided into four groups: group 1 practices the HRV biofeedback and one motor pseudo-task before exposure and utilizes HRV biofeedback during exposure. Group 2 receives the same training, but continues the pseudo-biofeedback task during exposure. Group 3 practices two pseudo-biofeedback tasks and continues one of them during exposure. Group 4 practices two pseudo-biofeedback tasks and has no additional task during exposure. The primary outcome is fear of spiders (measured by the Fear of Spiders Questionnaire and the Behavioral Approach Test). Secondary outcomes are physiological measures based on electrodermal activity, electrocardiogram and respiration. This RCT is the first to investigate the effects of using a pre-trained HRV biofeedback during exposure in spider-fearful individuals. The study critically contrasts the effects of the biofeedback intervention with those of pseudo-tasks, which also require WM capacity but do not have a physiological basis. If HRV biofeedback is effective in reducing fear of spiders, it would represent an easy-to-use tool to improve exposure-therapy outcomes. Deutsches Register Klinischer Studien, DRKS00012278. Registered on 23 May 2017, amendment on 5 October 2017.
NASA Astrophysics Data System (ADS)
Azimzade, Youness; Mashaghi, Alireza
2017-12-01
Efficient search acts as a strong selective force in biological systems ranging from cellular populations to predator-prey systems. The search processes commonly involve finding a stationary or mobile target within a heterogeneously structured environment where obstacles limit migration. An open generic question is whether random or directionally biased motions, or a combination of both, provide an optimal search efficiency, and how that depends on the motility and density of targets and obstacles. To address this question, we develop a simple model in which a random walker searches for its targets in a heterogeneous medium, a bond-percolation square lattice, and we use the mean first-passage time ⟨T⟩ as an indicator of the average search time. Our analysis reveals a dual effect of directional bias on the minimum value of ⟨T⟩. For a homogeneous medium, directionality always decreases ⟨T⟩, and a purely directional migration (a ballistic motion) serves as the optimized strategy, while for a heterogeneous environment, we find that the optimized strategy involves a combination of directed and random migrations. The relative contribution of these modes is determined by the density of obstacles and the motility of targets. Randomness and target motility both add to the efficiency of the search. Our study reveals generic and simple rules that govern search efficiency. Our findings might find application in a number of areas including immunology, cell biology, ecology, and robotics.
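A minimal simulation of the kind of model described, assuming a bond-percolation square lattice and a walker that mixes directed steps toward the target with uniformly random steps (all parameters are illustrative):

```python
# Sketch: estimating <T> for a partially biased walker on a percolation lattice.
import numpy as np

rng = np.random.default_rng(2)
L, p_bond, bias = 25, 0.7, 0.5              # lattice size, open-bond fraction, bias
open_bond = rng.random((L, L, 4)) < p_bond  # bond from (x, y) in direction d
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # d = 0:+x, 1:-x, 2:+y, 3:-y
target = (L - 1, L - 1)

def first_passage(max_steps=50_000):
    x = y = 0
    for t in range(max_steps):
        if (x, y) == target:
            return t
        # with probability `bias`, step toward the target; otherwise step randomly
        d = rng.choice([0, 2]) if rng.random() < bias else int(rng.integers(4))
        if open_bond[x, y, d]:
            nx, ny = x + steps[d][0], y + steps[d][1]
            if 0 <= nx < L and 0 <= ny < L:
                x, y = nx, ny
    return np.nan                            # target not reached (disconnected cluster)

times = [first_passage() for _ in range(100)]
print("<T> =", np.nanmean(times))
```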
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
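The core of the CRN idea is simply to drive the nominal and perturbed simulations with the same random number stream so that their difference has low variance. A toy sketch on a birth-death process (not the paper's CRP coupling, and the model is illustrative):

```python
# Sketch: common-random-number finite-difference sensitivity with Gillespie's SSA.
import numpy as np

def ssa_final_state(k_birth, k_death=0.3, x0=10, t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)        # reusing the seed => common random numbers
    t, x = 0.0, x0
    while True:
        rates = (k_birth, k_death * x)
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

k, h, n = 2.0, 0.05, 2000
# same seed for the nominal and perturbed runs -> positively correlated paths
diffs = [(ssa_final_state(k + h, seed=i) - ssa_final_state(k, seed=i)) / h
         for i in range(n)]
print("d<X>/dk ~", np.mean(diffs), "+/-", np.std(diffs) / np.sqrt(n))
```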
An index-based algorithm for fast on-line query processing of latent semantic analysis
Zhang, Mingxi; Li, Pohan; Wang, Wei
2017-01-01
Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, the existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in terms of time cost and cannot efficiently answer query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA with the goal of efficiently searching for the documents similar to a given query. We rewrite the similarity equation of LSA in terms of an intermediate value called the partial similarity, which is stored in a purpose-built index called the partial index. To reduce the search space, we give an approximate form of the similarity equation, and then develop an efficient algorithm for building the partial index, which skips the partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo-document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes that correspond to non-zero entries in the pseudo-document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping the operations that make little contribution to similarity scores. Extensive experiments through comparison with LSA have been done, which demonstrate the efficiency and effectiveness of our proposed algorithm. PMID:28520747
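A rough sketch of the partial-index idea as described, with a per-dimension index of thresholded contributions and query-time accumulation (an assumed simplification of ILSA, not the paper's exact data structure):

```python
# Sketch: partial index over LSA document vectors and accumulation at query time.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
k_dims, n_docs, theta = 8, 1000, 0.05
doc_vecs = rng.normal(size=(n_docs, k_dims))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

partial_index = defaultdict(list)          # dimension -> [(doc id, partial similarity)]
for d in range(n_docs):
    for dim in range(k_dims):
        if abs(doc_vecs[d, dim]) >= theta: # skip contributions below theta
            partial_index[dim].append((d, doc_vecs[d, dim]))

def query(pseudo_doc, top=5):
    scores = defaultdict(float)
    for dim in np.flatnonzero(pseudo_doc): # only non-zero entries of the pseudo document
        for d, partial in partial_index[dim]:
            scores[d] += pseudo_doc[dim] * partial
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

q = np.zeros(k_dims); q[2], q[5] = 0.8, 0.6
print(query(q))
```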
Multi-factor challenge/response approach for remote biometric authentication
NASA Astrophysics Data System (ADS)
Al-Assam, Hisham; Jassim, Sabah A.
2011-06-01
Although biometric authentication is perceived to be more reliable than traditional authentication schemes, it becomes vulnerable to many attacks when it comes to remote authentication over open networks and raises serious privacy concerns. This paper proposes a biometric-based challenge-response approach to be used for remote authentication between two parties A and B over open networks. In the proposed approach, a remote authenticator system B (e.g. a bank) challenges its client A, who wants to authenticate himself/herself to the system, by sending a one-time public random challenge. The client A responds by employing the random challenge along with secret information obtained from a password and a token to produce a one-time cancellable representation of his freshly captured biometric sample. The one-time biometric representation, which is based on multiple factors, is then sent back to B for matching. Here, we argue that eavesdropping on the one-time random challenge and/or the resulting one-time biometric representation does not compromise the security of the system, and no information about the original biometric data is leaked. In addition to securing biometric templates, the proposed protocol offers a practical solution for the replay attack on biometric systems. Moreover, we propose a new scheme for generating password-based pseudo-random numbers/permutations to be used as a building block in the proposed approach. The proposed scheme is also designed to provide protection against repudiation. We illustrate the viability and effectiveness of the proposed approach by experimental results based on two biometric modalities: fingerprint and face biometrics.
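The paper's password-based pseudo-random permutation generator is not specified here, but a construction in the same spirit can be sketched with an HMAC-derived bit stream feeding a Fisher-Yates shuffle (the design below is an assumption for illustration, not the published scheme):

```python
# Sketch: a keyed permutation from password + one-time challenge (assumed design).
import hmac, hashlib

def prf_bytes(password: bytes, challenge: bytes):
    counter = 0
    while True:                                       # HMAC-SHA256 in counter mode
        block = hmac.new(password, challenge + counter.to_bytes(4, "big"),
                         hashlib.sha256).digest()
        yield from block
        counter += 1

def keyed_permutation(n: int, password: bytes, challenge: bytes):
    stream = prf_bytes(password, challenge)
    perm = list(range(n))
    for i in range(n - 1, 0, -1):                     # Fisher-Yates shuffle
        r = (next(stream) << 8 | next(stream)) % (i + 1)  # small modulo bias; sketch only
        perm[i], perm[r] = perm[r], perm[i]
    return perm

print(keyed_permutation(10, b"secret-password", b"one-time-challenge"))
```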
A perturbation method to the tent map based on Lyapunov exponent and its application
NASA Astrophysics Data System (ADS)
Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu
2015-10-01
Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic map, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves the good cryptographic properties of a pseudo-random sequence. The scheme thus weakens the strong correlations caused by finite precision and effectively compensates for the dynamical degradation of the digital chaotic system. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
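The feedback loop described can be sketched directly; the feedback range and the way the parameter is refreshed are illustrative assumptions, not the paper's exact constants:

```python
# Sketch: tent map whose parameter is perturbed via a Chebyshev post-processing map.
import numpy as np

def perturbed_tent(x0=0.3, mu=1.9999, n=10_000):
    x = x0
    for _ in range(n):
        x = mu / 2 * (1 - abs(2 * x - 1))       # tent map: mu*x or mu*(1-x)
        y = np.cos(4 * np.arccos(2 * x - 1))    # Chebyshev map (degree 4) on the output
        if 0.95 < abs(y) < 1.0:                 # output falls in the feedback range:
            mu = 1.99 + 0.0099 * abs(y)         # replace the tent parameter, staying chaotic
        yield x

bits = np.fromiter(perturbed_tent(), dtype=float) > 0.5
print("bit bias:", bits.mean())
```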
Adsorption of phenolic compound by aged-refuse.
Xiaoli, Chai; Youcai, Zhao
2006-09-01
The adsorption of phenol, 2-chlorophenol, 4-chlorophenol and 2,4-dichlorophenol by aged-refuse has been studied. Adsorption isotherms were determined for phenol, 2-chlorophenol, 4-chlorophenol and 2,4-dichlorophenol, and the data fit the Freundlich equation well. The chlorinated phenols are adsorbed more strongly than phenol, and the adsorption capacity clearly depends on the number and position of the chlorine substituents. The experimental data suggest that both partitioning and chemical adsorption are involved in the adsorption process. Pseudo-first-order and pseudo-second-order models were applied to investigate the kinetics of the adsorption, and the results fit the pseudo-second-order model. More than one step is involved in the adsorption process, and the overall rate appears to be controlled by the chemical reaction. The thermodynamic analysis indicates that the adsorption is spontaneous and endothermic.
Chen, Qing; Tang, Ke; Zhang, Xiaoyu; Chen, Panpan; Guo, Ying
2018-03-01
Filoviruses cause severe and fatal viral hemorrhagic fever in humans. Filovirus research has been extensive since the 2014 Ebola outbreak. Due to their high pathogenicity and mortality, live filoviruses require Biosafety Level-4 (BSL-4) facilities, a requirement that has restricted the development of anti-filovirus vaccines and drugs. An HIV-based pseudovirus cell infection assay is widely used for viral entry studies under BSL-2 conditions. Here, we successfully constructed nine in vitro pseudo-filovirus models covering all filovirus genera and three in vivo pseudo-filovirus-infection mouse models using Ebola virus, Marburg virus, and Lloviu virus as representative viruses. The pseudo-filovirus-infected mice showed bioluminescence in a dose-dependent manner. A bioluminescence peak in mice was reached on day 5 post-infection for Ebola virus and Marburg virus and on day 4 post-infection for Lloviu virus. Two known filovirus entry inhibitors, clomiphene and toremiphene, were used to validate the model. Collectively, our study shows that all genera of filoviruses can be well-pseudotyped and are infectious in vitro. The pseudo-filovirus-infection mouse models can be used for in vivo activity evaluation of anti-filovirus drugs. This sequential in vitro and in vivo evaluation system for filovirus entry inhibitors provides a secure and efficient platform for screening and assessing anti-filovirus agents in BSL-2 facilities.
Digital-Analog Hybrid Scheme and Its Application to Chaotic Random Number Generators
NASA Astrophysics Data System (ADS)
Yuan, Zeshi; Li, Hongtao; Miao, Yunchi; Hu, Wen; Zhu, Xiaohua
2017-12-01
Practical random number generation (RNG) circuits are typically achieved with analog devices or digital approaches. Digital-based techniques, which use field-programmable gate arrays (FPGAs), graphics processing units (GPUs), etc., usually perform better than analog methods as they are programmable, efficient and robust. However, digital realizations suffer from the effect of finite precision. Accordingly, the generated random numbers (RNs) are actually periodic rather than truly random. To tackle this limitation, in this paper we propose a novel digital-analog hybrid scheme that employs the digital unit as the main body and a minimum of analog devices to generate physical RNs. Moreover, the possibility of realizing the proposed scheme with only one memory element is discussed. Without loss of generality, we use the capacitor and the memristor along with an FPGA to construct the proposed hybrid system, and a chaotic true random number generator (TRNG) circuit is realized, producing physical RNs at a throughput of Gbit/s scale. These RNs successfully pass all the tests in the NIST SP800-22 package, confirming the significance of the scheme in practical applications. In addition, the use of this new scheme is not restricted to RNGs; it also provides a strategy for addressing the effect of finite precision in other digital systems.
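The finite-precision problem the hybrid scheme targets is easy to exhibit: iterate a chaotic map in fixed point and measure the cycle it falls into. A sketch with Floyd's cycle detection (bit width kept small so the cycle appears quickly; purely illustrative):

```python
# Sketch: period of a fixed-point logistic map, the degradation a hybrid TRNG avoids.
def logistic_fixed(x, bits=16):
    scale = (1 << bits) - 1
    return (4 * x * (scale - x)) // scale      # x_{n+1} = 4 x (1 - x), fixed point

def cycle_length(seed=12345, bits=16):
    slow, fast = seed, logistic_fixed(seed, bits)
    while slow != fast:                        # Floyd's tortoise and hare
        slow = logistic_fixed(slow, bits)
        fast = logistic_fixed(logistic_fixed(fast, bits), bits)
    length, probe = 1, logistic_fixed(slow, bits)
    while probe != slow:                       # walk once around the cycle
        probe = logistic_fixed(probe, bits)
        length += 1
    return length

print("cycle length at 16-bit precision:", cycle_length())
```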
Li, Qingfeng; Tsui, Amy O
2016-06-01
This study analyzes the relationships between maternal risk factors present at the time of daughters' births (namely, young mother, high parity, and short preceding birth interval) and the daughters' subsequent adult developmental, reproductive, and socioeconomic outcomes. Pseudo-cohorts are constructed using female respondent data from 189 cross-sectional rounds of Demographic and Health Surveys conducted in 50 developing countries between 1986 and 2013. Generalized linear models are estimated to test the relationships and calculate cohort-level outcome proportions with the systematic elimination of the three maternal risk factors. The simulation exercise for the full sample of 2,546 pseudo-cohorts shows that the combined elimination of risk exposures is associated with lower mean proportions of adult daughters experiencing child mortality, having a small infant at birth, and having a low body mass index. Among sub-Saharan African cohorts, the estimated changes are larger, particularly for years of schooling. The pseudo-cohort approach can enable longitudinal testing of life course hypotheses using large-scale, standardized, repeated cross-sectional data and with considerable resource efficiency.
Gau, Jana; Prévost, Martine; Van Antwerpen, Pierre; Sarosi, Menyhárt-Botond; Rodewald, Steffen; Arnhold, Jürgen; Flemmig, Jörg
2017-05-26
Several hydrolyzable tannins, proanthocyanidins, tannin derivatives, and a tannin-rich plant extract of tormentil rhizome were tested for their potential to regenerate the (pseudo-)halogenating activity, i.e., the oxidation of SCN⁻ to hypothiocyanite (⁻OSCN), of lactoperoxidase (LPO) after hydrogen peroxide-mediated enzyme inactivation. Measurements were performed using 5-thio-2-nitrobenzoic acid in the presence of tannins and related substances in order to determine kinetic parameters and to trace the LPO-mediated ⁻OSCN formation. The results were combined with docking studies and molecular orbital analysis. The ⁻OSCN-regenerating effect of tannin derivatives relates well to their binding properties toward LPO as well as their occupied molecular orbitals. Especially simple compounds like ellagic acid or methyl gallate, and the complex plant extract, were found to be potent enzyme-regenerating compounds. As the (pseudo-)halogenating activity of LPO contributes to the maintenance of oral bacterial homeostasis, the results provide new insights into the antibacterial mode of action of tannins and related compounds. Furthermore, chemical properties of the tested compounds that are important for efficient enzyme-substrate interaction and regeneration of ⁻OSCN formation by LPO were identified.
A novel solution for hydroxylated PAHs removal by oxidative coupling reaction using Mn oxide.
Kang, Ki-Hoon; Lim, Dong-Min; Shin, Hyun-Sang
2008-01-01
In this study, the removal of 1-naphthol by oxidative-coupling reaction using birnessite, one of the natural Mn oxides present in soil, was investigated under various experimental conditions (reaction time, Mn oxide loading, pH). The removal efficiency of 1-naphthol by birnessite was high under all the experimental conditions, and UV-visible and mass spectrometric analyses of the supernatant after reaction confirmed that the reaction products were oligomers formed by the oxidative-coupling reaction. Pseudo-first-order rate constants, k, for the oxidative transformation of 1-naphthol by birnessite were derived from the kinetic experiments under various birnessite loadings, and using the observed pseudo-first-order rate constants with respect to birnessite loading, the surface-area-normalized specific rate constant, k(surf), was determined to be 9.3 × 10⁻⁴ L/(m²·min) for 1-naphthol. In addition, the oxidative transformation of 1-naphthol was found to be dependent on solution pH, and the pseudo-first-order rate constants increased from 0.129 at pH 10 to 0.187 at pH 4. (c) IWA Publishing 2008.
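For reference, extracting a pseudo-first-order rate constant is a one-line linear fit of ln(C/C0) against t; the data below are synthetic points built around the abstract's pH 4 value:

```python
# Sketch: pseudo-first-order fit, ln(C/C0) = -k t.
import numpy as np

rng = np.random.default_rng(4)
t = np.array([0, 2, 5, 10, 20, 30, 45, 60.0])                  # min
C = np.exp(-0.187 * t) * (1 + 0.02 * rng.normal(size=t.size))  # synthetic decay data

k = -np.polyfit(t, np.log(C / C[0]), 1)[0]
print(f"k = {k:.3f} per min")                                  # ~0.187, the pH 4 value above
```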
Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics
NASA Astrophysics Data System (ADS)
d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.
2018-05-01
Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
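The quantities at stake are easy to monitor numerically. The sketch below integrates undamped Landau-Lifshitz precession dm/dt = -m × h_eff with plain RK4, which is not one of the paper's pseudo-symplectic schemes, only a baseline whose drift in |m| and energy illustrates what those schemes are built to suppress:

```python
# Sketch: drift of the two invariants under a generic (non-conservative) RK4.
import numpy as np

h_eff = np.array([0.0, 0.0, 1.0])                  # constant effective field
rhs = lambda m: -np.cross(m, h_eff)                # conservative LL right-hand side

def rk4_step(m, dt):
    k1 = rhs(m)
    k2 = rhs(m + dt / 2 * k1)
    k3 = rhs(m + dt / 2 * k2)
    k4 = rhs(m + dt * k3)
    return m + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

m, dt = np.array([1.0, 0.0, 0.0]), 0.05
e0 = -m @ h_eff                                    # initial free energy (Zeeman only)
for _ in range(20_000):
    m = rk4_step(m, dt)
print("|m| drift:   ", abs(np.linalg.norm(m) - 1.0))
print("energy drift:", abs(-m @ h_eff - e0))
```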
Jaffa, Miran A; Gebregziabher, Mulugeta; Jaffa, Ayad A
2015-06-14
Renal transplant patients are mandated to have continuous assessment of their kidney function over time to monitor disease progression, determined by changes in blood urea nitrogen (BUN), serum creatinine (Cr), and estimated glomerular filtration rate (eGFR). Multivariate analysis of these outcomes that aims at identifying the differential factors that affect disease progression is of great clinical significance. Thus our study aims at demonstrating the application of different joint modeling approaches with random coefficients on a cohort of renal transplant patients and presenting a comparison of their performance through a pseudo-simulation study. The objective of this comparison is to identify the model with the best performance and to determine whether accuracy compensates for complexity in the different multivariate joint models. We propose a novel application of multivariate generalized linear mixed models (mGLMM) to analyze multiple longitudinal kidney function outcomes collected over 3 years on a cohort of 110 renal transplantation patients. The correlated outcomes BUN, Cr, and eGFR and the effects of various covariates such as patient gender, age and race on these markers were determined holistically using different mGLMMs. The performance of the various mGLMMs, which encompass shared random intercept (SHRI), shared random intercept and slope (SHRIS), separate random intercept (SPRI) and separate random intercept and slope (SPRIS) models, was assessed to identify the one with the best fit and most accurate estimates. A bootstrap pseudo-simulation study was conducted to gauge the tradeoff between the complexity and accuracy of the models. Accuracy was determined using two measures: the mean of the differences between the estimates of the bootstrapped datasets and the true beta obtained from the application of each model on the renal dataset, and the mean of the square of these differences. The results showed that SPRI provided the most accurate estimates and did not exhibit any computational or convergence problems.
True randomness from an incoherent source
NASA Astrophysics Data System (ADS)
Qi, Bing
2017-11-01
Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
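Toeplitz hashing itself is compact enough to sketch: an m × n binary Toeplitz matrix, defined by a single diagonal vector, multiplies n-bit raw blocks over GF(2). Sizes below are illustrative; in the paper the ratio m/n would be set by the measured min-entropy (the 5.12 bits per sample quoted above):

```python
# Sketch: Toeplitz-hashing extractor applied to thresholded Gaussian raw data.
import numpy as np

rng = np.random.default_rng(5)
n, m = 256, 96                                       # raw bits in, extracted bits out
raw = (rng.normal(size=4096) > 0).astype(np.int64)   # stand-in for digitized quadratures

diags = rng.integers(0, 2, size=n + m - 1)           # defines first column + first row
i, j = np.indices((m, n))
T = diags[i - j + n - 1]                             # T[i, j] depends only on i - j

blocks = raw[: raw.size // n * n].reshape(-1, n)
extracted = ((blocks @ T.T) % 2).ravel()             # matrix product over GF(2)
print("extracted bits:", extracted.size, "bias:", extracted.mean())
```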
Pseudo-Fovea Formation After Gene Therapy for RPE65-LCA
Cideciyan, Artur V.; Aguirre, Geoffrey K.; Jacobson, Samuel G.; Butt, Omar H.; Schwartz, Sharon B.; Swider, Malgorzata; Roman, Alejandro J.; Sadigh, Sam; Hauswirth, William W.
2015-01-01
Purpose. The purpose of this study was to evaluate fixation location and oculomotor characteristics of 15 patients with Leber congenital amaurosis (LCA) caused by RPE65 mutations (RPE65-LCA) who underwent retinal gene therapy. Methods. Eye movements were quantified under infrared imaging of the retina while the subject fixated on a stationary target. In a subset of patients, letter recognition under retinal imaging was performed. Cortical responses to visual stimulation were measured using functional magnetic resonance imaging (fMRI) in two patients before and after therapy. Results. All patients were able to fixate on a 1° diameter visible target in the dark. The preferred retinal locus of fixation was either at the anatomical fovea or at an extrafoveal locus. There was a wide range of oculomotor abnormalities. Natural history showed little change in oculomotor abnormalities if target illuminance was increased to maintain target visibility as the disease progressed. Eleven of 15 study eyes treated with gene therapy showed no differences from baseline fixation locations or instability over an average follow-up of 3.5 years. Four of 15 eyes developed new pseudo-foveas in the treated retinal regions 9 to 12 months after therapy that persisted for up to 6 years; patients used their pseudo-foveas for letter identification. fMRI studies demonstrated that preservation of light sensitivity was restricted to the cortical projection zone of the pseudo-foveas. Conclusions. The slow emergence of pseudo-foveas many months after the initial increases in light sensitivity points to a substantial plasticity of the adult visual system and a complex interaction between it and the progression of the underlying retinal disease. The visual significance of pseudo-foveas suggests careful consideration of treatment zones for future gene therapy trials. (ClinicalTrials.gov number, NCT00481546.) PMID:25537204
Topologically trivial and nontrivial edge bands in graphene induced by irradiation
NASA Astrophysics Data System (ADS)
Yang, Mou; Cai, Zhi-Jun; Wang, Rui-Qiang; Bai, Yan-Kui
2016-08-01
We propose a minimal model to describe the Floquet band structure of two-dimensional materials with light-induced resonant inter-band transitions. We apply it to graphene to study the band features caused by light irradiation. Linearly polarized light induces pseudo-gaps (gaps that are functions of the wavevector), and circularly polarized light causes real gaps in the quasi-energy spectrum. If the polarization of the light is linear and along the longitudinal direction of zigzag ribbons, flat edge bands appear in the pseudo-gaps, and if it is along the lateral direction of armchair ribbons, curved edge bands can be found. For the circularly polarized cases, edge bands arise and intersect in the gaps of both types of ribbons. The edge bands induced by circularly polarized light are helical, and those induced by linearly polarized light are topologically trivial. The Chern number of the Floquet band, which reflects the number of pairs of helical edge bands in graphene ribbons, can be reduced to the winding number at resonance.
Lin, Jiajiang; Sun, Mengqiang; Liu, Xinwen; Chen, Zuliang
2017-10-01
Kaolin-supported nanoscale zero-valent iron (K-nZVI) is synthesized and applied as a Fenton-like oxidation catalyst to degrade a model azo dye, Direct Black G (DBG). The characterization of K-nZVI by high-resolution transmission electron microscopy (HRTEM), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS) and X-ray diffraction (XRD) shows that kaolin as a support material not only reduces the aggregation of nanoscale zero-valent iron (nZVI) but also facilitates the Fenton-like oxidation by increasing the local concentration of DBG in the vicinity of the nZVI. Pseudo-first-order and pseudo-second-order kinetic models are employed to describe the adsorption and degradation of DBG using K-nZVI as the catalyst. A better fit with the pseudo-second-order model for the adsorption process, and equally good fits with the pseudo-first-order and pseudo-second-order models for the degradation process, are observed; the adsorption process is found to be the rate-limiting step for the overall reaction. The adsorption, evaluated by isotherms and thermodynamic parameters, is a spontaneous and endothermic process. High-performance liquid chromatography-mass spectrometry (LC-MS) analysis was used to identify the degradation products of DBG. A removal mechanism based on adsorption and degradation is proposed, including (i) prompt adsorption of DBG onto the K-nZVI surface, and (ii) oxidation of DBG by hydroxyl radicals at the K-nZVI surface. The application of K-nZVI to treat real wastewater containing azo dyes shows excellent degradation efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic
NASA Astrophysics Data System (ADS)
Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie
2018-02-01
As one of the typical wide-band-gap semiconductor materials, CdZnTe has high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The signal generated by a CdZnTe detector needs to be transformed into a pseudo-Gaussian pulse with a small pulse width to remove noise and improve the energy resolution in the downstream nuclear spectrometry data-acquisition system. In this paper, a multi-stage pseudo-Gaussian shaping filter is investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the characteristics of the pseudo-Gaussian shaping filter in simulations. Based on the simulation results, the falling time of the output pulse decreased and a faster response time was obtained as the shaping time τs-k was decreased, and the undershoot was removed when the ratio of the input resistors was set to 1 to 2.5. Moreover, a two-stage Sallen-Key Gaussian shaping filter was designed and fabricated using a low-noise voltage-feedback operational amplifier, the LMH6628. A detection experiment platform was built using the precision pulse generator CAKE831 to produce an imitated radiation pulse equivalent to the signal of the CdZnTe semiconductor detector. Experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum pulse width (FWHM) of 200 ns, and the output pulse of each stage agrees well with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce event loss caused by pile-up in the CdZnTe semiconductor detector and effectively improve the energy resolution.
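The pseudo-Gaussian (semi-Gaussian) family that the two Sallen-Key stages approximate can be simulated as a CR differentiator followed by n RC integrators; component values below are illustrative, not the paper's:

```python
# Sketch: CR-(RC)^n semi-Gaussian shaper response to a long-tailed detector pulse.
import numpy as np
from scipy import signal

tau, n_int = 50e-9, 4                       # shaping time constant, integrator count
num = [tau, 0]                              # CR stage: s*tau / (1 + s*tau)
den = [tau, 1]
for _ in range(n_int):                      # cascade RC stages: 1 / (1 + s*tau)
    den = np.polymul(den, [tau, 1])

t = np.linspace(0, 1e-6, 2000)
pulse_in = np.exp(-t / 5e-6)                # slow exponential, like a preamp output
_, y, _ = signal.lsim((num, den), pulse_in, t)

above = t[y > y.max() / 2]
print(f"output FWHM ~ {(above[-1] - above[0]) * 1e9:.0f} ns")
```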
An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.
Yoon, Yourim; Kim, Yong-Hyuk
2013-10-01
Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks, since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way to deploy sensors. We found that, from a mathematical viewpoint, the phenotype space of the problem is a quotient space of the genotype space. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed significant performance improvement in quality.
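The Monte Carlo fitness trick in the abstract, fewer samples early and more later, is simple to sketch on its own (the GA loop and the normalization step are omitted; all numbers are illustrative):

```python
# Sketch: coverage fitness by Monte Carlo with a growing sample budget.
import numpy as np

rng = np.random.default_rng(6)
field, r_sense, n_sensors = 100.0, 12.0, 15

def coverage(positions, n_samples):
    pts = rng.random((n_samples, 2)) * field          # random points in the field
    d2 = ((pts[:, None, :] - positions[None]) ** 2).sum(-1)
    return (d2.min(axis=1) <= r_sense**2).mean()      # fraction of covered points

deployment = rng.random((n_sensors, 2)) * field       # one candidate individual
for gen, n_samples in enumerate([200, 400, 800, 1600]):
    print(f"generation {gen}: coverage ~ {coverage(deployment, n_samples):.3f}")
```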
NASA Astrophysics Data System (ADS)
Shanmugalingam, A.; Murugesan, A.
2018-05-01
This study reports the adsorption of Cr(VI) ions from aqueous solution using activated carbon prepared from stems of Leucas aspera. A microwave power of 850 W, a radiation time of 12 min, a 60% ZnCl2 solution and an impregnation time of 24 h were the optimal parameters for preparing effective activated carbon, designated MWLAC (microwave-assisted zinc chloride activated Leucas aspera carbon). The effects of adsorbent dose, agitation time, initial Cr(VI) ion concentration, solution pH and temperature on the removal of Cr(VI) ions from aqueous solution were studied in batch mode. The equilibrium adsorption data were analyzed with the Langmuir, Freundlich, Tempkin and D-R isotherm models, and the isotherms were ranked by their R² values. The pseudo-second-order kinetic model best fitted the Cr(VI) adsorption data. Thermodynamic parameters were also determined, and the results suggest that the adsorption process is spontaneous, endothermic and proceeds with increased randomness.
Influence of Prolonged Spaceflight on Heart Rate and Oxygen Uptake Kinetics
NASA Astrophysics Data System (ADS)
Hoffmann, U.; Moore, A.; Drescher, U.
2013-02-01
During prolonged spaceflight, physical training is used to minimize cardiovascular deconditioning. Measurement of the kinetics of cardiorespiratory parameters, in particular the kinetic analysis of heart rate and of respiratory and muscular oxygen uptake, provides useful information with regard to the efficiency and regulation of the cardiorespiratory system. Practically, oxygen uptake kinetics can only be measured at the lung (V'O2 resp). The dynamics of V'O2 resp, however, are not identical to the dynamics at the site of interest: skeletal muscle. Eight astronauts were tested pre- and post-flight using pseudo-random binary workload changes between 30 and 80 W. The kinetic responses of heart rate and of respiratory and muscular V'O2 were estimated using time-series analysis. Statistical analysis revealed that the kinetics of respiratory and muscular V'O2 are slower post-flight than pre-flight. Heart rate kinetics seem not to be influenced by flight. Other factors (e.g. the astronauts' exercise training) may impact these parameters and are an area for future studies.
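A pseudo-random binary sequence of the kind used for such workload protocols is typically produced by a maximal-length LFSR; the register length, taps and time unit below are assumptions for illustration, not the study's protocol:

```python
# Sketch: 6-bit maximal-length LFSR driving a 30 W / 80 W workload sequence.
def prbs(n_bits=63, state=0b101011):
    for _ in range(n_bits):
        yield state & 1
        fb = ((state >> 5) ^ (state >> 4)) & 1     # taps 6 and 5: x^6 + x^5 + 1
        state = (state >> 1) | (fb << 5)

workload = [80 if b else 30 for b in prbs()]       # watts per time unit
print(workload[:20])
```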
Combined fabrication technique for high-precision aspheric optical windows
NASA Astrophysics Data System (ADS)
Hu, Hao; Song, Ci; Xie, Xuhui
2016-07-01
Specifications for optical components are becoming more and more stringent with the performance improvement of modern optical systems. These strict requirements involve not only low-spatial-frequency surface accuracy and mid-and-high-spatial-frequency surface errors, but also surface smoothness. This presentation focuses on a fabrication process for square aspheric windows that combines accurate grinding, magnetorheological finishing (MRF) and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects left after accurate grinding, the deterministic polishing method MRF, with high convergence and a stable material removal rate, is applied. Then SP with a pseudo-random path is adopted to eliminate the mid-and-high-spatial-frequency surface ripples and high-slope errors that are a weakness of MRF. Additionally, the coordinate measurement method and interferometry are combined at different phases of the process. An acid-etching method and ion beam figuring (IBF) are also investigated for observing and reducing subsurface defects. Actual fabrication results indicate that the combined fabrication technique can lead to high machining efficiency in manufacturing high-precision, high-quality aspheric optical windows.
Kinetic study of acetaminophen degradation by visible light photocatalysis.
Gotostos, Mary Jane N; Su, Chia-Chi; De Luna, Mark Daniel G; Lu, Ming-Chun
2014-01-01
In this work, a novel photocatalyst, K3[Fe(CN)6]/TiO2, synthesized via a simple sol-gel method, was utilized to degrade acetaminophen (ACT) under visible light with the use of blue and green LED lights. The parameters affecting the photocatalytic degradation of ACT (medium pH, initial reactant concentration, catalyst concentration, temperature, and number of blue LED lights) were also investigated. The experimental results showed that, compared to the commercially available Degussa P-25 (DP-25) photocatalyst, K3[Fe(CN)6]/TiO2 gave a higher degradation efficiency and rate constant (k_app) for ACT. The degradation efficiency and k_app decreased with increasing initial ACT concentration and temperature, but increased with an increased number of blue LED lamps. Additionally, k_app increased as the initial pH was increased from 5.6 to 6.9, but decreased at a highly alkaline condition (pH 8.3). Furthermore, the degradation efficiency and k_app of ACT increased as the K3[Fe(CN)6]/TiO2 loading was increased to 1 g L(-1), but decreased and eventually leveled off at photocatalyst loadings above this value. Photocatalytic degradation of ACT in the K3[Fe(CN)6]/TiO2 catalyst system follows pseudo-first-order kinetics. The Langmuir-Hinshelwood equation was also satisfactorily used to model the degradation of ACT in the K3[Fe(CN)6]/TiO2 catalyst system, indicated by a satisfactory linear correlation between 1/k_app and C_0, with k_ini = 6.54 × 10⁻⁴ mM/min and K_ACT = 17.27 mM⁻¹.
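The Langmuir-Hinshelwood linearization the abstract relies on, 1/k_app = C_0/k_ini + 1/(k_ini·K_ACT), recovers both constants from the slope and intercept; the sketch uses the abstract's values to generate synthetic points and fit them back:

```python
# Sketch: recovering k_ini and K_ACT from the 1/k_app vs C_0 line.
import numpy as np

k_ini, K_ACT = 6.54e-4, 17.27                      # mM/min and 1/mM, from the abstract
C0 = np.array([0.033, 0.066, 0.132, 0.264])        # synthetic initial concentrations, mM
k_app = 1 / (C0 / k_ini + 1 / (k_ini * K_ACT))     # pseudo-first-order constants

slope, intercept = np.polyfit(C0, 1 / k_app, 1)
print("k_ini =", 1 / slope, "K_ACT =", slope / intercept)
```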
Random phase encoding for optical security
NASA Astrophysics Data System (ADS)
Wang, RuiKang K.; Watson, Ian A.; Chatwin, Christopher R.
1996-09-01
A new optical encoding method for security applications is proposed. The encoded image (encrypted into the security product) is merely a statistically random phase image generated by a computer random number generator; it contains no information from the reference pattern (stored for verification) or the frequency-plane filter (a phase-only function for decoding). The phase function in the frequency plane is obtained using a modified phase retrieval algorithm. The proposed method uses two phase-only functions (images), at the input and frequency planes of the optical processor, leading to maximum optical efficiency. Computer simulation shows that the proposed method is robust for optical security applications.
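The "modified phase retrieval algorithm" is not specified in the abstract. The sketch below illustrates the underlying idea with a plain Gerchberg-Saxton iteration that searches for a phase-only frequency-plane filter mapping a fixed random-phase input to a target intensity; the array size, iteration count, and target pattern are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
u_in = np.exp(2j * np.pi * rng.random((N, N)))       # random phase-only input (the "encoded image")
target = np.zeros((N, N)); target[24:40, 24:40] = 1  # hypothetical reference pattern for verification

U1 = np.fft.fft2(u_in)
phi = np.zeros((N, N))                               # filter phase, refined iteratively
for _ in range(200):
    out = np.fft.ifft2(U1 * np.exp(1j * phi))        # propagate input through the filter
    out = target * np.exp(1j * np.angle(out))        # impose the reference amplitude at the output plane
    phi = np.angle(np.fft.fft2(out)) - np.angle(U1)  # phase-only update of the frequency-plane filter

err = np.linalg.norm(np.abs(np.fft.ifft2(U1 * np.exp(1j * phi))) - target)
print(f"residual amplitude error: {err:.3f}")
```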
Hiding message into DNA sequence through DNA coding and chaotic maps.
Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman
2014-09-01
The paper proposes an improved reversible substitution method to hide data in a deoxyribonucleic acid (DNA) sequence. Four measures are taken to enhance robustness and enlarge the hiding capacity: encoding the secret message by DNA coding, encrypting it with a pseudo-random sequence, generating the relative hiding locations with a piecewise linear chaotic map, and embedding the encoded and encrypted message into a randomly selected DNA sequence using the complementary rule. The key space and the hiding capacity are analyzed. Experimental results indicate that the proposed method performs better than competing methods with respect to robustness and capacity.
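As an illustration of one of the four measures, the sketch below derives hiding locations from a piecewise linear chaotic map. The map is the standard PWLCM; the key values, the helper name hiding_locations, and the index-mapping rule are hypothetical, not taken from the paper.

```python
def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map with parameter p in (0, 0.5)."""
    if x >= 0.5:
        x = 1.0 - x                 # the map is symmetric about x = 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def hiding_locations(x0, p, count, L):
    """Map the chaotic orbit started from key (x0, p) to indices in a sequence of length L."""
    x, locs = x0, []
    for _ in range(count):
        x = pwlcm(x, p)
        locs.append(int(x * L) % L)  # illustrative index mapping; collisions are not handled here
    return locs

print(hiding_locations(x0=0.3456, p=0.271, count=8, L=10_000))
```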
Sparse sampling and reconstruction for electron and scanning probe microscope imaging
Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.
2015-07-28
Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
Auta, M; Hameed, B H
2013-05-01
A renewable waste tea activated carbon (WTAC) was combined with chitosan to form a composite adsorbent for wastewater treatment. The adsorptive capacities of crosslinked chitosan beads (CCB) and their composite (WTAC-CCB) for methylene blue dye (MB) and Acid blue 29 (AB29) were evaluated through batch and fixed-bed studies. The Langmuir, Freundlich and Temkin adsorption isotherms were tested, and the experimental data were fitted best by the Langmuir model and least well by the Freundlich model; the suitability of fit was judged by the Chi-square (χ²) and Marquardt's percent standard deviation error functions. Judging by the values of χ², the pseudo-second-order reaction model described the adsorption of MB/AB29 on both adsorbents better than the pseudo-first-order kinetic model. After five adsorption-desorption cycles, WTAC-CCB retained more than 50% of its adsorption efficiency while CCB retained less than 20%. The results of this study reveal that the WTAC-CCB composite is a promising adsorbent for the treatment of anionic and cationic dyes in effluent wastewaters.
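For readers unfamiliar with the kinetic models compared here, the following sketch fits the linearized pseudo-second-order model, t/q = 1/(k2·qe²) + t/qe, to synthetic uptake data; the parameter values are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical uptake data q(t) (mg/g) at times t (min), generated from the PSO model itself.
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
qe_true, k2_true = 120.0, 5e-4
q = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)  # integrated PSO uptake curve

# Linearized pseudo-second-order model: t/q = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                       # equilibrium uptake (mg/g)
k2 = 1.0 / (intercept * qe**2)         # PSO rate constant (g/(mg*min))
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.2e} g/(mg*min)")
```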
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
2016-12-01
...multiple dimensions (20). Hu et al. employed pseudo-random phase-encoding blips during the EPSI readout to create nonuniform sampling along the spatial... [contents fragments: resolved MRSI with Nonuniform Undersampling and Compressed Sensing; 30.5 Prior-knowledge Fitting for Metabolite Quantitation; 30.6 Future Directions] ...NONUNIFORM UNDERSAMPLING AND COMPRESSED SENSING: Nonuniform undersampling (NUS) of k-space and subsequent reconstruction using compressed sensing (CS)...
Ballistic Missile Defense Glossary Version 3.0.
1997-06-01
The suppression of background noise for the improvement of an object signal. Battlefield Area Evaluation (USA term). Best and Final Offer...field of the lens are focused. An FPA is a matrix of photon-sensitive detectors which, when combined with low-noise preamplifiers, provides image data...orbital planes with an orbit period of 12 hours at 10,900 nautical miles altitude. Each satellite transmits three L-band, pseudo-random noise-coded
Mining and Querying Multimedia Data
2011-09-29
able to capture more subtle spatial variations such as repetitiveness. Local feature descriptors such as SIFT [74] and SURF [12] have also been widely...empirically set to s = 90%, r = 50%, K = 20, where small variations lead to little perturbation of the output. The pseudo-code of the algorithm is...by constructing a three-layer graph based on clustering outputs, and executing a slight variation of random walk with restart algorithm. It provided
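The snippet mentions a "slight variation of random walk with restart"; the following is a minimal textbook random-walk-with-restart scorer (power iteration on a column-normalized adjacency matrix), not the report's variation. The toy graph and restart probability are assumptions.

```python
import numpy as np

def random_walk_with_restart(W, seed_idx, c=0.15, tol=1e-8):
    """Stationary relevance scores p = c*e + (1-c)*Wc @ p, via power iteration."""
    Wc = W / W.sum(axis=0, keepdims=True)       # column-normalize the adjacency matrix
    e = np.zeros(W.shape[0]); e[seed_idx] = 1.0 # restart distribution at the query node
    p = e.copy()
    while True:
        p_next = c * e + (1 - c) * Wc @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 5-node graph; scores rank nodes by proximity to query node 0.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(np.round(random_walk_with_restart(A, seed_idx=0), 3))
```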
Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.; ...
2015-12-09
We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine tune the instrument's focal distance. Finally, our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
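Binary pseudo-random sequences of the kind used for such MTF calibration are commonly built from maximal-length sequences, whose periodic power spectrum is flat away from DC. A sketch follows, assuming an m-sequence construction (the paper's exact BPR recipe is not given in the abstract):

```python
import numpy as np

def mls(taps, nbits):
    """Maximal-length sequence from a Fibonacci LFSR; taps are 1-indexed bit positions."""
    state, seq = 1, []
    for _ in range((1 << nbits) - 1):       # an m-sequence has period 2^nbits - 1
        seq.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return np.array(seq)

# x^7 + x^6 + 1 is a primitive polynomial, giving a 127-bit m-sequence.
s = 2 * mls(taps=(7, 6), nbits=7) - 1       # map {0,1} -> {-1,+1}
psd = np.abs(np.fft.fft(s))**2 / s.size
print(psd[1:].std() / psd[1:].mean())       # ~0: flat (white-noise-like) spectrum away from DC
```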
Drescher, U; Koschate, J; Schiffer, T; Schneider, S; Hoffmann, U
2017-06-01
The aim of the study was to compare the kinetic responses of heart rate (HR), pulmonary (V˙O2 pulm) and predicted muscular (V˙O2 musc) oxygen uptake between two different pseudo-random binary sequence (PRBS) work rate (WR) amplitudes, both below the anaerobic threshold. Eight healthy individuals performed two PRBS WR protocols implying changes between 30 W and 80 W and between 30 W and 110 W. HR and V˙O2 pulm were measured beat-to-beat and breath-by-breath, respectively. V˙O2 musc was estimated applying the approach of Hoffmann et al. (Eur J Appl Physiol 113: 1745-1754, 2013), considering a circulatory model for venous return and cross-correlation functions (CCF) for the kinetics analysis. HR and V˙O2 musc kinetics seem to be independent of WR intensity (p>0.05). V˙O2 pulm kinetics show prominent differences in the lag of the CCF maximum (39±9 s vs. 31±4 s; p<0.05). A mean difference of 14 W between the PRBS WR amplitudes impacts venous return significantly, while HR and V˙O2 musc kinetics remain unchanged.
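The kinetics summary used here, the lag of the cross-correlation-function maximum between the PRBS work rate and a physiological response, can be illustrated as follows. The work-rate timing, time constant, and noise level are invented, and the random step train below is a stand-in rather than a true m-sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# PRBS-like work-rate signal switching between 30 W and 80 W (1-s samples, 15-s steps).
wr = 30 + 50 * rng.integers(0, 2, 300).repeat(15)

# Toy first-order "muscular VO2" response with an assumed 25-s time constant plus noise.
tau, dt = 25.0, 1.0
resp = np.zeros(wr.size)
for i in range(1, wr.size):
    resp[i] = resp[i-1] + dt / tau * (wr[i-1] - resp[i-1])
resp += rng.normal(0, 1.0, wr.size)

# Cross-correlation function between WR and response; the lag of the CCF maximum
# is the kinetics summary used in this line of studies (larger lag = slower kinetics).
wr0, r0 = wr - wr.mean(), resp - resp.mean()
ccf = np.correlate(r0, wr0, mode="full") / (wr0.std() * r0.std() * wr0.size)
lags = np.arange(-wr0.size + 1, wr0.size)
print("lag of CCF maximum:", lags[np.argmax(ccf)], "s")
```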
Sample size calculations for stepped wedge and cluster randomised trials: a unified approach
Hemming, Karla; Taljaard, Monica
2016-01-01
Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
Noise-Induced Synchronization among Sub-RF CMOS Analog Oscillators for Skew-Free Clock Distribution
NASA Astrophysics Data System (ADS)
Utagawa, Akira; Asai, Tetsuya; Hirose, Tetsuya; Amemiya, Yoshihito
We present on-chip oscillator arrays synchronized by random noise, aiming at skew-free clock distribution on synchronous digital systems. Nakao et al. recently reported that independent neural oscillators can be synchronized by applying temporal random impulses to the oscillators [1], [2]. We regard neural oscillators as independent clock sources on LSIs; i.e., clock sources are distributed on LSIs and forced to synchronize through the use of random noise. We designed neuron-based clock generators operating in the sub-RF region (<1 GHz) by modifying the original neuron model to a new model suitable for CMOS implementation with 0.25-μm CMOS parameters. Through circuit simulations, we demonstrate that i) the clock generators are indeed synchronized by pseudo-random noise and ii) the clock generators exhibit phase-locked oscillations even when they have small device mismatches.
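A minimal sketch of the Nakao-type mechanism the authors build on: two uncoupled oscillators receiving identical random impulses converge in phase. The phase response curve, impulse rate, and impulse strength below are assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two uncoupled phase oscillators driven by the SAME random impulse train.
# Assumed phase response curve Z(theta) = -sin(theta): common impulses contract
# the phase difference (negative Lyapunov exponent), so the phases lock.
theta = rng.uniform(0, 2 * np.pi, size=2)   # independent initial phases
omega, dt, eps = 2 * np.pi, 1e-3, 0.3       # 1 Hz oscillators, impulse strength eps

for _ in range(50_000):
    theta += omega * dt                     # free-running rotation (identical frequency)
    if rng.random() < 0.02:                 # common Poisson-like impulse times
        theta += eps * -np.sin(theta)       # identical kick, phase-dependent effect

print("final phase difference:", np.angle(np.exp(1j * (theta[0] - theta[1]))))
```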
Visual search for arbitrary objects in real scenes
Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.
2011-01-01
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
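The efficiency index referred to here is just the fitted slope of RT against set size; a sketch with made-up trial data:

```python
import numpy as np

# Hypothetical per-scene data: labeled-region count ("set size") and mean RT (ms).
set_size = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
rt = 550 + 5.0 * set_size + np.random.default_rng(7).normal(0, 20, set_size.size)

# Search efficiency = slope of the RT x Set Size function (ms/item); shallower = more efficient.
slope, intercept = np.polyfit(set_size, rt, 1)
print(f"search slope: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```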
Quantum Superalgebras at Roots of Unity and Topological Invariants of Three-manifolds
NASA Astrophysics Data System (ADS)
Blumen, Sacha C.
2006-01-01
The general method of Reshetikhin and Turaev is followed to develop topological invariants of closed, connected, orientable 3-manifolds from a new class of algebras called pseudo-modular Hopf algebras. Pseudo-modular Hopf algebras are a class of Z_2-graded ribbon Hopf algebras that generalise the concept of a modular Hopf algebra. The quantum superalgebra U_q(osp(1|2n)) over C is considered with q a primitive N^th root of unity for all integers N >= 3. For such a q, a certain left ideal I of U_q(osp(1|2n)) is also a two-sided Hopf ideal, and the quotient algebra U_q^(N)(osp(1|2n)) = U_q(osp(1|2n)) / I is a Z_2-graded ribbon Hopf algebra. For all n and all N >= 3, a finite collection of finite dimensional representations of U_q^(N)(osp(1|2n)) is defined. Each such representation of U_q^(N)(osp(1|2n)) is labelled by an integral dominant weight belonging to the truncated dominant Weyl chamber. Properties of these representations are considered: the quantum superdimension of each representation is calculated, each representation is shown to be self-dual, and more importantly, the decomposition of the tensor product of an arbitrary number of such representations is obtained for even N. It is proved that the quotient algebra U_q^(N)(osp(1|2n)), together with the set of finite dimensional representations discussed above, form a pseudo-modular Hopf algebra when N >= 6 is twice an odd number. Using this pseudo-modular Hopf algebra, we construct a topological invariant of 3-manifolds. This invariant is shown to be different to the topological invariants of 3-manifolds arising from quantum so(2n+1) at roots of unity.
Staining Method for Protein Analysis by Capillary Gel Electrophoresis
Wu, Shuqing; Lu, Joann J; Wang, Shili; Peck, Kristy L.; Li, Guigen; Liu, Shaorong
2009-01-01
A novel staining method and the associated fluorescent dye were developed for protein analysis by capillary SDS-PAGE. The strategy of the method is to synthesize a pseudo-SDS dye and use it to replace some of the SDS in SDS-protein complexes so that the protein can be fluorescently detected. The pseudo-SDS dye consists of a long, straight alkyl chain connected to a negatively charged fluorescent head, and it binds to proteins just as SDS does. The number of dye molecules incorporated with a protein depends on the dye concentration relative to SDS in the sample solution, since SDS and dye bind to proteins competitively. In this work, we synthesized a series of pseudo-SDS dyes and tested their performance for capillary SDS-PAGE. FT-16 (a fluorescein molecule linked with a hexadodecyl group) seemed to be the best among all the dyes tested. Although the number of dye molecules bound to proteins (and the fluorescence signal from these protein complexes) was maximized in the absence of SDS, high-quality separations were obtained when co-complexes of SDS-protein-dye were formed. The migration time correlates well with protein size even after some of the SDS in the SDS-protein complexes is replaced by the pseudo-SDS dye. Under optimized experimental conditions and using a laser-induced fluorescence detector, limits of detection as low as 0.13 ng/mL (bovine serum albumin) and dynamic ranges over 5 orders of magnitude, in which fluorescence response is proportional to the square root of analyte concentration, were obtained. The method and dye were also tested for separations of real-world samples from E. coli. PMID:17874848
Enhancement of A5/1 encryption algorithm
NASA Astrophysics Data System (ADS)
Thomas, Ria Elin; Chandhiny, G.; Sharma, Katyayani; Santhi, H.; Gayathri, P.
2017-11-01
Mobiles have become an integral part of today's world. Various standards have been proposed for mobile communication, one of them being GSM. With the rising number of mobile-based crimes, it is necessary to improve the security of the information passed in the form of voice or data. GSM uses A5/1 for its encryption. Various attacks have been implemented that exploit the vulnerabilities present within the A5/1 algorithm. Thus, in this paper, we examine what these vulnerabilities are and propose the enhanced A5/1 (E-A5/1), which improves the security provided by the A5/1 algorithm by XORing the generated key stream with a pseudo-random number, without increasing the time complexity. Studying the vulnerabilities of the base algorithm (A5/1) and improving upon its security will help in future releases of the A5 family of algorithms.
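The abstract gives the idea of E-A5/1 (XOR the generated keystream with a pseudo-random number) without implementation details. The sketch below whitens a stand-in keystream with an assumed keyed SHA-256 counter-mode stream; neither the A5/1 registers nor the paper's exact PRN source is reproduced here.

```python
import hashlib
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def prn_stream(key: bytes, n: int) -> bytes:
    """Illustrative keyed pseudo-random stream (SHA-256 in counter mode); an assumption,
    not the paper's generator."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

# Stand-in for the true A5/1 LFSR keystream (one 114-bit GSM burst, rounded up to bytes);
# random bytes are used here only so the sketch runs without an A5/1 implementation.
a51_keystream = secrets.token_bytes(114 // 8 + 1)

whitening_key = secrets.token_bytes(16)
enhanced = xor_bytes(a51_keystream, prn_stream(whitening_key, len(a51_keystream)))

plaintext = b"burst payload.."
ciphertext = xor_bytes(plaintext, enhanced)
assert xor_bytes(ciphertext, enhanced) == plaintext  # stream cipher round trip
```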
The Effects of Low Income Housing Tax Credit Developments on Neighborhoods
Baum-Snow, Nathaniel; Marion, Justin
2013-01-01
This paper evaluates the impacts of new housing developments funded with the Low Income Housing Tax Credit (LIHTC), the largest federal project-based housing program in the U.S., on the neighborhoods in which they are built. A discontinuity in the formula determining the magnitude of tax credits as a function of neighborhood characteristics generates pseudo-random assignment in the number of low income housing units built in similar sets of census tracts. Tracts where projects are awarded 30 percent higher tax credits receive approximately six more low income housing units on a base of seven units per tract. These additional new low income developments cause homeowner turnover to rise, raise property values in declining areas, and reduce incomes in gentrifying areas in neighborhoods near the 30th percentile of the income distribution. LIHTC units significantly crowd out nearby new rental construction in gentrifying areas but do not displace new construction in stable or declining areas. PMID:24235779
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, P.L.; Chetty, V.; Kasch, L.
Arylsulfatase-A deficiency causes the neurodegenerative lysosomal storage disease metachromatic leukodystrophy. In the late-onset variant, schizophrenia-like psychosis is a frequent finding and is sometimes given as the initial diagnosis. A mutant allele, pseudo-deficiency, causes deficient enzyme activity but no apparent clinical effect. It occurs at a high frequency and consists of two tightly-linked A→G transitions: one causing the loss of a glycosylation site (PDg), and one causing the loss of a polyadenylation signal (PDa). Since this gene was mapped to chromosome 22q13-qter, a region implicated in a potential linkage with schizophrenia, we hypothesized that this common mutation may be a predisposing genetic factor for schizophrenia. We studied a random sample of schizophrenic patients for a possible increase in the frequency of the pseudo-deficiency mutations, and multiplex families to verify whether the mutations are linked to schizophrenia. Among 50 Caucasian patients identified through out-patient and in-patient clinics, the frequencies for the three alleles PDg + PDa together, PDg alone and PDa alone were 11%, 5% and 0%, respectively. The corresponding frequencies among 100 Caucasian controls were 7.5%, 6% and 0%, respectively, the differences between the patients and controls being insignificant (χ² tests: 0.10...
Modeling of batch sorber system: kinetic, mechanistic, and thermodynamic modeling
NASA Astrophysics Data System (ADS)
Mishra, Vishal
2017-10-01
The present investigation deals with the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase. Various rate models were evaluated to elucidate the kinetics of copper and zinc biosorption, and the results indicated that the pseudo-second-order model was more appropriate than the pseudo-first-order model. The curve of the initial sorption rate versus the initial concentration of copper and zinc ions also complemented the results of the pseudo-second-order model. The models used for mechanistic modeling were the intra-particle model of pore diffusion and Bangham's model of film diffusion. The results of the mechanistic modeling, together with the values of pore and film diffusivities, indicated that the preferential mode of biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase was film diffusion. The results of the intra-particle model showed that the biosorption of copper and zinc ions was not dominated by pore diffusion, owing to the macro-pores with open void spaces present on the surface of egg-shell particles. The thermodynamic modeling confirmed that the sorption of copper and zinc was spontaneous and exothermic, with increased randomness at the solid-liquid interface.
NASA Astrophysics Data System (ADS)
Zularisam, A. W.; Wahida, Norul
2017-07-01
Nickel(II) is one of the most toxic contaminants, recognised as a carcinogenic and mutagenic agent, and needs complete removal from wastewater before disposal. In the present study, a novel adsorbent called mesoparticle graphene sand composite (MGSCaps) was synthesised from arenga palm sugar and sand using a green, simple, low-cost and efficient methodology. The composite was characterised using field emission scanning electron microscopy (FESEM), x-ray diffraction (XRD) and elemental mapping (EM). The adsorption process was investigated and optimised under experimental parameters such as pH, contact time and bed depth. The results showed that the interaction between nickel(II) and MGSCaps was not an ion-to-ion interaction; hence removal of Ni(II) can be applied at any pH. The results also showed that the higher the contact time and bed depth, the higher the removal percentage of nickel(II). Adsorption kinetic data were modelled using the pseudo-first-order and pseudo-second-order equations. The results indicated that the pseudo-second-order kinetic equation was the most suitable to describe the experimental adsorption kinetics, with a maximum of 40% nickel(II) removal in the first hour. The equilibrium adsorption data were fitted with the Langmuir and Freundlich isotherm equations; the best-fitting model was the Freundlich isotherm, with correlation R² = 0.9974. Based on the obtained results, adsorption using MGSCaps is an efficient, facile and reliable method for the removal of nickel(II) from wastewater.
The Effect of the Number of Syllables on Handwriting Production
ERIC Educational Resources Information Center
Lambert, Eric; Kandel, Sonia; Fayol, Michel; Esperet, Eric
2008-01-01
Four experiments examined whether motor programming in handwriting production can be modulated by the syllable structure of the word to be written. This study manipulated the number of syllables. The items, words and pseudo-words, had 2, 3 or 4 syllables. French adults copied them three times. We measured the latencies between the visual…
Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana
2016-01-01
Objectives Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. Design and methods A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2–20), alternatives (2–5), attributes (2–20) and attribute levels (2–5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Outcome Relative d-efficiency was used to measure the optimality of each DCE design. Results DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Conclusions Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes are needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. PMID:27436671
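Relative d-efficiency can be computed directly from a design's model matrix. One common normalization is assumed below (the software used in the paper may define it differently):

```python
import numpy as np

def d_efficiency(X):
    """Relative D-efficiency (%) of a model matrix X with n runs and p parameters,
    using one common normalization: 100 * |X'X / n|^(1/p). An orthogonal, balanced
    +/-1-coded design scores 100%."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X / n) ** (1.0 / p)

rng = np.random.default_rng(3)
# Toy design: 12 choice tasks, 3 two-level attributes, effects-coded as +/-1.
# Real DCE coding (alternatives, blocking, level balance) is more involved than this.
X = rng.choice([-1.0, 1.0], size=(12, 3))
print(f"relative d-efficiency: {d_efficiency(X):.1f}%")
```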
Reducing inhomogeneity in the dynamic properties of quantum dots via self-aligned plasmonic cavities
NASA Astrophysics Data System (ADS)
Demory, Brandon; Hill, Tyler A.; Teng, Chu-Hsiang; Deng, Hui; Ku, P. C.
2018-01-01
A plasmonic cavity is shown to greatly reduce the inhomogeneity of dynamic optical properties such as quantum efficiency and radiative lifetime of InGaN quantum dots. By using an open-top plasmonic cavity structure, which exhibits a large Purcell factor and antenna quantum efficiency, the resulting quantum efficiency distribution for the quantum dots narrows and is no longer limited by the quantum dot inhomogeneity. The standard deviation of the quantum efficiency can be reduced to 2% while maintaining the overall quantum efficiency at 70%, making InGaN quantum dots a viable candidate for high-speed quantum cryptography and random number generation applications.
NASA Astrophysics Data System (ADS)
Haase, Felix; Kiefer, Fabian; Schäfer, Sören; Kruse, Christian; Krügener, Jan; Brendel, Rolf; Peibst, Robby
2017-08-01
We demonstrate an independently confirmed 25.0%-efficient interdigitated back contact silicon solar cell with passivating polycrystalline silicon (poly-Si) on oxide (POLO) contacts that enable a high open circuit voltage of 723 mV. We use n-type POLO contacts with a measured saturation current density of J0n = 4 fA cm-2 and p-type POLO contacts with J0p = 10 fA cm-2. The textured front side and the gaps between the POLO contacts on the rear are passivated by aluminum oxide (AlOx) with J0AlOx = 6 fA cm-2 as measured after deposition. We analyze the recombination characteristics of our solar cells at different process steps using spatially resolved injection-dependent carrier lifetimes measured by infrared lifetime mapping. The implied pseudo-efficiency of the unmasked cell, i.e., cell and perimeter region are illuminated during measurement, is 26.2% before contact opening, 26.0% after contact opening and 25.7% for the finished cell. This reduction is due to an increase in the saturation current density of the AlOx passivation during chemical etching of the contact openings and of the rear side metallization. The difference between the implied pseudo-efficiency and the actual efficiency of 25.0% as determined by designated-area light current-voltage (I-V) measurements is due to series resistance and diffusion of excess carriers into the non-illuminated perimeter region.
Development of a pseudo/anonymised primary care research database: Proof-of-concept study.
MacRury, Sandra; Finlayson, Jim; Hussey-Wilson, Susan; Holden, Samantha
2016-06-01
General practice records present a comprehensive source of data that could form a variety of anonymised or pseudonymised research databases to aid identification of potential research participants regardless of location. A proof-of-concept study was undertaken to extract data from general practice systems in 15 practices across the region to form pseudonymised and anonymised research data sets. Two feasibility studies and a disease surveillance study compared numbers of potential study participants and accuracy of disease prevalence, respectively. There was a marked reduction in screening time and an increase in the number of potential study participants identified with the research repository compared with conventional methods. Accurate disease prevalence was established and enhanced by the addition of selective text mining. This study confirms the potential for developing a national anonymised research database from general practice records, in addition to improving data collection for local or national audits and epidemiological projects.
Leonardi, Marco; Villacampa, Mercedes
2017-01-01
The pseudo-five-component reaction between β-dicarbonyl compounds (2 molecules), diamines and α-iodoketones (2 molecules), prepared in situ from aryl ketones, was performed efficiently under mechanochemical conditions involving high-speed vibration milling with a single zirconium oxide ball. This reaction afforded symmetrical frameworks containing two pyrrole or fused pyrrole units joined by a spacer, which are of interest in the exploration of chemical space for drug discovery purposes. The method was also extended to the synthesis of one compound containing three identical pyrrole fragments via a pseudo-seven-component reaction. Access to compounds having a double bond in their spacer chain was achieved by a different approach involving the homodimerization of 1-allyl- or 1-homoallylpyrroles by application of cross-metathesis chemistry. PMID:29062414
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalas, S.; Dornmair, I.; Lehe, R.
Particle in Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR) resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite-order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary-order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. Here we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.
Cross-correlation least-squares reverse time migration in the pseudo-time domain
NASA Astrophysics Data System (ADS)
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method, with its higher image resolution and amplitude fidelity, is becoming increasingly popular. However, LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relaxes the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction; thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability to complex models.
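The normalized cross-correlation objective that makes such a method insensitive to amplitude mismatch can be sketched directly. This is the generic zero-lag form, under the assumption that it matches the paper's definition up to sign and normalization details:

```python
import numpy as np

def ncc_misfit(d_pred, d_obs):
    """Normalized cross-correlation objective: sensitive to waveform similarity,
    insensitive to an overall amplitude scale between predicted and observed data."""
    num = np.vdot(d_pred, d_obs)
    den = np.linalg.norm(d_pred) * np.linalg.norm(d_obs)
    return -num / den  # minimized (-1) when traces match up to a positive scale factor

rng = np.random.default_rng(0)
d_obs = rng.normal(size=1000)
print(ncc_misfit(3.7 * d_obs, d_obs))  # -1.0: rescaling the amplitude does not change the misfit
```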
Intermediate quantum maps for quantum computation
NASA Astrophysics Data System (ADS)
Giraud, O.; Georgeot, B.
2005-10-01
We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains the testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of the testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include a computing time that is linear in both the number of individuals and the number of markers. A dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793
Fermionic dark matter with pseudo-scalar Yukawa interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghorbani, Karim, E-mail: k-ghorbani@araku.ac.ir
2015-01-01
We consider a renormalizable extension of the standard model whose fermionic dark matter (DM) candidate interacts with a real singlet pseudo-scalar via a pseudo-scalar Yukawa term, while we assume that the full Lagrangian is CP-conserving at the classical level. When the pseudo-scalar boson develops a non-zero vacuum expectation value, spontaneous CP violation occurs, and this provides a CP-violating interaction of the dark sector with the SM particles through mixing between the Higgs-like boson and the SM-like Higgs boson. This scenario involves a minimal number of free parameters. Focusing mainly on the indirect detection observables, we calculate the dark matter annihilation cross section and then compute the DM relic density in the range up to mDM = 300 GeV. We then find viable regions in the parameter space constrained by the observed DM relic abundance as well as the invisible Higgs decay width in light of the 125 GeV Higgs discovery at the LHC. We find that within the constrained region of the parameter space, there exists a model with dark matter mass mDM ∼ 38 GeV annihilating predominantly into b quarks, which can explain the Fermi-LAT galactic gamma-ray excess.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHERTKOV, MICHAEL; STEPANOV, MIKHAIL
2007-01-10
The authors discuss the performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). The Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks, ≥ 5, they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions over the Additive White Gaussian Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.
Statistical mechanics of complex economies
NASA Astrophysics Data System (ADS)
Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo
2017-04-01
In the pursuit of ever-increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but also imply that well-developed economies can collapse if too many intermediate goods are introduced.
CO₂ carbonation under aqueous conditions using petroleum coke combustion fly ash.
González, A; Moreno, N; Navia, R
2014-12-01
Fly ash from petroleum coke combustion was evaluated for CO2 capture in aqueous medium. Moreover, the carbonation efficiency based on different methodologies and the kinetic parameters of the process were determined. The results show that petroleum coke fly ash achieved a CO2 capture yield of 21% at the experimental conditions of 12 g L(-1) and 363 K without stirring. The carbonation efficiency of petroleum coke fly ash based on reactive calcium species was within the carbonation efficiencies reported by several authors. In addition, carbonation by petroleum coke fly ash follows a pseudo-second-order kinetic model.
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jen, M; Yan, F; Tseng, Y
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties in the tagging efficiency of pCASL remain an issue. This study aimed to estimate tagging efficiency using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare the resultant CBF values with those calibrated using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Taking the deltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, suggesting that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
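The card-deck analogy translates directly into code. The sketch below contrasts the two sampling schemes on a synthetic population with a smooth trend, where systematic sampling typically gives a lower-variance estimate; the population and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 600, 20                                   # population of sections, sample size

# Independent random sampling: like drawing any 20 cards from the whole deck.
srs = rng.choice(N, size=n, replace=False)

# Systematic random sampling: random start, then equal intervals through the deck.
k = N // n
start = rng.integers(k)
sys_sample = np.arange(start, N, k)[:n]

# A population with a smooth trend (typical of organized biological structures):
# the systematic sample usually tracks the true mean more closely.
pop = np.linspace(0, 100, N) + rng.normal(0, 5, N)
print(pop[srs].mean(), pop[sys_sample].mean(), pop.mean())
```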
76 FR 62801 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-11
... Pseudo PGA with Mesquite Solar 1 to be effective 11/1/2011. Filed Date: 09/30/2011. Accession Number... Electric Company submits tariff filing per 35.13(a)(2)(iii: Western USBR TFA for Red Bluff Pumping Plant to...
NASA Astrophysics Data System (ADS)
Yoon, Soon Uk; Mahanty, Biswanath; Ha, Hun Moon; Kim, Chang Gyun
2016-06-01
Phenol adsorption from aqueous solution was carried out using uncoated and methyl acrylic acid (MAA)-coated iron oxide nanoparticles (NPs), having size <10 nm, as adsorbents. Batch adsorption studies revealed that the phenol removal efficiency of MAA-coated NPs (950 mg g-1) is significantly higher than that of uncoated NPs (550 mg g-1) under neutral to acidic conditions. However, this improvement disappears above pH 9. The adsorption data under optimized conditions (pH 7) were modeled with pseudo-first- and pseudo-second-order kinetics and subjected to Freundlich and Langmuir isotherms. The analysis determined that pseudo-second-order kinetics and the Freundlich model are appropriate for both uncoated and MAA-coated NPs (all R 2 > 0.98). X-ray photoelectron spectroscopy analysis of pristine and phenol-adsorbed NPs revealed core-level binding energy and charge for Fe(2 s) and O(1 s) on the NP surfaces. The calculations suggest that phenol adsorption onto MAA-coated NPs is a charge transfer process, where the adsorbate (phenol) acts as an electron donor and the NP surface (Fe, O) as an electron acceptor. However, a physisorption process appears to be the relevant mechanism for uncoated NPs.
Patterns of fecal gonadal hormone metabolites in the maned wolf (Chrysocyon brachyurus).
Songsasen, N; Rodden, M; Brown, J L; Wildt, D E
2006-10-01
Ex situ populations of maned wolves are not viable due to low reproductive efficiency. The objective of this study was to increase knowledge regarding the reproductive physiology of maned wolves to improve captive management. Fecal samples were collected 3-5 d/wk from 12 females of various reproductive age classes (young, prime breeding and aged) and reproductive histories (conceived and raised pups, conceived but lost pups, pseudo-pregnant and unpaired). Ovarian steroids were extracted from feces and assessed by enzyme immunoassay. Concentrations of estrogen metabolites gradually increased, beginning 2-5 d before breeding, and declined to baseline on the day of lordosis and copulation. Fecal progestin metabolite concentrations increased steadily during the periovulatory period, when sexual receptivity was observed, and remained elevated during pregnancy and pseudo-pregnancy. During the luteal phase, young and prime breeding-age females excreted larger amounts of progestins than those of older age classes. Furthermore, progestin concentrations were higher during the luteal phase of pregnant versus pseudo-pregnant bitches. Profiles of fecal progestin metabolites for three singleton females were unchanged throughout the breeding season, suggesting ovulation is induced in this species. However, this finding could be confounded by age, as these females were either young or aged.
Padil, Vinod Vellora Thekkae; Stuchlík, Martin; Černík, Miroslav
2015-05-05
Electrospun nanofibre membranes were prepared from blend solutions of deacetylated gum kondagogu and polyvinyl alcohol in various weight proportions. The electrospun membrane was crosslinked by heating at 150°C for 6 h and later modified by methane plasma treatment. The membranes were used for the removal of nanoparticles (Ag, Au and Pt) from water. Pt nanoparticles, with the smallest size (2.4 ± 0.7 nm), had a higher adsorption capacity (270.4 mg/g on the nanofibre membrane (NFM) and 327.2 mg/g on the methane plasma treated membrane (P-NFM)) than Au and Ag nanoparticles with particle sizes of 7.8 ± 2.3 nm and 10.5 ± 3.5 nm. The extraction efficiency of P-NFM for the removal of nanoparticles from water is higher than that of the untreated membranes. The adsorption kinetics of nanoparticle extraction from water were evaluated with pseudo-first-order and pseudo-second-order models, with the pseudo-second-order model providing the better fit. The reusability and regeneration of the P-NFM over consecutive adsorption cycles were also established.
Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.
Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L
2017-01-01
A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
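The speedup in such strategies comes from the standard hat-matrix shortcut for linear smoothers, e_loo,i = e_i / (1 - h_ii), which yields all n leave-one-out residuals from a single fit. The sketch below applies it to a ridge/GBLUP-like kernel solution with a fixed shrinkage parameter; it is not necessarily the authors' exact derivation, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)
n, p, lam = 200, 500, 10.0                   # observations, markers, ridge parameter (assumed fixed)
X = rng.normal(size=(n, p))
y = X[:, :20] @ rng.normal(size=20) + rng.normal(size=n)

# GBLUP-like ridge solution via the n x n kernel system: y_hat = H y,
# with H = X (X'X + lam I)^-1 X' = G (G + lam I)^-1 and G = X X'.
G = X @ X.T
H = G @ np.linalg.inv(G + lam * np.eye(n))

# All leave-one-out residuals from ONE fit: e_loo_i = (y_i - y_hat_i) / (1 - h_ii).
e = y - H @ y
e_loo = e / (1.0 - np.diag(H))
print("LOOCV mean squared error:", np.mean(e_loo**2))
```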
Toward Reliable and Energy Efficient Wireless Sensing for Space and Extreme Environments
NASA Technical Reports Server (NTRS)
Choi, Baek-Young; Boyd, Darren; Wilkerson, DeLisa
2017-01-01
Reliability is the critical challenge of wireless sensing in space systems operating in extreme environments. Energy efficiency is another concern for battery-powered wireless sensors. Considering the physics of wireless communications, we propose an approach called Software-Defined Wireless Communications (SDC) that dynamically decides on a reliable channel (or channels), avoiding unnecessary channel redundancy, from multiple distinct electromagnetic frequency bands such as radio and infrared. We validate the concept with Android and Raspberry Pi sensors and pseudo-extreme-environment experiments. SDC can be utilized in many areas beyond space applications.
NASA Astrophysics Data System (ADS)
Wang, Jun; Li, Xiaowei; Hu, Yuhen; Wang, Qiong-Hua
2018-03-01
A phase-retrieval-attack-free cryptosystem based on cylindrical asymmetric diffraction and double-random phase encoding (DRPE) is proposed. The plaintext is abstracted as a cylinder, while the observed diffraction and holographic surfaces are concentric cylinders. The plaintext can therefore be encrypted through a two-step asymmetric diffraction process with double pseudo-random phase masks located on the object surface and the first diffraction surface. After inverse diffraction from the holographic surface to the object surface, the plaintext can be reconstructed using a decryption process. Since diffraction propagated from the inner cylinder to the outer cylinder differs from diffraction in the reverse direction, the proposed cryptosystem is asymmetric and hence free of phase-retrieval attack. Numerical simulation results demonstrate the flexibility and effectiveness of the proposed cryptosystem.
Approximate ground states of the random-field Potts model from graph cuts
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay
2018-05-01
While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
Random access with adaptive packet aggregation in LTE/LTE-A.
Zhou, Kaijie; Nikaein, Navid
While random access presents a promising solution for efficient uplink channel access, the preamble collision rate can increase significantly when a massive number of devices access the channel simultaneously. To address this issue and improve the reliability of random access, an adaptive packet aggregation method is proposed. With the proposed method, a device does not trigger a random access for every single packet. Instead, it starts a random access when the number of aggregated packets reaches a given threshold. This method reduces the packet collision rate at the expense of extra latency, which is used to accumulate multiple packets into a single transmission unit. Therefore, the tradeoff between packet loss rate and channel access latency has to be carefully selected. We use a semi-Markov model to derive the packet loss rate and channel access latency as functions of the packet aggregation number. Hence, the optimal number of aggregated packets can be found, which keeps the loss rate below the desired value while minimizing the access latency. We also apply the idea of packet aggregation to power saving, where a device aggregates as many packets as possible until the latency constraint is reached. Simulations are carried out to evaluate our methods. We find that the packet loss rate and/or power consumption are significantly reduced with the proposed method.
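The collision/latency tradeoff can be explored with a small Monte Carlo stand-in for the semi-Markov analysis. The device count, preamble pool size, and arrival rate below are arbitrary assumptions; the point is only that raising the aggregation threshold thins out simultaneous access attempts and lowers the preamble collision rate.

```python
import numpy as np

rng = np.random.default_rng(2)

def collision_rate(n_devices, n_preambles=54, agg=1, rate=0.05, slots=2000):
    """Monte Carlo sketch: a device fires a random access only once it has
    aggregated 'agg' packets (packets arrive per slot with probability 'rate')."""
    backlog = np.zeros(n_devices, dtype=int)
    attempts = collisions = 0
    for _ in range(slots):
        backlog += rng.random(n_devices) < rate        # new packet arrivals
        active = np.flatnonzero(backlog >= agg)        # devices triggering an access
        if active.size:
            pre = rng.integers(0, n_preambles, active.size)
            _, counts = np.unique(pre, return_counts=True)
            collisions += counts[counts > 1].sum()     # attempts on preambles chosen by >1 device
            attempts += active.size
            backlog[active] = 0                        # aggregated packets sent in one unit
    return collisions / max(attempts, 1)

for agg in (1, 2, 4, 8):
    print(f"aggregation threshold {agg}: collision rate {collision_rate(2000, agg=agg):.3f}")
```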
Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana
2016-07-19
Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2-20), alternatives (2-5), attributes (2-20) and attribute levels (2-5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Relative d-efficiency was used to measure the optimality of each DCE design. DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes are needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products.
The sensitivity of tropospheric chemistry to cloud interactions
NASA Technical Reports Server (NTRS)
Jonson, Jan E.; Isaksen, Ivar S. A.
1994-01-01
Clouds, although occupying only a relatively small fraction of the tropospheric volume, can have a substantial impact on the chemistry of the troposphere. In newly formed clouds, or in clouds with air rapidly flowing through, the chemistry is expected to be far more active than in aged clouds with stagnant air. Thus, frequent cycling of air through short-lived clouds, i.e. cumulus clouds, is likely to be a much more efficient medium for altering the composition of the atmosphere than an extensive cloud cover, i.e. frontal cloud systems. The impact of clouds is tested in a 2-D channel model encircling the globe in a latitudinal belt from 30 to 60 deg N. The model contains detailed gas phase chemistry. In addition, physicochemical interactions between the gas and aqueous phases are included. For species such as H2O2, CH2O, O3, and SO2, Henry's law equilibria are assumed, whereas HNO3 and H2SO4 are regarded as completely dissolved in the aqueous phase. Absorption of HO2 and OH is assumed to be mass-transport limited. The chemistry of the aqueous phase is characterized by rapid cycling of odd hydrogen (H2O2, HO2, and OH). O2(-) (produced through dissociation of HO2) reacting with dissolved O3 is a major source of OH in the aqueous phase. This reaction can be a significant sink for O3 in the troposphere. In the interstitial cloud air, odd hydrogen is depleted, whereas NO(x) remains in the gas phase, thus reducing ozone production due to the reaction between NO and HO2. Our calculations give markedly lower ozone levels when cloud interactions are included. This may in part explain the overpredictions of ozone levels often experienced in models neglecting cloud chemical interactions. In the present study, the existence of clouds, cloud types, and their lifetimes are modeled as pseudo-random variables. Such pseudo-random sequences are in reality deterministic and may, given the same starting values, be reproduced. The effects of cloud interactions on the overall chemistry of the troposphere are discussed. In particular, tests are performed to determine the sensitivity to cloud frequencies and cloud types.
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions except for a statistical error introduced by using a finite number of elements; the error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
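The central idea, that Brownian random walks realize the diffusion equation without a grid, can be demonstrated in a few lines. A minimal 1D sketch (illustrative only; the authors' algorithm additionally handles element strengths, gradients, and boundaries):

```python
import numpy as np

# Grid-free diffusion: N elements carrying equal shares of a unit of heat
# random-walk with Gaussian steps of std sqrt(2*nu*dt); their density then
# approximates the exact Green's function exp(-x^2/(4*nu*t))/sqrt(4*pi*nu*t).
rng = np.random.default_rng(1)
nu, dt, nsteps, N = 0.1, 0.01, 100, 50_000
x = np.zeros(N)                       # all elements start at the origin
for _ in range(nsteps):
    x += rng.normal(0.0, np.sqrt(2 * nu * dt), size=N)

t = nsteps * dt
hist, edges = np.histogram(x, bins=np.linspace(-3, 3, 61), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
exact = np.exp(-centers**2 / (4 * nu * t)) / np.sqrt(4 * np.pi * nu * t)
print("max abs error:", np.abs(hist - exact).max())  # shrinks as N grows
```

The final print illustrates the statistical error noted in the abstract: it decreases as the number of elements N increases.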
Toth, Robert J.; Shih, Natalie; Tomaszewski, John E.; Feldman, Michael D.; Kutter, Oliver; Yu, Daphne N.; Paulus, John C.; Paladini, Ginaluca; Madabhushi, Anant
2014-01-01
Context: Co-registration of ex-vivo histologic images with pre-operative imaging (e.g., magnetic resonance imaging [MRI]) can be used to align and map disease extent, and to identify quantitative imaging signatures. However, ex-vivo histology images are frequently sectioned into quarters prior to imaging. Aims: This work presents Histostitcher™, a software system designed to create a pseudo whole mount histology section (WMHS) from a stitching of four individual histology quadrant images. Materials and Methods: Histostitcher™ uses user-identified fiducials on the boundary of two quadrants to stitch such quadrants. An original prototype of Histostitcher™ was designed using the Matlab programming language. However, clinical use was limited due to slow performance, computer memory constraints and an inefficient workflow. The latest version was created using the extensible imaging platform (XIP™) architecture in the C++ programming language. A fast, graphics processor unit renderer was designed to intelligently cache the visible parts of the histology quadrants, and the workflow was significantly improved to allow modifying existing fiducials, fast transformations of the quadrants and saving/loading sessions. Results: The new stitching platform yielded a significantly more efficient workflow and reconstruction than the previous prototype. It was tested on a traditional desktop computer, a Windows 8 Surface Pro tablet device and a 27-inch multi-touch display, with little performance difference between the different devices. Conclusions: Histostitcher™ is a fast, efficient framework for reconstructing pseudo WMHS from individually imaged quadrants. The highly modular XIP™ framework was used to develop an intuitive interface, and future work will entail mapping the disease extent from the pseudo WMHS onto pre-operative MRI. PMID:24843820
Gerislioglu, Selim; Adams, Scott R; Wesdemiotis, Chrys
2018-04-03
Conjugation of poly(ethylene glycol) (PEG) to protein drugs (PEGylation) is increasingly utilized in the biotherapeutics field because it significantly improves the drugs' circulatory half-life, solubility, and shelf-life. The activity of a PEGylated drug depends on the number, size, and location of the attached PEG chain(s). This study introduces a 2D separation approach, combining reversed-phase ultra-performance liquid chromatography (RP-UPLC) and ion mobility mass spectrometry (IM-MS), in order to determine the structural properties of the conjugates, as demonstrated for a PEGylated insulin sample that was prepared by random amine PEGylation. The UPLC dimension allowed separation based on polarity. Electrospray ionization (ESI) of the eluates followed by in-source dissociation (ISD) truncated the PEG chains and created insulin fragments that provided site-specific information based on whether they contained a marker at the potential conjugation sites. Separation of the latter fragments by size and charge in the orthogonal IM dimension (pseudo-4D UPLC-ISD-IM-MS approach) enabled clear detection and identification of the positional isomers formed upon PEGylation. The results showed a highly heterogeneous mixture of singly and multiply conjugated isomers plus unconjugated material. PEGylation was observed on all three possible attachment sites (ε-NH2 of LysB29 and the A- and B-chain N-termini). Each PEGylation site was validated by analysis of the same product after disulfide bond cleavage, so that the PEGylated A- and B-chains could be individually characterized with the same pseudo-4D UPLC-ISD-IM-MS method. Copyright © 2018 Elsevier B.V. All rights reserved.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
Efficient search of multiple types of targets
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-12-01
Random searches often take place in fragmented landscapes. Also, in many instances, such as animal foraging, significant benefits to the searcher arise from visits to a large diversity of patches with a well-balanced distribution of targets found. To date, such aspects have been widely ignored in the usual single-objective analysis of search efficiency, in which one seeks to maximize just the number of targets found per distance traversed. Here we address the problem of determining the best strategies for the random search when these multiple-objective factors play a key role in the process. We consider a figure of merit (efficiency function) which properly "scores" the mentioned tasks. By considering random walk searchers with a power-law asymptotic Lévy distribution of step lengths, p(ℓ) ~ ℓ^(-μ), with 1 < μ ≤ 3, we show that the standard optimal strategy with μ_opt ≈ 2 no longer holds universally. Instead, optimal searches with enhanced superdiffusivity emerge, including values as low as μ_opt ≈ 1.3 (i.e., tending to the ballistic limit). For the general theory of random search optimization, our findings emphasize the necessity of correctly characterizing the multitude of aims in any concrete metric used to compare candidate strategies. In the context of animal foraging, our results might explain some empirical data pointing to stronger superdiffusion (μ < 2) in the search behavior of different animal species, conceivably associated with multiple goals to be achieved in fragmented landscapes.
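The power-law step lengths used in such searches can be drawn by inverse-transform sampling. A minimal sketch, assuming a pure Pareto form with lower cutoff ℓ0 (the asymptotic distribution in the paper may differ at small steps):

```python
import numpy as np

def levy_steps(mu, n, ell0=1.0, rng=np.random.default_rng()):
    """Draw n step lengths from p(l) ~ l**(-mu), l >= ell0, for 1 < mu <= 3.

    Inverse transform of the Pareto CDF F(l) = 1 - (ell0/l)**(mu - 1):
    with u ~ U(0,1), l = ell0 * (1 - u)**(-1/(mu - 1)).
    """
    u = rng.random(n)
    return ell0 * (1.0 - u) ** (-1.0 / (mu - 1.0))

# Compare a near-ballistic searcher (mu = 1.3) with the classic mu = 2 case.
for mu in (1.3, 2.0, 3.0):
    print(mu, np.median(levy_steps(mu, 100_000)))   # heavier tail for small mu
```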
Effect of pH on lead removal from water using tree fern as the sorbent.
Ho, Yuh-Shan
2005-07-01
The sorption of lead from water onto an agricultural by-product, tree fern, was examined as a function of pH. The sorption processes were carried out using an agitated and baffled system. Pseudo-second-order kinetic analyses were performed to determine the rate constant of sorption, the equilibrium sorption capacity, and the initial sorption rate. Application of the pseudo-second-order kinetics model produced very high coefficients of determination. Results showed the efficiency of tree fern as a sorbent for lead. The optimum pH for lead removal was between 4 and 7, with pH 4.9 resulting in better lead removal. Ion exchange occurred in the initial reaction period. In addition, a relation between the change in the solution hydrogen ion concentration and equilibrium capacity was developed and is presented.
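Pseudo-second-order analyses of this kind are commonly performed via the linearized form t/q_t = 1/(k q_e²) + t/q_e, whose slope and intercept give the equilibrium capacity, rate constant, and initial sorption rate. A minimal fitting sketch on hypothetical data (the values below are illustrative, not the paper's):

```python
import numpy as np

# Linearized pseudo-second-order model: t/qt = 1/(k*qe**2) + t/qe.
# The slope gives 1/qe and the intercept gives 1/(k*qe**2); h = k*qe**2
# is the initial sorption rate. Data below are illustrative only.
t = np.array([5, 10, 20, 40, 60, 90, 120.0])        # contact time (min)
qt = np.array([2.1, 3.4, 4.9, 6.2, 6.8, 7.2, 7.4])  # sorbed amount (mg/g)

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                 # equilibrium sorption capacity (mg/g)
k = slope**2 / intercept         # rate constant (g/mg/min)
h = k * qe**2                    # initial sorption rate (mg/g/min)
r2 = np.corrcoef(t, t / qt)[0, 1] ** 2
print(f"qe={qe:.2f} mg/g, k={k:.4f}, h={h:.3f}, R^2={r2:.4f}")
```

The very high coefficients of determination reported in the abstract correspond to the R² of exactly this kind of linear fit.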
Englund, Erin K; Rodgers, Zachary B; Langham, Michael C; Mohler, Emile R; Floyd, Thomas F; Wehrli, Felix W
2016-10-01
To compare calf skeletal muscle perfusion measured with pulsed arterial spin labeling (PASL) and pseudo-continuous arterial spin labeling (pCASL) methods, and to assess the variability of pCASL labeling efficiency in the popliteal artery throughout an ischemia-reperfusion paradigm. At 3T, relative pCASL labeling efficiency was experimentally assessed in five subjects by measuring the signal intensity of blood in the popliteal artery just distal to the labeling plane immediately following pCASL labeling or control preparation pulses, or without any preparation pulses, throughout separate ischemia-reperfusion paradigms. The relative label and control efficiencies were determined during baseline, hyperemia, and recovery. In a separate cohort of 10 subjects, pCASL and PASL sequences were used to measure reactive hyperemia perfusion dynamics. Calculated pCASL labeling and control efficiencies did not differ significantly between baseline and hyperemia or between hyperemia and recovery periods. Relative to the average baseline, pCASL label efficiency was 2 ± 9% lower during hyperemia. Perfusion dynamics measured with pCASL and PASL did not differ significantly (P > 0.05). Average leg muscle peak perfusion was 47 ± 20 mL/min/100 g and 50 ± 12 mL/min/100 g, and time to peak perfusion was 25 ± 3 seconds and 25 ± 7 seconds, from pCASL and PASL data, respectively. Differences in further metrics parameterizing the perfusion time course were not significant between pCASL and PASL measurements (P > 0.05). No change in pCASL labeling efficiency was detected despite the almost 10-fold increase in average blood flow velocity in the popliteal artery. pCASL and PASL provide precise and consistent measurement of skeletal muscle reactive hyperemia perfusion dynamics. J. MAGN. RESON. IMAGING 2016;44:929-939. © 2016 International Society for Magnetic Resonance in Medicine.
Comparative study of the degradation of carbamazepine in water by advanced oxidation processes.
Dai, Chao-Meng; Zhou, Xue-Fei; Zhang, Ya-Lei; Duan, Yan-Ping; Qiang, Zhi-Min; Zhang, Tian C
2012-06-01
Degradation of carbamazepine (CBZ) using ultraviolet (UV), UV/H2O2, Fenton, UV/Fenton and photocatalytic oxidation with TiO2 (UV/TiO2) was studied in deionized water. The five different oxidation processes were compared in terms of the removal kinetics of CBZ. The results showed that all the processes followed pseudo-first-order kinetics. Direct photolysis (UV alone) was found to be less effective than UV/H2O2 oxidation for the degradation of CBZ. An approximately 20% increase in the CBZ removal efficiency occurred with the UV/Fenton reaction as compared with Fenton oxidation. In the UV/TiO2 system, the kinetics of CBZ degradation in the presence of different concentrations of TiO2 followed pseudo-first-order kinetics, consistent with the Langmuir-Hinshelwood (L-H) model. On a time basis, the degradation efficiencies of CBZ were in the following order: UV/Fenton (86.9% +/- 1.7%) > UV/TiO2 (70.4% +/- 4.2%) > Fenton (67.8% +/- 2.6%) > UV/H2O2 (40.65% +/- 5.1%) > UV (12.2% +/- 1.4%). However, the lowest cost was obtained with the Fenton process.
Farhadi, Sajjad; Aminzadeh, Behnoush; Torabian, Ali; Khatibikamal, Vahid; Alizadeh Fard, Mohammad
2012-06-15
This work makes a comparison between electrocoagulation (EC), photoelectrocoagulation, peroxi-electrocoagulation and peroxi-photoelectrocoagulation processes to investigate the removal of chemical oxygen demand (COD) from pharmaceutical wastewater. The effects of operational parameters such as initial pH, current density, applied voltage, amount of hydrogen peroxide and electrolysis time on COD removal efficiency were investigated, and the optimum operating range for each of these variables was experimentally determined. In the electrocoagulation process, the optimum values of pH and voltage were determined to be 7 and 40 V, respectively. The desired pH and hydrogen peroxide concentration in the Fenton-based processes were found to be 3 and 300 mg/L, respectively. The COD, pH, electrical conductivity, temperature and total dissolved solids (TDS) were monitored on-line. Results indicated that, under the optimum operating range for each process, the COD removal efficiency was in the order peroxi-electrocoagulation > peroxi-photoelectrocoagulation > photoelectrocoagulation > electrocoagulation. Finally, a kinetic study was carried out using the linear pseudo-second-order model, and the results showed that the pseudo-second-order equation provided the best correlation for the COD removal rate. Copyright © 2012 Elsevier B.V. All rights reserved.
First-order reversal curves of single domain particles: diluted random assemblages and chains
NASA Astrophysics Data System (ADS)
Egli, R.
2009-04-01
Exact magnetic models can be used to calculate first-order reversal curves (FORC) of single domain (SD) particle assemblages, as shown by Newell [2005] for the case of isolated Stoner-Wohlfarth particles. After overcoming experimental difficulties, a FORC diagram sharing many similarities with Newell's model has been measured on a lake sediment sample (see A.P. Chen et al., "Quantification of magnetofossils using first-order reversal curves", EGU General Assembly 2009, Abstracts Vol. 11, EGU2009-10719). This sample contains abundant magnetofossils, as shown by coercivity analysis and electron microscopy, therefore suggesting that well dispersed, intact magnetosome chains are the main SD carriers. Subtle differences between the reversible and the irreversible contributions of the measured FORC distribution suggest that magnetosome chains might not be correctly described by the Stoner-Wohlfarth model. To better understand the hysteresis properties of such chains, a simple magnetic model has been implemented, taking into account dipole-dipole interactions between particles within the same chain. The model results depend on the magnetosome elongation, the number of magnetosomes in a chain, and the gap between them. If the chain axis is subparallel to the applied field, the magnetic moment reverses by a pseudo-fanning mode, which is replaced by a pseudo-coherent rotation mode at greater angles. These reversal modes are intrinsically different from the coherent rotation assumed in the Stoner-Wohlfarth model, resulting in FORC diagrams with a smaller reversible component. On the other hand, isolated authigenic SD particles can precipitate in the sediment matrix, as might occur for pedogenic magnetite. In this case, an assembly of randomly located particles provides a possible model for the resulting FORC diagram. If the concentration of the particles is small, each particle is affected by a random interaction field whose statistical distribution can be calculated from first principles. In this case, the irreversible component of the FORC diagram, which is described by a Dirac delta function in the non-interacting case, converts into a continuous function that directly reflects the distribution of interaction fields. Such models provide a way to identify and characterize authigenic SD particles in sediments, and in some cases allow one to isolate their magnetic contribution from that of other magnetic components. Newell, A.J. (2005), A high-precision model of first-order reversal curve (FORC) functions for single-domain ferromagnets with uniaxial anisotropy, Geochem. Geophys. Geosyst., 6, Q05010, doi:10.1029/2004GC00877.
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares by adding an ℓ1 penalty term on the parameter vector to the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems, which produces some model parameters that are exactly zero and therefore yields a parsimonious system description. The applicability of this technique to model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
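A minimal sketch of LASSO-based structure detection on a synthetic pseudo-linear regression (scikit-learn; the candidate terms and data are invented for illustration, not the flight-test pipeline):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Full candidate model: many nonlinear regressors, of which few are active.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
candidates = np.column_stack([x, x**2, x**3, np.sin(x), np.cos(x),
                              np.abs(x), x * np.abs(x)])
y = 2.0 * x - 0.7 * x * np.abs(x) + 0.05 * rng.normal(size=x.size)

# The l1 penalty drives most coefficients exactly to zero, leaving a
# parsimonious structure; alpha controls the strength of the shrinkage.
model = Lasso(alpha=0.01).fit(candidates, y)
names = ["x", "x^2", "x^3", "sin x", "cos x", "|x|", "x|x|"]
for name, c in zip(names, model.coef_):
    if abs(c) > 1e-6:
        print(f"selected term {name}: coef {c:.3f}")
```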
Two-Way Satellite Time and Frequency Transfer Using 1 MChips/s Codes
2009-11-01
The Ku-band transatlantic and Europe-to-Europe two-way satellite time and frequency transfer (TWSTFT) operations used 2.5 MChip/s pseudo-random codes with 3.5 MHz bandwidth until the end of July 2009. The cost of TWSTFT operation is associated with the bandwidth used on a geostationary satellite. The transatlantic and Europe-to-Europe TWSTFT operations faced a significant increase in cost for using 3.5 MHz bandwidth on a new …
Information Encoding on a Pseudo Random Noise Radar Waveform
2013-03-01
[List-of-figures residue from the source thesis: quadrature mirror filter bank (QMFB) tree diagram; QMFB layer-3 contour plot for a 7-bit Barker code binary phase shift keying test signal; block diagram of the FFT accumulation method (FAM) time-smoothing estimate of the spectral correlation; correlator output for a WGN pulse in an AWGN channel (SNR = -10 dB).]
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
NASA Astrophysics Data System (ADS)
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from the TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for the TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.
Applications of hybrid genetic algorithms in seismic tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos
2011-11-01
Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model; hence it is prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulations, and has attracted the attention of the earth sciences community during the last decade, with several applications already presented for several geophysical problems. In this paper, we examine the efficiency of the combination of typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used for testing the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
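The hybrid global-local idea can be sketched generically: a small genetic algorithm evolves pseudo-randomly generated models, and the best individual is then polished by a derivative-based local step. A minimal numpy/scipy sketch, where the misfit below is a stand-in with many local minima rather than a travel-time kernel:

```python
import numpy as np
from scipy.optimize import minimize

def misfit(m):
    """Stand-in penalty function with many local minima."""
    return np.sum((m - 1.2) ** 2) + 0.5 * np.sum(np.sin(5 * m) ** 2)

rng = np.random.default_rng(3)
npop, ndim, ngen = 40, 6, 60
pop = rng.uniform(-2, 2, size=(npop, ndim))      # pseudo-random starting models

for _ in range(ngen):
    fit = np.array([misfit(m) for m in pop])
    parents = pop[np.argsort(fit)[: npop // 2]]   # selection of the fittest half
    mates = parents.copy()
    rng.shuffle(mates)
    alpha = rng.random((npop // 2, 1))
    children = alpha * parents + (1 - alpha) * mates   # blend crossover
    children += 0.1 * rng.normal(size=children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(m) for m in pop])]
polished = minimize(misfit, best, method="L-BFGS-B")   # local (LOM) refinement
print("GA best:", misfit(best), "-> after local step:", polished.fun)
```

The GA stage (the GOM) avoids entrapment near a poor starting model, while the final gradient step supplies the fast convergence the GA lacks.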
Shehata, F A; Attallah, M F; Borai, E H; Hilal, M A; Abo-Aly, M M
2010-02-01
A novel impregnated polymeric resin was practically tested as an adsorbent material for the removal of some hazardous radionuclides from radioactive liquid waste. Its applicability for the treatment of low-level liquid radioactive waste was investigated. The material was prepared by loading 4,4'(5')di-t-butylbenzo 18 crown 6 (DtBB18C6) onto poly(acrylamide-acrylic acid-acrylonitril)-N,N'-methylenediacrylamide (P(AM-AA-AN)-DAM). The removal of (134)Cs, (60)Co, (65)Zn, and ((152+154))Eu onto P(AM-AA-AN)-DAM/DtBB18C6 was investigated using a batch equilibrium technique with respect to pH, contact time, and temperature. Kinetic models were used to determine the rate of sorption and to investigate the mechanism of the sorption process. Five kinetic models, namely pseudo-first-order, pseudo-second-order, intra-particle diffusion, homogeneous particle diffusion (HPDM), and Elovich, were used to investigate the sorption process. The results of the kinetic modeling indicated that the pseudo-second-order model is applicable, that sorption is controlled by a particle-diffusion mechanism, and that the process is chemisorption. The obtained values of the thermodynamic parameters ΔH°, ΔS°, and ΔG° indicated the endothermic nature, the increased randomness at the solid/solution interface, and the spontaneous nature of the sorption processes. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
There are a vast number of biology-related research problems involving a combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus, it will be beneficial to have a good algorithm that simultaneously extracts rules and selects features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
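One common way to realize such a combination is to treat the forest's leaves as conjunctive rules and let a 1-norm penalty retain only a few of them. A minimal scikit-learn sketch in that spirit (it mirrors the idea behind CRF, not its exact formulation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)

# Each leaf of each tree is a conjunctive rule over the input features;
# rf.apply maps every sample to the leaves it falls in, and one-hot
# encoding turns those rules into binary indicator features.
rf = RandomForestClassifier(n_estimators=20, max_depth=3,
                            random_state=0).fit(X, y)
leaves = OneHotEncoder().fit_transform(rf.apply(X))

# The 1-norm regularization keeps only a small number of rules.
sel = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(leaves, y)
print("rules kept:", int(np.sum(sel.coef_ != 0)), "of", leaves.shape[1])
```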
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft-tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft-tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC, and MR-guided RTP. © 2018 International Society for Magnetic Resonance in Medicine.
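The conversion step itself is a simple voxelwise mapping. A minimal sketch of the described segmentation-plus-mapping logic, where the thresholds and the bone slope are illustrative placeholders rather than the paper's calibrated values:

```python
import numpy as np

def zte_to_pseudo_ct(zte, air_thr=0.2, soft_thr=0.8, bone_slope=-2000.0):
    """Map bias-corrected, soft-tissue-normalized ZTE intensities to HU.

    Assumed convention: soft tissue ~ 1.0, air ~ 0, and bone in between
    (the short-lived bone signal is weaker than soft tissue). Thresholds
    and the linear bone slope are illustrative stand-ins, not the paper's
    calibrated values.
    """
    pct = np.full(zte.shape, 42.0)            # soft tissue: fixed +42 HU
    air = zte < air_thr
    bone = (zte >= air_thr) & (zte < soft_thr)
    pct[air] = -1000.0                        # air: fixed -1000 HU
    pct[bone] = bone_slope * (zte[bone] - soft_thr)  # continuous bone mapping
    return pct

print(zte_to_pseudo_ct(np.array([0.05, 0.5, 1.0])))  # air, bone, soft tissue
```

The continuous bone segment is what lets the method account for bone density variations rather than assigning a single fixed HU to all bone.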
An optimal resolved rate law for kinematically redundant manipulators
NASA Technical Reports Server (NTRS)
Bourgeois, B. J.
1987-01-01
The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution was found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to nonplanar manipulators.
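The weighted, locally optimized solution has a closed form: minimizing qdot' W qdot subject to J qdot = xdot gives qdot = W^-1 J' (J W^-1 J')^-1 xdot, which reduces to the Moore-Penrose pseudo-inverse when W = I. A minimal numpy sketch (the Jacobian below is an invented planar example):

```python
import numpy as np

def resolved_rates(J, xdot, w):
    """Joint rates minimizing qdot' W qdot subject to J qdot = xdot.

    W = diag(w); a heavier weight penalizes, and hence slows, the
    corresponding joint, e.g. one approaching its reach limit.
    """
    Winv = np.diag(1.0 / np.asarray(w, dtype=float))
    return Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T, xdot)

# Redundant planar example: 2D hand velocity commanded to 3 joints.
J = np.array([[1.0, 0.8, 0.3],
              [0.0, 0.6, 0.9]])
xdot = np.array([0.1, 0.05])
print(resolved_rates(J, xdot, [1, 1, 1]))      # plain pseudo-inverse (W = I)
print(resolved_rates(J, xdot, [1, 1, 50.0]))   # joint 3 near its reach limit
```

Raising the weight of a joint near its limit suppresses its commanded rate, which is the avoidance mechanism the abstract describes.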
NASA Astrophysics Data System (ADS)
Leuca, Maxim
CFD (Computational Fluid Dynamics) is a computational tool for studying flow in science and technology. The aerospace industry increasingly uses CFD in the modeling and design phase of aircraft, so the precision with which boundary-layer phenomena are simulated is very important. Research efforts are focused on optimizing the aerodynamic performance of airfoils to predict drag and delay the laminar-turbulent transition. CFD codes must be fast and efficient to model complex geometries for aerodynamic flows. The resolution of the boundary-layer equations requires a large amount of computing resources for viscous flows. The CFD codes commonly used to simulate aerodynamic flows require extremely fine meshes normal to the wall, and consequently the calculations are very expensive. This thesis proposes a new approach to solving the boundary-layer equations for laminar and turbulent flows based on the finite difference method. Integrated into a panel code, this concept allows airfoils to be solved without iterative algorithms, which usually consume computing time and often involve convergence problems. The main advantages of panel methods are their simplicity and their ability to obtain, with minimal computational effort, solutions in complex flow conditions for relatively complicated configurations. To verify and validate the developed program, experimental data are used as references when available. The Xfoil code is used to provide pseudo-reference data: in the absence of experimental data two codes cannot truly be compared against each other, so Xfoil serves only as a pseudo-reference. Xfoil is a program that has proven to be accurate and inexpensive in computing resources. Developed by Drela (1985), this program uses a two-equation integral boundary-layer method to design and analyze wing profiles at low speed (Drela and Youngren, 2014), (Drela, 2003). The NACA 0012, NACA 4412, and ATR-42 airfoils have been used for this study. For the NACA 0012 and NACA 4412 airfoils, the calculations are made at Mach number M = 0.17 and Reynolds number Re = 6×10^6, conditions for which experimental results are available. For the ATR-42 airfoil, the calculations are made at Mach number M = 0.1 and Reynolds number Re = 536450, as analysed in LARCASE's Price-Paidoussis wind tunnel. Keywords: boundary layer, direct method, displacement thickness, finite differences, Xfoil code.
When the display matters: A multifaceted perspective on 3D geovisualizations
NASA Astrophysics Data System (ADS)
Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří
2017-04-01
This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of motor activity performed by the participant during interaction with the geovisualization. The interface was created using a Motion Capture system, a Wii Remote Controller, widescreen projection and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.
Comparison of non-invasive MRI measurements of cerebral blood flow in a large multisite cohort.
Dolui, Sudipto; Wang, Ze; Wang, Danny Jj; Mattay, Raghav; Finkel, Mack; Elliott, Mark; Desiderio, Lisa; Inglis, Ben; Mueller, Bryon; Stafford, Randall B; Launer, Lenore J; Jacobs, David R; Bryan, R Nick; Detre, John A
2016-07-01
Arterial spin labeling and phase contrast magnetic resonance imaging provide independent non-invasive methods for measuring cerebral blood flow. We compared global cerebral blood flow measurements obtained using pseudo-continuous arterial spin labeling and phase contrast in 436 middle-aged subjects acquired at two sites in the NHLBI CARDIA multisite study. Cerebral blood flow measured by phase contrast (CBF-PC: 55.76 ± 12.05 ml/100 g/min) was systematically higher (p < 0.001) and more variable than cerebral blood flow measured by pseudo-continuous arterial spin labeling (CBF-pCASL: 47.70 ± 9.75). The correlation between global cerebral blood flow values obtained from the two modalities was 0.59 (p < 0.001), explaining less than half of the observed variance in cerebral blood flow estimates. Well-established correlations of global cerebral blood flow with age and sex were similarly observed in both CBF-pCASL and CBF-PC. CBF-PC also demonstrated statistically significant site differences, whereas no such differences were observed in CBF-pCASL. No consistent velocity-dependent effects on pseudo-continuous arterial spin labeling were observed, suggesting that pseudo-continuous labeling efficiency does not vary substantially across typical adult carotid and vertebral velocities, as has previously been suggested. Although CBF-pCASL and CBF-PC values show substantial similarity across the entire cohort, these data do not support calibration of CBF-pCASL using CBF-PC in individual subjects. The wide-ranging cerebral blood flow values obtained by both methods suggest that cerebral blood flow values are highly variable in the general population. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Denschlag, Robert; Lingenheil, Martin; Tavan, Paul
2008-06-01
Replica exchange (RE) molecular dynamics (MD) simulations are frequently applied to sample the folding-unfolding equilibria of β-hairpin peptides in solution, because efficiency gains are expected from this technique. Using a three-state Markov model featuring key aspects of β-hairpin folding we show that RE simulations can be less efficient than conventional techniques. Furthermore we demonstrate that one is easily seduced to erroneously assign convergence to the RE sampling, because RE ensembles can rapidly reach long-lived stationary states. We conclude that typical REMD simulations covering a few tens of nanoseconds are by far too short for sufficient sampling of β-hairpin folding-unfolding equilibria.
Tarlak, Fatih; Ozdemir, Murat; Melikoglu, Mehmet
2018-02-02
The growth data of Pseudomonas spp. on sliced mushrooms (Agaricus bisporus) stored between 4 and 28°C were obtained and fitted to three different primary models, known as the modified Gompertz, logistic and Baranyi models. The goodness of fit of these models was compared by considering the mean squared error (MSE) and the coefficient of determination for nonlinear regression (pseudo-R²). The Baranyi model yielded the lowest MSE and highest pseudo-R² values. Therefore, the Baranyi model was selected as the best primary model. The maximum specific growth rate (r_max) and lag phase duration (λ) obtained from the Baranyi model were fitted to secondary models, namely the Ratkowsky and Arrhenius models. High pseudo-R² and low MSE values indicated that the Arrhenius model has a high goodness of fit for determining the effect of temperature on r_max. The observed number of Pseudomonas spp. on sliced mushrooms from independent experiments was compared with the number predicted by the models by considering the B_f and A_f values, which were found to be 0.974 and 1.036, respectively. The correlation between the observed and predicted number of Pseudomonas spp. was high. Mushroom spoilage was simulated as a function of temperature with the models used. The models used for Pseudomonas spp. growth can provide a fast and cost-effective alternative to traditional microbiological techniques to determine the effect of storage temperature on product shelf-life. The models can be used to evaluate the growth behaviour of Pseudomonas spp. on sliced mushroom, set limits for the quantitative detection of microbial spoilage and assess product shelf-life. Copyright © 2017 Elsevier B.V. All rights reserved.
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity achieving codes. [This is a shortened version of the actual abstract in the thesis.]
NASA Astrophysics Data System (ADS)
Bencheikh, Imane; El Hajjaji, Souad; Abourouh, Imane; Kitane, Said; Dahchour, Abdelmalek; El M'Rabet, Mohammadine
2017-04-01
Wastewater treatment has been the subject of numerous studies over recent decades, with interest continuously oriented toward cheaper and more efficient methods of treatment. Several treatment methods exist, including coagulation-flocculation, filtration, precipitation, ozonation, ion exchange, reverse osmosis, and advanced oxidation processes. The use of these methods has proved limited because of their high investment and operational costs. Adsorption can be an efficient, low-cost process for removing pollutants from wastewater. This method of treatment calls for a solid adsorbent, which constitutes the purification tool. Agricultural wastes have been widely exploited in this role. As we know, agricultural wastes are an important source of water pollution once discharged into the aquatic environment (rivers, seas, ...). The valorization of such wastes and their use allows the prevention of this problem, with economic and environmental benefits. In this context, our study aimed at testing the wastewater treatment capacity of adsorption onto holocellulose resulting from the valorization of an agricultural waste. In this study, methylene blue (MB) and methyl orange (MO) were selected as model pollutants for evaluating the adsorption capacity of holocellulose. The kinetics of adsorption were followed using UV-visible spectroscopy. In order to study the effect of the main parameters of the adsorption process and their mutual interactions, a full factorial design (type n^k) was used. A 2³ full factorial design analysis was performed to screen the parameters affecting dye removal efficiency. Using the experimental results, a linear mathematical model representing the influence of the different parameters and their interactions was obtained. The parametric study showed that the efficiency of the adsorption system (dyes/holocellulose) is mainly linked to pH variation. The best yields were observed for MB at pH = 10 and for MO at pH = 2. The kinetic data were analyzed using different models, namely the pseudo-first-order kinetic model, the pseudo-second-order kinetic model, and the intraparticle diffusion model. It was observed that the pseudo-second-order model best described the adsorption behavior of MB and MO onto holocellulose, suggesting that the adsorption mechanism might be a chemisorption process. In general, the results indicated that holocellulose is suitable as a sorbent material for the adsorption of MO and MB from aqueous solutions, given its high effectiveness and low cost.
Meghdadi, Aminreza
2018-05-02
Nitrate has been recognized as a global threat to environmental health. In this regard, the hyporheic zone (the saturated media beneath and adjacent to the stream bed) plays a crucial role in attenuating groundwater nitrate prior to discharge into surface water. While different nitrate removal pathways have been investigated over recent decades, the adsorption capacity of hyporheic sediments under natural conditions has not yet been identified. In this study, the natural attenuation capacity of the hyporheic sediments of the Ghezel-Ozan River, located in the north-west of Iran, was determined. The sampled sediments (from 1 m below the stream bed) were characterized via XRD, FT-IR, BET, SEM, BJH, and zeta potential. Nitrate adsorption was evaluated using a batch experiment with hyporheic pore-water from each study site. The study was performed in the hyporheic sediments of two morphologically different zones: Z1, located in the parafluvial zone, having a clay sediment texture (57.8% clay) with a smectite/illite mixed-layer clay type, and Z2, located in the river confluence area, containing a silty clay sediment texture (47.6% clay) with a smectite/kaolinite mixed-layer clay type. Data obtained from the batch experiment were subjected to pseudo-first-order, pseudo-second-order, intra-particle diffusion, and Elovich mass transfer kinetic models to characterize the nitrate adsorption mechanism. Furthermore, to replicate nitrate removal efficiencies of the hyporheic sediments under natural conditions, the sampled hyporheic pore-waters were applied as initial solutions in the batch experiment. The results for the artificial nitrate solution correlated well with the pseudo-second-order model (R² > 95% in both Z1 and Z2), and maximum removal efficiencies of 85.3% and 71.2% (adsorbent dosage 90 g/L, pH = 5.5, initial adsorbate concentration of 90 mg/L) were achieved in Z1 and Z2, respectively. The results of the nitrate adsorption analysis revealed that the nitrate removal efficiencies varied from 17.24 ± 1.86% in Z1 during the wet season to 28.13 ± 0.89% in Z2 during the dry season. The results obtained by this study yielded strong evidence of the potential of hyporheic sediments to remove nitrate from an aqueous environment with great efficiency. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Lianchun; Liu, Liwei
2014-03-01
The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in neuron population, as well as the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency also depends on the characters of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.
Closing the evidence gap in infectious disease: point-of-care randomization and informed consent.
Huttner, A; Leibovici, L; Theuretzbacher, U; Huttner, B; Paul, M
2017-02-01
The informed consent document is intended to provide basic rights to patients but often fails to do so. Patients' autonomy may be diminished by virtue of their illness; evidence shows that even patients who appear to be ideal candidates for understanding and granting informed consent rarely are, particularly those with acute infections. We argue that for low-risk trials whose purpose is to evaluate nonexperimental therapies or other measures towards which the medical community is in a state of equipoise, ethics committees should play a more active role in a more standardized fashion. Patients in the clinic are continually subject to spontaneous 'pseudo-randomizations' based on local dogma and the anecdotal experience of their physicians. Stronger ethics oversight would allow point-of-care trials to structure these spontaneous randomizations, using widely available informatics tools, in combination with opt-out informed consent where deemed appropriate. Copyright © 2016. Published by Elsevier Ltd.
Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces
NASA Astrophysics Data System (ADS)
Vacaru, S. I.
2012-03-01
We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculi are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and vice versa are analyzed.
NASA Astrophysics Data System (ADS)
Kiyohara, Shin; Mizoguchi, Teruyasu
2018-03-01
Grain boundary segregation of dopants plays a crucial role in materials properties. To investigate dopant segregation behavior at grain boundaries, an enormous number of combinations has to be considered for the segregation of multiple dopants at complex grain boundary structures. Here, two data mining techniques, random-forests regression and the genetic algorithm, were applied to determine stable segregation sites at grain boundaries efficiently. Using the random-forests method, a predictive model was constructed from 2% of the segregation configurations, and it was shown that this model could determine the stable segregation configurations. Furthermore, the genetic algorithm also successfully determined the most stable segregation configuration with great efficiency. We demonstrate that these approaches are quite effective for investigating dopant segregation behaviors at grain boundaries.
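A minimal sketch of the screening idea: train a random-forests regressor on a small fraction of evaluated configurations and rank the remainder by predicted energy (scikit-learn; the pairwise energy function below is a synthetic stand-in for first-principles segregation energies):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
nsites, nconf = 12, 4000
# Each row: a binary occupation pattern of dopants over candidate GB sites.
configs = rng.integers(0, 2, size=(nconf, nsites))
pair = rng.normal(size=(nsites, nsites))
energies = np.einsum("ci,ij,cj->c", configs, pair, configs)  # stand-in energies

# Train on 2% of the configurations, as in the study, and screen the rest.
idx = rng.permutation(nconf)
train, rest = idx[: nconf // 50], idx[nconf // 50:]
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(configs[train], energies[train])

pred = rf.predict(configs[rest])
best = rest[np.argmin(pred)]        # configuration predicted most stable
print("energy of predicted-best config:", energies[best])
print("true minimum over screened set:", energies[rest].min())
```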
ViBe: a universal background subtraction algorithm for video sequences.
Barnich, Olivier; Van Droogenbroeck, Marc
2011-06-01
This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by randomly choosing which values to replace in the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm, reduced to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
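A minimal grayscale sketch of the described mechanics: per-pixel sample sets, conservative random substitution, and propagation to a neighbor. The parameter values follow commonly cited ViBe defaults (N = 20, R = 20, two matches, subsampling factor 16), which are assumptions here rather than quotes from the paper:

```python
import numpy as np

N, R, MIN_MATCHES, SUBSAMPLE = 20, 20, 2, 16

def vibe_step(frame, samples, rng):
    """One ViBe-style update. frame: (H,W) uint8; samples: (N,H,W) model."""
    diffs = np.abs(samples.astype(int) - frame.astype(int))
    background = (diffs < R).sum(axis=0) >= MIN_MATCHES

    # Conservative, time-subsampled update of background pixels only;
    # a *random* sample is replaced, not the oldest one.
    update = background & (rng.random(frame.shape) < 1.0 / SUBSAMPLE)
    ys, xs = np.nonzero(update)
    samples[rng.integers(0, N, ys.size), ys, xs] = frame[ys, xs]

    # Spatial propagation: also seed the model of a random neighbor.
    ny = np.clip(ys + rng.integers(-1, 2, ys.size), 0, frame.shape[0] - 1)
    nx = np.clip(xs + rng.integers(-1, 2, xs.size), 0, frame.shape[1] - 1)
    samples[rng.integers(0, N, ys.size), ny, nx] = frame[ys, xs]
    return ~background                      # foreground mask

rng = np.random.default_rng(5)
frame0 = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)
samples = np.repeat(frame0[None], N, axis=0)   # initialize from first frame
mask = vibe_step(frame0, samples, rng)
print("foreground pixels:", int(mask.sum()))   # 0: first frame matches itself
```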
E-learning platform for automated testing of electronic circuits using signature analysis method
NASA Astrophysics Data System (ADS)
Gherghina, Cǎtǎlina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petricǎ, Gabriel
2016-12-01
Dependability of electronic circuits can be ensured only through testing of circuit modules. This is done by generating test vectors and applying them to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from the conception and design stage, through production, to repairs during the product's operation. This paper presents the platform developed by the authors for training in testability in electronics in general, and in the use of the signature analysis method in particular. The platform allows highlighting the two approaches in the field, namely analog and digital signatures of circuits. As part of this e-learning platform, a database of signatures of different electronic components has been developed, meant to put into the spotlight different fault-detection techniques and, building on these, self-repairing techniques for systems with such components. An approach for realizing self-testing circuits based on the MATLAB environment and using the signature analysis method is proposed. This paper analyses the benefits of the signature analysis method and also simulates signature analyzer performance based on the use of pseudo-random sequences.
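The core of signature analysis is compact: the circuit's response stream is fed into a linear-feedback shift register, and any fault that changes the stream almost certainly changes the final register state. A minimal single-input signature register sketch (the 16-bit tap positions are chosen for illustration, not taken from the platform):

```python
def signature(bits, taps=(15, 10, 3, 0), width=16):
    """Compact a response bit stream into an LFSR signature.

    Each input bit is XORed into the feedback of a Fibonacci-style LFSR;
    the final register state is the signature. Tap positions here are
    illustrative, not a specific analyzer's polynomial.
    """
    reg = 0
    for b in bits:
        fb = b
        for t in taps:
            fb ^= (reg >> t) & 1          # XOR the tapped register bits
        reg = ((reg << 1) | fb) & ((1 << width) - 1)
    return reg

good = [1, 0, 1, 1, 0, 0, 1, 0] * 8       # fault-free response stream
faulty = good.copy()
faulty[13] ^= 1                            # a single stuck bit
print(hex(signature(good)), hex(signature(faulty)))  # signatures differ
```

Comparing the computed signature against a stored golden signature is what turns this into a self-testing mechanism; aliasing (a faulty stream mapping to the golden signature) occurs with probability about 2^-width.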
Reflectively Coupled Waveguide Photodetector for High Speed Optical Interconnection
Hsu, Shih-Hsiang
2010-01-01
To fully utilize GaAs high drift mobility, techniques to monolithically integrate In0.53Ga0.47As p-i-n photodetectors with GaAs based optical waveguides using total internal reflection coupling are reviewed. Metal coplanar waveguides, deposited on top of the polyimide layer for the photodetector’s planarization and passivation, were then uniquely connected as a bridge between the photonics and electronics to illustrate the high-speed monitoring function. The photodetectors were efficiently implemented and imposed on the echelle grating circle for wavelength division multiplexing monitoring. In optical filtering performance, the monolithically integrated photodetector channel spacing was 2 nm over the 1,520–1,550 nm wavelength range and the pass band was 1 nm at the −1 dB level. For high-speed applications the full-width half-maximum of the temporal response and 3-dB bandwidth for the reflectively coupled waveguide photodetectors were demonstrated to be 30 ps and 11 GHz, respectively. The bit error rate performance of this integrated photodetector at 10 Gbit/s with 2^7-1 long pseudo-random bit sequence non-return to zero input data also showed error-free operation. PMID:22163502
Tripathi, Pooja; Pandey, Paras N
2017-07-07
The present work employs pseudo amino acid composition (PseAAC) for encoding protein sequences in numeric form. The encoded sequences are then arranged in a similarity matrix, which serves as input for a spectral graph clustering method. Spectral methods have been used previously for clustering protein sequences, but they used pairwise alignment scores of the protein sequences in the similarity matrix. The alignment score depends on the length of the sequences, so clustering short and long sequences together may not be a good idea; this motivated the idea of combining PseAAC with a spectral clustering algorithm. We extensively tested our method and compared its performance with other existing machine learning methods. It is consistently observed that the number of clusters we obtain for a given set of proteins is close to the number of superfamilies in that set, and that PseAAC combined with spectral graph clustering shows the best classification results.
NASA Astrophysics Data System (ADS)
Graczykowski, B.; Alzina, F.; Gomis-Bresco, J.; Sotomayor Torres, C. M.
2016-01-01
In this paper, we report a theoretical investigation of surface acoustic waves propagating in a one-dimensional phononic crystal. Using finite element method eigenfrequency and frequency response studies, we develop two model geometries suitable for distinguishing true and pseudo (or leaky) surface acoustic waves and for determining their propagation through finite-size phononic crystals, respectively. The novelty of the first model comes from the application of a surface-like criterion and, additionally, a functional damping domain. Exemplary calculated band diagrams show sorted branches of true and pseudo surface acoustic waves and their quantified surface confinement. The second model gives a complementary study of the transmission, reflection, and surface-to-bulk losses of Rayleigh surface waves in the case of a phononic crystal with a finite number of periods. Here, we demonstrate that a non-zero transmission within non-radiative band gaps can be carried via leaky modes originating from the coupling of local resonances with propagating waves in the substrate. Finally, we show that the transmission, reflection, and surface-to-bulk losses can be effectively optimised by tuning the geometrical properties of a stripe.
TemperSAT: A new efficient fair-sampling random k-SAT solver
NASA Astrophysics Data System (ADS)
Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.
The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
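A minimal sketch of the core idea — parallel tempering Monte Carlo sampling of SAT solutions, with the energy defined as the number of unsatisfied clauses — is given below in Python. This is not the TemperSAT implementation; the temperature ladder, sweep count, and toy formula are illustrative.

```python
import math
import random

random.seed(1)

def energy(assign, clauses):
    # Energy = number of unsatisfied clauses; a satisfying assignment has 0.
    return sum(
        not any((assign[abs(l) - 1] > 0) == (l > 0) for l in cl)
        for cl in clauses
    )

def temper_sat(clauses, n_vars, betas=(0.2, 0.5, 1.0, 2.0), sweeps=2000):
    replicas = [[random.choice([-1, 1]) for _ in range(n_vars)] for _ in betas]
    solutions = set()
    for _ in range(sweeps):
        for rep, beta in zip(replicas, betas):
            i = random.randrange(n_vars)
            before = energy(rep, clauses)
            rep[i] = -rep[i]                       # propose a spin flip
            d_e = energy(rep, clauses) - before
            if d_e > 0 and random.random() >= math.exp(-beta * d_e):
                rep[i] = -rep[i]                   # reject the uphill move
            if energy(rep, clauses) == 0:
                solutions.add(tuple(rep))
        # Replica-exchange step between neighbouring temperatures.
        for k in range(len(betas) - 1):
            d_b = betas[k + 1] - betas[k]
            d_e = energy(replicas[k + 1], clauses) - energy(replicas[k], clauses)
            if random.random() < min(1.0, math.exp(d_b * d_e)):
                replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
    return solutions

# Toy 3-SAT formula over 4 variables (literal +i means x_i, -i means NOT x_i).
clauses = [(1, 2, -3), (-1, 3, 4), (2, -4, 1), (-2, 3, -4)]
print(len(temper_sat(clauses, 4)), "distinct solutions sampled")
```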
NASA Astrophysics Data System (ADS)
Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro
2006-04-01
A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in the 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multi-phase pseudo-asynchronous operations with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. This chip operates at 33 MHz in 8-LUT cascades at 122 mW. Benchmark results show that it achieves performance comparable to field-programmable gate arrays (FPGAs).
High-Speed Digital Interferometry
NASA Technical Reports Server (NTRS)
De Vine, Glenn; Shaddock, Daniel A.; Ware, Brent; Spero, Robert E.; Wuchenich, Danielle M.; Klipstein, William M.; McKenzie, Kirk
2012-01-01
Digitally enhanced heterodyne interferometry (DI) is a laser metrology technique employing pseudo-random noise (PRN) codes phase-modulated onto an optical carrier. Combined with heterodyne interferometry, the PRN code is used to select individual signals, returning the inherent interferometric sensitivity determined by the optical wavelength. The signal isolation arises from the autocorrelation properties of the PRN code, enabling both rejection of spurious signals (e.g., from scattered light) and multiplexing capability using a single metrology system. The minimum separation of optical components is determined by the wavelength of the PRN code.
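The signal-isolation property rests on the sharp autocorrelation of the PRN code. The sketch below is illustrative only: a 127-chip m-sequence, with code delays standing in for optical path delays, shows how correlating against shifted copies of the local code picks out individual delayed returns.

```python
import numpy as np

def m_sequence(taps, nbits):
    # Maximal-length LFSR sequence mapped to +/-1 chips.
    state = [1] * nbits
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(1 - 2 * state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return np.array(out)

code = m_sequence(taps=(6, 5), nbits=7)  # 127-chip PRN code (PRBS7-style)
L = len(code)

# Two interferometric returns with different code delays, plus noise.
rng = np.random.default_rng(2)
signal = 0.8 * np.roll(code, 17) + 0.3 * np.roll(code, 55)
signal = signal + 0.1 * rng.standard_normal(L)

# Circular correlation against the local code picks out each delay:
corr = np.array([np.dot(signal, np.roll(code, d)) / L for d in range(L)])
print(corr.argmax())         # -> 17 (strongest return)
print(np.argsort(corr)[-2])  # -> 55 (second return)
```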
2010-05-01
…as Link-11, Link-16 and VMF. It also includes future systems such as Link-22 (using the typical HF & UHF frequency bands) and technologies that… triangulate and find the precise geolocation of the enemy target. If the target happens to relocate, TTNT is able to update the target with high accuracy… Link-22 operates in either the HF or UHF frequency bands. In each of these frequency bands the system can operate on a single frequency or a pseudo-random…
2013-01-01
…are calculated from coherently-detected fields, e.g., coherent Doppler lidar. Our CRB results reveal that the best-case mean-square error scales as 1… 1088 (2001). 7. K. Asaka, Y. Hirano, K. Tatsumi, K. Kasahara, and T. Tajime, "A pseudo-random frequency modulation continuous wave coherent lidar using…" …"multiple returns," IEEE Trans. Pattern Anal. Mach. Intell. 29, 2170–2180 (2007). 11. T. J. Karr, "Atmospheric phase error in coherent laser radar…"
Laboratory complex for simulation of navigation signals of pseudosatellites
NASA Astrophysics Data System (ADS)
Ratushniak, V. N.; Gladyshev, A. B.; Sokolovskiy, A. V.; Mikhov, E. D.
2018-05-01
The article considers the organization and structure of navigation signals for pseudosatellites of a short-range navigation system, and questions of their formation, based on a National Instruments hardware-software complex. A software model is presented that performs the formation and management of the pseudo-random sequence of a navigation signal, and of the format of the transmitted pseudosatellite navigation information. A design variant for the transmitting equipment of the pseudosatellite base stations is also provided.
Airborne Pseudolites in a Global Positioning System (GPS) Degraded Environment
2011-03-01
…continuously two types of encoded pseudo-random noise (PRN) signals using two center frequencies in the L-band, namely L1 (1575.42 MHz) and L2… Jovanevic, Aleksandar, Nikhil Bhaita, Joseph Noronha, Brijesh Sirpatil, Michael Kirchner, and Deepak Saxena. "Piercing the Veil". GPS World, 30–37, March… difficulties in receiver design. • Pseudolites can operate either at GPS L1, L2 and L5, or any other available frequency band. Similarly, other parameters to…
Freely Drifting Swallow Float Array: August 1988 Trip Report
1989-01-01
…in situ measurements of the floats' clock drifts were obtained; the absolute drifts were on the order of one part in 10^5 and the relative clock… (FSK mode). That is, the pseudo-random noise generator (PRNG) created a string of ones and zeros; a zero caused a 12 kHz tone to be broadcast from…
2008-03-01
…for military use. The L2 carrier frequency operates at 1227.6 MHz and transmits only the precise code. Each satellite transmits a unique pseudo-random noise (PRN) code by which it is identified. GPS receivers require a LOS to four satellite signals to accurately estimate a position in three… receiver frequency errors, noise addition, and multipath effects. He also developed four methods for estimating the cross-correlation peak within a sampled…
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo-random, normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
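A compact sketch of this kind of Monte Carlo study, under illustrative choices (a cubic population polynomial, least-squares fits, and a simple |t|-ratio deletion rule that merely stands in for the paper's decision procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Population model: cubic response surface whose x^2 coefficient is zero.
beta_true = np.array([2.0, 1.5, 0.0, -0.8])   # coefficients of 1, x, x^2, x^3
x = np.linspace(-1, 1, 15)
X = np.vander(x, 4, increasing=True)           # columns 1, x, x^2, x^3
population = X @ beta_true

def one_experiment(sigma=0.3, t_cut=2.0):
    y = population + sigma * rng.standard_normal(len(x))     # add errors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Crude deletion rule: drop terms whose |t|-like ratio is small.
    resid = y - X @ beta
    s2 = resid @ resid / (len(x) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    keep = np.abs(beta / se) > t_cut
    keep[0] = True                                           # keep intercept
    beta_red, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    pred = X[:, keep] @ beta_red
    return np.mean((pred - population) ** 2)

mse = np.mean([one_experiment() for _ in range(500)])
print(f"mean squared prediction error of reduced models: {mse:.4f}")
```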
Wang, Shu; Li, Yun; Wu, Xiaoli; Ding, Meijuan; Yuan, Lihua; Wang, Ruoyu; Wen, Tingting; Zhang, Jun; Chen, Lina; Zhou, Xuemin; Li, Fei
2011-02-28
To assess the potential risks associated with the environmental exposure of steroid estrogens, a novel, highly efficient and selective estrogen enrichment procedure based on the use of a molecularly imprinted polymer has been developed and evaluated. Herein, an estrogen analogue, namely 17-ethyl estradiol (EE2), was used as the pseudo-template to avoid leakage of trace amounts of the target analytes. The resulting pseudo molecularly imprinted polymers (PMIPs) showed large sorption capacity, high recognition ability and fast binding kinetics for estrogens. Moreover, using these imprinted particles as dispersive solid-phase extraction (DSPE) materials, the amounts of three estrogens (E1, E2 and E3) detected by HPLC-UV in chicken tissue samples were 0.28, 0.31 and 0.17 μg g−1, and the recoveries were 72.5–78.7%, 90.3–95.2% and 80.5–83.6% in spiked chicken tissue samples with RSD <7%, respectively. All these results reveal that EE2-PMIPs as DSPE materials coupled with HPLC-UV could be applied to the highly selective separation and sensitive determination of trace estrogens in chicken tissue samples.
Chang, Xiu-Lian; Wang, Dong; Chen, Bi-Yun; Feng, Yong-Mei; Wen, Shao-Hong; Zhan, Peng-Yuan
2012-03-07
Adsorption of roselle anthocyanins, a natural pigment, onto various macroporous resins was optimized to develop a simple and efficient process for the industrial separation and purification of roselle anthocyanins. Nine different macroporous resins (AB-8, X-5, HPD-100, SP-207, XAD-4, LS-305A, DM-21, LS-610B, and LS-305) were evaluated for the adsorption properties of the anthocyanins extracted from the calyx extract of Hibiscus sabdariffa L. The influences of phase contact time, solution pH, initial anthocyanin concentration, and ethanol concentration with different citric acid amounts were studied by the static adsorption/desorption method. The adsorption isotherm data were fitted well by the Langmuir isotherm, and according to this model, LS-610B and LS-305 exhibited the highest monolayer sorption capacities of 31.95 and 38.16 mg/g, respectively. The kinetic data were modeled using pseudo-first-order, pseudo-second-order, and intraparticle diffusion equations. The experimental data were well described by the pseudo-second-order kinetic model. Continuous column adsorption-regeneration cycles indicated negligible capacity loss of LS-305 during operation. The overall yield of pigment product was 49.6 mg/g dried calyces. The content of roselle anthocyanins in the pigment product was 4.85%.
Selvasembian, Rangabhashiyam; P, Balasubramanian
2018-05-12
The biosorption potential of the novel lignocellulosic biosorbents Musa sp. peel (MSP) and Aegle marmelos shell (AMS) was investigated for the removal of the toxic triphenylmethane dye malachite green (MG) from aqueous solution. Batch experiments were performed to study the biosorption characteristics of malachite green onto the lignocellulosic biosorbents as a function of initial solution pH, initial malachite green concentration, biosorbent dosage, and temperature. Biosorption equilibrium data were fitted to two- and three-parameter isotherm models; the three-parameter isotherm models better described the equilibrium data. The maximum monolayer biosorption capacities obtained using the Langmuir model for MG removal using MSP and AMS were 47.61 and 18.86 mg/g, respectively. The biosorption kinetic data were analyzed using pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models. The pseudo-second-order kinetic model best fitted the experimental data, indicating that MG biosorption using MSP and AMS is a chemisorption process. The removal of MG using AMS was found to be highly dependent on the process temperature, and the removal efficiency of MG declined at higher concentrations of NaCl and CaCl2. The regeneration test of the biosorbents toward MG removal was successful for up to three cycles.
The removal of chloramphenicol from water through adsorption on activated carbon
NASA Astrophysics Data System (ADS)
Lach, Joanna; Ociepa-Kubicka, Agnieszka
2017-10-01
The presented research investigated the removal of chloramphenicol from water solutions on a selected activated carbon available in three grades with different porous structures and surface chemical compositions. Two models of adsorption kinetics were examined, i.e. the pseudo-first-order and the pseudo-second-order models. For all examined cases, the results with the higher value of the coefficient R2 were described by the equation for pseudo-second-order kinetics. The adsorption kinetics were also investigated on activated carbons modified with ozone. The measurements were taken from solutions with pH values of 2 and 7. Chloramphenicol was most efficiently adsorbed on the activated carbon F-300 from solutions with pH = 7, and on the activated carbon ROW 08 Supra from solutions with pH = 2. The adsorption of this antibiotic was in the majority of cases higher from solutions with pH = 2 than pH = 7. The modification of the activated carbons with ozone enhanced their adsorption capacities for chloramphenicol. The adsorption is influenced by the modification method of the activated carbon (i.e. the duration of ozonation of the activated carbon solution and the solution temperature). The results were described with the Freundlich and Langmuir adsorption isotherm equations; both models described the obtained results well (high R2 values).
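For readers unfamiliar with the two kinetic models, the sketch below fits both to illustrative uptake data (synthetic numbers, not the paper's measurements) and compares R² values; the pseudo-first-order form is q(t) = qe(1 − e^(−k1·t)) and the pseudo-second-order form is q(t) = qe²·k2·t / (1 + qe·k2·t).

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative uptake data q(t) [mg/g]; real studies use measured values.
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], float)   # contact time, min
q = np.array([11.8, 18.9, 27.4, 34.6, 37.2, 39.0, 39.6, 40.1])

def pfo(t, qe, k1):   # pseudo-first-order: q = qe*(1 - exp(-k1*t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):   # pseudo-second-order: q = qe^2*k2*t/(1 + qe*k2*t)
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

for name, model, p0 in [("PFO", pfo, (40, 0.05)), ("PSO", pso, (45, 0.002))]:
    popt, _ = curve_fit(model, t, q, p0=p0)
    r2 = 1 - np.sum((q - model(t, *popt))**2) / np.sum((q - q.mean())**2)
    print(f"{name}: qe={popt[0]:.1f} mg/g, k={popt[1]:.4f}, R^2={r2:.4f}")
```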
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
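A minimal sketch of the pseudo-predator construction step, with synthetic prey signatures and a made-up diet vector (the paper's contribution, an algorithm for choosing bootstrap sample sizes objectively, is not reproduced here); it shows how the bootstrap sample size controls the sampling noise of the resulting signature.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic prey library: rows are individual prey, columns fatty acids.
prey = {
    "seal":   rng.dirichlet(np.full(6, 8.0), size=30),
    "fish":   rng.dirichlet(np.full(6, 4.0), size=30),
    "walrus": rng.dirichlet(np.full(6, 6.0), size=30),
}
diet = {"seal": 0.6, "fish": 0.3, "walrus": 0.1}   # known "true" diet

def pseudo_predator(prey, diet, n_boot):
    """Bootstrap n_boot signatures per prey type, average, and mix by diet."""
    sig = np.zeros(6)
    for species, prop in diet.items():
        pool = prey[species]
        idx = rng.integers(len(pool), size=n_boot)   # bootstrap sample
        sig += prop * pool[idx].mean(axis=0)
    return sig / sig.sum()                           # renormalize

# Larger bootstrap samples shrink the sampling noise in the signature.
for n in (5, 30, 300):
    reps = np.stack([pseudo_predator(prey, diet, n) for _ in range(200)])
    print(n, "->", reps.std(axis=0).mean().round(4))
```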
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect path between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities.
Sobrinho, Bruna Fernanda; de Camargo, Luana Mocelin; Sandrini-Neto, Leonardo; Kleemann, Cristian Rafael; Machado, Eunice da Costa; Mafra, Luiz Laureno
2017-01-01
In order to assess the effects of Fe-enrichment on the growth and domoic acid (DA) production of the toxigenic diatom Pseudo-nitzschia multiseries, static cultures that received the addition of different iron (Fe) concentrations were maintained for 30 days. Intra- and extracellular DA concentrations were evaluated over time, and growth and chain-formation were compared to those of non-toxic diatoms, Bacillaria sp. Growth rates of P. multiseries (μ = 0.45–0.73 d−1) were similar among cultures containing different Fe concentrations. Likewise, the similar incidence and length of P. multiseries stepped cell chains (usually 2–4; up to 8-cell long) among the treatments reinforces that the cultures were not growth-inhibited under any condition tested, suggesting an efficient Fe acquisition mechanism. Moreover, DA concentrations were significantly higher under the highest Fe concentration, indicating that Fe is required for toxin synthesis. Bacillaria sp. reached comparable growth rates under the same Fe concentrations, except when the dissolved cell contents from a P. multiseries culture was added. The 50–70% reduction in cell density and 70–90% decrease in total chlorophyll-a content of Bacillaria sp. at early stationary growth phase indicates, for the first time, an allelopathic effect of undetermined compounds released by Pseudo-nitzschia to another diatom species. PMID:29064395
Adsorption of heavy metals from aqueous solutions by Mg-Al-Zn mingled oxides adsorbent.
El-Sayed, Mona; Eshaq, Gh; ElMetwally, A E
2016-10-01
In our study, Mg-Al-Zn mingled oxides were prepared by the co-precipitation method. The structure, composition, morphology and thermal stability of the synthesized Mg-Al-Zn mingled oxides were analyzed by powder X-ray diffraction, Fourier transform infrared spectrometry, N2 physisorption, scanning electron microscopy, differential scanning calorimetry and thermogravimetry. Batch experiments were performed to study the adsorption behavior of cobalt(II) and nickel(II) as a function of pH, contact time, initial metal ion concentration, and adsorbent dose. The maximum adsorption capacity of Mg-Al-Zn mingled oxides for cobalt and nickel metal ions was 116.7 mg g−1 and 70.4 mg g−1, respectively. The experimental data were analyzed using pseudo-first- and pseudo-second-order kinetic models in linear and nonlinear regression analysis. The kinetic studies showed that the adsorption process could be described by the pseudo-second-order kinetic model. Experimental equilibrium data were well represented by the Langmuir and Freundlich isotherm models. Also, the maximum monolayer capacity, qmax, obtained was 113.8 mg g−1 and 79.4 mg g−1 for Co(II) and Ni(II), respectively. Our results showed that Mg-Al-Zn mingled oxides can be used as an efficient adsorbent material for the removal of heavy metals from industrial wastewater samples.
Determination of VA health care costs.
Barnett, Paul G
2003-09-01
In the absence of billing data, alternative methods are used to estimate the cost of hospital stays, outpatient visits, and treatment innovations in the U.S. Department of Veterans Affairs (VA). The choice of method represents a trade-off between accuracy and research cost. The direct measurement method gathers information on staff activities, supplies, equipment, space, and workload. Since it is expensive, direct measurement should be reserved for finding short-run costs, evaluating provider efficiency, or determining the cost of treatments that are innovative or unique to VA. The pseudo-bill method combines utilization data with a non-VA reimbursement schedule. The cost regression method estimates the cost of VA hospital stays by applying the relationship between cost and characteristics of non-VA hospitalizations. The Health Economics Resource Center uses pseudo-bill and cost regression methods to create an encounter-level database of VA costs. Researchers are also beginning to use the VA activity-based cost allocation system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oang, Key Young; Yang, Cheolhee; Muniyappan, Srinivasan
Determination of the optimum kinetic model is an essential prerequisite for characterizing the dynamics and mechanism of a reaction. Here, we propose a simple method, termed singular value decomposition-aided pseudo principal-component analysis (SAPPA), to facilitate determination of the optimum kinetic model from time-resolved data by bypassing any need to examine candidate kinetic models. We demonstrate the wide applicability of SAPPA by examining three different sets of experimental time-resolved data and show that SAPPA can efficiently determine the optimum kinetic model. In addition, the results of SAPPA for both time-resolved X-ray solution scattering (TRXSS) and transient absorption (TA) data of the same protein reveal that global structural changes of the protein, which are probed by TRXSS, may occur more slowly than local structural changes around the chromophore, which are probed by TA spectroscopy.
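The SVD step at the heart of such analyses can be illustrated with synthetic data (two exponential kinetic components plus noise; this is not the authors' SAPPA procedure itself): the number of significant singular values indicates how many kinetic components the data support, and the leading right singular vectors carry the time dependence a candidate kinetic model must reproduce.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic time-resolved data: two exponential kinetic components + noise.
t = np.linspace(0, 10, 200)            # time points
q = np.linspace(0, 1, 80)              # e.g., scattering vector / wavelength
c1, c2 = np.exp(-t / 1.5), np.exp(-t / 6.0)   # concentration profiles
s1, s2 = np.sin(3 * q), np.cos(5 * q)         # species "spectra"
data = (np.outer(s1, c1) + np.outer(s2, c2)
        + 0.02 * rng.standard_normal((80, 200)))

U, S, Vt = np.linalg.svd(data, full_matrices=False)
print("leading singular values:", S[:5].round(2))
# A sharp drop after the 2nd value indicates two kinetic components; the
# first right singular vectors Vt[0], Vt[1] carry the time dependence that
# any acceptable kinetic model must reproduce.
```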
Jingbo, Xia; Silan, Zhang; Feng, Shi; Huijuan, Xiong; Xuehai, Hu; Xiaohui, Niu; Zhi, Li
2011-09-07
To evaluate the possibility of an unknown protein being a resistance gene against Xanthomonas oryzae pv. oryzae, a different mode of pseudo amino acid composition (PseAAC) is proposed to formulate the protein samples by integrating the amino acid composition with the chaos game representation (CGR) method. Numerical comparisons of triangle, quadrangle and 12-vertex polygon CGRs are carried out to evaluate the efficiency of using these fractal figures in classifiers. The numerical results show that, among the three polygon methods, the triangle method offers good fractal visualization and performs best in classifier construction. Using triangle + 12-vertex polygon CGR as the mathematical feature, the classifier achieves 98.13% accuracy in the jackknife test, with an MCC of 0.8462.
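The chaos game representation itself is easy to sketch. Below is a toy triangle CGR in Python under an illustrative three-group amino-acid alphabet (the grouping and the histogram feature are placeholders; the paper's exact construction may differ).

```python
import numpy as np

# Illustrative 3-way grouping of amino acids (hydrophobic / polar / charged);
# the actual grouping used with a triangle CGR may differ.
GROUPS = {"hydrophobic": "AVLIMFWPG", "polar": "STCYNQ", "charged": "DEKRH"}
VERTS = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])  # triangle

def cgr_points(seq):
    which = {aa: k for k, g in enumerate(GROUPS.values()) for aa in g}
    pt = np.array([0.5, 0.29])          # start near the centroid
    pts = []
    for aa in seq:
        if aa in which:
            pt = (pt + VERTS[which[aa]]) / 2.0   # move halfway to the vertex
            pts.append(pt.copy())
    return np.array(pts)

pts = cgr_points("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
# Coarse occupancy histogram over a 4x4 grid serves as a fixed-length
# feature vector that can augment PseAAC in a classifier.
hist, *_ = np.histogram2d(pts[:, 0], pts[:, 1], bins=4, range=[[0, 1], [0, 1]])
print((hist / hist.sum()).round(2))
```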
Pseudo-thermosetting chitosan hydrogels for biomedical application.
Berger, J; Reist, M; Chenite, A; Felt-Baeyens, O; Mayer, J M; Gurny, R
2005-01-20
To prepare transparent chitosan/beta-glycerophosphate (betaGP) pseudo-thermosetting hydrogels, the deacetylation degree (DD) of chitosan has been modified by reacetylation with acetic anhydride. Two methods (I and II) of reacetylation have been compared and have shown that the use of previously filtered chitosan, dilution of acetic anhydride and reduction of temperature in method II improves efficiency and reproducibility. Chitosans with DD ranging from 35.0 to 83.2% have been prepared according to method II under homogeneous and non-homogeneous reacetylation conditions and the turbidity of chitosan/betaGP hydrogels containing homogeneously or non-homogeneously reacetylated chitosan has been investigated. Turbidity is shown to be modulated by the DD of chitosan and by the homogeneity of the medium during reacetylation, which influences the distribution mode of the chitosan monomers. The preparation of transparent chitosan/betaGP hydrogels requires a homogeneously reacetylated chitosan with a DD between 35 and 50%.
A coupled thermo-mechanical pseudo inverse approach for preform design in forging
NASA Astrophysics Data System (ADS)
Thomas, Anoop Ebey; Abbes, Boussad; Li, Yu Ming; Abbes, Fazilay; Guo, Ying-Qiao; Duval, Jean-Louis
2017-10-01
Hot forging is a process used to form difficult-to-form materials as well as to achieve complex geometries, made possible by the reduction of yield stress at high temperatures and a subsequent increase in formability. Numerical methods have been used to predict the material yield and the stress/strain states of the final product. The Pseudo Inverse Approach (PIA), developed in the context of cold forming, provides a quick estimate of the stress and strain fields in the final product for a given initial shape. In this paper, PIA is extended to include thermal effects on the forging process. A Johnson-Cook thermo-viscoplastic material law is considered and a staggered scheme is employed for the coupling between the mechanical and thermal problems. The results are compared with available commercial codes to show the efficiency and the limitations of PIA.
New operational matrices for solving fractional differential equations on the half-line.
Bhrawy, Ali H; Taha, Taha M; Alzahrani, Ebraheem O; Alzahrani, Ebrahim O; Baleanu, Dumitru; Alzahrani, Abdulrahim A
2015-01-01
In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques.
On the adaptivity and complexity embedded into differential evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senkerik, Roman; Pluhacek, Michal; Jasek, Roman
2016-06-08
This research deals with the comparison of two modern approaches for evolutionary algorithms: adaptivity and complex chaotic dynamics. The paper focuses on investigations of the chaos-driven Differential Evolution (DE) concept, in particular on embedding discrete dissipative chaotic systems as chaotic pseudo-random number generators (CPRNGs) for DE and comparing their influence on performance with the state-of-the-art adaptive representative, jDE. The research concentrates mainly on the possible disadvantages and advantages of both compared approaches. Repeated simulations with the Lozi map as the driving chaotic system were performed on a set of simple benchmark functions that are closer to real optimization problems. The obtained results are compared with the canonical, non-chaotic and non-adaptive DE. Results show that, for the test functions used, the performance of ChaosDE is better than jDE and canonical DE in most cases; furthermore, owing to the unique sequencing of the CPRNG given by the hidden chaotic dynamics, and the resulting better and faster selection of unique individuals from the population, ChaosDE is faster.
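To make the CPRNG idea concrete, here is a toy sketch of a Lozi-map-driven generator used to pick DE mutation indices; the parameters a = 1.7, b = 0.5 are the common chaotic choice and the rescaling is a crude illustration, not the paper's setup.

```python
def lozi_cprng(a=1.7, b=0.5, x=0.1, y=0.1):
    # Lozi map: x' = 1 - a*|x| + b*y, y' = x.  Iterates are crudely rescaled
    # to roughly [0, 1) and used in place of a uniform PRNG.
    while True:
        x, y = 1.0 - a * abs(x) + b * y, x
        yield (x + 2.0) / 4.0

rng = lozi_cprng()

def chaos_indices(pop_size, current, rng):
    """Draw three mutually distinct indices (!= current) for DE mutation."""
    picked = []
    while len(picked) < 3:
        # Scale then wrap so the limited attractor range still covers
        # the whole index set.
        idx = int(next(rng) * 1000) % pop_size
        if idx != current and idx not in picked:
            picked.append(idx)
    return picked

print(chaos_indices(pop_size=20, current=4, rng=rng))
```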
GPS-Like Phasing Control of the Space Solar Power System Transmission Array
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
2003-01-01
The problem of phasing of the Space Solar Power System's transmission array has been addressed by developing a GPS-like radio navigation system. The goal of this system is to provide power transmission phasing control for each node of the array that causes the power signals to add constructively at the ground reception station. The phasing control system operates in a distributed manner, which makes it practical to implement. A leader node and two radio navigation beacons are used to control the power transmission phasing of multiple follower nodes. The necessary one-way communications to the follower nodes are implemented using the RF beacon signals. The phasing control system uses differential carrier phase relative navigation/timing techniques. A special feature of the system is an integer ambiguity resolution procedure that periodically resolves carrier phase cycle count ambiguities via encoding of pseudo-random number codes on the power transmission signals. The system is capable of achieving phasing accuracies on the order of 3 mm down to 0.4 mm depending on whether the radio navigation beacons operate in the L or C bands.
Multiscale recurrence analysis of spatio-temporal data
NASA Astrophysics Data System (ADS)
Riedl, M.; Marwan, N.; Kurths, J.
2015-12-01
The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales that can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as more complicated examples: pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. Especially the complicated examples show the usefulness of the multi-scale consideration in order to take spatial patterns of different scales and with different rhythms into account. This mapogram approach thus promises new insights into problems of climatology, ecology, or medicine.
NASA Astrophysics Data System (ADS)
Pierro, Michele; Sassaroli, Angelo; Zheng, Feng; Fantini, Sergio
2011-02-01
We present a study of the relative phase of oscillations of cerebral oxy- and deoxy-hemoglobin concentrations in the low-frequency range, namely 0.04-0.12 Hz. We have characterized the potential contributions of noise to the measured phase distributions, and we have performed phase measurements on the brain of a human subject at rest, and on the brain of a human subject during stage I sleep. While phase distributions of pseudo hemodynamic oscillations generated from noise (obtained by applying to two independent sets of random numbers the same linear transformation that converts absorption coefficients at 690 and 830 nm into concentrations of oxy- and deoxy-hemoglobin) are peaked at 180°, those associated with real hemodynamic changes can be peaked around any value depending on the underlying physiology and hemodynamics. In particular, preliminary results reported here indicate a greater phase lead of deoxy-hemoglobin vs. oxy-hemoglobin low-frequency oscillations during stage I sleep (82° ± 55°) than while the subject is awake (19° ± 58°).
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2015-01-01
Digit-color synesthetes report experiencing colors when perceiving letters and digits. The conscious experience is typically unidirectional (e.g., digits elicit colors but not vice versa) but recent evidence shows subtle bidirectional effects. We examined whether short-term memory for colors could be affected by the order of presentation reflecting more or less structure in the associated digits. We presented a stream of colored squares and asked participants to report the colors in order. The colors matched each synesthete's colors for digits 1-9 and the order of the colors corresponded either to a sequence of numbers (e.g., [red, green, blue] if 1 = red, 2 = green, 3 = blue) or no systematic sequence. The results showed that synesthetes recalled sequential color sequences more accurately than pseudo-randomized colors, whereas no such effect was found for the non-synesthetic controls. Synesthetes did not differ from non-synesthetic controls in recall of color sequences overall, providing no evidence of a general advantage in memory for serial recall of colors.
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs the generation of pseudo-random numbers for the input variables of the resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented using specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
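A minimal sketch of the propagation-of-distributions step with correlated Gaussian inputs (the measurement function, means, uncertainties, and correlation here are illustrative, not the SPRT model of the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative: resistances at two fixed points with correlated uncertainties.
mean = np.array([25.000, 27.500])                # ohm
u = np.array([2e-4, 3e-4])                       # standard uncertainties, ohm
r = 0.8                                          # correlation coefficient
cov = np.array([[u[0]**2, r * u[0] * u[1]],
                [r * u[0] * u[1], u[1]**2]])

def measurand(R0, Rt):
    # Illustrative measurement function, e.g., a resistance ratio W = Rt/R0.
    return Rt / R0

samples = rng.multivariate_normal(mean, cov, size=200_000)
W = measurand(samples[:, 0], samples[:, 1])
print(f"W = {W.mean():.6f} +/- {W.std(ddof=1):.2e}")
print("95% coverage interval:", np.percentile(W, [2.5, 97.5]).round(6))
```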
NASA Astrophysics Data System (ADS)
Guillozet, Kathleen
2015-10-01
This paper describes the regulatory and compliance context for Oregon's emerging ecosystem services (ES) market in riparian shade to meet water quality obligations. In Oregon's market as with many other ES programs, contracts and other regulatory documents not only delimit the obligations and liabilities of different parties, but also constitute a primary mechanism through which ES service delivery is measured. Through a review of compliance criteria I find that under Oregon's shade trades, permittees are held to a number of input-based criteria, which essentially affirm that parties comply with predetermined practices and procedures, and one `pseudo output based' criterion, in which ES delivery is estimated through a model. The case presented in the paper critically engages with the challenges of measuring ES and in assessing the outcomes of ES projects. It places these challenges as interrelated and proposes that market designers, policymakers, and other stakeholders should consider explicit efficacy, efficiency, and equity targets.
A 0.9-V 12-bit 40-MSPS Pipeline ADC for Wireless Receivers
NASA Astrophysics Data System (ADS)
Ito, Tomohiko; Itakura, Tetsuro
A 0.9-V 12-bit 40-MSPS pipeline ADC with an I/Q amplifier sharing technique is presented for wireless receivers. To achieve high linearity even at a 0.9-V supply, the clock signals to the sampling switches are boosted over 0.9 V in the conversion stages. The clock-boosting circuit for lifting these clocks is shared between the I-ch ADC and the Q-ch ADC, reducing the area penalty. The low supply voltage narrows the available output range of the operational amplifier; a pseudo-differential (PD) amplifier with two-gain-stage common-mode feedback (CMFB) is therefore proposed in view of its wide output range and power efficiency. The ADC is fabricated in 90-nm CMOS technology. At 40 MS/s, the measured SNDR is 59.3 dB and the corresponding effective number of bits (ENOB) is 9.6. Up to the Nyquist frequency, the ENOB remains above 9.3. The ADC dissipates 17.3 mW/ch, making its performance suitable for ADCs in mobile wireless systems such as WLAN/WiMAX.
Lin, Liangbin; Lin, Xiaoru; Guo, Hongyu; Yang, Fafu
2017-07-19
This study focuses on the construction of novel diphenylacrylonitrile-connected BODIPY dyes with high fluorescence in both solution and an aggregated state by combining DRET and FRET processes in a single donor-acceptor system. The first BODIPY derivatives with one, two, or three AIE-active diphenylacrylonitrile groups were designed and synthesized in moderate yields. Strong fluorescence emissions were observed in the THF solution under excitation at the absorption wavelength of non-emissive diphenylacrylonitrile chromophores, implying the existence of the DRET process between the dark diphenylacrylonitrile donor and the emissive BODIPY acceptor. In the THF/H2O solution, the fluorescence intensity of the novel BODIPY derivatives gradually increased under excitation at the absorption wavelength of diphenylacrylonitrile chromophores, suggesting a FRET process between diphenylacrylonitrile and BODIPY moieties. A greater number of diphenylacrylonitrile units led to higher energy-transfer efficiencies. The pseudo-Stokes shift for both DRET and FRET processes was as large as 190 nm.
Method of gear fault diagnosis based on EEMD and improved Elman neural network
NASA Astrophysics Data System (ADS)
Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng
2017-05-01
Fault information for gear defects such as cracks and wear is usually difficult to diagnose because the fault signals are weak, so a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of intrinsic mode function (IMF) components are obtained by decomposing the denoised fault signals with EEMD, and the pseudo IMF components are eliminated using the correlation coefficient method to obtain the effective IMF components. The energy characteristic value of each effective component is calculated and used as the input feature of the Elman neural network; the improved Elman neural network extends the standard network by adding a feedback factor. Fault data for normal gears, broken teeth, cracked gears and worn gears were collected in the field and analyzed with the proposed diagnostic method. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
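The correlation-coefficient screening step can be sketched as follows (synthetic components stand in for EEMD output and the threshold value is illustrative): components weakly correlated with the raw signal are treated as pseudo IMFs and dropped, and the normalized energies of the survivors form the network's input features.

```python
import numpy as np

def select_effective_imfs(signal, imfs, threshold=0.2):
    """Drop pseudo IMFs whose correlation with the raw signal is weak."""
    keep = []
    for imf in imfs:
        rho = np.corrcoef(signal, imf)[0, 1]
        if abs(rho) >= threshold:
            keep.append(imf)
    energies = np.array([np.sum(imf**2) for imf in keep])
    return keep, energies / energies.sum()   # normalized energy features

# Demo with synthetic "IMFs": two real oscillatory components of the signal
# plus one unrelated (pseudo) component that the screening should reject.
t = np.linspace(0, 1, 1000)
c1 = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)
c2 = 0.5 * np.sin(2 * np.pi * 8 * t)
pseudo = 0.3 * np.random.default_rng(7).standard_normal(t.size)
signal = c1 + c2
kept, features = select_effective_imfs(signal, [c1, c2, pseudo])
print(len(kept), "effective IMFs; energy features:", features.round(3))
```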
Precorrection concepts for mobile terminals with processing satellites
NASA Astrophysics Data System (ADS)
Nakamoto, F. S.; Oreilly, M. P.; Wolfson, C. R.
It is pointed out that when the spacecraft must process a large number of users simultaneously, it becomes impractical for it to acquire and track each uplink signal. A solution is for the terminals to precorrect their uplink transmissions so that they reach the spacecraft in time and frequency synchronism with the spacecraft receiver. Two dimensions of precorrection, namely time and frequency, are addressed. Precorrection approaches are classified as open loop, pseudo-open loop, or pseudo-closed loop. Performance relationships are established, and the applicability, requirements, advantages, and disadvantages of each class are discussed. It is found that since time and frequency precorrection have opposite sensitivities to the frequency hopping rate, different classes will often be adopted for the two dimensions.
An optimal resolved rate law for kinematically redundant manipulators
NASA Technical Reports Server (NTRS)
Bourgeois, B. J.
1987-01-01
The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution has been found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to non-planar manipulators.
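A small sketch of the weighted locally-optimized solution for a redundant planar arm: minimizing q̇ᵀWq̇ subject to Jq̇ = ẋ gives q̇ = W⁻¹Jᵀ(JW⁻¹Jᵀ)⁻¹ẋ, and raising the weight of a joint (e.g., one approaching a reach limit) suppresses its rate. The link lengths, joint angles, and weights below are illustrative.

```python
import numpy as np

def planar_jacobian(q, lengths):
    """2xN Jacobian of the hand position of an N-link revolute planar arm."""
    n = len(q)
    J = np.zeros((2, n))
    for i in range(n):
        for j in range(i, n):
            ang = np.sum(q[: j + 1])
            J[0, i] -= lengths[j] * np.sin(ang)
            J[1, i] += lengths[j] * np.cos(ang)
    return J

def weighted_rates(J, xdot, w):
    # Minimize qdot^T W qdot subject to J qdot = xdot:
    #   qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot
    Winv = np.diag(1.0 / w)
    return Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T, xdot)

q = np.array([0.3, 0.6, -0.4, 0.2])          # 4 joints, 2-D task: redundant
J = planar_jacobian(q, lengths=[1.0, 0.8, 0.6, 0.4])
xdot = np.array([0.1, -0.05])                # desired hand velocity

# Heavier weight on joint 2 (e.g., nearing its reach limit) slows it down.
print(weighted_rates(J, xdot, w=np.ones(4)).round(4))
print(weighted_rates(J, xdot, w=np.array([1.0, 50.0, 1.0, 1.0])).round(4))
```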
Distribution of randomly diffusing particles in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A.
2017-09-01
Diffusion can be conceptualized, at microscopic scales, as the random hopping of particles between neighboring lattice sites. In the case of diffusion in inhomogeneous media, distinct spatial domains in the system may yield distinct particle hopping rates. Starting from the master equations (MEs) governing diffusion in inhomogeneous media we derive here, for arbitrary spatial dimensions, the deterministic lattice equations (DLEs) specifying the average particle number at each lattice site for randomly diffusing particles in inhomogeneous media. We consider the case of free (Fickian) diffusion with no steric constraints on the maximum particle number per lattice site as well as the case of diffusion under steric constraints imposing a maximum particle concentration. We find, for both transient and asymptotic regimes, excellent agreement between the DLEs and kinetic Monte Carlo simulations of the MEs. The DLEs provide a computationally efficient method for predicting the (average) distribution of randomly diffusing particles in inhomogeneous media, with the number of DLEs associated with a given system being independent of the number of particles in the system. From the DLEs we obtain general analytic expressions for the steady-state particle distributions for free diffusion and, in special cases, diffusion under steric constraints in inhomogeneous media. We find that, in the steady state of the system, the average fraction of particles in a given domain is independent of most system properties, such as the arrangement and shape of domains, and only depends on the number of lattice sites in each domain, the particle hopping rates, the number of distinct particle species in the system, and the total number of particles of each particle species in the system. Our results provide general insights into the role of spatially inhomogeneous particle hopping rates in setting the particle distributions in inhomogeneous media.
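The flavor of the DLEs for free diffusion can be sketched on a one-dimensional lattice with two hopping-rate domains (the paper's exact equations are not reproduced; this is the standard mean-field average of the hopping master equation). The steady state satisfies h_i⟨n_i⟩ = const across each bond, so slow domains accumulate particles in proportion to the rate contrast.

```python
import numpy as np

# 1D lattice with two domains of different hopping rates (free diffusion,
# reflecting ends). Mean-field average of the master equation:
#   d<n_i>/dt = sum over neighbours j of (h_j <n_j> - h_i <n_i>)
L = 16
h = np.where(np.arange(L) < L // 2, 1.0, 0.2)   # fast and slow domains
n = np.full(L, 5.0)                             # initial average occupancy

dt = 0.01
for _ in range(100_000):
    flow = h * n                # outflux per neighbour bond
    dn = np.zeros(L)
    dn[:-1] += flow[1:]         # influx from the right neighbour
    dn[1:] += flow[:-1]         # influx from the left neighbour
    dn[:-1] -= flow[:-1]        # outflux to the right
    dn[1:] -= flow[1:]          # outflux to the left
    n += dt * dn

print("total particles conserved:", n.sum().round(3))
# Steady state satisfies h_i <n_i> = const: slow sites hold more particles.
print("fast-domain occupancy:", n[0].round(3), " slow-domain:", n[-1].round(3))
```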
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2013-04-01
The sea ice extent in the Southern Ocean has increased since 1979, but the causes of this expansion have not been firmly identified. In particular, the contributions of internal variability and external forcing to this positive trend have not been fully established. In this region, the lack of observations and the overestimation of the internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system, and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit reality. Current GCM decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large number of simulations required to systematically test different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of internal variability by models compared to the observed one, and to work out a suitable methodology for assessing the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at every time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and those for which pseudo-observations are not assimilated everywhere. In a second step, simulations are performed in a realistic framework, i.e. using the actual available observations. The same data assimilation methods are tested in order to check whether more sophisticated methods can improve the reliability and accuracy of decadal prediction simulations, even when performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.
Rojas, Raquel; Vanderlinden, Eva; Morillo, José; Usero, José; El Bakouri, Hicham
2014-08-01
The adsorption/desorption behavior of four pesticides (atrazine, alachlor, endosulfan sulfate and trifluralin) in aqueous solutions onto four adsorbents (sunflower seed shells, rice husk, composted sewage sludge and soil) was investigated. Pesticide determination was carried out using stir bar sorptive extraction and gas chromatography coupled with mass spectrometry. Maximum removal efficiency (73.9%) was reached using 1 g of rice husk and 50 mL of pesticide solution (200 μg L−1). The pseudo adsorption equilibrium was reached with 0.6 g of organic residue, which was used in subsequent experiments. The pseudo-first-order and pseudo-second-order kinetics and the intra-particle diffusion models were used to describe the kinetic data, and rate constants were evaluated. The first model was more suitable for the sorption of atrazine and alachlor, while the pseudo-second-order model best described endosulfan sulfate and trifluralin adsorption, which showed the fastest sorption rates. Four hours was considered the equilibrium time for determining adsorption isotherms. Experimental data were modeled by the Langmuir and Freundlich models. In most of the studied cases both models can describe the adsorption process, although the Freundlich model was applicable in all cases. The sorption capacity increased with the hydrophobic character of the pesticides and decreased with their water solubility. Rice husk was revealed as the best adsorbent for three of the four studied pesticides (atrazine, alachlor and endosulfan sulfate), while better results were obtained with composted sewage sludge and sunflower seed shells for the removal of trifluralin. Although desorption percentages were not high (with the exception of alachlor, which reached a desorption rate of 57%), the Kfd values were lower than the Kf values for adsorption and all H values were below 100, indicating that the adsorption was weak.
NASA Astrophysics Data System (ADS)
Ebrahimi-Gatkash, Mehdi; Younesi, Habibollah; Shahbazi, Afsaneh; Heidari, Ava
2017-07-01
In the present study, amino-functionalized Mobil Composite Material No. 41 (MCM-41) was used as an adsorbent to remove nitrate anions from aqueous solutions. Mono-, di- and tri-amino-functionalized silicas (N-MCM-41, NN-MCM-41 and NNN-MCM-41) were prepared by a post-synthesis grafting method. The samples were characterized by means of X-ray powder diffraction, FTIR spectroscopy, thermogravimetric analysis, scanning electron microscopy and nitrogen adsorption-desorption. The effects of pH, initial concentration of anions, and adsorbent loading were examined in a batch adsorption system. Results of the adsorption experiments showed that the adsorption capacity increased with increasing adsorbent loading and initial anion concentration. It was found that the Langmuir mathematical model fitted the experimental data better than the Freundlich model. According to the constants of the Langmuir equation, the maximum adsorption capacity for nitrate anions by N-MCM-41, NN-MCM-41 and NNN-MCM-41 was found to be 31.68, 38.58 and 36.81 mg/g, respectively. The adsorption kinetics were investigated with pseudo-first-order and pseudo-second-order models; adsorption followed pseudo-second-order rate kinetics, with coefficients of determination for the pseudo-second-order kinetic model >0.99. For continuous adsorption experiments, the NNN-MCM-41 adsorbent was used for the removal of nitrate anions from solution. Breakthrough curves were investigated at different bed heights, flow rates and initial nitrate anion concentrations. The Thomas and Yan models were utilized to calculate the kinetic parameters and to predict the breakthrough curves at different bed heights. Results from this study illustrate the potential utility of these adsorbents for nitrate removal from water.
Pharmaceuticals are increasingly found in aquatic environments near wastewater treatment plant discharge, and may be of particular concern to aquatic life given their pseudo-persistence. The large number of detected pharmaceuticals necessitates a prioritization method for hazard...
Patterns in Calabi-Yau Distributions
NASA Astrophysics Data System (ADS)
He, Yang-Hui; Jejjala, Vishnu; Pontiggia, Luca
2017-09-01
We explore the distribution of topological numbers in Calabi-Yau manifolds, using the Kreuzer-Skarke dataset of hypersurfaces in toric varieties as a testing ground. While the Hodge numbers are well known to exhibit mirror symmetry, the frequencies of combinations thereof reveal striking new patterns. We find pseudo-Voigt and Planckian distributions with high confidence and exact fits for many substructures. The patterns indicate typicality within the landscape of Calabi-Yau manifolds of various dimensions.
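For reference, the pseudo-Voigt profile named here is the standard convex combination of a Lorentzian and a Gaussian sharing a common center; a sketch in the usual parametrization follows (eta is the mixing parameter; this is the generic form, not the paper's specific fit).

```latex
% Standard pseudo-Voigt profile: a convex mix of Lorentzian and Gaussian parts,
% with center x_0, Lorentzian width Gamma, Gaussian width sigma, mixing eta.
pV(x) = \eta \, \frac{\Gamma / (2\pi)}{(x - x_0)^2 + (\Gamma/2)^2}
      + (1 - \eta) \, \frac{1}{\sigma\sqrt{2\pi}}
        \exp\!\left(-\frac{(x - x_0)^2}{2\sigma^2}\right),
\qquad 0 \le \eta \le 1 .
```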
Non-inflammatory causes of emergency consultation in patients with multiple sclerosis.
Rodríguez de Antonio, L A; García Castañón, I; Aguilar-Amat Prior, M J; Puertas, I; González Suárez, I; Oreja Guevara, C
2018-05-26
To describe non-relapse-related emergency consultations of patients with multiple sclerosis (MS): causes, difficulties in the diagnosis, clinical characteristics, and treatments administered. We performed a retrospective study of patients who attended a multiple sclerosis day hospital due to suspected relapse but received an alternative diagnosis, over a 2-year period. Demographic data, clinical characteristics, final diagnosis, and treatments administered were evaluated. Patients who were initially diagnosed with pseudo-relapse and ultimately diagnosed with true relapse were evaluated specifically. As an exploratory analysis, patients who consulted for non-inflammatory causes were compared with a randomly selected cohort of patients with true relapses who attended the centre in the same period. The study included 50 patients (33 women; mean age 41.4 ± 11.7 years). Four patients (8%) were initially diagnosed with pseudo-relapse and later diagnosed as having a true relapse. Fever and vertigo were the main confounding factors. The non-inflammatory causes of emergency consultation were: neurological, 43.5% (20 patients); infectious, 15.2% (7); psychiatric, 10.9% (5); vertigo, 8.6% (4); trauma, 10.9% (5); and miscellaneous, 10.9% (5). MS-related symptoms constituted the most frequent cause of non-inflammatory emergency consultations. Close follow-up of relapse and pseudo-relapse is necessary to detect incorrect initial diagnoses, avoid unnecessary treatments, and relieve patients' symptoms. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
Liu, Bin; Long, Ren; Chou, Kuo-Chen
2016-08-15
Regulatory DNA elements are associated with DNase I hypersensitive sites (DHSs). Accordingly, identification of DHSs will provide useful insights for in-depth investigation into the function of noncoding genomic regions. In this study, using an ensemble learning framework, we proposed a new predictor called iDHS-EL for identifying the location of DHSs in the human genome. It was formed by fusing three individual Random Forest (RF) classifiers into an ensemble predictor. The three RF operators were based, respectively, on three special modes of the general pseudo nucleotide composition (PseKNC): (i) kmer, (ii) reverse complement kmer and (iii) pseudo dinucleotide composition. It has been demonstrated that the new predictor remarkably outperforms the relevant state-of-the-art methods in both accuracy and stability. For the convenience of most experimental scientists, a web server for iDHS-EL is established at http://bioinformatics.hitsz.edu.cn/iDHS-EL, which is the first web-server predictor ever established for identifying DHSs, and by which users can easily get their desired results without needing to go through the mathematical details. We anticipate that iDHS-EL will become a very useful high-throughput tool for genome analysis. bliu@gordonlifescience.org or bliu@insun.hit.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
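A hedged sketch of the ensemble idea described above: three random forests, each trained on a different feature view, fused by averaging their predicted probabilities. The feature extraction is stubbed with random arrays here; the real predictor derives kmer, reverse-complement kmer and pseudo dinucleotide composition features from DNA sequences.

```python
# Hedged sketch of ensemble fusion in the style of iDHS-EL. Feature views are
# random stand-ins for the three PseKNC modes; labels are random as well.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
views = [rng.normal(size=(n, d)) for d in (64, 32, 16)]   # stand-ins for 3 feature modes
y = rng.integers(0, 2, size=n)                            # DHS / non-DHS labels

forests = [RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
           for X in views]

# Ensemble prediction: average the class-1 probabilities of the three forests.
proba = np.mean([rf.predict_proba(X)[:, 1] for rf, X in zip(forests, views)],
                axis=0)
y_pred = (proba >= 0.5).astype(int)
print("training accuracy of fused ensemble:", (y_pred == y).mean())
```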
Prospects of banana waste utilization in wastewater treatment: A review.
Ahmad, Tanweer; Danish, Mohammed
2018-01-15
This review article explores the utilization of banana waste (fruit peels, pseudo-stem, trunks, and leaves) as a precursor material for producing adsorbents, and their application against environmental pollutants such as heavy metals, dyes, organic pollutants, pesticides, and various gaseous pollutants. In the recent past, a good number of research articles have been published on low-cost adsorbents derived from biomass wastes. The literature survey showed that, given the worldwide abundance of banana waste, adsorbents derived from it are considered low-cost materials with promising future applications against various environmental pollutants. Furthermore, raw banana biomass can be chemically modified to prepare efficient adsorbents as required; modification of surface functional groups may extend the adsorbent's uses to industrial standards. It was evident from the literature survey that banana waste derived adsorbents have significant removal efficiency against various pollutants. Most of the published articles on banana waste derived adsorbents are discussed critically, and conclusions are drawn based on the reported results. Some poorly performed experiments are also discussed, and their shortcomings in reporting are pointed out. The future research prospects on banana wastes outlined here should have a significant impact on upcoming research strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.
A cellular automaton model for ship traffic flow in waterways
NASA Astrophysics Data System (ADS)
Qi, Le; Zheng, Zhongyi; Gang, Longhui
2017-04-01
With the development of marine traffic, waterways become congested and more complicated traffic phenomena are observed in ship traffic flow. It is important and necessary to build a ship traffic flow model based on cellular automata (CAs) to study these phenomena and improve marine transportation efficiency and safety. Spatial discretization rules for waterways and update rules for ship movement are two important issues that differ markedly from vehicle traffic. To address them, a CA model for ship traffic flow, called the spatial-logical mapping (SLM) model, is presented. In this model, the spatial discretization rules are improved by adding a mapping rule, and a dynamic ship domain model is incorporated into the update rules to describe ships' interactions more exactly. Taking the ship traffic flow in the Singapore Strait as an example, simulations were carried out and compared. The simulations show that the SLM model efficiently avoids the pseudo lane changes caused by traditional spatial discretization rules. The ship velocity changes in the SLM model are consistent with the measured data. Finally, from the fundamental diagram, the relationship between traffic capacity and ship length is explored. The number of ships in the waterway declines when the proportion of large ships increases.
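A hedged, minimal single-lane cellular-automaton update in the same spirit is sketched below (a Nagel-Schreckenberg-style toy, far simpler than the SLM model; the safety gap stands in for a crude ship domain, and all rule parameters are illustrative).

```python
# Hedged sketch: a minimal single-lane CA update for ship-like traffic.
# Not the SLM model; rules and parameters are illustrative only.
import numpy as np

L, N, VMAX, STEPS = 200, 20, 5, 100
rng = np.random.default_rng(1)
pos = np.sort(rng.choice(L, size=N, replace=False))   # ship positions (cells)
vel = rng.integers(0, VMAX + 1, size=N)

for _ in range(STEPS):
    gaps = (np.roll(pos, -1) - pos - 1) % L           # free cells to ship ahead
    vel = np.minimum(vel + 1, VMAX)                   # accelerate
    vel = np.minimum(vel, gaps)                       # brake to respect the "domain"
    slow = rng.random(N) < 0.2                        # random speed fluctuation
    vel[slow] = np.maximum(vel[slow] - 1, 0)
    pos = (pos + vel) % L                             # periodic waterway

print("mean speed after relaxation:", vel.mean())
```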
Guo, Zizhang; Zhang, Jian; Kang, Yan; Liu, Hai
2017-11-01
This study developed a humic acid (HA) in-situ modified activated carbon adsorbent (AC-HA) for the rapid and efficient removal of Pb(II) from aqueous media, and the adsorption mechanisms are discussed. The physicochemical characteristics of the activated carbons (AC) were investigated via N2 adsorption/desorption, scanning electron microscopy (SEM), Boehm's titration method and Fourier transform infrared spectroscopy (FTIR). AC-HA exhibited richer oxygen-containing functional groups than the original AC. In addition, the removal capacity of AC-HA toward Pb(II) (250.0 mg/g) was greatly improved compared with the original AC (166.7 mg/g). The batch adsorption results revealed that the Pb(II) adsorption data were best fitted by the pseudo-second-order kinetic model and the Langmuir isotherm; the effect of solution pH was also studied. The superior performance of AC-HA was attributed to the HA modification, which contributes numerous functional groups and strong π-π binding interactions with the AC surface and Pb(II) species. The adsorption mechanisms were confirmed via an XPS study. More importantly, the modification method is simple and has a low production cost. Copyright © 2017 Elsevier Inc. All rights reserved.
Catalytic Activities Of [GADV]-Peptides
NASA Astrophysics Data System (ADS)
Oba, Takae; Fukushima, Jun; Maruyama, Masako; Iwamoto, Ryoko; Ikehara, Kenji
2005-10-01
We have previously postulated a novel hypothesis for the origin of life, assuming that life on Earth originated from a "[GADV]-protein world", not from the "RNA world" (see Ikehara's review, 2002). The [GADV]-protein world is constituted of peptides and proteins with random sequences of four amino acids (glycine [G], alanine [A], aspartic acid [D] and valine [V]), which accumulated by pseudo-replication of [GADV]-proteins. To obtain evidence for the hypothesis, we produced [GADV]-peptides by repeated heat-drying of the amino acids for 30 cycles ([GADV]-P30) and examined whether the peptides have catalytic activities. We found that [GADV]-P30 can hydrolyze several kinds of chemical bonds in molecules such as umbelliferyl-β-D-galactoside, glycine-p-nitroanilide and bovine serum albumin. This suggests that [GADV]-P30 could play an important role in the accumulation of [GADV]-proteins through pseudo-replication, leading to the emergence of life. We further show that [GADV]-octapeptides with random sequences, containing no cyclic compounds such as diketopiperazines, have catalytic activity, hydrolyzing peptide bonds in a natural protein, bovine serum albumin. The catalytic activity of the octapeptides was much higher than that of the [GADV]-P30 produced through repeated heat-drying treatments. These results also support the [GADV]-protein-world hypothesis of the origin of life (see Ikehara's review, 2002). Possible steps for the emergence of life on the primitive Earth are presented.
PRM/NIR sensor for brain hematoma detection and oxygenation monitoring
NASA Astrophysics Data System (ADS)
Zheng, Liu; Lee, Hyo Sang; Lokos, Sandor; Kim, Jin; Hanley, Daniel F.; Wilson, David A.
1997-06-01
The pseudo-random modulation/near-IR sensor (PRM/NIR sensor) is a low-cost portable system designed for time-resolved tissue diagnosis, especially hematoma detection in the emergency care facility. The sensor consists of a personal computer and a hardware unit enclosed in a box of size 37 × 37 × 31 cm³ weighing less than 10 kg. Two pseudo-random modulated diode lasers emitting at 670 nm and 810 nm are used as light sources. The sensor can be operated either in a single-wavelength mode or a true differential mode. Optical fiber bundles are used for convenient light delivery, and color filters are used to reject room light. Based on a proprietary resolution-enhancement correlation technique, the system achieves a time resolution better than 40 ps with a PRM modulation speed of 200 MHz and a sampling rate of 1-10 GS/s. Using the prototype sensor, phantom experiments were conducted to study its feasibility. The brain's optical properties were simulated with solutions of intralipid and ink, and hematomas were simulated with bags of paint and hemoglobin of various sizes, depths, and orientations immersed in the solution. The effects of human skull and hair were studied experimentally. In animal experiments, the sensor was used to monitor cerebral oxygenation changes due to hypercapnia, hypoxia, and hyperventilation. Good correlations were found between NIR measurement parameters and the physiological changes induced in the animals.
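The correlation technique at the heart of PRM sensing can be sketched generically: cross-correlating the detected signal with the transmitted maximal-length sequence recovers the impulse response, whose peaks give photon time-of-flight. The following is a toy Python illustration with invented delays and noise, not the sensor's proprietary resolution-enhancement method.

```python
# Hedged sketch: pseudo-random modulation (PRM) time-of-flight recovery by
# circular cross-correlation with a maximal-length sequence (PRBS).
# Delays, amplitudes and noise level below are illustrative only.
import numpy as np
from scipy.signal import max_len_seq

seq = max_len_seq(10)[0].astype(float) * 2 - 1        # +/-1 PRBS, length 1023
h = np.zeros(1023); h[40] = 1.0; h[55] = 0.4          # toy two-path response
rx = np.real(np.fft.ifft(np.fft.fft(seq) * np.fft.fft(h)))   # circular convolution
rx += np.random.default_rng(0).normal(scale=0.5, size=rx.size)  # detector noise

# Circular cross-correlation of rx with the PRBS peaks at the path delays.
corr = np.real(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(seq)))) / seq.size
print("recovered delays (bins):", sorted(np.argsort(corr)[-2:]))
```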
Allegrini, Maria-Cristina; Canullo, Roberto; Campetella, Giandiego
2009-04-01
Knowledge of accuracy and precision rates is particularly important for long-term studies. Vegetation assessments include many sources of error related to overlooking and misidentification, which are influenced by factors such as the subjectivity of cover estimates, observer-biased species lists and the experience of the botanist. The vegetation assessment protocol adopted in the Italian forest monitoring programme (CONECOFOR) contains a Quality Assurance (QA) programme. The paper presents the different phases of QA and identifies the five main critical points of the whole protocol as sources of random or systematic error. Examples of Measurement Quality Objectives (MQOs) expressed as Data Quality Limits (DQLs) are given for vascular plant cover estimates, in order to establish the reproducibility of the data. Quality control activities were used to determine the "distance" between the surveyor teams and the control team. Selected data were acquired during the training and inter-calibration courses. In particular, an index of average cover by species groups was used to evaluate the random error (CV 4%) as the dispersion around the "true values" of the control team. The systematic error in the evaluation of species composition, caused by overlooking or misidentification of species, was calculated from the pseudo-turnover rate; detailed species censuses on smaller sampling units were accepted, as the pseudo-turnover always fell below the established 25% threshold, while species density scores recorded at community level (100 m(2) surface) rarely exceeded that limit.
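The pseudo-turnover rate used as the systematic-error measure is conventionally computed from the species recorded by only one of two teams surveying the same plot; in the usual notation (a reference formula, assumed rather than quoted from the paper):

```latex
% Pseudo-turnover between two observer teams A and B on the same plot:
% S_A, S_B = species counts recorded by each team; A, B = numbers of species
% found by only one of the two teams (overlooked or misidentified by the other).
PT = \frac{A + B}{S_A + S_B} \times 100\% .
```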
Investigation on the efficiency of treated Palm Tree waste for removal of organic pollutants
NASA Astrophysics Data System (ADS)
Azoulay, Karima; El Hajjaji, Souad; Dahchour, Abdelmalek
2017-04-01
Development of the industrial sector generates several environmental pollution problems. This issue raises concern among the scientific community and decision makers. In this work, we are interested in water resources polluted by chemical substances, which can cause various health problems. For example, dyes generated by different industrial activities, such as the textile, cosmetic, metal plating, leather, paper and plastic sectors, constitute an important source of pollution. In this work, we investigate the efficiency of palm tree waste for the removal of dyes from polluted solutions. Our work has a double environmental aspect: on one hand it constitutes an attempt to valorize palm tree waste, and on the other hand it provides a natural adsorbent. The study focuses on the effectiveness of the waste in removing Methylene Blue and Methyl Orange, taken as model pollutants, from aqueous solution. Kinetic and isotherm experiments were conducted in order to determine the sorption behavior of the examined dyes. The effects of initial dye and adsorbent concentrations were considered. The results indicate that the correlation coefficient calculated from the pseudo-second-order equation was higher than those of the other kinetic equations, indicating that the kinetic data fitted the pseudo-second-order model well and that the adsorption process was chemisorption. The adsorption equilibrium was well described by the Langmuir isotherm model.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structure and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
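The core of ordinal comparison can be illustrated with a toy numerical experiment: even under heavy evaluation noise, the probability that the true best design survives into a softened top-s selection stays high. A hedged Python sketch with invented performance values follows, not the thesis' turbine-blade model.

```python
# Hedged sketch of ordinal optimization's core idea: with goal softening, the
# noisy *order* of designs is far more reliable than their noisy *values*.
# All sizes and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_designs, noise, top_s, trials = 1000, 2.0, 50, 500
true_perf = np.sort(rng.normal(size=n_designs))   # design 0 is the true best (minimization)

hits = 0
for _ in range(trials):
    noisy = true_perf + rng.normal(scale=noise, size=n_designs)  # crude evaluations
    selected = np.argsort(noisy)[:top_s]          # keep the observed top-s (goal softening)
    hits += 0 in selected                         # did the true best survive?
print(f"P(true best in noisy top-{top_s}) ~= {hits / trials:.2f}")
```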
ERIC Educational Resources Information Center
Borsari, Brian; Hustad, John T. P.; Mastroleo, Nadine R.; Tevyaw, Tracy O'Leary; Barnett, Nancy P.; Kahler, Christopher W.; Short, Erica Eaton; Monti, Peter M.
2012-01-01
Objective: Over the past 2 decades, colleges and universities have seen a large increase in the number of students referred to the administration for alcohol policy violations. However, a substantial portion of mandated students may not require extensive treatment. Stepped care may maximize treatment efficiency and greatly reduce the demands on…
Kishimoto, T; Chawla, J M; Hagi, K; Zarate, C A; Kane, J M; Bauer, M; Correll, C U
2016-05-01
Ketamine and non-ketamine N-methyl-d-aspartate receptor antagonists (NMDAR antagonists) recently demonstrated antidepressant efficacy for the treatment of refractory depression, but effect sizes, trajectories and possible class effects are unclear. We searched PubMed/PsycINFO/Web of Science/clinicaltrials.gov until 25 August 2015. Parallel-group or cross-over randomized controlled trials (RCTs) comparing a single intravenous infusion of ketamine or a non-ketamine NMDAR antagonist v. placebo/pseudo-placebo in patients with major depressive disorder (MDD) and/or bipolar depression (BD) were included in the analyses. Hedges' g and risk ratios and their 95% confidence intervals (CIs) were calculated using a random-effects model. The primary outcome was depressive symptom change. Secondary outcomes included response, remission, all-cause discontinuation and adverse effects. A total of 14 RCTs (nine ketamine studies: n = 234; five non-ketamine NMDAR antagonist studies: n = 354; MDD = 554, BD = 34), lasting 10.0 ± 8.8 days, were meta-analysed. Ketamine reduced depression significantly more than placebo/pseudo-placebo beginning at 40 min, peaking at day 1 (Hedges' g = -1.00, 95% CI -1.28 to -0.73, p < 0.001), and losing superiority by days 10-12. Non-ketamine NMDAR antagonists were superior to placebo only on days 5-8 (Hedges' g = -0.37, 95% CI -0.66 to -0.09, p = 0.01). Compared with placebo/pseudo-placebo, ketamine led to significantly greater response (40 min to day 7) and remission (80 min to days 3-5). Non-ketamine NMDAR antagonists achieved greater response at day 2 and days 3-5. All-cause discontinuation was similar between ketamine (p = 0.34) or non-ketamine NMDAR antagonists (p = 0.94) and placebo. Although some adverse effects were more common with ketamine/NMDAR antagonists than placebo, these were transient and clinically insignificant. A single infusion of ketamine, but less so of non-ketamine NMDAR antagonists, has ultra-rapid efficacy for MDD and BD, lasting for up to 1 week. Development of easy-to-administer, repeatedly given NMDAR antagonists without risk of brain toxicity is of critical importance.
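The random-effects pooling of Hedges' g referred to above is typically done with the DerSimonian-Laird estimator of the between-study variance. The following is a hedged Python sketch with invented study-level effects, not the trial data analysed in the paper.

```python
# Hedged sketch: pooling Hedges' g under a random-effects model with the
# DerSimonian-Laird tau^2 estimator. The (g, variance) pairs are hypothetical.
import numpy as np

g = np.array([-1.2, -0.8, -1.1, -0.6, -0.9])     # per-study Hedges' g (hypothetical)
v = np.array([0.10, 0.08, 0.12, 0.09, 0.11])     # per-study variances (hypothetical)

w = 1.0 / v                                       # fixed-effect weights
mean_fe = np.sum(w * g) / w.sum()
q = np.sum(w * (g - mean_fe)**2)                  # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(g) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1.0 / (v + tau2)                           # random-effects weights
g_pooled = np.sum(w_re * g) / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(f"pooled g = {g_pooled:.2f}, "
      f"95% CI [{g_pooled - 1.96*se:.2f}, {g_pooled + 1.96*se:.2f}], tau^2 = {tau2:.3f}")
```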
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, here the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme yields a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or the spatial filter.
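The underlying correlation reconstruction can be sketched for a single wavelength channel: the ghost image is the covariance between the modulated patterns and the bucket (single-pixel) signal, G(x,y) = <I(x,y)B> - <I(x,y)><B>. A toy computational Python illustration follows; the object, the patterns and the pattern count are invented, and one such correlation per wavelength channel would give a pseudo color composite.

```python
# Hedged sketch: computational ghost imaging by second-order intensity
# correlation. Random patterns stand in for SLM-modulated pseudo-thermal speckle.
import numpy as np

rng = np.random.default_rng(3)
H = W = 32
obj = np.zeros((H, W)); obj[8:24, 12:20] = 1.0            # toy transmissive object

n_patterns = 4000
patterns = rng.random((n_patterns, H, W))
bucket = patterns.reshape(n_patterns, -1) @ obj.ravel()   # single-pixel signals

# G(x,y) = <I(x,y) B> - <I(x,y)><B>
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - patterns.mean(axis=0) * bucket.mean()
print("correlation with object:",
      np.corrcoef(recon.ravel(), obj.ravel())[0, 1].round(2))
```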
Nonconvergence of the Wang-Landau algorithms with multiple random walkers.
Belardinelli, R E; Pereyra, V D
2016-05-01
This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time, t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the comparable version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, increasing the number of walkers does not guarantee convergence.
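The 1/√m baseline for independent walkers is easy to reproduce with the paper's π-estimation example in toy form (plain Monte Carlo sampling here, so the saturation above m_x, which is specific to the WL and 1/t schemes, does not appear in this sketch).

```python
# Hedged sketch: averaging m independent Monte Carlo walkers estimating pi at a
# fixed "time" t. For independent walkers the error falls roughly as 1/sqrt(m);
# the saturation above m_x reported in the paper is a property of the entropic
# sampling algorithms and is NOT reproduced by this plain-sampling toy.
import numpy as np

rng = np.random.default_rng(4)
t = 10_000                                    # samples per walker (fixed time)
for m in (1, 4, 16, 64):
    pts = rng.random((m, t, 2))
    est = 4.0 * (np.sum(pts**2, axis=2) < 1.0).mean(axis=1)  # one estimate per walker
    err = abs(est.mean() - np.pi)             # error of the m-walker average
    print(f"m={m:3d}  averaged estimate={est.mean():.4f}  |error|={err:.4f}")
```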
Kumar, K Vasanth; Sivanesan, S
2006-08-25
Pseudo-second-order kinetic expressions of Ho, Sobkowski and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and the Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that the Blanchard et al. and Ho models embody similar ideas about the pseudo-second-order process but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression under both linear and non-linear regression showed that it was the better kinetic expression compared with the other pseudo-second-order expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from the Ho pseudo-second-order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best-fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms; the Redlich-Peterson isotherm is a special case of the Langmuir when the constant g equals unity.
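Ho's expression is fitted linearly by plotting t/q(t) against t, with the equilibrium capacity from the slope and the rate constant from the slope and intercept; the non-linear route fits the rate form directly. A hedged Python sketch contrasting the two on invented data follows.

```python
# Hedged sketch: Ho's pseudo-second-order model fitted two ways.
# Linear form:  t/qt = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe, intercept = 1/(k2*qe^2).
# Non-linear form fits qt = qe^2*k2*t / (1 + qe*k2*t) directly.
# All data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 120], dtype=float)    # min (hypothetical)
qt = np.array([12.0, 19.5, 28.0, 35.5, 38.0, 41.0])     # mg/g (hypothetical)

slope, intercept = np.polyfit(t, t / qt, 1)             # linearized Ho fit
qe_lin, k2_lin = 1.0 / slope, slope**2 / intercept

pso = lambda t, qe, k2: qe**2 * k2 * t / (1.0 + qe * k2 * t)
(qe_nl, k2_nl), _ = curve_fit(pso, t, qt, p0=[qe_lin, k2_lin])

print(f"linear:     qe={qe_lin:.1f} mg/g, k2={k2_lin:.5f}")
print(f"non-linear: qe={qe_nl:.1f} mg/g, k2={k2_nl:.5f}")
```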
Age bimodality in the central region of pseudo-bulges in S0 galaxies
NASA Astrophysics Data System (ADS)
Mishra, Preetish K.; Barway, Sudhanshu; Wadadekar, Yogesh
2017-11-01
We present evidence for a bimodal stellar age distribution in the pseudo-bulges of S0 galaxies as probed by the Dn(4000) index. We do not observe any bimodality in the age distribution of pseudo-bulges in spiral galaxies. Our sample is flux limited and contains 2067 S0 and 2630 spiral galaxies drawn from the Sloan Digital Sky Survey. We identify pseudo-bulges in S0 and spiral galaxies based on the position of the bulge on the Kormendy diagram and its central velocity dispersion. Dividing the pseudo-bulges of S0 galaxies into those containing old and young stellar populations, we study the connection between global star formation and pseudo-bulge age on the u - r colour-mass diagram. We find that most old pseudo-bulges are hosted by passive galaxies, while the majority of young pseudo-bulges are hosted by star-forming galaxies. Dividing our sample of S0 galaxies into early-type S0s and S0/a galaxies, we find that old pseudo-bulges are mainly hosted by early-type S0 galaxies, while most of the pseudo-bulges in S0/a galaxies are young. We speculate that morphology plays a strong role in quenching star formation in the discs of these S0 galaxies, which stops the growth of their pseudo-bulges, giving rise to old pseudo-bulges and the observed age bimodality.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly when training CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) that simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can train the RCRF model very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the O(1/k) rate of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
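The flavor of an optimal gradient method with an l1-regularized objective can be sketched with a generic accelerated proximal-gradient (FISTA-style) loop, which likewise mixes the current gradient with momentum from past iterates, uses a Lipschitz step size, and attains the O(1/k^2) rate. This is a stand-in on a least-squares surrogate, not the authors' exact OGM or the CRF objective.

```python
# Hedged sketch: accelerated proximal gradient with l1 regularization on a
# least-squares surrogate. Like the OGM described above, it combines the current
# gradient with momentum from past iterates and uses a Lipschitz step size,
# giving the O(1/k^2) rate; the CRF objective itself is not implemented here.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50); x_true[:5] = 1.0                   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=100)
lam = 0.1
L = np.linalg.norm(A, 2)**2                               # Lipschitz constant of the gradient

soft = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s, 0.0)  # l1 prox

x = np.zeros(50); y = np.zeros(50); t_k = 1.0
for _ in range(300):
    x_new = soft(y - (A.T @ (A @ y - b)) / L, lam / L)    # proximal gradient step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k**2))
    y = x_new + ((t_k - 1.0) / t_new) * (x_new - x)       # momentum from history
    x, t_k = x_new, t_new

print("nonzero coefficients found:", np.flatnonzero(np.abs(x) > 1e-3))
```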
Sampling Methods for Detection and Monitoring of the Asian Citrus Psyllid (Hemiptera: Psyllidae).
Monzo, C; Arevalo, H A; Jones, M M; Vanaclocha, P; Croxton, S D; Qureshi, J A; Stansly, P A
2015-06-01
The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama, is a key pest of citrus due to its role as vector of citrus greening disease or "huanglongbing." ACP monitoring is considered an indispensable tool for management of the vector and disease. In the present study, datasets collected between 2009 and 2013 from 245 citrus blocks were used to evaluate the precision, detection sensitivity, and efficiency of five sampling methods. The number of samples needed to reach a 0.25 standard error-to-mean ratio was estimated using Taylor's power law and used to compare precision among sampling methods. Detection sensitivity and time expenditure (cost) of stem-tap sampling versus the other methodologies, conducted consecutively at the same locations, were also compared. Stem-tap sampling was the most efficient sampling method when ACP densities were moderate to high and served as the basis for comparison with all other methods. Protocols that grouped trees near randomly selected locations across the block were more efficient than sampling trees at random across the block. Sweep net sampling was similar to stem-taps in number of captures per sampled unit, but less precise at any ACP density. Yellow sticky traps were 14 times more sensitive than stem-taps but much more time consuming and thus less efficient except at very low population densities. Visual sampling was efficient for detecting and monitoring ACP at low densities. Suction sampling was time consuming and taxing but the most sensitive of all methods for detection of sparse populations. This information can be used to optimize ACP monitoring efforts. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Construction of the mathematical concept of pseudo thinking students
NASA Astrophysics Data System (ADS)
Anggraini, D.; Kusmayadi, T. A.; Pramudya, I.
2018-05-01
The thinking process begins with the reception of information, followed by information processing and the retrieval of information from memory, with structural changes that involve concepts or knowledge. A concept or item of knowledge is constructed individually by each person. While constructing a mathematical concept, students may experience pseudo thinking. Pseudo thinking is a thinking process that results in an answer to a problem, or a construction of a concept, "that is not true". Pseudo thinking can be classified into two forms: true pseudo and false pseudo. Pseudo thinking in students' construction of mathematical concepts should be identified immediately, because the error will affect the subsequent construction of mathematical concepts, and correcting the error requires knowledge of its source. Therefore, this article discusses the thinking process of students who experience pseudo thinking when constructing mathematical concepts.
Krusche, Adele; Rudolf von Rohr, Isabelle; Muse, Kate; Duggan, Danielle; Crane, Catherine; Williams, J. Mark G.
2014-01-01
Background: Randomized controlled trials (RCTs) are widely accepted as being the most efficient way of investigating the efficacy of psychological therapies. However, researchers conducting RCTs commonly report difficulties recruiting an adequate sample within planned timescales. In an effort to overcome recruitment difficulties, researchers often are forced to expand their recruitment criteria or extend the recruitment phase, thus increasing costs and delaying publication of results. Research investigating the effectiveness of recruitment strategies is limited and trials often fail to report sufficient details about the recruitment sources and resources utilised. Purpose: We examined the efficacy of strategies implemented during the Staying Well after Depression RCT in Oxford to recruit participants with a history of recurrent depression. Methods: We describe eight recruitment methods utilised and two further sources not initiated by the research team and examine their efficacy in terms of (i) the return, including the number of potential participants who contacted the trial and the number who were randomized into the trial, (ii) cost-effectiveness, comprising direct financial cost and manpower for initial contacts and randomized participants, and (iii) comparison of sociodemographic characteristics of individuals recruited from different sources. Results: Poster advertising, web-based advertising and mental health worker referrals were the cheapest methods per randomized participant; however, the ratio of randomized participants to initial contacts differed markedly per source. Advertising online, via posters and on a local radio station were the most cost-effective recruitment methods for soliciting participants who subsequently were randomized into the trial. Advertising across many sources (saturation) was found to be important. Limitations: It may not be feasible to employ all the recruitment methods used in this trial to obtain participation from other populations, such as those currently unwell, or in other geographical locations. Recruitment source was unavailable for participants who could not be reached after the initial contact. Thus, it is possible that the efficiency of certain methods of recruitment was poorer than estimated. Efficacy and costs of other recruitment initiatives, such as providing travel expenses to the in-person eligibility assessment and making follow-up telephone calls to candidates who contacted the recruitment team but could not be screened promptly, were not analysed. Conclusions: Website advertising resulted in the highest number of randomized participants and was the second cheapest method of recruiting. Future research should evaluate the effectiveness of recruitment strategies for other samples to contribute to a comprehensive base of knowledge for future RCTs. PMID:24686105
Detection of susceptibility genes as modifiers due to subgroup differences in complex disease.
Bergen, Sarah E; Maher, Brion S; Fanous, Ayman H; Kendler, Kenneth S
2010-08-01
Complex diseases invariably involve multiple genes and often exhibit variable symptom profiles. The extent to which disease symptoms, course, and severity differ between affected individuals may result from underlying genetic heterogeneity. Genes with modifier effects may or may not also influence disease susceptibility. In this study, we have simulated data in which a subset of cases differs by some effect size (ES) on a quantitative trait and is also enriched for a risk allele. Power to detect this 'pseudo-modifier' gene in case-only and case-control designs was explored blind to case substructure. Simulations involved 1000 iterations and calculations for 80% power at P < 0.01 while varying the risk allele frequency (RAF), sample size (SS), ES, odds ratio (OR), and the proportions of the case subgroups. With realistic values for the RAF (0.20), SS (3000) and ES (1), an OR of 1.7 is necessary to detect a pseudo-modifier gene. Unequal numbers of subjects in the case groups cause little decrement in power until the group enriched for the risk allele constitutes <30% or >70% of the total case population. In practice, greater numbers of subjects and selection of a quantitative trait with a large range will give researchers greater power to detect a pseudo-modifier gene. However, even under ideal conditions, studies involving alleles with low frequencies or low ORs are usually underpowered for detection of a modifier or susceptibility gene. This may explain some of the inconsistent association results of many candidate gene studies of complex diseases.
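The simulation design described above can be sketched in miniature: enrich a fraction of cases for the risk allele, shift them on the quantitative trait, and count how often a naive association test (blind to the substructure) reaches significance. A hedged Python toy with the abstract's headline parameter values follows; the mapping from OR to the enriched allele frequency is an approximation.

```python
# Hedged sketch: power simulation for a 'pseudo-modifier' gene. One case
# subgroup is enriched for the risk allele (approximate OR-based mapping) and
# shifted on a quantitative trait; power = fraction of simulations with p < 0.01.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, raf, es, enriched_or, prop, iters = 3000, 0.20, 1.0, 1.7, 0.5, 1000

# Allele frequency in the enriched subgroup implied (approximately) by the OR.
odds = (raf / (1 - raf)) * enriched_or
raf_enriched = odds / (1 + odds)

n_enr = int(prop * n)
hits = 0
for _ in range(iters):
    g = np.concatenate([rng.binomial(2, raf_enriched, size=n_enr),   # enriched cases
                        rng.binomial(2, raf, size=n - n_enr)])        # other cases
    trait = rng.normal(size=n)
    trait[:n_enr] += es                    # subgroup shift on the quantitative trait
    r, p = stats.pearsonr(g, trait)        # naive association test, blind to subgroups
    hits += p < 0.01
print(f"estimated power: {hits / iters:.2f}")
```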
NASA Astrophysics Data System (ADS)
Lou, Qin; Zang, Chenqiang; Yang, Mo; Xu, Hongtao
In this work, the immiscible displacement in a cavity with different channel configurations is studied using an improved pseudo-potential lattice Boltzmann equation (LBE) model. This model overcomes the dependence of the fluid properties on the grid size that exists in the original pseudo-potential LBE model. The approach is first validated against the Laplace law and then employed to study the immiscible displacement process. The influences of different factors, such as surface wettability, the distance between the gas cavity and the liquid cavity, and the surface roughness of the channel, are investigated. Numerical results show that the displacement efficiency increases and the displacement time decreases as the surface contact angle increases. The displacement efficiency also increases with increasing distance between the gas cavity and the liquid cavity at first, finally reaching a constant value. As for surface roughness, two structures (a semicircular cavity and a semicircular bulge) are studied. Although the displacement processes for both structures depend on surface wettability, they present quite different behaviors. Specifically, for the roughness structure consisting of a semicircular cavity, the displacement efficiency decreases and the displacement time increases markedly with the size of the semicircular cavity at small contact angles; the trend weakens as the contact angle increases, and once the contact angle exceeds a certain value, the size of the semicircular cavity has almost no influence on the displacement process. For the roughness structure with a semicircular bulge, the displacement efficiency first increases with the bulge size and then decreases at small contact angles, while at large contact angles it increases first and finally reaches a constant. The results also show that the displacement time has an extreme value in these cases at small contact angles.
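For context, pseudo-potential (Shan-Chen-type) LBE models introduce phase interactions through an effective-mass potential ψ; the generic interparticle force has the form below (standard notation, assumed rather than taken from this paper, which uses an improved variant):

```latex
% Generic Shan-Chen pseudo-potential interaction force at lattice node x:
% psi is the effective-mass pseudo-potential, G the interaction strength,
% w_i the lattice weights and e_i the discrete lattice velocities.
\mathbf{F}(\mathbf{x}) = -G\,\psi(\mathbf{x}) \sum_{i} w_i \,
    \psi(\mathbf{x} + \mathbf{e}_i \,\delta t)\, \mathbf{e}_i .
```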
Competitive Facility Location with Fuzzy Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2010-10-01
This paper proposes a new location problem for competitive facilities, e.g., shops, in a plane, under uncertainty and vagueness in the demands for the facilities. By representing the demands as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem. To solve it, the α-level sets of the fuzzy numbers are first used to transform it into a stochastic programming problem; secondly, by using expectations and variances, it can be reformulated as a deterministic programming problem. After showing that an optimal solution can be found by solving 0-1 programming problems, a solution method is proposed based on an improved tabu search algorithm with strategic oscillation. The efficiency of the proposed method is shown by applying it to numerical examples of facility location problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graczykowski, B., E-mail: bartlomiej.graczykowski@icn.cat; Alzina, F.; Gomis-Bresco, J.
In this paper, we report a theoretical investigation of surface acoustic waves propagating in a one-dimensional phononic crystal. Using finite element method eigenfrequency and frequency response studies, we develop two model geometries suitable to distinguish true and pseudo (or leaky) surface acoustic waves and to determine their propagation through finite-size phononic crystals, respectively. The novelty of the first model comes from the application of a surface-like criterion and, additionally, a functional damping domain. Exemplary calculated band diagrams show sorted branches of true and pseudo surface acoustic waves and their quantified surface confinement. The second model gives a complementary study of the transmission, reflection, and surface-to-bulk losses of Rayleigh surface waves in the case of a phononic crystal with a finite number of periods. Here, we demonstrate that a non-zero transmission within non-radiative band gaps can be carried via leaky modes originating from the coupling of local resonances with propagating waves in the substrate. Finally, we show that the transmission, reflection, and surface-to-bulk losses can be effectively optimised by tuning the geometrical properties of a stripe.
Widespread failure to complete meiosis does not impair fecundity in parthenogenetic whiptail lizards
Newton, Aracely A; Schnittker, Robert R; Yu, Zulin; Munday, Sarah S; Baumann, Diana P; Neaves, William B; Baumann, Peter
2016-12-01
Parthenogenetic species of whiptail lizards in the genus Aspidoscelis constitute a striking example of speciation by hybridization, in which first-generation hybrids instantly attain reproductive isolation and procreate as clonal all-female lineages. Production of eggs containing a full complement of chromosomes in the absence of fertilization involves genome duplication prior to the meiotic divisions. In these pseudo-tetraploid oocytes, pairing and recombination occur exclusively between identical chromosomes instead of homologs; a deviation from the normal meiotic program that maintains heterozygosity. Whether pseudo-tetraploid cells arise early in germ cell development or just prior to meiosis has remained unclear. We now show that in the obligate parthenogenetic species A. neomexicana the vast majority of oocytes enter meiosis as diploid cells. Telomere bouquet formation is normal, but synapsis fails and oocytes accumulate in large numbers at the pairing stage. Pseudo-tetraploid cells are exceedingly rare in early meiotic prophase, but they are the only cells that progress into diplotene. Despite the widespread failure to increase ploidy prior to entering meiosis, the fecundity of parthenogenetic A. neomexicana is similar to that of A. inornata, one of its bisexual ancestors. © 2016. Published by The Company of Biologists Ltd.
Induction of autophagy by depolarization of mitochondria.
Lyamzaev, Konstantin G; Tokarchuk, Artem V; Panteleeva, Alisa A; Mulkidjanian, Armen Y; Skulachev, Vladimir P; Chernyak, Boris V
2018-03-13
Mitochondrial dysfunction plays a crucial role in the macroautophagy/autophagy cascade. In a recently published study, Sun et al. described the induction of autophagy by the membranophilic triphenylphosphonium (TPP)-based cation 10-(6'-ubiquinonyl)decyltriphenylphosphonium (MitoQ) in HepG2 cells (Sun C, et al. "MitoQ regulates autophagy by inducing a pseudo-mitochondrial membrane potential [PMMP]", Autophagy 2017, 13:730-738). Sun et al. suggested that MitoQ adsorbed to the inner mitochondrial membrane with its cationic moiety remaining in the intermembrane space, adding a large number of positive charges and establishing a "pseudo-mitochondrial membrane potential" that blocked the ATP synthase. Here we argue that the suggested mechanism for generation of the "pseudo-mitochondrial membrane potential" is physically implausible and contradicts earlier findings on the electrophoretic displacement of membranophilic cations within and through phospholipid membranes. We provide evidence that TPP cations dissipated the mitochondrial membrane potential in HepG2 cells and that the induction of autophagy in carcinoma cells by TPP cations correlated with the uncoupling of oxidative phosphorylation. The mild uncoupling of oxidative phosphorylation by various mitochondria-targeted penetrating cations may contribute to their reported therapeutic effects via inducing both autophagy and mitochondria-selective mitophagy.
Vibronic Analysis for the B̃-X̃ Transition of the Isopropoxy Radical
NASA Astrophysics Data System (ADS)
Chhantyal-Pun, Rabi; Miller, Terry A.
2013-06-01
Alkoxy radicals are important intermediates in combustion and atmospheric chemistry. They are also of significant spectroscopic interest for the study of Jahn-Teller and pseudo Jahn-Teller effects involving the X̃ and Ã states. The Jahn-Teller effect has been studied in methoxy. Substitution of one or two hydrogens by methyl groups transforms the interaction into a pseudo Jahn-Teller effect in ethoxy and isopropoxy. Previously, moderate-resolution scans have been obtained for the B̃-X̃ and B̃-Ã transition systems, the latter observable at higher temperature. These measurements have shown that the X̃ and Ã states of isopropoxy are separated by only 60.7(7) cm^{-1}, which indicates a strong pseudo Jahn-Teller effect in the X̃ state. Such pseudo Jahn-Teller coupling should also introduce additional bands into the B̃-X̃ spectrum, and a number of weaker transitions have been observed that may be caused by such effects. In this talk we present a vibronic analysis of the B̃-X̃ transition based on the experimental results and on results from recent quantum chemistry calculations.
Improved Pseudo-section Representation for CSAMT Data in Geothermal Exploration
NASA Astrophysics Data System (ADS)
Grandis, Hendra; Sumintadireja, Prihadi
2017-04-01
Controlled-Source Audio-frequency Magnetotellurics (CSAMT) is a frequency-domain sounding technique, typically employing a grounded electric dipole as the primary electromagnetic (EM) source, used to infer the subsurface resistivity distribution. The use of an artificial source provides coherent signals with a higher signal-to-noise ratio and overcomes the problems of randomness and fluctuation of the natural EM fields used in MT. However, being an extension of MT, CSAMT still uses apparent resistivity and phase for data representation. The finite transmitter-receiver distance in CSAMT leads to a somewhat "distorted" response of the subsurface compared to MT data. We propose a simple technique to present CSAMT data as an apparent resistivity pseudo-section carrying more meaningful information for qualitative interpretation. Tests with synthetic and field CSAMT data showed that the simple technique is valid only for sounding curves exhibiting a transition from high to low to high resistivity (i.e., H-type), which prevails in data from geothermal prospects. For quantitative interpretation, we recommend the use of the full solution of CSAMT modelling, since our technique is not valid for more general cases.
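The apparent resistivity referred to here reduces, in the far-field limit where CSAMT behaves like MT, to the standard Cagniard form computed from orthogonal horizontal field components (a reference formula, not the paper's proposed representation):

```latex
% Far-field (Cagniard) apparent resistivity and phase from orthogonal E and H
% components, with angular frequency omega and free-space permeability mu_0:
\rho_a = \frac{1}{\omega \mu_0} \left| \frac{E_x}{H_y} \right|^2 ,
\qquad
\phi = \arg\!\left(\frac{E_x}{H_y}\right).
```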
Brunetti, C R; Burke, R L; Hoflack, B; Ludwig, T; Dingwell, K S; Johnson, D C
1995-01-01
Herpes simplex virus (HSV) glycoprotein D (gD) is essential for virus entry into cells, is modified with mannose-6-phosphate (M-6-P), and binds to both the 275-kDa M-6-P receptor (MPR) and the 46-kDa MPR (C. R. Brunetti, R. L. Burke, S. Kornfeld, W. Gregory, K. S. Dingwell, F. Masiarz, and D. C. Johnson, J. Biol. Chem. 269:17067-17074, 1994). Since MPRs are found on the surfaces of mammalian cells, we tested the hypothesis that MPRs could serve as receptors for HSV during virus entry into cells. A soluble form of the 275-kDa MPR, derived from fetal bovine serum, inhibited HSV plaques on monkey Vero cells, as did polyclonal rabbit anti-MPR antibodies. In addition, the number and size of HSV plaques were reduced when cells were treated with bovine serum albumin conjugated with pentamannose-phosphate (PM-PO4-BSA), a bulky, high-affinity ligand for MPRs. These data imply that HSV can use MPRs to enter cells; however, other molecules must also serve as receptors for HSV, because a reasonable fraction of virus could enter cells treated with even the highest concentrations of these inhibitors. Consistent with the possibility that there are other receptors, HSV produced the same number of plaques on MPR-deficient mouse fibroblasts as on normal mouse fibroblasts, and there was no inhibition by PM-PO4-BSA in either of these embryonic mouse cell types. Together, these results demonstrate that HSV does not rely solely on MPRs to enter cells, although MPRs apparently play some role in virus entry into some cell types and, perhaps, act as one of a number of cell surface molecules that can facilitate entry. We also found that HSV produced small plaques on human fibroblasts derived from patients with pseudo-Hurler polydystrophy, cells in which glycoproteins are not modified with M-6-P residues; yet production of infectious HSV particles was not altered in the pseudo-Hurler cells. In addition, HSV plaque size was reduced by PM-PO4-BSA; therefore, it appears that M-6-P residues and MPRs are required for efficient transmission of HSV between cells, a process which differs in some respects from entry of exogenous virus particles. PMID:7745699