Array coding for large data memories
NASA Technical Reports Server (NTRS)
Tranter, W. H.
1982-01-01
It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words having M symbols in each word. The probability of undetected error is considered, taking into account the three symbol error probabilities of interest, and a formula for determining the probability of undetected error is derived. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p. Two different schemes are found to be of interest. The conducted analysis of array coding shows that the probability of undetected error is very small even for relatively large arrays.
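As a rough illustration of why that probability stays small, here is a minimal sketch under a stated assumption: the simplest variant of such a scheme, in which each row and each column of the array carries one overall parity bit, so an error pattern escapes detection only if every row count and every column count of errors is even. The exact expression follows from a standard parity-indicator expansion; the array dimensions and error rates are hypothetical.

```python
from math import comb

def p_undetected(R, C, p):
    """Exact probability that an R x C array of i.i.d. bit errors
    (rate p) leaves every row parity and every column parity even
    while containing at least one error, i.e. escapes the simple
    row/column parity checks."""
    q = 1.0 - 2.0 * p
    all_even = sum(
        comb(R, j) * comb(C, k) * q ** (j * (C - k) + (R - j) * k)
        for j in range(R + 1) for k in range(C + 1)
    ) / 2 ** (R + C)
    return all_even - (1.0 - p) ** (R * C)

for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:g}: P(undetected) = {p_undetected(33, 33, p):.3e}")
```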
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
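Under the usual assumptions (independent RS symbol errors at the decoder input, bounded-distance decoding that fails beyond t = (n-k)/2 symbol errors), the RS word error probability quoted above reduces to a binomial tail. A sketch, with hypothetical input symbol error rates:

```python
from math import comb

def rs_word_error_prob(ps, n=255, k=223):
    """P(RS word error) for bounded-distance decoding: the decoder
    fails whenever more than t = (n-k)/2 of the n input symbols are
    wrong, assuming independent symbol errors with probability ps."""
    t = (n - k) // 2
    return sum(comb(n, i) * ps ** i * (1.0 - ps) ** (n - i)
               for i in range(t + 1, n + 1))

for ps in (0.01, 0.02, 0.03):   # hypothetical Viterbi-output symbol error rates
    print(f"ps = {ps:.2f}: P(word error) = {rs_word_error_prob(ps):.3e}")
```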
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
Effects of low sampling rate in the digital data-transition tracking loop
NASA Technical Reports Server (NTRS)
Mileant, A.; Million, S.; Hinedi, S.
1994-01-01
This article describes the performance of the all-digital data-transition tracking loop (DTTL) with coherent and noncoherent sampling using nonlinear theory. The effects of few samples per symbol and of noncommensurate sampling and symbol rates are addressed and analyzed. Their impact on the probability density and variance of the phase error are quantified through computer simulations. It is shown that the performance of the all-digital DTTL approaches its analog counterpart when the sampling and symbol rates are noncommensurate (i.e., the number of samples per symbol is an irrational number). The loop signal-to-noise ratio (SNR) (inverse of phase error variance) degrades when the number of samples per symbol is an odd integer but degrades even further for even integers.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
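For the outer code, the standard errors-and-erasures condition is that decoding succeeds when 2e + f ≤ d - 1 for e symbol errors and f erasures, where d is the minimum distance. The sketch below computes the probability of correct decoding under the idealized assumption of i.i.d. outer symbols (interleaving with degree m1 is what motivates this independence approximation); the code parameters and the per-symbol error and erasure rates are hypothetical.

```python
from math import comb

def p_correct_decoding(n, d, p_err, p_ers):
    """Probability that the outer code decodes a word correctly,
    assuming i.i.d. outer symbols that arrive erased with probability
    p_ers, in error with probability p_err, and correct otherwise.
    An errors-and-erasures decoder succeeds when 2e + f <= d - 1."""
    p_ok = 1.0 - p_err - p_ers
    total = 0.0
    for f in range(d):                          # erasure count
        for e in range((d - 1 - f) // 2 + 1):   # error count, 2e + f <= d - 1
            total += (comb(n, f) * comb(n - f, e)
                      * p_ers ** f * p_err ** e * p_ok ** (n - f - e))
    return total

# e.g. an (n, k) outer code with d = 33 and hypothetical channel rates
print(f"P(correct decoding) = {p_correct_decoding(255, 33, 1e-3, 1e-2):.6f}")
```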
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
On the synchronizability and detectability of random PPM sequences
NASA Technical Reports Server (NTRS)
Georghiades, Costas N.; Lin, Shu
1987-01-01
The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.
Analysis of synchronous digital-modulation schemes for satellite communication
NASA Technical Reports Server (NTRS)
Takhar, G. S.; Gupta, S. C.
1975-01-01
The multipath communication channel for space communications is modeled as a multiplicative channel. This paper discusses the effects of multiplicative channel processes on the symbol error rate for quadrature modulation (QM) digital modulation schemes. An expression for the upper bound on the probability of error is derived and numerically evaluated. The results are compared with those obtained for additive channels.
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause a decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. P_E(u) for the (255,223) Reed-Solomon code used by NASA, and for the (31,15) Reed-Solomon code (JTIDS code), is calculated using the exact formula, and P_E(u) is observed to approach Q rapidly as u gets larger. An upper bound for the expression is derived, and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
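Q itself is straightforward to compute: it is the fraction of the whole space covered by decoding spheres of radius t, using the MDS sphere-volume count. A sketch for the two codes mentioned:

```python
from math import comb

def q_random_error(n, k, q):
    """Fraction of all q^n words that fall inside some decoding sphere
    of radius t = (n-k)/2 of an (n, k) MDS code over GF(q): the
    probability that a completely random error pattern causes a
    decoder error."""
    t = (n - k) // 2
    vol = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return vol / q ** (n - k)

print(f"(255,223) over GF(256): Q = {q_random_error(255, 223, 256):.3e}")
print(f"(31,15)  over GF(32):   Q = {q_random_error(31, 15, 32):.3e}")
```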
NASA Technical Reports Server (NTRS)
1981-01-01
A hardware integrated convolutional coding/symbol interleaving and integrated symbol deinterleaving/Viterbi decoding simulation system is described. Validation of the performance of the TDRSS S-band return link with BPSK modulation, operating in a pulsed RFI environment, is included. The system consists of three components: the Fast Linkabit Error Rate Tester (FLERT), the Transition Probability Generator (TPG), and a modified LV7017B which includes rate 1/3 capability as well as a periodic interleaver/deinterleaver. Operating and maintenance manuals for each of these units are included.
The role of visual spatial attention in adult developmental dyslexia.
Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko
2013-01-01
The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.
System and method for forward error correction
NASA Technical Reports Server (NTRS)
Cole, Robert M. (Inventor); Bishop, James E. (Inventor)
2006-01-01
A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.
NASA Astrophysics Data System (ADS)
Liao, Renbo; Liu, Hongzhan; Qiao, Yaojun
2014-05-01
In order to improve the power efficiency and reduce the packet error rate of reverse differential pulse position modulation (RDPPM) for wireless optical communication (WOC), a hybrid reverse differential pulse position width modulation (RDPPWM) scheme is proposed, based on RDPPM and reverse pulse width modulation. Subsequently, the symbol structure of RDPPWM is briefly analyzed, and its performance is compared with that of other modulation schemes in terms of average transmitted power, bandwidth requirement, and packet error rate over ideal additive white Gaussian noise (AWGN) channels. Based on the given model, the simulation results show that the proposed modulation scheme has the advantages of improving the power efficiency and reducing the bandwidth requirement. Moreover, in terms of error probability performance, RDPPWM can achieve a much lower packet error rate than that of RDPPM. For example, at the same received signal power of -28 dBm, the packet error rate of RDPPWM can decrease to 2.6×10^-12, while that of RDPPM is 2.2×10. Furthermore, RDPPWM does not need symbol synchronization at the receiving end. These considerations make RDPPWM a favorable candidate to select as the modulation scheme in the WOC systems.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
Diversity Order Analysis of Dual-Hop Relaying with Partial Relay Selection
NASA Astrophysics Data System (ADS)
Bao, Vo Nguyen Quoc; Kong, Hyung Yun
In this paper, we study the performance of dual hop relaying in which the best relay selected by partial relay selection will help the source-destination link to overcome the channel impairment. Specifically, closed-form expressions for outage probability, symbol error probability and achievable diversity gain are derived using the statistical characteristic of the signal-to-noise ratio. Numerical investigation shows that the system achieves diversity of two regardless of relay number and also confirms the correctness of the analytical results. Furthermore, the performance loss due to partial relay selection is investigated.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
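A depth-4 block interleaver of the kind described writes four RS codewords row-wise and reads column-wise, so a burst of b channel symbols out of the inner decoder lands on at most ceil(b/4) symbols of any one codeword. A minimal sketch:

```python
import numpy as np

def interleave(codewords, depth=4, n=255):
    """Depth-4 block interleaver: write 'depth' RS codewords row-wise,
    read column-wise, spreading a burst of b channel symbols over at
    most ceil(b/depth) symbols of each codeword."""
    table = np.asarray(codewords).reshape(depth, n)
    return table.T.reshape(-1)

def deinterleave(symbols, depth=4, n=255):
    """Inverse mapping at the receiving end."""
    return np.asarray(symbols).reshape(n, depth).T.reshape(-1)

data = np.arange(4 * 255)   # four concatenated 255-symbol codewords
assert np.array_equal(deinterleave(interleave(data)), data)
```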
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities of occurrence. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
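As a minimal illustration of the pipeline described above, the sketch below Huffman-codes a hypothetical stream of small prediction corrections; values near zero dominate, so the greedy merge assigns them the shortest codewords.

```python
import heapq
from collections import Counter

def huffman_code(seq):
    """Return a prefix code (symbol -> bit string) built by greedily
    merging the two least frequent subtrees."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(Counter(seq).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w0, _, c0 = heapq.heappop(heap)
        w1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (w0 + w1, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

corrections = [0, 0, 1, -1, 0, 2, 0, 1, 0, 0, -1, 0]  # hypothetical predictor residuals
code = huffman_code(corrections)
bits = sum(len(code[c]) for c in corrections)
print(code, f"-> {bits} bits vs {2 * len(corrections)} for fixed 2-bit symbols")
```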
System and method for transferring data on a data link
NASA Technical Reports Server (NTRS)
Cole, Robert M. (Inventor); Bishop, James E. (Inventor)
2007-01-01
A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
Symbolic Analysis of Concurrent Programs with Polymorphism
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam
2010-01-01
The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.
The relevance of error analysis in graphical symbols evaluation.
Piamonte, D P
1999-01-01
In an increasing number of modern tools and devices, small graphical symbols appear simultaneously in sets as parts of the human-machine interfaces. The presence of each symbol can influence the other's recognizability and correct association to its intended referents. Thus, aside from correct associations, it is equally important to perform certain error analysis of the wrong answers, misses, confusions, and even lack of answers. This research aimed to show how such error analyses could be valuable in evaluating graphical symbols especially across potentially different user groups. The study tested 3 sets of icons representing 7 videophone functions. The methods involved parameters such as hits, confusions, missing values, and misses. The association tests showed similar hit rates of most symbols across the majority of the participant groups. However, exploring the error patterns helped detect differences in the graphical symbols' performances between participant groups, which otherwise seemed to have similar levels of recognition. These are very valuable not only in determining the symbols to be retained, replaced or re-designed, but also in formulating instructions and other aids in learning to use new products faster and more satisfactorily.
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
NASA Technical Reports Server (NTRS)
Simon, M.; Mileant, A.
1986-01-01
The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.
The Statistical Loop Analyzer (SLA)
NASA Technical Reports Server (NTRS)
Lindsey, W. C.
1985-01-01
The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal drop-out tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.
Trellis Coding of Non-coherent Multiple Symbol Full Response M-ary CPFSK with Modulation Index 1/M
NASA Technical Reports Server (NTRS)
Lee, H.; Divsalar, D.; Weber, C.
1994-01-01
This paper introduces a trellis coded modulation (TCM) scheme for non-coherent multiple symbol full response M-ary CPFSK with modulation index 1/M. A proper branch metric for the trellis decoder is obtained by employing a simple approximation of the modified Bessel function for large signal-to-noise ratio (SNR). The pairwise error probability of coded sequences is evaluated by applying a linear approximation to the Rician random variable.
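The approximation referred to is the large-argument behavior ln I0(x) ≈ x, which turns the noncoherent metric ln I0(|z|) into the simple branch metric |z|. A quick numerical check (scipy's exponentially scaled i0e gives a numerically stable ln I0):

```python
import numpy as np
from scipy.special import i0e

# ln I0(x) computed stably as x + ln(i0e(x)), since i0e(x) = exp(-x) I0(x);
# compare with the large-SNR approximation ln I0(x) ~ x used for the metric.
for x in (1.0, 5.0, 20.0, 100.0):
    exact = x + np.log(i0e(x))
    print(f"x = {x:6.1f}: ln I0(x) = {exact:9.3f}, approximation x = {x:9.3f}")
```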
Code-Time Diversity for Direct Sequence Spread Spectrum Systems
Hassan, A. Y.
2014-01-01
Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems. It is called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of used spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models from Rayleigh flat and frequency selective fading channels. The probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input and multiple-output (MIMO) systems.
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noisy environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the values estimated by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulence in coupled map lattices.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
1987-01-01
wetlands, are as follows:

Category / Symbol / Definition
OBLIGATE WETLAND PLANTS / OBL / Plants that occur almost always (estimated probability >99%) in ... (estimated probability 1% to 33%) in nonwetlands.
FACULTATIVE PLANTS / FAC / Plants with a similar likelihood (estimated probability 33% to 67%) of ...

Symbols appearing in the list under the indicator status column are as follows: +: A "+" sign following an indicator status denotes that the
Reduced circuit implementation of encoder and syndrome generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trager, Barry M; Winograd, Shmuel
An error correction method and system includes an Encoder and Syndrome-generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error correcting codes. The system and method computes the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example code over GF(2^8) which has 72 symbols, three (3) blocks of addresses which differ by a cube root of unity to allow the data symbols to be combined, reducing the size and complexity of the odd syndrome circuits. Further, the implementation circuit for generating check symbols is derived from the syndrome circuit using the inverse of the part of the syndrome matrix for check locations.
Performance of unbalanced QPSK in the presence of noisy reference and crosstalk
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1979-01-01
The problem of transmitting two telemetry data streams having different rates and different powers using unbalanced quadriphase shift keying (UQPSK) signaling is considered. It is noted that the presence of a noisy carrier phase reference causes a degradation in detection performance in coherent communications systems and that imperfect carrier synchronization not only attenuates the main demodulated signal voltage in UQPSK but also produces interchannel interference (crosstalk) which degrades the performance still further. Exact analytical expressions for the symbol error probability of UQPSK in the presence of a noisy phase reference are derived.
Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).
Lyn, Heidi
2007-10-01
Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use from two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half year old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not solely depend upon the sample mode (e.g. auditory similarity errors are not universally more frequent with an English sample, nor were visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactical confusions (e.g. confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical, web of representations when exposed to a symbol system.
Feed-forward frequency offset estimation for 32-QAM optical coherent detection.
Xiao, Fei; Lu, Jianing; Fu, Songnian; Xie, Chenhui; Tang, Ming; Tian, Jinwen; Liu, Deming
2017-04-17
Due to the non-rectangular distribution of the constellation points, traditional fast Fourier transform based frequency offset estimation (FFT-FOE) is no longer suitable for 32-QAM signals. Here, we report a modified FFT-FOE technique that selects and digitally amplifies the inner QPSK ring of 32-QAM after the adaptive equalization, which is defined as QPSK-selection assisted FFT-FOE. Simulation results show that no FOE error occurs with a FFT size of only 512 symbols when the signal-to-noise ratio (SNR) is above 17.5 dB using our proposed FOE technique. However, the error probability of the traditional FFT-FOE scheme for 32-QAM is always intolerably high. Finally, our proposed FOE scheme functions well for a 10 Gbaud dual polarization (DP)-32-QAM signal to reach the 20% forward error correction (FEC) threshold of BER = 2×10^-2, under the scenario of back-to-back (B2B) transmission.
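For context, here is a minimal sketch of the fourth-power FFT-FOE idea that the proposed scheme applies to the selected inner QPSK ring: raising QPSK to the fourth power strips the modulation and leaves a tone at four times the frequency offset. All numbers (baud rate, offset, noise level, the 512-symbol FFT) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, baud, f_off = 512, 10e9, 200e6   # FFT size, symbol rate, true offset (illustrative)

# QPSK symbols stand in for the selected-and-amplified inner ring of 32-QAM
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
t = np.arange(N) / baud
r = syms * np.exp(2j * np.pi * f_off * t)
r += 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))   # roughly 17 dB SNR

# The 4th power strips QPSK modulation, leaving a tone at 4 * f_off
spec = np.abs(np.fft.fft(r ** 4))
f_hat = np.fft.fftfreq(N, d=1 / baud)[np.argmax(spec)] / 4
print(f"estimated offset: {f_hat / 1e6:.1f} MHz (true: {f_off / 1e6:.1f} MHz)")
```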
Exact and Approximate Probabilistic Symbolic Execution
NASA Technical Reports Server (NTRS)
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Huang, Kuo-Chen; Chiang, Shu-Ying; Chen, Chen-Fu
2008-02-01
The effects of color combinations of an icon's symbol/background and components of flicker and flicker rate on visual search performance on a liquid crystal display screen were investigated with 39 subjects who searched for a target icon in a circular stimulus array (diameter = 20 cm) including one target and 19 distractors. Analysis showed that the icon's symbol/background color significantly affected search time. The search times for icons with black/red and white/blue were significantly shorter than for white/yellow, black/yellow, and black/blue. Flickering of different components of the icon significantly affected the search time. Search time for an icon's border flickering was shorter than for an icon symbol flickering; search for flicker rates of 3 and 5 Hz was shorter than that for 1 Hz. For icon's symbol/background color combinations, search error rate for black/blue was greater than for black/red and white/blue combinations, and the error rate for an icon's border flickering was lower than for an icon's symbol flickering. Interactions affected search time and error rate. Results are applicable to design of graphic user interfaces.
Apperly, Ian A; Williams, Emily; Williams, Joelle
2004-01-01
In 4 experiments 120 three- to four-year-old nonreaders were asked the identity of a symbolic representation as it appeared with different objects. Consistent with Bialystok (2000), many children judged the identity of written words to vary according to the object with which they appeared but few made such errors with recognizable pictures. Children also made few errors when the symbols were unrecognizable pictures. In Experiments 2 to 4 this pattern of responses was preserved in conditions that made it unlikely or impossible for children to answer correctly by taking the symbol to refer to one of the objects with which it appeared. Instead, correct answers required children to appreciate that the symbol had a generic, abstract meaning.
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2012-01-01
Initial optical communications experiments with a Vertex polished aluminum panel have been described. The polished panel was mounted on the main reflector of the DSN's research antenna at DSS-13. The PSF was recorded via a remotely controlled digital camera mounted on the subreflector structure. The initial PSF generated by Jupiter showed significant tilt error and some mechanical deformation. After upgrades, the PSF improved significantly, leading to much better concentration of light. Communications performance of the initial and upgraded panel structures was compared. After the upgrades, simulated PPM symbol error probability decreased by six orders of magnitude. Work is continuing to demonstrate closed-loop tracking of sources from zenith to horizon, and to better characterize communications performance in realistic daytime background environments.
Frame synchronization methods based on channel symbol measurements
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.
1989-01-01
The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
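A minimal sketch of the channel-symbol approach described above: correlate the known marker against the soft received symbols at every offset and take the maximum statistic. Marker length, frame size, and noise level here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
L, N, true_off = 32, 1024, 200                  # marker length, frame length, offset

marker = 1.0 - 2.0 * rng.integers(0, 2, L)      # hypothetical +/-1 sync marker
frame = 1.0 - 2.0 * rng.integers(0, 2, N)       # random +/-1 data symbols
frame[true_off:true_off + L] = marker
r = frame + rng.normal(scale=0.7, size=N)       # soft channel symbols, moderate SNR

# Correlate the marker against every offset and pick the maximum statistic
stats = np.array([marker @ r[k:k + L] for k in range(N - L + 1)])
print("estimated offset:", int(np.argmax(stats)), "| true offset:", true_off)
```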
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
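A tiny worked example of the inconsistency: with the toy posterior below, the 0-1 (MAP) rule and the rule minimizing expected Levenshtein cost pick different strings.

```python
def lev(a, b):
    """Levenshtein (edit) distance via a single-row dynamic program."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

posterior = {"ab": 0.35, "ba": 0.35, "aa": 0.30}   # toy posterior over strings

map_rule = max(posterior, key=posterior.get)
bayes_edit = min(posterior,
                 key=lambda h: sum(p * lev(h, r) for r, p in posterior.items()))
print("0-1 cost picks:", map_rule)      # 'ab' (or 'ba'): highest posterior
print("edit cost picks:", bayes_edit)   # 'aa': lowest expected symbol error
```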
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton-on-Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
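The two factorizations quoted above are easy to verify by carry-less (GF(2)) polynomial multiplication, with polynomials encoded as bit masks (bit i stands for X^i):

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials given as bit masks (bit i = X^i)."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

inner = gf2_mul(0b11, 0b1000011)            # (X+1)(X^6+X+1)
outer = gf2_mul(0b11, 0b1111000000011111)   # (X+1)(X^15+...+X^2+X+1)
assert inner == 0b11000101                  # X^7+X^6+X^2+1
assert outer == (1 << 16) | (1 << 12) | (1 << 5) | 1   # X^16+X^12+X^5+1
print(f"inner generator: {inner:#x}, outer generator: {outer:#x}")
```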
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
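The abstract does not spell out the algorithm itself, but the mapping it accelerates is the standard one: each bit LLR is the log-ratio of summed likelihoods of the constellation symbols whose labels carry a 0 versus a 1 in that position. A brute-force sketch on a hypothetical Gray-mapped 4-PAM constellation:

```python
import numpy as np

# Gray-mapped 4-PAM as a small stand-in constellation: 2 bits per symbol
points = np.array([-3.0, -1.0, 1.0, 3.0])
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])   # Gray labels per point

def bit_llrs(r, sigma2):
    """Map one received sample to per-bit LLRs by summing symbol
    likelihoods over the symbols whose label has that bit = 0 / 1."""
    like = np.exp(-(r - points) ** 2 / (2 * sigma2))   # symbol likelihoods
    return [np.log(like[labels[:, b] == 0].sum() / like[labels[:, b] == 1].sum())
            for b in range(labels.shape[1])]

print(bit_llrs(r=0.8, sigma2=0.5))
```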
Meanings Given to Algebraic Symbolism in Problem-Posing
ERIC Educational Resources Information Center
Cañadas, María C.; Molina, Marta; del Río, Aurora
2018-01-01
Some errors in the learning of algebra suggest that students might have difficulties giving meaning to algebraic symbolism. In this paper, we use problem posing to analyze the students' capacity to assign meaning to algebraic symbolism and the difficulties that students encounter in this process, depending on the characteristics of the algebraic…
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded first over low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data is no more than half of a message block. The cost is in the additional complexity required in the encoder. If extra computation resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
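A sketch of the encoding step under stated assumptions: the Robust Soliton distribution below is the standard one, while the priority rule (low-degree code symbols draw their first neighbor from the high-priority pool) is a simplified reading of the restriction described above; block size, priority pool, and the parameters c and delta are all hypothetical.

```python
import math
import random

def robust_soliton(K, c=0.1, delta=0.5):
    """Standard Robust Soliton degree distribution over 1..K."""
    R = c * math.log(K / delta) * math.sqrt(K)
    rho = [0.0, 1.0 / K] + [1.0 / (d * (d - 1)) for d in range(2, K + 1)]
    tau = [0.0] * (K + 1)
    pivot = min(K, max(1, round(K / R)))
    for d in range(1, pivot):
        tau[d] = R / (d * K)
    tau[pivot] = R * math.log(R / delta) / K
    beta = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / beta for d in range(K + 1)]

def encode_symbol(block, high_priority, dist):
    """One prioritized code symbol: draw a degree, bias low-degree
    symbols toward the high-priority pool, then XOR the chosen
    information symbols (a simplified reading of the scheme)."""
    K = len(block)
    d = random.choices(range(K + 1), weights=dist)[0]
    idx = set()
    if d <= 3 and high_priority:        # low-degree: start in the HP pool
        idx.add(random.choice(high_priority))
    while len(idx) < d:
        idx.add(random.randrange(K))
    code = 0
    for i in idx:
        code ^= block[i]
    return code, sorted(idx)

block = [random.getrandbits(8) for _ in range(32)]   # 32 one-byte info symbols
sym, neighbors = encode_symbol(block, list(range(8)), robust_soliton(len(block)))
print(f"code symbol {sym:#04x} built from info symbols {neighbors}")
```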
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
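The object being decomposed is the conditional probability function of a high-order chain. The paper expresses it through memory-function monomials; the sketch below only shows the raw empirical estimate of that function from a symbolic sequence, which is what any such decomposition has to reproduce. The sequence and order are toy values.

```python
from collections import Counter, defaultdict

def conditional_probs(seq, order=2):
    """Empirical P(next symbol | previous 'order' symbols), estimated
    from a single long symbolic sequence."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    return {ctx: {s: c / sum(cnt.values()) for s, c in cnt.items()}
            for ctx, cnt in counts.items()}

seq = "abaabbabaabbabab"    # toy binary-alphabet sequence
for ctx, probs in sorted(conditional_probs(seq).items()):
    print("".join(ctx), "->", probs)
```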
Using hidden Markov models to align multiple sequences.
Mount, David W
2009-07-01
A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
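A toy numerical illustration of the scoring rule described above, multiplying emission and transition probabilities along a path through the model (all states and probabilities below are hypothetical):

def path_probability(seq, path, start, trans, emit):
    # multiply the start, transition, and emission (state) probabilities
    # accumulated while moving through the model along the given path
    p = start[path[0]] * emit[path[0]][seq[0]]
    for t in range(1, len(seq)):
        p *= trans[path[t - 1]][path[t]] * emit[path[t]][seq[t]]
    return p

# two match states over a two-letter alphabet
start = {"M1": 1.0}
trans = {"M1": {"M2": 1.0}}
emit = {"M1": {"A": 0.8, "C": 0.2}, "M2": {"A": 0.3, "C": 0.7}}
print(path_probability("AC", ["M1", "M2"], start, trans, emit))  # 0.8*1.0*0.7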
Symbol interval optimization for molecular communication with drift.
Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung
2014-09-01
In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems, since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values that account for the ISI inside two kinds of blood vessels, and also propose an ISI-free system for strong drift models. Finally, isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared for the one-symbol ISI and ISI-free systems.
Open-loop frequency acquisition for suppressed-carrier biphase signals using one-pole arm filters
NASA Technical Reports Server (NTRS)
Shah, B.; Holmes, J. K.
1991-01-01
Open-loop frequency acquisition performance is discussed for suppressed-carrier binary phase shift keyed signals in terms of the probability of detecting the carrier frequency offset when the arms of the Costas loop detector have one-pole filters. The approach, which does not require symbol timing, uses fast Fourier transforms (FFTs) to detect the carrier frequency offset. The detection probability, which depends on both the 3 dB arm filter bandwidth and the received symbol signal-to-noise ratio, is derived and is shown to be independent of symbol timing. It is shown that the performance of this technique is slightly better than that of other open-loop acquisition techniques which use integrators in the arms and whose detection performance varies with symbol timing.
File compression and encryption based on LLS and arithmetic coding
NASA Astrophysics Data System (ADS)
Yu, Changzhi; Li, Hengjian; Wang, Xiyu
2018-03-01
We propose a file compression and encryption model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences by using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. To achieve encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model achieves the purpose of data encryption while attaining almost the same compression efficiency as arithmetic coding.
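A minimal Python sketch of the idea, assuming a stand-in logistic-sine map and an illustrative perturbation strength; the paper's exact LLS construction and key schedule are not reproduced here. A decoder would mirror the same keyed perturbations to recover the symbols.

import math

def lls(x, r=3.99):
    # a logistic-sine composite map as a stand-in for the paper's LLS system
    return abs(math.sin(math.pi * r * x * (1.0 - x)))

def encode(message, probs, key=0.61):
    order = sorted(probs)
    low, high, x = 0.0, 1.0, key
    for s in message:
        x = lls(x)
        # chaotic perturbation of every symbol's mass (then renormalized):
        # this is what shifts the upper and lower limits of each symbol's
        # interval and thereby encrypts the output value
        w = {t: probs[t] * (1.0 + 0.1 * lls(x + i / len(order)))
             for i, t in enumerate(order)}
        z = sum(w.values())
        cum = 0.0
        for t in order:
            width = w[t] / z
            if t == s:
                low, high = (low + (high - low) * cum,
                             low + (high - low) * (cum + width))
                break
            cum += width
    return (low + high) / 2.0   # any value in [low, high) identifies the message

print(encode("ABBA", {"A": 0.6, "B": 0.4}))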
Modelling dynamics with context-free grammars
NASA Astrophysics Data System (ADS)
García-Huerta, Juan-M.; Jiménez-Hernández, Hugo; Herrera-Navarro, Ana-M.; Hernández-Díaz, Teresa; Terol-Villalobos, Ivan
2014-03-01
This article presents a strategy to model the dynamics performed by vehicles on a freeway. The proposal consists in encoding the movement as a set of finite states. A watershed-based segmentation is used to localize regions with a high probability of motion. Each state represents a proportion of a camera projection in a two-dimensional space, and each state is associated with a symbol, such that any combination of symbols is expressed as a language. Starting from a sequence of symbols, a context-free grammar is inferred through a linear algorithm. This grammar represents a hierarchical view of common sequences observed in the scene. The most probable grammar rules express rules associated with normal movement behavior. Less probable rules quantify non-common behaviors that may need more attention. Finally, any sequence of symbols that does not match the grammar rules may itself express uncommon (abnormal) behavior. The grammar inference is built from several sequences of images taken from a freeway. The testing process uses the sequence of symbols emitted by the scenario, matching the grammar rules against common freeway behaviors. Detecting abnormal/normal behaviors is managed as the task of verifying whether a word generated by the scenario is recognized by the grammar.
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
NASA Astrophysics Data System (ADS)
Dabiri, Mohammad Taghi; Sadough, Seyed Mohammad Sajad
2018-04-01
In free-space optical (FSO) links, atmospheric turbulence leads to scintillation in the received signal. Due to its ease of implementation, intensity modulation with direct detection (IM/DD) based on ON-OFF keying (OOK) is a popular signaling scheme in these systems. Over a turbulence channel, to detect OOK symbols blindly, i.e., without sending pilot symbols, an expectation-maximization (EM)-based detection method was recently proposed in the FSO literature. However, the performance of EM-based detection methods depends severely on the length of the observation interval (Ls). To choose the optimum values of Ls at target bit error rates (BERs) of FSO communications, which are commonly lower than 10^-9, Monte Carlo simulations would be very cumbersome and require very long processing times. To facilitate performance evaluation, in this letter we derive analytic expressions for the BER and outage probability. Numerical results validate the accuracy of the derived analytic expressions. Our results may serve to evaluate the optimum value of Ls without resorting to time-consuming Monte Carlo simulations.
Sample-Clock Phase-Control Feedback
NASA Technical Reports Server (NTRS)
Quirk, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy
2012-01-01
To demodulate a communication signal, a receiver must recover and synchronize to the symbol timing of a received waveform. In a system that utilizes digital sampling, the fidelity of synchronization is limited by the time between the symbol boundary and the closest sample time location. To reduce this error, one typically uses a sample clock in excess of the symbol rate in order to provide multiple samples per symbol, thereby lowering the error limit to a fraction of a symbol time. For systems with a large modulation bandwidth, the required sample clock rate is prohibitive due to current technological barriers and processing complexity. With precise control of the phase of the sample clock, one can sample the received signal at times arbitrarily close to the symbol boundary, thus obviating the need, from a synchronization perspective, for multiple samples per symbol. Sample-clock phase-control feedback was developed for use in the demodulation of an optical communication signal, where multi-GHz modulation bandwidths would require prohibitively large sample clock frequencies for rates in excess of the symbol rate. A custom mixed-signal (RF/digital) offset phase-locked loop circuit was developed to control the phase of the 6.4-GHz clock that samples the photon-counting detector output. The offset phase-locked loop is driven by a feedback mechanism that continuously corrects for variation in the symbol time due to motion between the transmitter and receiver as well as oscillator instability. This innovation will allow significant improvements in receiver throughput; for example, the throughput of a 16-slot pulse-position modulation (PPM) system can increase from 188 Mb/s to 1.5 Gb/s.
NASA Astrophysics Data System (ADS)
Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar
2012-12-01
Mobile ad hoc network (MANET) refers to an arrangement of wireless mobile nodes that have the tendency to dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent property of high data rate transmission, which corresponds to its lofty spectrum efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol time). Most present-day techniques employing OFDM for data transmission support mobility as one of the primary features. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades the signal quality due to crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article which nullifies the effect of channel frequency offsets on the received OFDM symbols. The second problem addressed in this article is the noise induced by different sources into the received symbol, increasing its bit error rate and making it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, the maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in every iteration.
Blocking Losses With a Photon Counter
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Piazzolla, Sabino
2012-01-01
It was not known how to accurately assess losses in a communications link due to photodetector blocking, a phenomenon wherein a detector is rendered inactive for a short time after the detection of a photon. When used to detect a communications signal, blocking leads to losses relative to an ideal detector, which may be measured as a reduction in the communications rate for a given received signal power, or an increase in the signal power required to support the same communications rate. This work involved characterizing blocking losses for single detectors and arrays of detectors. Blocking may be mitigated by spreading the signal intensity over an array of detectors, reducing the count rate on any one detector. A simple approximation was made to the blocking loss as a function of the probability that a detector is unblocked at a given time, essentially treating the blocking probability as a scaling of the detection efficiency. An exact statistical characterization was derived for a single detector, and an approximation for multiple detectors. This allowed derivation of several accurate approximations to the loss. Methods were also derived to account for a rise time in recovery, and for non-uniform illumination due to diffraction and atmospheric distortion of the phase front. It was assumed that the communications signal is intensity modulated and received by an array of photon-counting photodetectors. For the purpose of this analysis, it was assumed that the detectors are ideal, in that they produce a signal that allows one to reproduce exactly the arrival times of electrons, produced either as photoelectrons or from dark noise. For single detectors, the performance under blocking of the maximum-likelihood (ML) receiver is illustrated, as well as that of a maximum-count (MC) receiver which, when receiving a pulse-position-modulated (PPM) signal, selects the symbol corresponding to the slot with the largest electron count. Whereas the MC receiver saturates at high count rates, the ML receiver may not. The losses in capacity, symbol-error rate (SER), and count rate were numerically computed. It was shown that the capacity and symbol-error-rate losses track each other, whereas the count-rate loss does not generally reflect the SER or capacity loss, as the slot statistics at the detector output are no longer Poisson. It is also shown that the MC receiver loss may be accurately predicted for dead times on the order of a slot.
The advanced receiver 2: Telemetry test results in CTA 21
NASA Technical Reports Server (NTRS)
Hinedi, S.; Bevan, R.; Marina, M.
1991-01-01
Telemetry tests with the Advanced Receiver II (ARX II) in Compatibility Test Area 21 are described. The ARX II was operated in parallel with a Block-III Receiver/baseband processor assembly combination (BLK-III/BPA) and a Block III Receiver/subcarrier demodulation assembly/symbol synchronization assembly combination (BLK-III/SDA/SSA). The telemetry simulator assembly provided the test signal for all three configurations, and the symbol signal to noise ratio as well as the symbol error rates were measured and compared. Furthermore, bit error rates were also measured by the system performance test computer for all three systems. Results indicate that the ARX-II telemetry performance is comparable and sometimes superior to the BLK-III/BPA and BLK-III/SDA/SSA combinations.
Huffman scanning: using language models within fixed-grid keyboard emulation
Roark, Brian; Beckley, Russell; Gibbons, Chris; Fried-Oken, Melanie
2012-01-01
Individuals with severe motor impairments commonly enter text using a single binary switch and symbol scanning methods. We present a new scanning method – Huffman scanning – which uses Huffman coding to select the symbols to highlight during scanning, thus minimizing the expected bits per symbol. With our method, the user can select the intended symbol even after switch activation errors. We describe two varieties of Huffman scanning – synchronous and asynchronous – and present experimental results, demonstrating speedups over row/column and linear scanning. PMID:24244070
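A minimal sketch of the code construction underlying the method, using a standard heap-based Huffman builder in Python; the scanning-protocol comment is a simplified reading, not the authors' full interface.

import heapq, itertools

def huffman(probs):
    # returns symbol -> bit string; the expected code length is minimal, which
    # is why scanning by code bits minimizes expected switch presses per symbol
    tie = itertools.count()            # tie-breaker so heap tuples compare
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

codes = huffman({"e": 0.4, "t": 0.25, "a": 0.2, "_": 0.15})
print(codes)
# scan step k highlights the symbols whose k-th code bit is 0; a switch hit
# keeps the highlighted group, narrowing the set until one symbol remains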
Stewart, Terrence C; Eliasmith, Chris
2013-06-01
Quantum probability (QP) theory can be seen as a type of vector symbolic architecture (VSA): mental states are vectors storing structured information and manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article by Pothos & Busemeyer (P&B).
Symbol Error Rate of Underlay Cognitive Relay Systems over Rayleigh Fading Channel
NASA Astrophysics Data System (ADS)
Ho van, Khuong; Bao, Vo Nguyen Quoc
Underlay cognitive systems allow secondary users (SUs) to access the licensed band allocated to primary users (PUs) for better spectrum utilization with the power constraint imposed on SUs such that their operation does not harm the normal communication of PUs. This constraint, which limits the coverage range of SUs, can be offset by relaying techniques that take advantage of shorter range communication for lower path loss. Symbol error rate (SER) analysis of underlay cognitive relay systems over fading channel has not been reported in the literature. This paper fills this gap. The derived SER expressions are validated by simulations and show that underlay cognitive relay systems suffer a high error floor for any modulation level.
Error Rates and Channel Capacities in Multipulse PPM
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Moision, Bruce
2007-01-01
A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
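The alphabet-size arithmetic implied by this generalization, as a small Python check: k pulses placed in M slots yield C(M, k) distinct waveforms.

from math import comb, log2

def mppm_bits(M, k):
    # k pulses in M slots give C(M, k) symbols, i.e. log2(C(M, k)) bits/symbol
    return log2(comb(M, k))

print(mppm_bits(16, 1))   # conventional 16-ary PPM: 4.0 bits per symbol
print(mppm_bits(16, 2))   # two-pulse PPM: log2(120), about 6.91 bits per symbol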
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
NASA Astrophysics Data System (ADS)
Liebovitch, Larry
1998-03-01
The longest term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes the wrong base is inserted, sticking out and disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes that slide along the DNA can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols, and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: how does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: what is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test whether some bases are linear combinations of other bases. We used this method to analyze the base sequences in the genes from the lac operon and cytochrome C. We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
Structural analysis of online handwritten mathematical symbols based on support vector machines
NASA Astrophysics Data System (ADS)
Simistira, Foteini; Papavassiliou, Vassilis; Katsouros, Vassilis; Carayannis, George
2013-01-01
Mathematical expression recognition is still a very challenging task for the research community, mainly because of the two-dimensional (2d) structure of mathematical expressions (MEs). In this paper, we present a novel approach for the structural analysis between two on-line handwritten mathematical symbols of a ME, based on spatial features of the symbols. We introduce six features to represent the spatial affinity of the symbols and compare two multi-class classification methods that employ support vector machines (SVMs): one based on the "one-against-one" technique and one based on the "one-against-all", in identifying the relation between a pair of symbols (e.g., subscript, numerator). A dataset containing 1906 spatial relations derived from the Competition on Recognition of Online Handwritten Mathematical Expressions (CROHME) 2012 training dataset is constructed to evaluate the classifiers and compare them with the rule-based classifier of the ILSP-1 system that participated in the contest. The experimental results give an overall mean error rate of 2.61% for the "one-against-one" SVM approach, 6.57% for the "one-against-all" SVM technique and 12.31% for the ILSP-1 classifier.
The effects of age on symbol comprehension in central rail hubs in Taiwan.
Liu, Yung-Ching; Ho, Chin-Heng
2012-11-01
The purpose of this study was to investigate the effects of age and symbol design features on passengers' comprehension of symbols and the performance of these symbols with regard to route guidance. In the first experiment, 30 young participants and 30 elderly participants interpreted the meanings and rated the features of 39 symbols. Researchers collected data on each subject's comprehension time, comprehension score, and feature ratings for each symbol. In the second experiment, this study used a series of photos to simulate scenarios in which passengers follow symbols to arrive at their destinations. The length of time each participant required to follow his/her route and his/her errors were recorded. Older adults experienced greater difficulty in understanding particular symbols as compared to younger adults. Familiarity was the feature most highly correlated with comprehension of symbols, and accuracy of semantic depiction was the best predictor of behavior in following routes.
Differentially coherent quadrature-quadrature phase shift keying (Q2PSK)
NASA Astrophysics Data System (ADS)
Saha, Debabrata; El-Ghandour, Osama
The quadrature-quadrature phase-shift-keying (Q2PSK) signaling scheme uses the vertices of a hypercube of dimension four. A generalized Q2PSK signaling format for differentially coherent detection at the receiver is considered. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice the symbol error rate of a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Astrophysics Data System (ADS)
Quintero-Quiroz, C.; Sorrentino, Taciano; Torrent, M. C.; Masoller, Cristina
2016-04-01
We study the dynamics of semiconductor lasers with optical feedback and direct current modulation, operating in the regime of low frequency fluctuations (LFFs). In the LFF regime the laser intensity displays abrupt spikes: the intensity drops to zero and then gradually recovers. We focus on the inter-spike intervals (ISIs) and use a method of symbolic time-series analysis, which is based on computing the probabilities of symbolic patterns. We show that the variation of the probabilities of the symbols with the modulation frequency and with the intrinsic spike rate of the laser allows us to identify different regimes of noisy locking. Simulations of the Lang-Kobayashi model are in good qualitative agreement with experimental observations.
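As an illustration of pattern-based symbolic analysis of an ISI sequence, here is a minimal Python sketch using ordinal patterns, a common symbolization choice; the authors' exact symbolization rule may differ.

import random
from collections import Counter
from itertools import permutations

def ordinal_pattern(window):
    # the symbol is the permutation that sorts the window (its rank order)
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def pattern_probabilities(series, m=3):
    # estimate the probability of each of the m! ordinal patterns
    pats = [ordinal_pattern(series[i:i + m]) for i in range(len(series) - m + 1)]
    n = len(pats)
    counts = Counter(pats)
    return {p: counts.get(p, 0) / n for p in permutations(range(m))}

isi = [random.expovariate(1.0) for _ in range(10000)]  # stand-in for measured ISIs
for pattern, prob in sorted(pattern_probabilities(isi).items()):
    print(pattern, round(prob, 3))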
NASA Astrophysics Data System (ADS)
Alimi, Isiaka A.; Monteiro, Paulo P.; Teixeira, António L.
2017-11-01
The key paths toward meeting the fifth generation (5G) network requirements are centralized processing and small-cell densification systems implemented on cloud computing-based radio access networks (CC-RANs). The increasing recognition of CC-RANs can be attributed to their valuable features regarding system performance optimization and cost-effectiveness. Nevertheless, realizing the stringent requirements of the fronthaul that connects the network elements is highly demanding. In this paper, considering small-cell network architectures, we present multiuser mixed radio-frequency/free-space optical (RF/FSO) relay networks as feasible technologies for alleviating the stringent fronthaul requirements in CC-RANs. In this study, we use the end-to-end (e2e) outage probability, average symbol error probability (ASEP), and ergodic channel capacity as performance metrics in our analysis. Simulation results show the suitability of deploying mixed RF/FSO schemes in real-life scenarios.
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
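A minimal Python analogue of the Monte Carlo route described above; the formula and uncertainties are hypothetical.

import random, statistics

def propagate(f, means, sigmas, n=200000):
    # draw each input from an independent normal, push it through the formula,
    # and read the output spread as the propagated uncertainty
    out = [f(*[random.gauss(m, s) for m, s in zip(means, sigmas)])
           for _ in range(n)]
    return statistics.mean(out), statistics.stdev(out)

# example: z = x * y with x = 10 +/- 0.1 and y = 5 +/- 0.2
mean, sigma = propagate(lambda x, y: x * y, [10.0, 5.0], [0.1, 0.2])
print(mean, sigma)   # sigma ~ sqrt((5*0.1)**2 + (10*0.2)**2), about 2.06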
A comparison of Manchester symbol tracking loops for block 5 applications
NASA Technical Reports Server (NTRS)
Holmes, J. K.
1991-01-01
The linearized tracking errors of three Manchester (biphase coded) symbol tracking loops are compared to determine which is appropriate for Block 5 receiver applications. The first is a nonreturn to zero (NRZ) symbol synchronizer loop operating at twice the symbol rate (NRZ x 2) so that it operates on half symbols. The second near optimally processes the mid-symbol transitions and ignores the between-symbol transitions. In the third configuration, the first two approaches are combined as a hybrid to produce the best performance. Although this hybrid loop is the best at low symbol signal-to-noise ratios (SNRs), it has about the same performance as the NRZ x 2 loop at higher SNRs (greater than 0 dB Es/N0). Based on this analysis, it is tentatively recommended that the hybrid loop be implemented for Manchester data in the Block 5 receiver. However, the high data rate case and the hardware implications of each implementation must be understood and analyzed before the hybrid loop is recommended unconditionally.
Fuzzy Intervals for Designing Structural Signature: An Application to Graphic Symbol Recognition
NASA Astrophysics Data System (ADS)
Luqman, Muhammad Muzzamil; Delalandre, Mathieu; Brouard, Thierry; Ramel, Jean-Yves; Lladós, Josep
The motivation behind our work is to present a new methodology for symbol recognition. The proposed method employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. We vectorize a graphic symbol, encode its topological and geometrical information by an attributed relational graph, and compute a signature from this structural graph. We address the sensitivity of structural representations to noise by using data-adapted fuzzy intervals. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set. The Bayesian network is deployed in a supervised learning scenario for recognizing query symbols. The method has been evaluated for robustness against degradations & deformations on pre-segmented 2D linear architectural & electronic symbols from GREC databases, and for its recognition abilities on symbols with context noise, i.e., cropped symbols.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.
Truke, a web tool to check for and handle excel misidentified gene symbols.
Mallona, Izaskun; Peinado, Miguel A
2017-03-21
Genomic datasets accompanying scientific publications show a surprisingly high rate of gene name corruption. This error is generated when files and tables are imported into Microsoft Excel and certain gene symbols are automatically converted into dates. We have developed Truke, a flexible Web tool to detect, tag and fix, if possible, such misconversions. In addition, Truke is language and regional locale-aware, providing file format customization (decimal symbol, field separator, etc.) following the user's preferences. Truke is a data format conversion tool with a unique corrupted gene symbol detection utility. Truke is freely available without registration at http://maplab.cat/truke .
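Truke's internal heuristics are not shown here; a minimal stand-alone check for the classic SEPT/MARCH-style date corruption might look like the following Python sketch (the pattern and examples are illustrative).

import re

# a gene-symbol cell that looks like a date ("2-Sep", "1-Mar") has probably
# been auto-converted by Excel from SEPT2, MARCH1, and the like
DATE_LIKE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$", re.I)

def flag_misconverted(column):
    # return (row index, cell) pairs that match the date-like pattern
    return [(i, cell) for i, cell in enumerate(column)
            if DATE_LIKE.match(cell.strip())]

print(flag_misconverted(["TP53", "2-Sep", "BRCA1", "1-Mar"]))
# -> [(1, '2-Sep'), (3, '1-Mar')]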
Learning predictive statistics from temporal sequences: Dynamics and strategies
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E.; Kourtzi, Zoe
2017-01-01
Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics—that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments. PMID:28973111
High data rate Reed-Solomon encoding and decoding using VLSI technology
NASA Technical Reports Server (NTRS)
Miller, Warner; Morakis, James
1987-01-01
Presented is an implementation of a Reed-Solomon encoder and decoder that corrects up to 16 symbol errors, with each symbol being 8 bits. This Reed-Solomon (RS) code is an efficient error correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock as the system clock for the chip set. Approximately 1.65 billion Galois Field (GF) operations per second are achieved with the decoder chip set, and 640 MOPS with the encoder chip.
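The Galois-field workload quoted above consists of GF(2^8) additions (plain XOR) and multiplications. A minimal Python multiply routine follows, assuming the common 0x11D field polynomial; the article does not state which polynomial the chip set uses.

def gf256_mul(a, b, poly=0x11D):
    # carry-less (Russian peasant) multiply, reduced modulo the field polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

# addition in GF(2^8) is XOR, so the decoder's ~1.65 billion GF operations
# per second are essentially tight loops of routines like this one
print(hex(gf256_mul(0x02, 0x80)))  # 0x1d: x^8 reduced by x^8+x^4+x^3+x^2+1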
Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya
2008-07-07
We experimentally demonstrated linewidth-tolerant 10-Gbit/s (2.5-Gsymbol/s) 16-quadrature amplitude modulation (QAM) by using a distributed-feedback laser diode (DFB-LD) with a linewidth of 30 MHz. Error-free operation, i.e., a bit-error rate (BER) below 10^-9, was achieved in transmission over 120 km of standard single-mode fiber (SSMF) without any dispersion compensation. The phase-noise canceling capability provided by a pilot carrier, together with standard electronic pre-equalization to suppress inter-symbol interference (ISI), gave clear 16-QAM constellations and floor-less BER characteristics. We evaluated the BER characteristics by real-time measurement of six symbol error rates (SERs) (three different thresholds for each of the I- and Q-components) with simultaneous constellation observation.
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means of suppressing intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.
Circular blurred shape model for multiclass symbol recognition.
Escalera, Sergio; Fornés, Alicia; Pujol, Oriol; Lladós, Josep; Radeva, Petia
2011-04-01
In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.
Krajcsi, Attila; Lengyel, Gábor; Kojouharova, Petia
2018-01-01
HIGHLIGHTS: We test whether symbolic number comparison is handled by an analog noisy system. The analog system model has systematic biases in describing symbolic number comparison. This suggests that symbolic and non-symbolic numbers are processed by different systems. Dominant numerical cognition models suppose that both symbolic and non-symbolic numbers are processed by the Analog Number System (ANS) working according to Weber's law. It was proposed that in a number comparison task the numerical distance and size effects reflect a ratio-based performance which is the sign of ANS activation. However, an increasing number of findings and alternative models propose that symbolic and non-symbolic numbers might be processed by different representations. Importantly, alternative explanations may offer predictions similar to the ANS prediction; therefore, former evidence usually utilizing only the goodness of fit of the ANS prediction is not sufficient to support the ANS account. To test the ANS model more rigorously, a more extensive test is offered here. Several properties of the ANS predictions for the error rates, reaction times, and diffusion model drift rates were systematically analyzed in both non-symbolic dot comparison and symbolic Indo-Arabic comparison tasks. It was consistently found that while the ANS model's prediction is relatively good for the non-symbolic dot comparison, its prediction is poorer and systematically biased for the symbolic Indo-Arabic comparison. We conclude that only non-symbolic comparison is supported by the ANS, and symbolic number comparisons are processed by other representations. PMID:29491845
Use of symbolic computation in robotics education
NASA Technical Reports Server (NTRS)
Vira, Naren; Tunstel, Edward
1992-01-01
An application of symbolic computation in robotics education is described. A software package is presented which combines generality, user interaction, and user-friendliness with the systematic usage of symbolic computation and artificial intelligence techniques. The software utilizes MACSYMA, a LISP-based symbolic algebra language, to automatically generate closed-form expressions representing forward and inverse kinematics solutions, the Jacobian transformation matrices, robot pose error-compensation model equations, and the Lagrange dynamics formulation for N degree-of-freedom, open-chain robotic manipulators. The goal of such a package is to aid faculty and students in the robotics course by removing the burdensome tasks of mathematical manipulation. The software package has been successfully tested for its accuracy using commercially available robots.
Secondary School Students' Errors in the Translation of Algebraic Statements
ERIC Educational Resources Information Center
Molina, Marta; Rodríguez-Domingo, Susana; Cañadas, María Consuelo; Castro, Encarnación
2017-01-01
In this article, we present the results of a research study that explores secondary students' capacity to perform translations of algebraic statements between the verbal and symbolic representation systems through the lens of errors. We classify and compare the errors made by 2 groups of students: 1 at the beginning of their studies in school…
Similarity of Symbol Frequency Distributions with Heavy Tails
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Font-Clos, Francesc; Altmann, Eduardo G.
2016-04-01
Quantifying the similarity between symbolic sequences is a traditional problem in information theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA over music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf's law). The large number of low-frequency symbols in these distributions poses major difficulties to the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here, we show analytically how the systematic (bias) and statistical (fluctuations) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α = 1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small α's, including α = 1, the errors decay more slowly than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α* = 1 + 1/γ ≤ 2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α spectrum of measures. We find that frequent words change more slowly than less frequent words and that α = 2 provides the most robust measure to quantify language change.
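A quick numerical illustration of the slow finite-size convergence, in Python; the truncated Zipf source and the plug-in estimator are chosen for simplicity, and the paper's analysis is far more general.

import math, random
from collections import Counter

def zipf_sample(n, types=1000, gamma=1.0):
    # n draws from a truncated Zipf law with exponent gamma over `types` symbols
    weights = [r ** -gamma for r in range(1, types + 1)]
    return random.choices(range(types), weights=weights, k=n)

def plugin_entropy(sample):
    # naive (plug-in) Shannon entropy estimate, biased low at small N
    n = len(sample)
    return -sum(c / n * math.log2(c / n) for c in Counter(sample).values())

for n in (100, 1000, 10000, 100000):
    print(n, round(plugin_entropy(zipf_sample(n)), 3))
# the estimate keeps creeping upward with N: for heavy tails the finite-size
# error decays more slowly than the 1/N behavior of short-tailed sources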
Symbol signal-to-noise ratio loss in square-wave subcarrier downconversion
NASA Technical Reports Server (NTRS)
Feria, Y.; Statman, J.
1993-01-01
This article presents the simulated results of the signal-to-noise ratio (SNR) loss in the process of square-wave subcarrier downconversion. In a previous article, the SNR degradation was evaluated at the output of the downconverter based on the signal and noise power change. Unlike in the previous article, the SNR loss is defined here as the difference between the actual and theoretical symbol SNRs for the same symbol-error rate at the output of the symbol matched filter. The results show that an average SNR loss of 0.3 dB can be achieved with tenth-order infinite impulse response (IIR) filters. This loss is a 0.2-dB increase over the SNR degradation in the previous analysis, where neither the signal distortion nor the symbol detector was considered.
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) code is the first practical implementation of digital fountain codes. The coding behavior of LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT code. First, a selection scheme for sparse degrees of LT codes is introduced. Then the probability distribution is optimized according to the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause a loss of synchronization between the encoder and the decoder. The proposed algorithm is therefore designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with LT code with the robust soliton distribution, the proposed algorithm clearly improves the final quality of recovered images with the same overhead.
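A minimal Python sketch of the sparse-degree projection step; the paper's subsequent probability optimization is not reproduced, and the degree set below is illustrative.

def ideal_soliton(k):
    # baseline LT degree distribution (Luby's ideal soliton)
    rho = {1: 1.0 / k}
    rho.update({d: 1.0 / (d * (d - 1)) for d in range(2, k + 1)})
    return rho

def restrict_to_sparse_degrees(dist, degrees):
    # projection step only: keep probability mass on the selected sparse
    # degrees and renormalize; an optimizer would then retune these masses
    kept = {d: dist[d] for d in degrees}
    z = sum(kept.values())
    return {d: p / z for d, p in kept.items()}

dist = restrict_to_sparse_degrees(ideal_soliton(1000), [1, 2, 3, 4, 8, 16, 64])
print(dist)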
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
Symbolic dynamics techniques for complex systems: Application to share price dynamics
NASA Astrophysics Data System (ADS)
Xu, Dan; Beck, Christian
2017-05-01
The symbolic dynamics technique is well known for low-dimensional dynamical systems and chaotic maps, and lies at the roots of the thermodynamic formalism of dynamical systems. Here we show that this technique can also be successfully applied to time series generated by complex systems of much higher dimensionality. Our main example is the investigation of share price returns in a coarse-grained way. A nontrivial spectrum of Rényi entropies is found. We study how the spectrum depends on the time scale of returns, the sector of stocks considered, as well as the number of symbols used for the symbolic description. Overall our analysis confirms that in the symbol space transition probabilities of observed share price returns depend on the entire history of previous symbols, thus emphasizing the need for a modelling based on non-Markovian stochastic processes. Our method allows for quantitative comparisons of entirely different complex systems, for example the statistics of symbol sequences generated by share price returns using 4 symbols can be compared with that of genomic sequences.
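A minimal Python sketch of the pipeline described above, assuming quantile coarse-graining into 4 symbols and block frequencies for the Rényi spectrum; these are illustrative choices, not the authors' exact procedure.

import math, random
from collections import Counter

def symbolize(returns, n_symbols=4):
    # coarse-grain returns into equal-probability (quantile) bins
    ranked = sorted(returns)
    cuts = [ranked[len(ranked) * i // n_symbols] for i in range(1, n_symbols)]
    return [sum(r > c for c in cuts) for r in returns]

def renyi_entropy(symbols, alpha, word=3):
    # estimate block (word) frequencies, then the order-alpha Renyi entropy
    words = [tuple(symbols[i:i + word]) for i in range(len(symbols) - word + 1)]
    n = len(words)
    ps = [c / n for c in Counter(words).values()]
    if alpha == 1:
        return -sum(p * math.log(p) for p in ps)       # Shannon limit
    return math.log(sum(p ** alpha for p in ps)) / (1.0 - alpha)

returns = [random.gauss(0.0, 1.0) for _ in range(20000)]  # stand-in for returns
syms = symbolize(returns)
print([round(renyi_entropy(syms, a), 3) for a in (0.5, 1, 2, 4)])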
Symbolic-numeric interface: A review
NASA Technical Reports Server (NTRS)
Ng, E. W.
1980-01-01
A survey of the use of a combination of symbolic and numerical calculations is presented. Symbolic calculations primarily refer to the computer processing of procedures from classical algebra, analysis, and calculus. Numerical calculations refer to both numerical mathematics research and scientific computation. This survey is intended to point out a large number of problem areas where a cooperation of symbolic and numerical methods is likely to bear many fruits. These areas include such classical operations as differentiation and integration, such diverse activities as function approximations and qualitative analysis, and such contemporary topics as finite element calculations and computation complexity. It is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the non-binary low-density parity check (NB-LDPC) hard decision decoding algorithm and to reduce decoding complexity, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability, and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded from computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to erroneous code words are flipped multiple times, searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Zhao, Yong; Hong, Wen-Xue
2011-11-01
Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. This paper proposes a new feature extraction method based on symbol entropy to identify near-infrared spectroscopy of special quality eggs. The authors selected normal eggs, free range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm^-1. Raw spectra were symbolically represented with an aggregation approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that the identification of special quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can be used as a new feature extraction method for near-infrared spectra.
ERIC Educational Resources Information Center
Apperly, Ian. A.; Williams, Emily; Williams, Joelle
2004-01-01
In 4 experiments 120 three-to four-year-old non readers were asked the identity of a symbolic representation as it appeared with different objects. Consistent with Bialystok (2000), many children judged the identity of written words to vary according to the object with which they appeared but few made such errors with recognizable pictures.…
Symbolic algebra approach to the calculation of intraocular lens power following cataract surgery
NASA Astrophysics Data System (ADS)
Hjelmstad, David P.; Sayegh, Samir I.
2013-03-01
We present a symbolic approach based on matrix methods that allows for the analysis and computation of intraocular lens power following cataract surgery. We extend the basic matrix approach corresponding to paraxial optics to include astigmatism and other aberrations. The symbolic approach allows for a refined analysis of the potential sources of errors ("refractive surprises"). We demonstrate the computation of lens powers including toric lenses that correct for both defocus (myopia, hyperopia) and astigmatism. A specific implementation in Mathematica allows an elegant and powerful method for the design and analysis of these intraocular lenses.
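A minimal sketch of the symbolic matrix approach in the paraxial, thin-element case, using sympy; the reduced distances and numerical values below are illustrative, and the astigmatism/toric extension is not shown.

import sympy as sp

def refraction(power):                 # thin surface/lens of the given power
    return sp.Matrix([[1, 0], [-power, 1]])

def translation(t):                    # propagation over a reduced distance t
    return sp.Matrix([[1, t], [0, 1]])

P_c, P_iol, d1, d2 = sp.symbols('P_c P_iol d_1 d_2')

# cornea, gap to IOL plane, IOL, gap to retina, acting on a ray (height, angle)
system = translation(d2) * refraction(P_iol) * translation(d1) * refraction(P_c)
ray_in = sp.Matrix([1, 0])             # ray from a distant object, unit height
height_on_retina = (system * ray_in)[0]

# emmetropia condition: the ray crosses the axis exactly at the retina;
# solving symbolically exposes how each parameter feeds a refractive surprise
iol_power = sp.solve(sp.Eq(height_on_retina, 0), P_iol)[0]
print(sp.simplify(iol_power))
print(iol_power.subs({P_c: 43.0, d1: 0.0039, d2: 0.0137}))  # about 21.3 diopters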
[Origin of three symbols in medicine and surgery].
de la Garza-Villaseñor, Lorenzo
2010-01-01
Humans use many ways to communicate with fellow humans. Symbols have been one of these ways. Shamans probably used these in the beginning and adopted other distinctive symbols as they were introduced. The origin, the reason and use of three symbols in medicine and surgery are discussed. Some symbols currently remain the same and others have been modified or have disappeared. The oldest of these three symbols is the staff of Aesculapius, related to the Greek god of medicine and health. Since the 19th century, in some countries the symbol of the medical profession has become the caduceus, but the staff is the natural symbol. The second symbol is the barber pole that was created at the beginning of the Middle Ages. This was the means to locate the office and shop of a barber/surgeon in towns, cities and battlefields. On the other hand, the surgeon made use of the emblem of the union, trade or fraternity to which he belonged, accompanied by the bowl for bloodletting. The third symbol is the wearing of long and short robes that distinguished graduate surgeons from a medical school and the so-called barber/surgeons. Symbols facilitate the manner in which to identify the origin or trade of many working people. Some symbols currently remain and others have either been modified or are obsolete, losing their relationship with surgery and medicine.
An Analytical Model for the Performance Analysis of Concurrent Transmission in IEEE 802.15.4
Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto
2014-01-01
Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has generally been investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of the chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption that thermal noise is absent. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions. PMID:24658624
An analytical model for the performance analysis of concurrent transmission in IEEE 802.15.4.
Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto
2014-03-20
Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has generally been investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of the chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption that thermal noise is absent. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions.
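The chip-to-symbol error relationship studied in this paper can be exercised with a small Monte Carlo experiment: flip chips independently with probability p and decode by minimum Hamming distance. Note that the codebook below is a random stand-in, not the actual IEEE 802.15.4 spreading sequences, so the numbers are qualitative only.

import numpy as np

rng = np.random.default_rng(1)
book = rng.integers(0, 2, size=(16, 32))       # 16 symbols, 32 chips each (assumed)

def symbol_error_rate(p_chip, trials=20000):
    errs = 0
    for _ in range(trials):
        s = rng.integers(16)
        rx = book[s] ^ (rng.random(32) < p_chip)   # flip chips w.p. p_chip
        dist = (book ^ rx).sum(axis=1)             # Hamming distance to each codeword
        errs += dist.argmin() != s
    return errs / trials

for p in (0.05, 0.10, 0.20, 0.30):
    print(f"p_chip={p:.2f}  SER~{symbol_error_rate(p):.4f}")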
NASA Astrophysics Data System (ADS)
Song, Tianyu; Kam, Pooi-Yuen
2016-02-01
Since atmospheric turbulence and pointing errors cause signal-intensity fluctuations, and the background radiation surrounding the free-space optical (FSO) receiver contributes an undesired noisy component, the receiver requires accurate channel state information (CSI) and background information to adjust the detection threshold. In most previous studies, pilot symbols were employed for CSI acquisition, which reduces spectral and energy efficiency, and the background radiation component was impractically assumed to be perfectly known. In this paper, we develop an efficient and robust sequence receiver, which acquires the CSI and the background information implicitly and requires no knowledge of the channel model. It is robust since it can automatically estimate the CSI and background component and detect the data sequence accordingly. Its decision metric has a simple form and involves no integrals, and thus can be easily evaluated. A Viterbi-type trellis-search algorithm is adopted to improve the search efficiency, and a selective-store strategy is adopted to overcome a potential error-floor problem as well as to increase memory efficiency. To further simplify the receiver, a decision-feedback symbol-by-symbol receiver is proposed as an approximation of the sequence receiver. By simulations and theoretical analysis, we show that the performance of both the sequence receiver and the symbol-by-symbol receiver approaches that of detection with perfect knowledge of the CSI and background radiation as the length of the window for forming the decision metric increases.
van Welie, Steven; Wijma, Linda; Beerden, Tim; van Doormaal, Jasperien; Taxis, Katja
2016-01-01
Objectives Residents of nursing homes often have difficulty swallowing (dysphagia), which complicates the administration of solid oral dosage formulations. Erroneously crushing medication is common, but few interventions have been tested to improve medication safety. Therefore, we evaluated the effect of warning symbols in combination with education on the frequency of erroneously crushing medication in nursing homes. Setting This was a prospective uncontrolled intervention study with a preintervention and postintervention measurement. The study was conducted on 18 wards (total of 200 beds) in 3 nursing homes in the North of the Netherlands. Participants We observed 36 nurses/nursing assistants (92% female; 92% nursing assistants) administering medication to 197 patients (62.9% female; mean age 81.6). Intervention The intervention consisted of a set of warning symbols printed on each patient's unit dose packaging indicating whether or not a medication could be crushed as well as education of ward staff (lectures, newsletter and poster). Primary outcome measure The relative risk (RR) of a crushing error occurring in the postintervention period compared to the preintervention period. A crushing error was defined as the crushing of a medication considered unsuitable to be crushed based on standard reference sources. Data were collected using direct (disguised) observation of nurses during drug administration. Results The crushing error rate decreased from 3.1% (21 wrongly crushed medicines out of 681 administrations) to 0.5% (3/636), RR=0.15 (95% CI 0.05 to 0.51). Likewise, there was a significant reduction using data from patients with swallowing difficulties only, 87.5% (21 errors/24 medications) to 30.0% (3/10) (RR 0.34, 95% CI 0.13 to 0.89). Medications which were erroneously crushed included enteric-coated formulations (eg, omeprazole), medication with regulated release systems (eg, Persantin; dipyridamol) and toxic substances (eg, finasteride). Conclusions Warning symbols combined with education reduced erroneous crushing of medication, a well-known and common problem in nursing homes. PMID:27496242
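The headline result can be checked directly from the reported counts; a short computation using the standard log-normal approximation for a relative risk reproduces the RR of 0.15 and its 95% CI of 0.05 to 0.51.

import math

a, n1 = 3, 636    # post-intervention: crushing errors / administrations
b, n2 = 21, 681   # pre-intervention

rr = (a / n1) / (b / n2)
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")   # RR = 0.15 (0.05 to 0.51)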
Performance of the all-digital data-transition tracking loop in the advanced receiver
NASA Astrophysics Data System (ADS)
Cheng, U.; Hinedi, S.
1989-11-01
The performance of the all-digital data-transition tracking loop (DTTL) with coherent or noncoherent sampling is described. The effects of few samples per symbol and of noncommensurate sampling and symbol rates are addressed and analyzed. Their impacts on the loop phase-error variance and the mean time to lose lock (MTLL) are quantified through computer simulations. The analysis and preliminary simulations indicate that with three to four samples per symbol, the DTTL can track with negligible jitter even in the presence of an Earth Doppler rate. Furthermore, the MTLL is expected to be large enough to maintain lock over a Deep Space Network track.
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively depict the centers of gravity and the major orientation of the point arrays, as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
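For reference, the statistical construction behind such redesigned glyphs is straightforward to compute: the semi-axes and orientation of an error ellipse follow from the eigendecomposition of the 2x2 covariance matrix of the point positions. A minimal sketch with synthetic points, assuming a bivariate Gaussian and a 95% coverage factor:

import numpy as np

pts = np.random.default_rng(7).normal(size=(200, 2)) @ [[3, 1], [0, 1]]
cov = np.cov(pts.T)

vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
k = 2.4477                                  # sqrt(chi2.ppf(0.95, 2)): ~95% coverage in 2D
semi_minor, semi_major = k * np.sqrt(vals)
angle = np.degrees(np.arctan2(vecs[1, 1], vecs[0, 1]))  # major-axis direction
print(f"95% ellipse: a={semi_major:.2f}, b={semi_minor:.2f}, angle={angle:.1f} deg")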
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can still give meaningful results under the same time and memory limits.
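A minimal sketch of the Bayesian estimation step described here: treat each Monte Carlo path sample as a Bernoulli trial for reaching the target event and form a Beta posterior. The counts and the uniform prior are illustrative assumptions, not values taken from the paper's experiments.

from scipy.stats import beta

hits, runs = 17, 1000                         # sampled paths reaching the target event
posterior = beta(1 + hits, 1 + runs - hits)   # Beta(1,1) uniform prior

print("posterior mean       :", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))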
Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song
2017-01-01
Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191
Emergent latent symbol systems in recurrent neural networks
NASA Astrophysics Data System (ADS)
Monner, Derek; Reggia, James A.
2012-12-01
Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
ERIC Educational Resources Information Center
Eckert, Andreas; Nilsson, Per
2017-01-01
This study examines an interactional view on teaching mathematics, whereby meaning is co-produced with the students through a process of negotiation. Further, teaching is viewed from a symbolic interactionism perspective, allowing the analysis to focus on the teacher's role in the negotiation of meaning. Using methods inspired by grounded theory,…
NASA Astrophysics Data System (ADS)
Khallaf, Haitham S.; Garrido-Balsells, José M.; Shalaby, Hossam M. H.; Sampei, Seiichi
2015-12-01
The performance of multiple-input multiple-output free space optical (MIMO-FSO) communication systems that adopt multipulse pulse position modulation (MPPM) techniques is analyzed. Both exact and approximate symbol-error rates (SERs) are derived for both cases of uncorrelated and correlated channels. The effects of background noise, receiver shot noise, and atmospheric turbulence are taken into consideration in our analysis. The random fluctuations of the received optical irradiance, produced by the atmospheric turbulence, are modeled by the widely used gamma-gamma statistical distribution. Uncorrelated MIMO channels are modeled by the α-μ distribution. A closed-form expression for the probability density function of the received optical irradiance is derived for the case of correlated MIMO channels. Using our analytical expressions, the degradation of the system performance as the correlation coefficients between MIMO channels increase is corroborated.
Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel
NASA Technical Reports Server (NTRS)
Liu, Chia-Liang; Feher, Kamilo
1991-01-01
The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.
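The detection principle analyzed in the paper, information carried entirely in the phase difference between consecutive symbols, can be exercised with a short simulation over a plain AWGN channel. This is a sanity check of differential detection only, with assumed parameters; it includes none of the paper's fading or co-channel interference modeling.

import numpy as np

rng = np.random.default_rng(0)
n, ebn0_db = 100_000, 10.0
incs = np.array([np.pi/4, 3*np.pi/4, -3*np.pi/4, -np.pi/4])  # 2 bits per symbol

sym = rng.integers(4, size=n)
tx = np.exp(1j * np.cumsum(incs[sym]))                # differential encoding

sigma = np.sqrt(1.0 / (2 * 2 * 10**(ebn0_db / 10)))   # Es = 2*Eb, unit symbol energy
rx = tx + sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))

dphi = np.angle(rx[1:] * np.conj(rx[:-1]))            # phase difference detector
det = np.argmin(np.abs(np.exp(1j*dphi[:, None]) - np.exp(1j*incs)), axis=1)
print("SER ~", np.mean(det != sym[1:]))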
Nuclear Structure in China 2010
NASA Astrophysics Data System (ADS)
Bai, Hong-Bo; Meng, Jie; Zhao, En-Guang; Zhou, Shan-Gui
2011-08-01
Personal view on nuclear physics research / Jie Meng -- High-spin level structures in [symbol]Zr / X. P. Cao ... [et al.] -- Constraining the symmetry energy from the neutron skin thickness of tin isotopes / Lie-Wen Chen ... [et al.] -- Wobbling rotation in atomic nuclei / Y. S. Chen and Zao-Chun Gao -- The mixing of scalar mesons and the possible nonstrange dibaryons / L. R. Dai ... [et al.] -- Net baryon productions and gluon saturation in the SPS, RHIC and LHC energy regions / Sheng-Qin Feng -- Production of heavy isotopes with collisions between two actinide nuclides / Z. Q. Feng ... [et al.] -- The projected configuration interaction method / Zao-Chun Gao and Yong-Shou Chen -- Applications of Nilsson mean-field plus extended pairing model to rare-earth nuclei / Xin Guan ... [et al.] -- Complex scaling method and the resonant states / Jian-You Guo ... [et al.] -- Probing the equation of state by deep sub-barrier fusion reactions / Hong-Jun Hao and Jun-Long Tian -- Doublet structure study in A[symbol]105 mass region / C. Y. He ... [et al.] -- Rotational bands in transfermium nuclei / X. T. He -- Shape coexistence and shape evolution [symbol]Yb / H. Hua ... [et al.] -- Multistep shell model method in the complex energy plane / R. J. Liotta -- The evolution of protoneutron stars with kaon condensate / Ang Li -- High spin structures in the [symbol]Lu nucleus / Li Cong-Bo ... [et al.] -- Nuclear stopping and equation of state / QingFeng Li and Ying Yuan -- Covariant description of the low-lying states in neutron-deficient Kr isotopes / Z. X. Li ... [et al.] -- Isospin corrections for superallowed [symbol] transitions / HaoZhao Liang ... [et al.] -- The positive-parity band structures in [symbol]Ag / C. Liu ... [et al.] -- New band structures in odd-odd [symbol]I and [symbol]I / Liu GongYe ... [et al.] -- The sd-pair shell model and interacting boson model / Yan-An Luo ... [et al.] -- Cross-section distributions of fragments in the calcium isotopes projectile fragmentation at the intermediate energy / C. W. Ma ... [et al.] -- Systematic study of spin assignment and dynamic moment of inertia of high-j intruder band in [symbol]In / K. Y. Ma ... [et al.] -- Signals of diproton emission from the three-body breakup channel of [symbol]Al and [symbol]Mg / Ma Yu-Gang ... [et al.] -- Uncertainties of Th/Eu and Th/Hf chronometers from nucleus masses / Z. M. Niu ... [et al.] -- The chiral doublet bands with [symbol] configuration in A[symbol]100 mass region / B. Qi ... [et al.] -- [symbol] formation probabilities in nuclei and pairing collectivity / Chong Qi -- A theoretical prospective on triggered gamma emission from [symbol]Hf[symbol] isomer / ShuiFa Shen ... [et al.] -- Study of nuclear giant resonances using a Fermi-liquid method / Bao-Xi Sun -- Rotational bands in doubly odd [symbol]Sb / D. P. Sun ... [et al.] -- The study of the neutron N=90 nuclei / W. X. Teng ... [et al.] -- Dynamical modes and mechanisms in ternary reaction of [symbol]Au+[symbol]Au / Jun-Long Tian ... [et al.] -- Dynamical study of X(3872) as a D[symbol] molecular state / B. Wang ... [et al.] -- Super-heavy stability island with a semi-empirical nuclear mass formula / N. Wang ... [et al.] -- Pseudospin partner bands in [symbol]Sb / S. Y. Wang ... [et al.] -- Study of elastic resonance scattering at CIAE / Y. B. Wang ... [et al.] -- Systematic study of survival probability of excited superheavy nuclei / C. J. Xia ... [et al.]
-- Angular momentum projection of the Nilsson mean-field plus nearest-orbit pairing interaction model / Ming-Xia Xie ... [et al.] -- Possible shape coexistence for [symbol]Sm in a reflection-asymmetric relativistic mean-field approach / W. Zhang ... [et al.] -- Nuclear pairing reduction due to rotation and blocking / Zhen-Hua Zhang -- Nucleon pair approximation of the shell model: a review and perspective / Y. M. Zhao ... [et al.] -- Band structures in doubly odd [symbol]I / Y. Zheng ... [et al.] -- Lifetimes of high spin states in [symbol]Ag / Y. Zheng ... [et al.] -- Effect of tensor interaction on the shell structure of superheavy nuclei / Xian-Rong Zhou ... [et al.].
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm builds on the hard-decision algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. Bits corresponding to the erroneous codeword are flipped multiple times, searching in order of decreasing error likelihood until the correct codeword is found. Simulation results show that one of the improved schemes outperforms the weighted symbol flipping (WSF) algorithm by about 2.2 dB and 2.35 dB, respectively, for two different hexadecimal (GF(16)) codes at a bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
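To make the flipping principle behind such decoders concrete, the following minimal sketch implements classical Gallager bit flipping for a binary parity-check matrix. It illustrates only the underlying idea: the paper's algorithm is non-binary and additionally weights flips with channel magnitudes and loop update detection, none of which is reproduced here. The toy (7,4) Hamming parity-check matrix is an assumption for the example.

import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    # H: (m, n) parity-check matrix over GF(2); y: received hard decisions
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                 # all parity checks satisfied
        unsat = syndrome @ H               # failed-check count per bit
        x[unsat == unsat.max()] ^= 1       # flip the most-suspect bits
    return x, False

H = np.array([[1, 1, 0, 1, 1, 0, 0],       # (7,4) Hamming parity checks
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([0, 0, 1, 1, 0, 1, 0])        # codeword 1011010 with its first bit flipped
print(bit_flip_decode(H, y))               # recovers [1 0 1 1 0 1 0]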
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of diameters meter per meter m/m 1 b atomic oxygen-to-carbon ratio mole per mole mol/mol 1 C # number... error between a quantity and its reference e brake-specific emission or fuel consumption gram per... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
2011-09-30
Ratio TAFS Treasury Appropriation Fund Symbol INSPECTOR GENERAL DEPARTMENT OF DEFENSE 400 ARMY NAVY DRIVE ARLINGTON, VIRGINIA 22202-4704...Symbol (TAFS) to the www.recovery.gov Web site. As a result of our review, officials at AFCESA took action to correct the errors in the...projects in a timely manner, and the funding authorization documents properly identified a Recovery Act designation. Funding documents cited a TAFS of
To call a cloud 'cirrus': sound symbolism in names for categories or items.
Ković, Vanja; Sučević, Jelena; Styles, Suzy J
2017-01-01
The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make-up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.
[The concept of risk and its estimation].
Zocchetti, C; Della Foglia, M; Colombi, A
1996-01-01
The concept of risk, in relation to human health, is a topic of primary interest for occupational health professionals. A new legislation recently established in Italy (626/94) according to European Community directives in the field of preventive medicine called attention to this topic, and in particular to risk assessment and evaluation. Motivated by this context and by the impression that the concept of risk is frequently misunderstood, the present paper has two aims: the identification of the different meanings of the term "risk" in the new Italian legislation and the critical discussion of some commonly used definitions; and the proposal of a general definition, with the specification of a mathematical expression for quantitative risk estimation. The term risk (and risk estimation, assessment, or evaluation) has mainly referred to three different contexts: hazard identification, exposure assessment, and adverse health effects occurrence. Unfortunately, there are contexts in the legislation in which it is difficult to identify the true meaning of the term. This might cause equivocal interpretations and erroneous applications of the law, because hazard evaluation, exposure assessment, and adverse health effects identification are completely different topics that require integrated but distinct approaches to risk management. As far as a quantitative definition of risk is concerned, we suggest an algorithm which connects the three basic risk elements (hazard, exposure, adverse health effects) by means of their probabilities of occurrence: the probability of being exposed (to a definite dose) given that a specific hazard is present, Pr(e|p), and the probability of occurrence of an adverse health effect as a consequence of that exposure, Pr(d|e). Using these quantitative components, risk can be defined as a sequence of measurable events that starts with hazard identification and terminates with disease occurrence; therefore, the following formal definition of risk is proposed: the probability of occurrence, in a given period of time, of an adverse health effect as a consequence of the existence of a hazard. In formula: R(d|p) = Pr(e|p) × Pr(d|e). While Pr(e|p) (exposure given hazard) must be evaluated in the situation under study, two alternatives exist for the estimation of the occurrence of adverse health effects, Pr(d|e): a "direct" estimation of the damage, Pr(d|e), through formal epidemiologic studies conducted in the situation under observation; and an "indirect" estimation of Pr(d|e) using information taken from the scientific literature (epidemiologic evaluations, dose-response relationships, extrapolations, ...). Both conditions are presented along with their respective advantages, disadvantages, and uncertainties. The usefulness of the proposed algorithm is discussed with respect to commonly used applications of risk assessment in occupational medicine; the relevance of time for risk estimation (in terms of duration of observation, duration of exposure, and latency of effect) is briefly explained; and how the proposed algorithm takes into account (in terms of prevention and public health) both the etiologic relevance of the exposure and the consequences of exposure removal is highlighted.
As a last comment, it is suggested that the diffuse application of good work practices (technical, behavioral, organizational, ...), or the exhaustive use of check lists, can be relevant in terms of improvement of prevention efficacy, but does not represent any quantitative procedure of risk assessment which, in any circumstance, must be considered the elective approach to adverse health effect prevention.
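A toy numeric reading of the proposed formula R(d|p) = Pr(e|p) × Pr(d|e), with made-up probabilities purely for illustration:

# Illustrative only; both probabilities are assumed, not taken from the paper.
pr_exposure_given_hazard = 0.30   # Pr(e|p): chance of exposure where the hazard exists
pr_disease_given_exposure = 0.02  # Pr(d|e): chance of the adverse effect if exposed
risk = pr_exposure_given_hazard * pr_disease_given_exposure
print(f"R(d|p) = {risk:.4f}")     # 0.0060 over the period considered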
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
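As a worked instance of this logic, the usual normal-approximation formula for comparing two means, n per group = 2(z_{1-α/2} + z_{1-β})²σ²/Δ², can be evaluated directly. The inputs below (two-sided α = 0.05, power = 0.80, SD = 10, smallest important difference = 5) are illustrative choices, not values from the module.

from scipy.stats import norm

alpha, power, sd, delta = 0.05, 0.80, 10.0, 5.0
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
n = 2 * ((z_a + z_b) * sd / delta) ** 2
print(f"n per group ~ {n:.0f}")   # ~63 before rounding up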
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
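The situation the memo addresses can be illustrated with a Clopper-Pearson upper confidence bound: with only two observed errors, the one-sided 95% bound sits well above the naive estimate. The trial count below is an assumed example, not a figure from the report.

from scipy.stats import beta

k, N = 2, 10_000_000                      # 2 decoding errors in 10^7 trials (assumed)
p_hat = k / N
p_upper95 = beta.ppf(0.95, k + 1, N - k)  # one-sided 95% upper bound on p
print(f"estimate {p_hat:.1e}, 95% upper bound {p_upper95:.1e}")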
A time and frequency synchronization method for CO-OFDM based on CMA equalizers
NASA Astrophysics Data System (ADS)
Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum
2018-06-01
In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and the fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm is robust to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can reach [-N_sc·Δf_N/2, +N_sc·Δf_N/2] GHz, with the mean normalized estimation error below 12 × 10⁻³ even at an OSNR as low as 10 dB.
van Welie, Steven; Wijma, Linda; Beerden, Tim; van Doormaal, Jasperien; Taxis, Katja
2016-08-05
Residents of nursing homes often have difficulty swallowing (dysphagia), which complicates the administration of solid oral dosage formulations. Erroneously crushing medication is common, but few interventions have been tested to improve medication safety. Therefore, we evaluated the effect of warning symbols in combination with education on the frequency of erroneously crushing medication in nursing homes. This was a prospective uncontrolled intervention study with a preintervention and postintervention measurement. The study was conducted on 18 wards (total of 200 beds) in 3 nursing homes in the North of the Netherlands. We observed 36 nurses/nursing assistants (92% female; 92% nursing assistants) administering medication to 197 patients (62.9% female; mean age 81.6). The intervention consisted of a set of warning symbols printed on each patient's unit dose packaging indicating whether or not a medication could be crushed as well as education of ward staff (lectures, newsletter and poster). The relative risk (RR) of a crushing error occurring in the postintervention period compared to the preintervention period. A crushing error was defined as the crushing of a medication considered unsuitable to be crushed based on standard reference sources. Data were collected using direct (disguised) observation of nurses during drug administration. The crushing error rate decreased from 3.1% (21 wrongly crushed medicines out of 681 administrations) to 0.5% (3/636), RR=0.15 (95% CI 0.05 to 0.51). Likewise, there was a significant reduction using data from patients with swallowing difficulties only, 87.5% (21 errors/24 medications) to 30.0% (3/10) (RR 0.34, 95% CI 0.13 to 0.89). Medications which were erroneously crushed included enteric-coated formulations (eg, omeprazole), medication with regulated release systems (eg, Persantin; dipyridamol) and toxic substances (eg, finasteride). Warning symbols combined with education reduced erroneous crushing of medication, a well-known and common problem in nursing homes.
Decoding algorithm for vortex communications receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2018-01-01
Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
Symbolic inversion of control relationships in model-based expert systems
NASA Technical Reports Server (NTRS)
Thomas, Stan
1988-01-01
Symbolic inversion is examined from several perspectives. First, a number of symbolic algebra and mathematical tool packages were studied in order to evaluate their capabilities and methods, specifically with respect to symbolic inversion. Second, the KATE system (without hardware interface) was ported to a Zenith Z-248 microcomputer running Golden Common Lisp. The interesting thing about the port is that it allows the user to have measurements vary and components fail in a non-deterministic manner based upon random values drawn from probability distributions. Third, INVERT was studied as currently implemented in KATE, its operation documented, some of its weaknesses identified, and corrections made to it. The corrections and enhancements are primarily in the way that logical conditions involving AND's and OR's and inequalities are processed. In addition, the capability to handle equalities was also added. Suggestions were also made regarding the handling of ranges in INVERT. Last, other approaches to the inversion process were studied and recommendations were made as to how future versions of KATE should perform symbolic inversion.
Comparative study of signalling methods for high-speed backplane transceiver
NASA Astrophysics Data System (ADS)
Wu, Kejun
2017-11-01
A combined analysis of transient simulation and statistical methods is proposed for a comparative study of signalling methods applied to high-speed backplane transceivers. This method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link based on a four-dimensional design space, including channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis method chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit error rate degradation as the noise level increases.
NASA Astrophysics Data System (ADS)
Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi
2018-02-01
A new feedback symbol timing recovery technique using joint timing estimation and equalization is proposed for digital receivers with two samples/symbol or higher sampling rates. Unlike traditional methods, the clock recovery algorithm in this paper adopts an algorithm that distinguishes the phases of adjacent symbols, so as to accurately estimate the timing offset based on adjacent signals with the same phase. The addition of a module that eliminates phase-modulation interference before timing estimation further reduces the variance, resulting in a smoothed timing estimate. The mean square error (MSE) and bit error rate (BER) of the resulting timing estimate are simulated and confirm satisfactory estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats and roll-off factors (ROF) close to 0. In the back-to-back system, with ROF = 0, the maximum MSE obtained with the proposed approach is 0.0125. After 100-km fiber transmission, the BER decreases to 10⁻³ with ROF = 0 and OSNR = 11 dB. As the ROF increases, the MSE and BER performance improves.
The cognitive capabilities of farm animals: categorisation learning in dwarf goats (Capra hircus).
Meyer, Susann; Nürnberg, Gerd; Puppe, Birger; Langbein, Jan
2012-07-01
The ability to establish categories enables organisms to classify stimuli, objects and events by assessing perceptual, associative or rational similarities and provides the basis for higher cognitive processing. The cognitive capabilities of farm animals are receiving increasing attention in applied ethology, a development driven primarily by scientifically based efforts to improve animal welfare. The present study investigated the learning of perceptual categories in Nigerian dwarf goats (Capra hircus) by using an automated learning device installed in the animals' pen. Thirteen group-housed goats were trained in a closed-economy approach to discriminate artificial two-dimensional symbols presented in a four-choice design. The symbols belonged to two categories: category I, black symbols with an open centre (rewarded) and category II, the same symbols but filled black (unrewarded). One symbol from category I and three different symbols from category II were used to define a discrimination problem. After the training of eight problems, the animals were presented with a transfer series containing the training problems interspersed with completely new problems made from new symbols belonging to the same categories. The results clearly demonstrate that dwarf goats are able to form categories based on similarities in the visual appearance of artificial symbols and to generalise across new symbols. However, the goats had difficulties in discriminating specific symbols. It is probable that perceptual problems caused these difficulties. Nevertheless, the present study suggests that goats housed under farming conditions have well-developed cognitive abilities, including learning of open-ended categories. This result could prove beneficial by facilitating animals' adaptation to housing environments that favour their cognitive capabilities.
Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications
2010-09-01
Evaluates the four proposed CPM modes, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver with both inner... [Figure and table residue removed; recoverable captions: Figure 1, "Bit error rate performance of various CPM modes with coherent and noncoherent detection" (coherent results shown as crosses, noncoherent as diamonds); Figure 3, "the corresponding relationship...symbols"; Table 2 summarises the parameters.]
Screening athletes with Down syndrome for ocular disease.
Gutstein, Walter; Sinclair, Stephen H; North, Rachel V; Bekiroglu, N
2010-02-01
Persons with Down syndrome are well known to have a high prevalence of vision and eye health problems, many of which are undetected or untreated primarily because of infrequent ocular examinations. Public screening programs, directed toward the pediatric population, have become more popular and commonly use letter or symbol charts. This study compares 2 vision screening methods, the Lea Symbol chart and a newly developed interactive computer program, the Vimetrics Central Vision Analyzer (CVA), in their ability to identify ocular disease in the Down syndrome population. Athletes with Down syndrome participating in the European Special Olympics underwent an ocular screening including history, auto-refraction, colour vision assessment, stereopsis assessment, motility assessment, pupil reactivity, and tonometry testing, as well as anterior segment and fundus examinations to evaluate for ocular disease. Visual acuity was tested with the Lea chart and CVA to evaluate these as screening tests for detecting ocular disease as well as significant, uncorrected refractive errors. Among the 91 athletes who presented to the screening, 79 (158 eyes) were sufficiently cooperative for the examination to be completed. Mean age was 26 years ±10.8 SD. Significant, uncorrected refractive errors (≥1.00 spherical equivalent) were detected in 28 (18%) eyes and ocular pathology in 51 (32%) eyes. The Lea chart sensitivity and specificity were 43% and 74%, respectively, for detecting ocular pathology, and 58% and 100% for detecting uncorrected refractive errors. The CVA sensitivity and specificity were 70% and 86% for detecting pathology, and 71% and 100% for detecting uncorrected refractive errors. This study confirmed the findings of prior studies in identifying a significant presence of uncorrected refractive errors and ocular pathology in the Down syndrome population. Screening with the Lea symbol chart showed borderline sufficient sensitivity and specificity for use in this population. The better sensitivity and specificity of the CVA, if adjusted normative values are utilized, appear to make this test sufficient for screening children with Down syndrome for both refractive errors and ocular pathology.
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
NASA Astrophysics Data System (ADS)
Chen, Wei; Zhang, Junfeng; Gao, Mingyi; Shen, Gangxiang
2018-03-01
High-order modulation signals are suited for high-capacity communication systems because of their high spectral efficiency, but they are more vulnerable to various impairments. For the signals that experience degradation, when symbol points overlap on the constellation diagram, the original linear decision boundary cannot be used to distinguish the classification of symbol. Therefore, it is advantageous to create an optimum symbol decision boundary for the degraded signals. In this work, we experimentally demonstrated the 64-quadrature-amplitude modulation (64-QAM) coherent optical communication system using support-vector machine (SVM) decision boundary algorithm to create the optimum symbol decision boundary for improving the system performance. We investigated the influence of various impairments on the 64-QAM coherent optical communication systems, such as the impairments caused by modulator nonlinearity, phase skew between in-phase (I) arm and quadrature-phase (Q) arm of the modulator, fiber Kerr nonlinearity and amplified spontaneous emission (ASE) noise. We measured the bit-error-ratio (BER) performance of 75-Gb/s 64-QAM signals in the back-to-back and 50-km transmission. By using SVM to optimize symbol decision boundary, the impairments caused by I/Q phase skew of the modulator, fiber Kerr nonlinearity and ASE noise are greatly mitigated.
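A minimal sketch of the SVM decision-boundary idea: learn symbol regions from received (I, Q) samples rather than slicing on the fixed rectangular QAM grid. For brevity it uses 4-QAM with synthetic AWGN in place of the paper's 75-Gb/s 64-QAM experiment, and scikit-learn's SVC stands in for whatever implementation the authors used.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
const = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)   # unit-energy 4-QAM
labels = rng.integers(4, size=5000)
rx = const[labels] + 0.3 * (rng.normal(size=5000) + 1j * rng.normal(size=5000))

X = np.column_stack([rx.real, rx.imag])                     # (I, Q) features
clf = SVC(kernel="rbf", C=1.0).fit(X[:4000], labels[:4000])
print("hold-out symbol accuracy:", clf.score(X[4000:], labels[4000:]))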
A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection.
Cava, William La; Helmuth, Thomas; Spector, Lee; Moore, Jason H
2018-05-10
Lexicase selection is a parent selection method that considers training cases individually, rather than in aggregate, when performing parent selection. Whereas previous work has demonstrated the ability of lexicase selection to solve difficult problems in program synthesis and symbolic regression, the central goal of this paper is to develop the theoretical underpinnings that explain its performance. To this end, we derive an analytical formula that gives the expected probabilities of selection under lexicase selection, given a population and its behavior. In addition, we expand upon the relation of lexicase selection to many-objective optimization methods to describe the behavior of lexicase selection, which is to select individuals on the boundaries of Pareto fronts in high-dimensional space. We show analytically why lexicase selection performs more poorly for certain sizes of population and training cases, and show why it has been shown to perform more poorly in continuous error spaces. To address this last concern, we propose new variants of ε-lexicase selection, a method that modifies the pass condition in lexicase selection to allow near-elite individuals to pass cases, thereby improving selection performance with continuous errors. We show that ε-lexicase outperforms several diversity-maintenance strategies on a number of real-world and synthetic regression problems.
Indoor visible light communication with smart lighting technology
NASA Astrophysics Data System (ADS)
Das Barman, Abhirup; Halder, Alak
2017-02-01
The performance of indoor visible light communication using energy-efficient white light from 2D LED arrays is investigated. Enabled by recent advances in LED technology, IEEE 802.15.7 standardizes high-data-rate visible light communication and advocates colour shift keying (CSK) modulation to overcome flicker and to support dimming. Voronoi segmentation is employed for decoding the N-CSK constellation, which has superior performance compared to other existing decoding methods. The two chief performance-degrading effects, inter-symbol interference and LED nonlinearity, are jointly mitigated using LMS post-equalization at the receiver, which improves the symbol error rate (SER) performance and increases the field of view of the receiver. It is found that LMS post-equalization at a 250 MHz symbol rate offers a 7 dB SNR improvement at an SER of 10⁻⁶.
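To illustrate the LMS post-equalization step, here is a minimal baseband sketch with an assumed three-tap dispersive channel and binary training symbols; the tap count, step size, and noise level are arbitrary illustrative choices, not the paper's VLC configuration.

import numpy as np

rng = np.random.default_rng(5)
tx = rng.choice([-1.0, 1.0], size=5000)            # known training symbols
rx = np.convolve(tx, [1.0, 0.4, 0.2])[:5000]       # toy dispersive channel (assumed)
rx += 0.05 * rng.normal(size=5000)                 # additive noise

taps, mu, w = 7, 0.01, np.zeros(7)
for k in range(taps, 5000):
    x = rx[k - taps:k][::-1]                       # most recent sample first
    e = tx[k - taps // 2] - w @ x                  # error vs. delayed reference
    w += mu * e * x                                # LMS weight update
print("final |error| ~", abs(e))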
40 CFR 1066.705 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... series n total number of pulses in a series R dynamometer roll revolutions revolutions per minute rpm 2·π... torque (moment of force) newton meter N·m m2·kg·s−2 t time second s s Δt time interval, period, 1... atmospheric b base c coastdown e effective error error exp expected quantity i an individual of a series final...
Symbol Synchronization for Diffusion-Based Molecular Communications.
Jamali, Vahid; Ahmadzadeh, Arman; Schober, Robert
2017-12-01
Symbol synchronization refers to the estimation of the start of a symbol interval and is needed for reliable detection. In this paper, we develop several symbol synchronization schemes for molecular communication (MC) systems where we consider some practical challenges, which have not been addressed in the literature yet. In particular, we take into account that in MC systems, the transmitter may not be equipped with an internal clock and may not be able to emit molecules with a fixed release frequency. Such restrictions hold for practical nanotransmitters, e.g., modified cells, where the lengths of the symbol intervals may vary due to the inherent randomness in the availability of food and energy for molecule generation, the process for molecule production, and the release process. To address this issue, we develop two synchronization-detection frameworks which both employ two types of molecule. In the first framework, one type of molecule is used for symbol synchronization and the other one is used for data detection, whereas in the second framework, both types of molecule are used for joint symbol synchronization and data detection. For both frameworks, we first derive the optimal maximum likelihood (ML) symbol synchronization schemes as performance upper bounds. Since ML synchronization entails high complexity, for each framework, we also propose three low-complexity suboptimal schemes, namely a linear filter-based scheme, a peak observation-based scheme, and a threshold-trigger scheme, which are suitable for MC systems with limited computational capabilities. Furthermore, we study the relative complexity and the constraints associated with the proposed schemes and the impact of the insertion and deletion errors that arise due to imperfect synchronization. Our simulation results reveal the effectiveness of the proposed synchronization schemes and suggest that the end-to-end performance of MC systems significantly depends on the accuracy of the symbol synchronization.
Measurement errors in voice-key naming latency for Hiragana.
Yamada, Jun; Tamaoka, Katsuo
2003-12-01
This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma, et al.'s deltas and by excluding initial phonemes which induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.
NASA Technical Reports Server (NTRS)
Hinrichs, C. A.
1974-01-01
A digital simulation is presented for a candidate modem in a modeled atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the radio link conditions for an outer planets atmospheric entry probe. The results indicate that the signal acquisition characteristics and the channel error rate are acceptable for the system requirements of the radio link. The simulation also outputs data for calculating other error statistics and a quantized symbol stream from which error correction decoding can be analyzed.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
Chances Are...Making Probability and Statistics Fun To Learn and Easy To Teach.
ERIC Educational Resources Information Center
Pfenning, Nancy
Probability and statistics may be the horror of many college students, but if these subjects are trimmed to include only the essential symbols, they are easily within the grasp of interested middle school or even elementary school students. This book can serve as an introduction for any beginner, from gifted students who would like to broaden…
Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A
2016-01-01
Objective We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The main contribution is to extend current active constellation extension (ACE) PAPR-reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques fall into two groups: linear programming (LP) optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The LP-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The SGP techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side-information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for while still maintaining good PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
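For readers unfamiliar with the quantity being minimized, the sketch below computes the PAPR of a multicarrier block. It uses a plain OFDM-style IFFT block rather than an FBMC synthesis filter bank purely to keep the example short; the subcarrier count and QPSK mapping are arbitrary choices.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
N = 64                                         # subcarriers (assumption)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)             # time-domain multicarrier block
print(f"PAPR = {papr_db(x):.2f} dB")
```

ACE-style methods reduce this peak by moving outer constellation points outward, a direction in which the receiver's decision regions are not violated.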
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
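As a rough illustration of why the zero symbol needs care at decoding time, here is a hedged sketch of a Hamming-style ternary ECOC decoder that skips "do not care" positions and normalizes by the number of positions actually compared; the decoding strategies proposed in the paper are more elaborate than this.

```python
import numpy as np

def ecoc_decode(pred, M):
    """Ternary ECOC decoding that skips 'do not care' (0) positions.
    pred: vector of {-1, +1} binary classifier outputs.
    M:    classes x classifiers coding matrix over {-1, 0, +1}.
    The per-class distance is averaged over its non-zero positions so
    that rows containing many zeros are not unfairly favored."""
    care = M != 0
    mismatch = (M != pred) & care
    dist = mismatch.sum(axis=1) / care.sum(axis=1)
    return int(np.argmin(dist))

M = np.array([[+1, +1,  0],     # illustrative 3-class, 3-classifier matrix
              [-1,  0, +1],
              [ 0, -1, -1]])
print(ecoc_decode(np.array([+1, +1, -1]), M))  # -> 0
```

Without the normalization by `care.sum(axis=1)`, classes with many zeros would accumulate systematically smaller distances, one of the biases the paper attributes to the zero symbol.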
Irreversibility in physics stemming from unpredictable symbol-handling agents
NASA Astrophysics Data System (ADS)
Myers, John M.; Madjid, F. Hadi
2016-05-01
The basic equations of physics involve a time variable t and are invariant under the transformation t --> -t. This invariance at first sight appears to impose time reversibility as a principle of physics, in conflict with thermodynamics. But equations written on the blackboard are not the whole story in physics. In prior work we sharpened a distinction obscured in today's theoretical physics, the distinction between obtaining evidence from experiments on the laboratory bench and explaining that evidence in mathematical symbols on the blackboard. The sharp distinction rests on a proof within the mathematics of quantum theory that no amount of evidence, represented in quantum theory in terms of probabilities, can uniquely determine its explanation in terms of wave functions and linear operators. Building on the proof we show here a role in physics for unpredictable symbol-handling agents acting both at the blackboard and at the workbench, communicating back and forth by means of transmitted symbols. Because of their unpredictability, symbol-handling agents introduce a heretofore overlooked source of irreversibility into physics, even when the equations they write on the blackboard are invariant under t --> -t. Widening the scope of descriptions admissible to physics to include the agents and the symbols that link theory to experiments opens up a new source of time-irreversibility in physics.
A Multi-Encoding Approach for LTL Symbolic Satisfiability Checking
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2011-01-01
Formal behavioral specifications written early in the system-design process and communicated across all design phases have been shown to increase the efficiency, consistency, and quality of the system under development. To prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. Our focus here is on specifications expressed in linear temporal logic (LTL). We introduce a novel encoding of symbolic transition-based Büchi automata and a novel, "sloppy," transition encoding, both of which result in improved scalability. We also define novel BDD variable orders based on tree decomposition of formula parse trees. We describe and extensively test a new multi-encoding approach utilizing these novel encoding techniques to create 30 encoding variations. We show that our novel encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking.
Supernova 2007bi as a pair-instability explosion.
Gal-Yam, A; Mazzali, P; Ofek, E O; Nugent, P E; Kulkarni, S R; Kasliwal, M M; Quimby, R M; Filippenko, A V; Cenko, S B; Chornock, R; Waldman, R; Kasen, D; Sullivan, M; Beshore, E C; Drake, A J; Thomas, R C; Bloom, J S; Poznanski, D; Miller, A A; Foley, R J; Silverman, J M; Arcavi, I; Ellis, R S; Deng, J
2009-12-03
Stars with initial masses such that 10M⊙ …
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
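The gap between the two criteria can be seen in a brute-force toy example that enumerates a small codebook on a binary symmetric channel; the chapter's MAP algorithm operates on a trellis instead, but the decision rules are the same.

```python
import numpy as np
from itertools import product

def posteriors(codebook, r, p):
    """P(c | r) over a codebook, for a BSC with crossover probability p."""
    d = (codebook != r).sum(axis=1)
    w = p ** d * (1 - p) ** (codebook.shape[1] - d)
    return w / w.sum()

# (3,2) single-parity-check code: all even-weight words of length 3.
codebook = np.array([c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0])
r = np.array([1, 0, 0])                     # received word (not a codeword)
post = posteriors(codebook, r, p=0.1)

ml_word = codebook[np.argmax(post)]                    # minimizes word errors
map_bits = (codebook * post[:, None]).sum(0) > 0.5     # minimizes bit errors
print(ml_word, map_bits.astype(int))
```

In this run three codewords are equally close to r, the word-level rule settles on one of them, and the bitwise MAP decisions (1, 0, 0) do not even form a codeword, which is exactly why minimizing word error probability and minimizing bit error probability can disagree.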
Tung, Li-Chen; Yu, Wan-Hui; Lin, Gong-Hong; Yu, Tzu-Ying; Wu, Chien-Te; Tsai, Chia-Yin; Chou, Willy; Chen, Mei-Hsiang; Hsieh, Ching-Lin
2016-09-01
To develop a Tablet-based Symbol Digit Modalities Test (T-SDMT) and to examine the test-retest reliability and concurrent validity of the T-SDMT in patients with stroke. The study had two phases. In the first phase, six experts, nine college students and five outpatients participated in the development and testing of the T-SDMT. In the second phase, 52 outpatients were evaluated twice (2 weeks apart) with the T-SDMT and SDMT to examine the test-retest reliability and concurrent validity of the T-SDMT. The T-SDMT was developed via expert input and college student/patient feedback. Regarding test-retest reliability, the practise effects of the T-SDMT and SDMT were both trivial (d=0.12) but significant (p≦0.015). The improvement in the T-SDMT (4.7%) was smaller than that in the SDMT (5.6%). The minimal detectable changes (MDC%) of the T-SDMT and SDMT were 6.7 (22.8%) and 10.3 (32.8%), respectively. The T-SDMT and SDMT were highly correlated with each other at the two time points (Pearson's r=0.90-0.91). The T-SDMT demonstrated good concurrent validity with the SDMT. Because the T-SDMT had a smaller practise effect and less random measurement error (superior test-retest reliability), it is recommended over the SDMT for assessing information processing speed in patients with stroke. Implications for Rehabilitation The Symbol Digit Modalities Test (SDMT), a common measure of information processing speed, showed a substantial practise effect and considerable random measurement error in patients with stroke. The Tablet-based SDMT (T-SDMT) has been developed to reduce the practise effect and random measurement error of the SDMT in patients with stroke. The T-SDMT had smaller practise effect and random measurement error than the SDMT, which can provide more reliable assessments of information processing speed.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
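The general per-bit LLR formula alluded to above is the log-ratio of summed symbol likelihoods with the bit equal to 0 versus 1. The sketch below is one standard exact form for AWGN; the QPSK constellation, Gray labeling, and noise variance are illustrative assumptions, and the article additionally considers suboptimal hard- and soft-decision simplifications.

```python
import numpy as np

def bit_llrs(r, constellation, bit_labels, noise_var):
    """Exact per-bit LLRs for one received complex symbol r:
    LLR_k = logsumexp_{s: b_k=0} (-|r-s|^2/N0) - logsumexp_{s: b_k=1} (...)"""
    metric = -np.abs(r - constellation) ** 2 / noise_var

    def lse(m):  # log-sum-exp, for numerical stability
        return m.max() + np.log(np.exp(m - m.max()).sum())

    return np.array([lse(metric[bit_labels[:, k] == 0])
                     - lse(metric[bit_labels[:, k] == 1])
                     for k in range(bit_labels.shape[1])])

# Gray-labeled QPSK (the labeling is an assumption for illustration).
pts = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(bit_llrs(0.7 + 0.2j, pts, labels, noise_var=0.5))
```

For BPSK this collapses to a familiar scaled version of the received value, but the summed form is what lets the same decoder front end serve arbitrary modulations.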
ERIC Educational Resources Information Center
O'Connell, Ann Aileen
The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…
Sučević, Jelena; Savić, Andrej M; Popović, Mirjana B; Styles, Suzy J; Ković, Vanja
2015-01-01
There is something about the sound of a pseudoword like takete that goes better with a spiky than a curvy shape (Köhler, 1929/1947). Yet despite decades of research into sound symbolism, the role of this effect on real words in the lexicons of natural languages remains controversial. We report one behavioural and one ERP study investigating whether sound symbolism is active during normal language processing for real words in a speaker's native language, in the same way as for novel word forms. The results indicate that sound-symbolic congruence has a number of influences on natural language processing: Written forms presented in a congruent visual context generate more errors during lexical access, as well as a chain of differences in the ERP. These effects have a very early onset (40-80 ms, 100-160 ms, 280-320 ms) and are later overshadowed by familiar types of semantic processing, indicating that sound symbolism represents an early sensory-co-activation effect. Copyright © 2015 Elsevier Inc. All rights reserved.
Computational intelligence models to predict porosity of tablets using minimum features
Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander
2017-01-01
The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hints at the most important variables within this factor space. PMID:28138223
Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor
NASA Astrophysics Data System (ADS)
Pranger, Casper
2017-04-01
In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix- and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
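The same kind of automation can be prototyped outside Mathematica. As a minimal sketch (not the authors' pipeline), SymPy can turn a symbolic derivative into a finite-difference stencil mechanically, which is exactly the error-prone step being taken away from humans:

```python
import sympy as sp

x, h = sp.symbols('x h')
u = sp.Function('u')

# Second derivative on a uniform three-point stencil.
d2 = sp.Derivative(u(x), x, 2).as_finite_difference([x - h, x, x + h])
print(sp.simplify(d2))        # (u(x - h) - 2*u(x) + u(x + h))/h**2

# Wider stencils (higher-order accuracy) come from the same one-liner.
d2_wide = sp.Derivative(u(x), x, 2).as_finite_difference(
    [x - 2*h, x - h, x, x + h, x + 2*h])
print(sp.simplify(d2_wide))
```

Each stencil coefficient then becomes one entry of a matrix row for the corresponding grid node, which is the matrix-rule output the abstract describes.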
On the performance of a code division multiple access scheme with transmit/receive conflicts
NASA Astrophysics Data System (ADS)
Silvester, J. A.
One of the benefits of spread spectrum is that by assigning each user a different orthogonal signal set, multiple transmissions can occur simultaneously. This possibility is utilized in new access schemes called Code Division Multiple Access (CDMA). The present investigation is concerned with a particular CDMA implementation in which the transmit times for each symbol are exactly determined in a distributed manner such that both sender and receiver know them. Because each node must decide whether to transmit or receive, a conflict results in the loss of a symbol in one of the channels. The system therefore employs a coding technique, Reed-Solomon coding, which permits correct decoding of a codeword even if some constituent symbols are missing or in error. The performance of this system is analyzed, and attention is given to the optimum strategy which should be used in deciding whether to receive or transmit.
A pattern jitter free AFC scheme for mobile satellite systems
NASA Technical Reports Server (NTRS)
Yoshida, Shousei
1993-01-01
This paper describes a scheme for pattern jitter free automatic frequency control (AFC) with a wide frequency acquisition range. In this scheme, equalizing signals fed to the frequency discriminator allow pattern jitter free performance to be achieved for all roll-off factors. In order to define the acquisition range, frequency discrimination characteristics are analyzed on a newly derived frequency domain model. As a result, it is shown that a sufficiently wide acquisition range over a given system symbol rate can be achieved independent of symbol timing errors. Additionally, computer simulation demonstrates that frequency jitter performance improves in proportion to E(sub b)/N(sub 0) because pattern-dependent jitter is suppressed in the discriminator output. These results show significant promise for application to mobile satellite systems, which feature relatively low symbol rate transmission with an approximately 0.4-0.7 roll-off factor.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
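A brute-force version of the leave-one-out estimate is easy to state, and makes clear what the paper's computationally efficient expressions avoid recomputing. The sketch below refits the Fisher direction once per held-out sample and uses a simple midpoint threshold rather than the optimal threshold derived in the paper; the data and parameters are illustrative.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's linear discriminant direction w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

def loo_error(X, y):
    """Leave-one-out estimate of the probability of error (brute force)."""
    errs = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xtr, ytr = X[mask], y[mask]
        w = fisher_direction(Xtr[ytr == 0], Xtr[ytr == 1])
        thr = ((Xtr[ytr == 0] @ w).mean() + (Xtr[ytr == 1] @ w).mean()) / 2
        errs += int((X[i] @ w > thr) != y[i])
    return errs / len(X)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(1.5, 1, (50, 2))])
y = np.repeat([0, 1], 50)
print(f"LOO error estimate: {loo_error(X, y):.2f}")
```

Refitting n times costs n times the training work, which is why closed-form leave-one-out expressions of the kind derived in the paper matter in practice.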
Evaluating structural pattern recognition for handwritten math via primitive label graphs
NASA Astrophysics Data System (ADS)
Zanibbi, Richard; Mouchère, Harold; Viard-Gaudin, Christian
2013-01-01
Currently, structural pattern recognizer evaluations compare graphs of detected structure to target structures (i.e. ground truth) using recognition rates, recall and precision for object segmentation, classification and relationships. In document recognition, these target objects (e.g. symbols) are frequently comprised of multiple primitives (e.g. connected components, or strokes for online handwritten data), but current metrics do not characterize errors at the primitive level, from which object-level structure is obtained. Primitive label graphs are directed graphs defined over primitives and primitive pairs. We define new metrics obtained by Hamming distances over label graphs, which allow classification, segmentation and parsing errors to be characterized separately, or using a single measure. Recall and precision for detected objects may also be computed directly from label graphs. We illustrate the new metrics by comparing a new primitive-level evaluation to the symbol-level evaluation performed for the CROHME 2012 handwritten math recognition competition. A Python-based set of utilities for evaluating, visualizing and translating label graphs is publicly available.
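A toy reduction of the idea, with graphs shrunk to dictionaries of primitive labels and primitive-pair labels (the metrics in the paper are defined over directed label graphs with richer label sets, so this is only the skeleton):

```python
def label_graph_distance(g1, g2):
    """Hamming-style distances between two primitive label graphs.
    Each graph is (node_labels: {primitive: label},
                   edge_labels: {(primitive, primitive): label}).
    Node and edge disagreements are returned separately so that
    classification and structure errors can be reported apart or summed."""
    n1, e1 = g1
    n2, e2 = g2
    dn = sum(n1[p] != n2[p] for p in n1)
    keys = set(e1) | set(e2)
    de = sum(e1.get(k, '_') != e2.get(k, '_') for k in keys)
    return dn, de

# Two strokes forming an 'x', versus a misrecognition as ')(' that also
# misses the merge relationship between the strokes.
truth = ({1: 'x', 2: 'x'}, {(1, 2): 'merge'})
detected = ({1: ')', 2: '('}, {})
print(label_graph_distance(truth, detected))   # (2, 1)
```

Keeping the two counts separate is what lets segmentation and classification errors be characterized independently, as the abstract describes.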
Match graph generation for symbolic indirect correlation
NASA Astrophysics Data System (ADS)
Lopresti, Daniel; Nagy, George; Joshi, Ashutosh
2006-01-01
Symbolic indirect correlation (SIC) is a new approach for bringing lexical context into the recognition of unsegmented signals that represent words or phrases in printed or spoken form. One way of viewing the SIC problem is to find the correspondence, if one exists, between two bipartite graphs, one representing the matching of the two lexical strings and the other representing the matching of the two signal strings. While perfect matching cannot be expected with real-world signals and while some degree of mismatch is allowed for in the second stage of SIC, such errors, if they are too numerous, can present a serious impediment to a successful implementation of the concept. In this paper, we describe a framework for evaluating the effectiveness of SIC match graph generation and examine the relatively simple, controlled cases of synthetic images of text strings typeset, both normally and in highly condensed fashion. We quantify and categorize the errors that arise, as well as present a variety of techniques we have developed to visualize the intermediate results of the SIC process.
NASA Technical Reports Server (NTRS)
Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.
1960-01-01
The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila
2016-01-01
Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess the common errors in the reception process by applying the approach of "failure modes and effects analysis" (FMEA). In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members' decision. At first, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies all over the world, including in Iran. Its validity was evaluated through content and face validity, and its reliability was evaluated through review and confirmation of the obtained information by the FMEA team; eventually, possible errors, their causes, and three indicators (severity of effect, probability of occurrence, and probability of detection) were determined, and corrective actions were proposed. Data analysis used the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. The team-oriented FMEA method could be useful for assessing errors and for reducing their probability of occurrence.
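The screening arithmetic itself is a one-line product; a sketch with invented failure modes and ratings (the study's actual items and scores are not reproduced here):

```python
# Risk Priority Number screening: RPN = severity * occurrence * detection,
# flagging failure modes with RPN >= 125, the cutoff used in the study.
failure_modes = [
    # (description, severity, occurrence, detection) -- illustrative values
    ("Patient record mismatch at reception", 7, 5, 4),
    ("Incomplete insurance information",     4, 6, 3),
    ("Wrong appointment time recorded",      6, 5, 5),
]

for desc, s, o, d in failure_modes:
    rpn = s * o * d
    print(f"{desc}: RPN = {rpn}" + ("  <- high risk" if rpn >= 125 else ""))
```

Ranking by RPN is what turns the team's severity, occurrence, and detection judgments into a prioritized list of corrective actions.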
Does a better model yield a better argument? An info-gap analysis
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2017-04-01
Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.
Lytle, Nicole; London, Kamala; Bruck, Maggie
2015-01-01
In two experiments, we investigated 3- to 5-year-old children’s ability to use dolls and human figure drawings as symbols to map body touches. In Experiment 1 stickers were placed on different locations of children’s bodies, and they were asked to indicate the location of the sticker using three different symbols: a doll, a human figure drawing, and the adult researcher. Performance on the tasks increased with age, but many 5-year-olds did not attain perfect performance. Surprisingly, younger children made more errors on the 2D human figure drawing task compared to the 3D doll and adult tasks. In Experiment 2, we compared children’s ability to use 3D and 2D symbols to indicate body touch as well as to guide their search for a hidden object. We replicated the findings of Experiment 1 for the body touch task: for younger children, 3D symbols were easier to use than 2D symbols. However, the reverse pattern was found for the object locations task with children showing superior performance using 2D drawings over 3D models. Though children showed developmental improvements in using dolls and drawings to show where they were touched, less than two-thirds of the 5-year-olds performed perfectly on the touch tasks. Developmental as well as forensic implications of these results are discussed. PMID:25781003
Price, Gavin R; Wilkey, Eric D; Yeo, Darren J
2017-05-01
A growing body of research suggests that the processing of nonsymbolic (e.g. sets of dots) and symbolic (e.g. Arabic digits) numerical magnitudes serves as a foundation for the development of math competence. Performance on magnitude comparison tasks is thought to reflect the precision of a shared cognitive representation, as evidenced by the presence of a numerical ratio effect for both formats. However, little is known regarding how visuo-perceptual processes are related to the numerical ratio effect, whether they are shared across numerical formats, and whether they relate to math competence independently of performance outcomes. The present study investigates these questions in a sample of typically developing adults. Our results reveal a pattern of associations between eye-movement measures, but not their ratio effects, across formats. This suggests that ratio-specific visuo-perceptual processing during magnitude comparison differs across nonsymbolic and symbolic formats. Furthermore, eye movements are related to math performance only during symbolic comparison, supporting a growing body of literature suggesting that symbolic number processing is more strongly related to math outcomes than nonsymbolic magnitude processing. Finally, eye-movement patterns, specifically fixation dwell time, continue to be negatively related to math performance after controlling for task performance (i.e. error rate and reaction time) and domain-general cognitive abilities (IQ), suggesting that fluent visual processing of Arabic digits plays a unique and important role in linking symbolic number processing to formal math abilities. Copyright © 2017 Elsevier B.V. All rights reserved.
The Use of Neural Networks for Determining Tank Routes
1992-09-01
[Report documentation-page residue; recoverable fragments follow.] The back-error propagation technique iteratively assigns weights to connections and computes the errors. Hidden-layer sizes of 4, 6, 8, 10, 12, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, and 100 neurons were tried.
Chaos synchronization basing on symbolic dynamics with nongenerating partition.
Wang, Xingyuan; Wang, Mogei; Liu, Zhenzhen
2009-06-01
Using symbolic dynamics and information theory, we study the information transmission needed for synchronizing unidirectionally coupled oscillators. It is found that when sustaining chaos synchronization with a nongenerating partition, the synchronization error will be larger than a critical value, although the required coupled channel capacity can be smaller than in the case of using a generating partition. We then show that no matter whether a generating or nongenerating partition is in use, a high-quality detector can guarantee the lead of the response oscillator, while lag responding can make up for the low precision of the detector. A practicable synchronization scheme based on a nongenerating partition is also proposed in this paper.
Effect of digital scrambling on satellite communication links
NASA Technical Reports Server (NTRS)
Dessouky, K.
1985-01-01
Digital data scrambling has been considered for communication systems using NRZ symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System (TDRSS) links. Conclusions regarding the usefulness of scrambling are also given.
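Both effects weighed in the abstract, extra transitions for the synchronizer and a higher descrambled BER, show up in a toy self-synchronizing scrambler. The polynomial, seed, and framing below are illustrative assumptions, not the configuration analyzed for TDRSS.

```python
import numpy as np

TAPS = (15, 14)    # illustrative x^15 + x^14 + 1 scrambler polynomial

def scramble(bits):
    """Self-synchronizing scrambler: out[n] = in[n] ^ out[n-15] ^ out[n-14]."""
    s = np.zeros(len(bits) + 15, dtype=int)
    s[:15] = 1                                  # nonzero seed -> transitions
    for n, b in enumerate(bits):
        s[n + 15] = b ^ s[n + 15 - TAPS[0]] ^ s[n + 15 - TAPS[1]]
    return s[15:]

def descramble(rx):
    """Inverse filter over received bits (same seed assumed, for brevity)."""
    s = np.concatenate([np.ones(15, dtype=int), rx])
    return np.array([s[n + 15] ^ s[n + 15 - TAPS[0]] ^ s[n + 15 - TAPS[1]]
                     for n in range(len(rx))])

data = np.zeros(200, dtype=int)          # worst case: transition-free NRZ data
tx = scramble(data)
print("transitions added:", int(np.abs(np.diff(tx)).sum()))
rx = tx.copy(); rx[50] ^= 1              # a single channel bit error
print("descrambled bit errors:", int((descramble(rx) != data).sum()))  # -> 3
```

The factor-of-three error multiplication (one channel error corrupts one bit now and one at each tap delay) is the BER penalty that must be weighed against the improved symbol-synchronizer performance.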
Corrigendum to 'Modeling the degradation mechanisms of C6/LiFePO4 batteries'
NASA Astrophysics Data System (ADS)
Li, Dongjiang; Danilov, Dmitri L.; Zwikirsch, Barbara; Fichtner, Maximilian; Yang, Yong; Eichel, Rüdiger-A.; Notten, Peter H. L.
2018-04-01
The authors regret that the following errors were present in their article: In equations 10 and 11, the rate constant "k" should be in lowercase; the same problem existed in tables 2 and 3 and also in the 'List of symbols'.
Visual Salience of Algebraic Transformations
ERIC Educational Resources Information Center
Kirshner, David; Awtry, Thomas
2004-01-01
Information processing researchers have assumed that algebra symbol skills depend on mastery of the abstract rules presented in the curriculum (Matz, 1980; Sleeman, 1986). Thus, students' ubiquitous algebra errors have been taken as indicating the need to embed algebra in rich contextual settings (Kaput, 1995; National Council of Teachers of…
[Snake as a symbol in medicine and pharmacy - a historical study].
Okuda, J; Kiyokawa, R
2000-01-01
The snake and snake venoms have stimulated the mind and imagination of humankind since the beginning of recorded history. No animal has been more worshipped yet more cast out, more loved yet more despised than the snake. The essence of the fascination with and fear of the snake lies within the creature's venom. Snakes have been used for worship, magic potions, and medicine, and they have been the symbol of love, health, disease, medicine, pharmacy, immortality, death, and even wisdom. In the Sumerian civilization (B.C. 2350-2150), designs with two snakes appeared. In Greek mythology (B.C. 2000-400), statues of Asclepius (God of Medicine), with the "Caduceus" (made of two snakes and a staff), and his daughter Hygeia (Goddess of Health), holding a snake and bowl, were created as symbols for medicine and health, respectively. A kind of Caduceus (one snake and one staff) has been used as a symbol by the World Health Organization (WHO), and a snake and bowl as a symbol of pharmacies in Europe. Snakes have also been worshipped by Indian peoples within Hinduism since the 6th-4th century B.C. In ancient Egypt, snake designs were used in hieroglyphs. In China, dried bodies of about 30 species of snakes are still used as Chinese medicines. In Japan, a painting of the symbol of "Genbu" (a snake with a tortoise) was found recently on the north wall of the Takamatsuzuka ancient tomb (7th-8th century A.D.); however, it is a symbol of a compass direction and probably has little relation to medicine and pharmacy.
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
Malone, Amelia S.; Fuchs, Lynn S.
2016-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most (65% of) errors were due to students misapplying whole number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153
The Reality of Neandertal Symbolic Behavior at the Grotte du Renne, Arcy-sur-Cure, France
Caron, François; d'Errico, Francesco; Del Moral, Pierre; Santos, Frédéric; Zilhão, João
2011-01-01
Background The question of whether symbolically mediated behavior is exclusive to modern humans or shared with anatomically archaic populations such as the Neandertals is hotly debated. At the Grotte du Renne, Arcy-sur-Cure, France, the Châtelperronian levels contain Neandertal remains and large numbers of personal ornaments, decorated bone tools and colorants, but it has been suggested that this association reflects intrusion of the symbolic artifacts from the overlying Protoaurignacian and/or of the Neandertal remains from the underlying Mousterian. Methodology/Principal Findings We tested these hypotheses against the horizontal and vertical distributions of the various categories of diagnostic finds and statistically assessed the probability that the Châtelperronian levels are of mixed composition. Our results reject that the associations result from large or small scale, localized or generalized post-depositional displacement, and they imply that incomplete sample decontamination is the parsimonious explanation for the stratigraphic anomalies seen in the radiocarbon dating of the sequence. Conclusions/Significance The symbolic artifacts in the Châtelperronian of the Grotte du Renne are indeed Neandertal material culture. PMID:21738702
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
Correction to Kreuzbauer, King, and Basu (2015).
2015-08-01
Reports an error in "The Mind in the Object-Psychological Valuation of Materialized Human Expression" by Robert Kreuzbauer, Dan King and Shankha Basu (Journal of Experimental Psychology: General, Advance Online Publication, Jun 15, 2015, np). In the article, the labels on the X-axis of Figure 1, "Remove Variance" and "Preserve Variance," should be switched. (The following abstract of the original article appeared in record 2015-26264-001.) Symbolic material objects such as art or certain artifacts (e.g., fine pottery, jewelry) share one common element: the combination of generating an expression and the materialization of this expression in the object. This explains why people place a much greater value on handmade over machine-made objects, and originals over duplicates. We show that this mechanism occurs when a material object's symbolic property is salient and when the creator (artist or craftsman) is perceived to have agency control over the 1-to-1 materialized expression in the object. Coactivation of these 2 factors causes the object to be perceived as having high value because it is seen as the embodied representation of the creator's unique personal expression. In 6 experiments, subjects rated objects in various object categories, which varied on the type of object property (symbolic, functional, aesthetic), the production procedure (handmade, machine-made, analog, digital) and the origin of the symbolic information (person or software). The studies showed that the proposed mechanism applies to symbolic, but not to functional or aesthetic material objects. Furthermore, they show that this specific form of symbolic object valuation could not be explained by various other related psychological theories (e.g., uniqueness, scarcity, physical touching, creative performance). Our research provides a universal framework that identifies a core mechanism for explaining judgments of value for one of our most uniquely human symbolic object categories. (c) 2015 APA, all rights reserved.
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
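For a linear block code used purely for detection on a binary symmetric channel, the textbook form of this quantity is P_ue(p) = sum over nonzero weights i of A_i p^i (1-p)^(n-i), where the A_i form the code's weight distribution. A sketch using the (7,4) Hamming code's known enumerator (the report's bounds for the concatenated scheme are more involved):

```python
def undetected_error_prob(weights, n, p):
    """P(undetected error) on a BSC for a code used only for detection:
    an error pattern goes undetected exactly when it equals a nonzero
    codeword, so sum A_i * p^i * (1-p)^(n-i) over nonzero weights i."""
    return sum(a * p**i * (1 - p)**(n - i)
               for i, a in weights.items() if i > 0)

# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=A_4=7, A_7=1.
hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}
for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:.0e}:  P_ue = {undetected_error_prob(hamming74, 7, p):.3e}")
```

The rapid decay with p is why a modest outer detection code, layered on an inner decoder that has already cleaned up the channel, can drive the undetected-error probability so low.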
List of Error-Prone Abbreviations, Symbols, and Dose Designations
Excerpt (intended meaning; misinterpretation; correction): "UD" for unit dose (e.g., diltiazem 125 mg IV infusion "UD") misinterpreted as meaning to give the entire infusion as a unit [bolus] dose; use "as directed". "Nitro" drip, intended as nitroglycerin infusion, mistaken as sodium nitroprusside infusion; use the complete drug name.
Understanding Written Corrective Feedback in Second-Language Grammar Acquisition
ERIC Educational Resources Information Center
Wagner, Jason Paul; Wulf, Douglas J.
2016-01-01
Written Corrective Feedback (WCF) is used extensively in second-language (L2) writing classrooms despite controversy over its effectiveness. This study examines indirect WCF, an instructional procedure that flags L2 students' errors with editing symbols that guide their corrections. WCF practitioners assume that this guidance will lead to…
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
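How little false-positive contamination it takes is easy to reproduce with a quick Monte Carlo and a naive "detected at least once means occupied" estimator; the parameter values and the naive estimator are illustrative stand-ins for the formal occupancy models the authors analyze.

```python
import numpy as np

rng = np.random.default_rng(3)
sites, surveys = 10_000, 5
psi, p_det, p_false = 0.3, 0.5, 0.02   # occupancy, detection, false positive

occupied = rng.random(sites) < psi
det = np.zeros((sites, surveys), dtype=bool)
det[occupied] = rng.random((occupied.sum(), surveys)) < p_det
det[~occupied] = rng.random(((~occupied).sum(), surveys)) < p_false

naive = det.any(axis=1).mean()
print(f"true occupancy {psi:.2f}, naive estimate {naive:.3f}")   # ~0.36 here
```

A 2% per-survey false-positive rate compounds over five surveys into nearly a 10% chance of falsely "occupying" an empty site, which is the overestimation mechanism the field study documents.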
On the timing problem in optical PPM communications.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1971-01-01
Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy, and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is shown to be the presence of a residual, or irreducible error probability, due entirely to the timing system, that cannot be overcome by the data channel.
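A simplified two-slot Poisson photon-counting model reproduces the flavor of that residual error: a timing offset of fraction tau leaks that share of the pulse energy into the neighboring slot, causing slot-comparison errors that no amount of background-light reduction can remove. The counts, offsets, and two-slot reduction are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
Ks, Kb, trials = 20.0, 0.1, 200_000   # signal/background mean counts (assumed)

for tau in (0.0, 0.1, 0.2):
    n0 = rng.poisson((1 - tau) * Ks + Kb, trials)  # slot containing the pulse
    n1 = rng.poisson(tau * Ks + Kb, trials)        # neighbor getting leakage
    err = (n1 > n0).mean() + 0.5 * (n1 == n0).mean()  # ties broken at random
    print(f"tau = {tau:.1f}: P(slot error) ~ {err:.4f}")
```

With tau = 0 the only errors come from background counts; with tau > 0 an error contribution set by the timing system alone appears, mirroring the residual error probability discussed in the abstract.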
Noncoherent DTTLs for Symbol Synchronization
NASA Technical Reports Server (NTRS)
Simon, Marvin; Tkacenko, Andre
2007-01-01
Noncoherent data-transition tracking loops (DTTLs) have been proposed for use as symbol synchronizers in digital communication receivers. [Communication-receiver subsystems that can perform their assigned functions in the absence of synchronization with the phases of their carrier signals (carrier synchronization) are denoted by the term noncoherent, while receiver subsystems that cannot function without carrier synchronization are said to be coherent.] The proposal applies, more specifically, to receivers of binary phase-shift-keying (BPSK) signals generated by directly phase-modulating binary non-return-to-zero (NRZ) data streams onto carrier signals having known frequencies but unknown phases. The proposed noncoherent DTTLs would be modified versions of traditional DTTLs, which are coherent. The symbol-synchronization problem is essentially the problem of recovering symbol timing from a received signal. In the traditional, coherent approach to symbol synchronization, it is necessary to establish carrier synchronization in order to recover symbol timing. A traditional DTTL effects an iterative process in which it first generates an estimate of the carrier phase in the absence of symbol-synchronization information, then uses the carrier-phase estimate to obtain an estimate of the symbol-synchronization information, then feeds the symbol-synchronization estimate back to the carrier-phase-estimation subprocess. In a noncoherent symbol-synchronization process, there is no need for carrier synchronization and, hence, no need for iteration between carrier-synchronization and symbol-synchronization subprocesses. The proposed noncoherent symbol-synchronization process is justified theoretically by a mathematical derivation that starts from a maximum a posteriori (MAP) method of estimation of symbol timing utilized in traditional, coherent DTTLs. In that MAP method, one chooses the value of a variable of interest (in this case, the offset in the estimated symbol timing) that causes a likelihood function of symbol estimates over some number of symbol periods to assume a maximum value. In terms that are necessarily oversimplified to fit within the space available for this article, it can be said that the mathematical derivation involves a modified interpretation of the likelihood function that lends itself to noncoherent DTTLs. The proposal encompasses both linear and nonlinear noncoherent DTTLs. The performances of both have been computationally simulated; for comparison, the performances of linear and nonlinear coherent DTTLs have also been computationally simulated. The results of these simulations show that, among other things, the expected mean-square timing errors of coherent and noncoherent DTTLs are relatively insensitive to window width. The results also show that at high signal-to-noise ratios (SNRs), the performances of the noncoherent DTTLs approach those of their coherent counterparts, while at low SNRs the noncoherent DTTLs incur penalties of the order of 1.5 to 2 dB.
Thompson, Clarissa A; Ratcliff, Roger; McKoon, Gail
2016-10-01
How do speed and accuracy trade off, and what components of information processing develop as children and adults make simple numeric comparisons? Data from symbolic and non-symbolic number tasks were collected from 19 first graders (Mage=7.12 years), 26 second/third graders (Mage=8.20 years), 27 fourth/fifth graders (Mage=10.46 years), and 19 seventh/eighth graders (Mage=13.22 years). The non-symbolic task asked children to decide whether an array of asterisks had a larger or smaller number than 50, and the symbolic task asked whether a two-digit number was greater than or less than 50. We used a diffusion model analysis to estimate components of processing in tasks from accuracy, correct and error response times, and response time (RT) distributions. Participants who were accurate on one task were accurate on the other task, and participants who made fast decisions on one task made fast decisions on the other task. Older participants extracted a higher quality of information from the stimulus arrays, were more willing to make a decision, and were faster at encoding, transforming the stimulus representation, and executing their responses. Individual participants' accuracy and RTs were uncorrelated. Drift rate and boundary settings were significantly related across tasks, but they were unrelated to each other. Accuracy was mainly determined by drift rate, and RT was mainly determined by boundary separation. We concluded that RT and accuracy operate largely independently. Copyright © 2016 Elsevier Inc. All rights reserved.
Sample Size Determination for Rasch Model Tests
ERIC Educational Resources Information Center
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-15
….gov/acs/www/ or contact the Census Bureau's Social, Economic, and Housing Statistics Division at (301…) … direction; and (2) Sampling Error, which consists of the error that arises from the use of probability sampling to create the estimates…
Symbolic enhancement of perspective displays
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Hacisalihzade, Selim S.
1990-01-01
Two exocentric azimuth judgment experiments with a perspective display were conducted with 16 subjects. Previous work has shown these judgments to exhibit a bias possibly due to misinterpretation of the viewing parameters used to generate the display. Though geometric compensations may be used to correct for the bias, an alternate technique selected in the following 2 experiments was the introduction of symbolic enhancements in the form of compass roses. It is suggested that a compass rose with 30 deg divisions results in overall optimal azimuth estimation accuracy when accuracy and decision time are both considered. The data also suggest that the added radial lines on the compass roses may interact with normalization processes that influence the judgment errors.
Digital scrambling for shuttle communication links: Do drawbacks outweigh advantages?
NASA Technical Reports Server (NTRS)
Dessouky, K.
1985-01-01
Digital data scrambling has been considered for communication systems using NRZ (non-return to zero) symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System links. Conclusions regarding the usefulness of scrambling are also given.
Joint Carrier-Phase Synchronization and LDPC Decoding
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban
2009-01-01
A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal, having carrier phase theta(sub c) plus noise, would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [omega(sub c)/(2 pi)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase theta(sub c) as well as correlated noise components. These products would be fed as inputs to a digital PLL that would include a number-controlled oscillator (NCO), which provides an estimate of the carrier phase, theta(sub c).
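A minimal numerical sketch of the soft-decision feedback idea, assuming a complex-baseband BPSK model and a first-order digital PLL. The per-symbol soft estimate tanh(.) stands in for the LDPC decoder's intermediate soft outputs, and the loop gain and other parameters are illustrative, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Decision-directed carrier-phase tracking for BPSK with *soft* feedback.
n_sym, ebn0_db = 2000, 4.0
sigma2 = 1.0 / (2 * 10 ** (ebn0_db / 10))          # noise variance per dimension
bits = rng.integers(0, 2, n_sym)
symbols = 1.0 - 2.0 * bits                          # unit-energy BPSK
theta_c = 0.7                                       # unknown carrier phase (rad)
rx = symbols * np.exp(1j * theta_c) + np.sqrt(sigma2) * (
    rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

theta_hat, mu = 0.0, 0.05                           # NCO state, loop gain
for k in range(n_sym):
    z = rx[k] * np.exp(-1j * theta_hat)             # derotate (I + jQ)
    soft = np.tanh(z.real / sigma2)                 # soft symbol estimate E[a|z]
    error = z.imag * soft                           # modulation removed from Q
    theta_hat += mu * error                         # NCO update

print(f"true phase {theta_c:.3f} rad, estimate {theta_hat:.3f} rad")
```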
Sensitivity of feedforward neural networks to weight errors
NASA Technical Reports Server (NTRS)
Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney
1990-01-01
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
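A hedged Monte Carlo sketch of the kind of sensitivity question the abstract addresses, for a single Adaline with random bipolar inputs and random weights. The perturbation model here (each weight shifted by a fixed percentage of its magnitude in a random direction) is an assumption for illustration, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(2)

def decision_flip_probability(n_inputs, pct_change, n_trials=20000):
    """Estimate the probability that an Adaline's threshold decision flips
    when every weight is perturbed by +/- pct_change of its magnitude."""
    flips = 0
    for _ in range(n_trials):
        x = np.sign(rng.standard_normal(n_inputs))          # bipolar inputs
        w = rng.standard_normal(n_inputs)                   # random weights
        dw = pct_change * np.abs(w) * np.sign(rng.standard_normal(n_inputs))
        if np.sign(w @ x) != np.sign((w + dw) @ x):
            flips += 1
    return flips / n_trials

# With many inputs, the flip probability is nearly independent of their number.
for n in (25, 100, 400):
    print(f"{n} weights, 10% error: P(flip) ~ "
          f"{decision_flip_probability(n, 0.10):.4f}")
```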
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways for controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Zellar, Ronald S.; Fong, Wai H; Krainak, Michael A.; Neumann, Gregory A.; Smith, David E.
2013-01-01
Laser communication and ranging experiments were successfully conducted from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit. The experiments used 4096-ary pulse position modulation (PPM) for the laser pulses during one-way LRO Laser Ranging (LR) operations. Reed-Solomon forward error correction codes were used to correct the PPM symbol errors due to atmospheric turbulence and pointing jitter. The signal fading was measured and the results were compared to the model.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability which always falls between 0.5 and 0 is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
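For orientation, a minimal sketch of the classical (forward) Wald SPRT for a shift in a Gaussian mean. The article's 'inverted' variant computes the error probabilities from the width of the uncertainty region rather than prescribing alpha and beta; this sketch shows only the standard forward form it inverts:

```python
import numpy as np

def sprt_gaussian(data, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Classical Wald SPRT for H0: mean = mu0 vs. H1: mean = mu1, known sigma.

    Returns the running log-likelihood ratio and a decision.  Thresholds
    follow Wald's approximations a = ln(beta/(1-alpha)), b = ln((1-beta)/alpha).
    """
    a, b = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = np.cumsum((mu1 - mu0) * (data - (mu0 + mu1) / 2) / sigma**2)
    for k, l in enumerate(llr, 1):
        if l <= a:
            return llr[:k], "accept H0"
        if l >= b:
            return llr[:k], "accept H1"
    return llr, "continue sampling"

rng = np.random.default_rng(3)
samples = rng.normal(0.5, 1.0, 50)     # the mean has in fact shifted to 0.5
print(sprt_gaussian(samples, mu0=0.0, mu1=0.5, sigma=1.0)[1])
```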
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
NASA Technical Reports Server (NTRS)
Dwyer, J. H., III; Palmer, E. A., III
1975-01-01
A simulator study was conducted to determine the usefulness of adding flight path vector symbology to a head-up display designed to improve glide-slope tracking performance during steep 7.5 deg visual approaches in STOL aircraft. All displays included a fixed attitude symbol, a pitch- and roll-stabilized horizon bar, and a glide-slope reference bar parallel to and 7.5 deg below the horizon bar. The displays differed with respect to the flight-path marker (FPM) symbol: display 1 had no FPM symbol; display 2 had an air-referenced FPM, and display 3 had a ground-referenced FPM. No differences between displays 1 and 2 were found on any of the performance measures. Display 3 was found to decrease height error in the early part of the approach and to reduce descent rate variation over the entire approach. Two measures of workload did not indicate any differences between the displays.
Parallel digital modem using multirate digital filter banks
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Vaidyanathan, P. P.; Raphaeli, Dan; Hinedi, Sami
1994-01-01
A new class of architectures for an all-digital modem is presented in this report. This architecture, referred to as the parallel receiver (PRX), is based on employing multirate digital filter banks (DFB's) to demodulate, track, and detect the received symbol stream. The resulting architecture is derived, and specifications are outlined for designing the DFB for the PRX. The key feature of this approach is a processing rate lower than either the Nyquist rate or the symbol rate, without any degradation in the symbol error rate. Due to the freedom in choosing the processing rate, the designer is able to arbitrarily select and use digital components, independent of the speed of the integrated circuit technology. The PRX architecture is particularly suited for high data rate applications, and due to the modular structure of the parallel signal path, expansion to even higher data rates is accommodated with ease. Applications of the PRX would include gigabit satellite channels, multiple spacecraft, optical links, interactive cable TV, telemedicine, code division multiple access (CDMA) communications, and others.
Digital Communications in Spatially Distributed Interference Channels.
1982-12-01
July 1980 through 31 March 1981. This report is organized into five parts. Part I describes an optimum receiver structure for digital communication in spatially distributed interference... Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. Inform. Theory, Vol. IT-20, pp. 284-287, March 1974
Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels
NASA Technical Reports Server (NTRS)
Moher, Michael L.; Lodge, John H.
1990-01-01
A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.
Objectives and models of the planetary quarantine program
NASA Technical Reports Server (NTRS)
Werber, M.
1975-01-01
The objectives of the planetary quarantine program are presented and the history of early contamination prevention efforts is outlined. Contamination models which were previously established are given and include: determination of parameters; symbol nomenclature; and calculations of contamination and hazard probabilities. Planetary quarantine is discussed as an issue of national and international concern. Information on international treaty and meetings on spacecraft sterilization, quarantine standards, and policies is provided. The specific contamination probabilities of the U.S.S.R. Venus 3 flyby are included.
A Quantum Theoretical Explanation for Probability Judgment Errors
ERIC Educational Resources Information Center
Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.
2011-01-01
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…
Carrier recovery methods for a dual-mode modem: A design approach
NASA Technical Reports Server (NTRS)
Richards, C. W.; Wilson, S. G.
1984-01-01
A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that can provide little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of a 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. Results find that an energy increase of roughly 4 dB is needed to maintain a SER of 1X10(-5) for QPSK with 20 deg of phase error and 16-QASK with 7 deg phase error.
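The reported trade-off can be reproduced approximately with the standard expression for coherent QPSK under a static phase error, where each rail's effective amplitude becomes cos(phi) +/- sin(phi) depending on the interfering quadrature bit. A small sketch, assuming Gray-mapped QPSK and ideal symbol timing:

```python
import numpy as np
from scipy.stats import norm

def Q(x):
    """Gaussian tail probability."""
    return norm.sf(x)

def qpsk_ser(ebn0_db, phase_err_deg):
    """SER of coherent QPSK with a static carrier phase error phi: each rail
    sees crosstalk from the other, so the effective amplitudes are
    cos(phi) + sin(phi) and cos(phi) - sin(phi), each half the time."""
    g = np.sqrt(2 * 10 ** (ebn0_db / 10))
    phi = np.radians(phase_err_deg)
    pb = 0.5 * (Q(g * (np.cos(phi) + np.sin(phi)))
                + Q(g * (np.cos(phi) - np.sin(phi))))
    return 1 - (1 - pb) ** 2        # a symbol errs if either rail errs

# Energy penalty of a 20 deg phase error near SER = 1e-5:
for e in np.arange(9.0, 15.1, 1.0):
    print(f"Eb/N0={e:4.1f} dB  ideal={qpsk_ser(e, 0):.2e}  "
          f"20 deg={qpsk_ser(e, 20):.2e}")
```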
Dambacher, Michael; Hübner, Ronald; Schlösser, Jan
2011-01-01
The influence of monetary incentives on performance has been widely investigated among various disciplines. While the results reveal positive incentive effects only under specific conditions, the exact nature and contribution of mediating factors are largely unexplored. The present study examined influences of payoff schemes as one of these factors. In particular, we manipulated penalties for errors and slow responses in a speeded categorization task. The data show improved performance for monetary over symbolic incentives when (a) penalties are higher for slow responses than for errors, and (b) neither slow responses nor errors are punished. Conversely, payoff schemes with stronger punishment for errors than for slow responses resulted in worse performance under monetary incentives. The findings suggest that an emphasis on speed is favorable for positive influences of monetary incentives, whereas an emphasis on accuracy under time pressure has the opposite effect. PMID:21980316
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective repeat ARQ retransmission strategy is analyzed.
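A simplified numerical sketch of the throughput calculation for a selective-repeat ARQ scheme built on a t-error-correcting inner code. The binomial acceptance model below ignores undetected errors and uses an illustrative (127,113) two-error-correcting code, so it is a caricature of the paper's analysis rather than a reproduction:

```python
from scipy.stats import binom

def sr_arq_throughput(n, k, t, p):
    """Selective-repeat ARQ over an (n, k) inner code correcting t symbol
    errors: a word is retransmitted whenever more than t channel errors
    occur, so throughput = code rate * acceptance probability (each word
    is sent 1/p_accept times on average)."""
    p_accept = binom.cdf(t, n, p)        # at most t errors: decodable
    return (k / n) * p_accept

# Throughput vs. channel symbol error probability for a (127,113), t=2 code.
for p in (1e-3, 1e-2, 5e-2):
    print(f"p={p}: throughput = {sr_arq_throughput(127, 113, 2, p):.3f}")
```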
Gómez-Velázquez, Fabiola R; Vélez-Pérez, Hugo; Espinoza-Valdez, Aurora; Romo-Vazquez, Rebeca; Salido-Ruiz, Ricardo A; Ruiz-Stovel, Vanessa; Gallardo-Moreno, Geisa B; González-Garrido, Andrés A; Berumen, Gustavo
2017-02-08
Children with mathematical difficulties usually have an impaired ability to process symbolic representations. Functional MRI methods have suggested that early frontoparietal connectivity can predict mathematical achievement; however, the study of brain connectivity during numerical processing remains unexplored. With the aim of evaluating this in children with different math proficiencies, we selected a sample of 40 children divided into two groups [high achievement (HA) and low achievement (LA)] according to their arithmetic scores in the Wide Range Achievement Test, 4th ed. Participants performed a symbolic magnitude comparison task (i.e. determining which of two numbers is numerically larger), with simultaneous electrophysiological recording. Partial directed coherence and graph theory methods were used to estimate and depict frontoparietal connectivity in both groups. The behavioral measures showed that children with LA performed significantly slower and less accurately than their peers in the HA group. Significantly higher frontocentral connectivity was found in LA compared with HA; however, when the connectivity analysis was restricted to parietal locations, no relevant group differences were observed. These findings seem to support the notion that LA children require greater memory and attentional efforts to meet task demands, probably affecting early stages of symbolic comparison.
The effect of timing errors in optical digital systems.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1972-01-01
The use of digital transmission with narrow light pulses appears attractive for data communications, but carries with it a stringent requirement on system bit timing. The effects of imperfect timing in direct-detection (noncoherent) optical binary systems are investigated using both pulse-position modulation and on-off keying for bit transmission. Particular emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors from which average error probabilities can be computed for specific synchronization methods. Of significance is the presence of a residual or irreducible error probability in both systems, due entirely to the timing system, which cannot be overcome by the data channel.
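The recipe in the last sentences, averaging the conditional error probability over the timing-error density, can be sketched numerically. The conditional model below uses baseband antipodal NRZ signalling (half the time the neighbouring bit agrees, half the time it does not) rather than the paper's direct-detection optical model, so it only illustrates the procedure and the emergence of a jitter-induced error floor:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def pe_given_tau(tau, gamma):
    """Conditional bit error probability for antipodal NRZ signalling with a
    normalized timing offset tau: with probability 1/2 the neighbouring bit
    agrees (no loss); otherwise the effective amplitude drops to 1 - 2|tau|."""
    g = np.sqrt(2 * gamma)
    return 0.5 * norm.sf(g) + 0.5 * norm.sf(g * (1 - 2 * abs(tau)))

def average_pe(sigma_tau, gamma):
    """Average the conditional error over a zero-mean Gaussian timing-jitter
    density (truncated at half a symbol).  The jitter tail produces a
    residual error floor that extra signal energy cannot remove."""
    integrand = lambda t: pe_given_tau(t, gamma) * norm.pdf(t, scale=sigma_tau)
    val, _ = quad(integrand, -0.5, 0.5, points=[0.0])
    return val

gamma = 10 ** (9.6 / 10)   # Eb/N0 of 9.6 dB -> P_e ~ 1e-5 with perfect timing
print("perfect timing:", pe_given_tau(0.0, gamma))
for s in (0.02, 0.05, 0.10):
    print(f"sigma_tau={s}: average P_e = {average_pe(s, gamma):.3e}")
```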
Intrinsic Nilpotent Approximation.
1985-06-01
expansions, in particular, positive-homogeneous principal symbols. With some minor modification, as we shall indicate, our work can probably be carried out in... relation to the construction below can be stated precisely (see Cor. 3.19). As in our proof of the lifting theorem, the main tools will be Lemma 1.35 and
Tang, Shih-Fen; Chen, I-Hui; Chiang, Hsin-Yu; Wu, Chien-Te; Hsueh, I-Ping; Yu, Wan-Hui; Hsieh, Ching-Lin
2017-11-27
We aimed to compare the test-retest agreement, random measurement error, practice effect, and ecological validity of the original and Tablet-based Symbol Digit Modalities Test (T-SDMT) over five serial assessments, and to examine the concurrent validity of the T-SDMT in patients with schizophrenia. Sixty patients with chronic schizophrenia completed five serial assessments (one week apart) of the SDMT and T-SDMT and one assessment of the Activities of Daily Living Rating Scale III at the first time point. Both measures showed high test-retest agreement and similar levels of random measurement error over five serial assessments. Moreover, the practice effects of the two measures did not reach a plateau phase after five serial assessments in young and middle-aged participants. Nevertheless, only the practice effect of the T-SDMT became trivial after the first assessment. Like the SDMT, the T-SDMT had good ecological validity. The T-SDMT also had good concurrent validity with the SDMT. In addition, only the T-SDMT had discriminative validity to discriminate processing speed in young and middle-aged participants. Compared to the SDMT, the T-SDMT had overall slightly better psychometric properties, so it can be an alternative measure to the SDMT for assessing processing speed in patients with schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
Szardenings, Carsten; Kuhn, Jörg-Tobias; Ranger, Jochen; Holling, Heinz
2018-01-01
The respective roles of the approximate number system (ANS) and an access deficit (AD) in developmental dyscalculia (DD) are not well-known. Most studies rely on response times (RTs) or accuracy (error rates) separately. We analyzed the results of two samples of elementary school children in symbolic magnitude comparison (MC) and non-symbolic MC using a diffusion model. This approach uses the joint distribution of both RTs and accuracy in order to synthesize measures closer to ability and response caution or response conservatism. The latter can be understood in the context of the speed-accuracy tradeoff: It expresses how much a subject trades in speed for improved accuracy. We found significant effects of DD on both ability (negative) and response caution (positive) in MC tasks and a negative interaction of DD with symbolic task material on ability. These results support that DD subjects suffer from both an impaired ANS and an AD and in particular support that slower RTs of children with DD are indeed related to impaired processing of numerical information. An interaction effect of symbolic task material and DD (low mathematical ability) on response caution could not be refuted. However, in a sample more representative of the general population we found a negative association of mathematical ability and response caution in symbolic but not in non-symbolic task material. The observed differences in response behavior highlight the importance of accounting for response caution in the analysis of MC tasks. The results as a whole present a good example of the benefits of a diffusion model analysis. PMID:29379450
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation, P(sub b) approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
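A small simulation makes the P(sub b) ≈ (d(sub H)/N)P(sub s) approximation concrete. A (7,4) Hamming code with exhaustive maximum-likelihood decoding over BPSK/AWGN is used here purely for illustration; the paper's result is derived for randomly generated codes, so only rough agreement should be expected:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Systematic (7,4) Hamming code; minimum distance d_H = 3.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]])
msgs = np.array(list(product([0, 1], repeat=4)))
book = msgs @ G % 2                         # all 16 codewords
S = 1.0 - 2.0 * book                        # BPSK images (equal energy)

ebn0_db, n_words, rate = 4.0, 200000, 4 / 7
sigma = np.sqrt(1 / (2 * rate * 10 ** (ebn0_db / 10)))

idx = rng.integers(0, 16, n_words)
r = S[idx] + sigma * rng.standard_normal((n_words, 7))
dec = np.argmax(r @ S.T, axis=1)            # ML = maximum correlation

ps = np.mean(dec != idx)                    # block error probability
pb = np.mean(msgs[dec] != msgs[idx])        # information-bit error probability
print(f"P_s = {ps:.4f},  P_b = {pb:.4f},  (d_H/N)*P_s = {3/7*ps:.4f}")
```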
U.S. Maternally Linked Birth Records May Be Biased for Hispanics and Other Population Groups
LEISS, JACK K.; GILES, DENISE; SULLIVAN, KRISTIN M.; MATHEWS, RAHEL; SENTELLE, GLENDA; TOMASHEK, KAY M.
2010-01-01
Purpose To advance understanding of linkage error in U.S. maternally linked datasets, and how the error may affect results of studies based on the linked data. Methods North Carolina birth and fetal death records for 1988-1997 were maternally linked (n=1,030,029). The maternal set probability, defined as the probability that all records assigned to the same maternal set do in fact represent events to the same woman, was used to assess differential maternal linkage error across race/ethnic groups. Results Maternal set probabilities were lower for records specifying Asian or Hispanic race/ethnicity, suggesting greater maternal linkage error. The lower probabilities for Hispanics were concentrated in women of Mexican origin who were not born in the United States. Conclusions Differential maternal linkage error may be a source of bias in studies using U.S. maternally linked datasets to make comparisons between Hispanics and other groups or among Hispanic subgroups. Methods to quantify and adjust for this potential bias are needed. PMID:20006273
Erratum: Erratum to: "A higher-spin Chern-Simons theory of anyons"
NASA Astrophysics Data System (ADS)
Boulanger, N.; Sundell, P.; Valenzuela, M.
2017-09-01
In the published version there is an error in the affiliation of the author Per Sundell (the word "Andrés" written with an accent). The correct form, given in this erratum, is "Andres" without the accent. The affiliation under the symbol "b" should read: Departamento de Ciencias Físicas, Universidad Andres Bello, Santiago, Chile.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Campe, Amely; Schulz, Sophia; Bohnet, Willa
2016-01-01
Although equids have had to be tagged with a transponder since 2009, breeding associations in Germany disagree as to which method is best suited for identification (with or without hot iron branding). Therefore, the aim of this systematic literature review was to gain an overview of how effective identification is using transponders and hot iron branding and of which factors influence the success of identification. Existing literature showed that equids can be identified by means of transponders with a probability of 85-100%, whereas symbol brandings could be identified correctly in 78-89%, whole number brandings in 0-87% and single figures in 37-92% of the readings, respectively. The successful reading of microchips can be further optimised by a correctly operated implantation process and thorough training of the applying persons. Several factors can affect identification with a scanner. The removal of transponders for manipulation purposes is virtually impossible. Influences during the application of branding marks can hardly, if at all, be standardised, yet they substantially affect the subsequent readability. Therefore, identification by means of hot branding cannot be considered sufficiently reliable. Impaired identification quality can be partially compensated for during reading but not eliminated. Based on the existing studies it can be concluded that the transponder method is the best suited of the investigated methods for clearly identifying equids, being forgery-proof and permanent. It is not to be expected that applying hot branding in addition to microchips would appreciably improve the probability of identification.
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme is performed through the combination of frequency-domain symbol decision-aided estimation and ICI phase noise time-average approximation. An additional initial decision process with a suitable threshold is introduced in order to suppress decision error symbols. Our proposed ICI mitigation scheme proves effective in removing the ICI for a simulated CO-OFDM system with 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at a relatively wide laser linewidth and high OSNR.
Weyl calculus in QED I. The unitary group
NASA Astrophysics Data System (ADS)
Amour, L.; Lascar, R.; Nourrigat, J.
2017-01-01
In this work, we consider fixed 1/2 spin particles interacting with the quantized radiation field in the context of quantum electrodynamics. We investigate the time evolution operator in studying the reduced propagator (interaction picture). We first prove that this propagator belongs to the class of infinite dimensional Weyl pseudodifferential operators recently introduced in Amour et al. [J. Funct. Anal. 269(9), 2747-2812 (2015)] on Wiener spaces. We give a semiclassical expansion of the symbol of the reduced propagator up to any order with estimates on the remainder terms. Next, taking into account analyticity properties for the Weyl symbol of the reduced propagator, we derive estimates concerning transition probabilities between coherent states.
NASA Astrophysics Data System (ADS)
Laoupi, A.
The strong multi-symbolic archetype of the Pleiades functions as a worldwide astromythological system going back to the Upper Palaeolithic Era. The Greek version of the myth seems to embody a wide range of environmental symbolism, as it incorporates various information and very archaic elements about: a) the periodicity of the solstices and the equinoxes, b) the fluctuations in the biochemical structure of Earth's atmosphere related to the global hydro-climatic phenomenon of ENSO, c) probable past observations of the brightening of a star (nova) in the cluster of the Pleiades, d) the primordial elements of the mythological nucleus of the Atlantis legend and e) the remnants of a Palaeolithic 'proto-European' moon culture.
Macaque monkeys can learn token values from human models through vicarious reward.
Bevacqua, Sara; Cerasti, Erika; Falcone, Rossella; Cervelloni, Milena; Brunamonti, Emiliano; Ferraina, Stefano; Genovesio, Aldo
2013-01-01
Monkeys can learn the symbolic meaning of tokens, and exchange them to get a reward. Monkeys can also learn the symbolic value of a token by observing conspecifics, but it is not clear if they can learn passively by observing other actors, e.g., humans. To answer this question, we tested two monkeys in a token exchange paradigm in three experiments. Monkeys learned token values through observation of human models exchanging them. We used, after a phase of object familiarization, different sets of tokens. One token of each set was rewarded with a bit of apple. Other tokens had zero value (neutral tokens). Each token was presented only in one set. During the observation phase, monkeys watched the human model exchange tokens and watched them consume rewards (vicarious rewards). In the test phase, the monkeys were asked to exchange one of the tokens for food reward. Sets of three tokens were used in the first experiment and sets of two tokens were used in the second and third experiments. The valuable token was presented with different probabilities in the observation phase during the first and second experiments, in which the monkeys exchanged the valuable token more frequently than any of the neutral tokens. The third experiment examined the effect of unequal probabilities. Our results support the view that monkeys can learn from non-conspecific actors through vicarious reward, even in a symbolic task like the token-exchange task.
Possible Halo Depictions in the Prehistoric Rock Art of Utah
NASA Technical Reports Server (NTRS)
Sassen, Kenneth
1994-01-01
In western American rock art the concentric circle symbol, which is widely regarded as a sun symbol, is ubiquitous. We provide evidence from Archaic and Fremont Indian rock art sites in northwestern Utah that at least one depiction was motivated by an observation of a complex halo display. Cirrus cloud optical displays are linked in both folklore and meteorology to precipitation-producing weather situations, which, in combination with an abundance of weather-related rock art symbolism, indicate that such images reflected the ceremonial concerns of the indigenous cultures for ensuring adequate precipitation. As has been shown to be the case with rock art rainbows, conventionalization of the halo image may have resulted in simple patterns that lacked recognizable details of atmospheric optical phenomena. However, in one case in which an Archaic-style petroglyph (probably 1500 yr or more old) satisfactorily reproduced a complicated halo display that contained parhelia and tangent arcs, sufficient geometric information is rendered to indicate a solar elevation angle of approx. 40 deg. at the time of observation.
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia Schneider; Fuchs, Lynn S.
2015-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of…
Readability of New Aviation Chart Symbology in Day and NVG Reading Conditions.
Wagstaff, Anthony S; Larsen, Terje
2017-11-01
The Swedish Air Force (SwAF) conducted a study in 2010 to harmonize portrayal of aeronautical information (AI) on SwAF charts with NATO standards. A mismatch was found concerning vertical obstructions (VO). Norway regarded Sweden's existing symbology as a way to solve the problem of overcrowded air charts and the two countries started to cooperate. The result of this development was a new set of symbology for obstacles. The aim of this study was to test the readability of the new obstacle and power line symbols compared to the old symbols. We also wished to assess the readability in NVG illumination conditions, particularly regarding the new symbols compared to the old. In a randomized controlled study design, 21 volunteer military pilots from the Norwegian and Swedish Air Force were asked to perform tracking and chart-reading tests. The chart-reading test scored both errors and readability using a predefined score index. Subjective scoring was also done at the end of the test day. Overall response time improved by approximately 20% using the new symbology and error rate decreased by approximately 30-90% where statistically significant differences were found. The tracking test turned out to be too difficult due to several factors in the experimental design. Even though some caution should be shown in drawing conclusions from this study, the general trends seem well supported with the number of aircrew subjects we were able to recruit. Wagstaff AS, Larsen T. Readability of new aviation chart symbology in day and NVG reading conditions. Aerosp Med Hum Perform. 2017; 88(11):978-984.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
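The integral described, a bivariate Gaussian location density integrated over a circle about an arbitrary point, is straightforward to approximate by Monte Carlo. A minimal sketch; all numbers below (ellipse axes, offset, the key radius) are hypothetical placeholders, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(5)

def prob_within_radius(stroke_xy, cov, facility_xy, radius, n=1_000_000):
    """Monte Carlo estimate of the probability that the true stroke location
    lies within `radius` of a facility, given the reported location
    `stroke_xy` and the 2x2 covariance implied by its error ellipse."""
    pts = rng.multivariate_normal(stroke_xy, cov, size=n)
    d2 = ((pts - facility_xy) ** 2).sum(axis=1)
    return (d2 <= radius ** 2).mean()

# Hypothetical numbers: stroke reported 600 m east of a pad, with a
# 500 m x 200 m one-sigma error ellipse aligned with the axes, and a
# 926 m (0.5 n mi) key radius around the facility.
cov = np.diag([500.0 ** 2, 200.0 ** 2])
print(prob_within_radius(np.array([600.0, 0.0]), cov,
                         np.array([0.0, 0.0]), 926.0))
```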
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection, in an entanglement-breaking environment, whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture, based on sum-frequency generation (SFG) and feedforward (FF) processing, for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic (detection probability versus false-alarm probability) for optimum QI target detection under the Neyman-Pearson criterion.
A comparison of frame synchronization methods. [Deep Space Network
NASA Technical Reports Server (NTRS)
Swanson, L.
1982-01-01
Different methods are considered for frame synchronization of a concatenated block code/Viterbi link. Synchronization after Viterbi decoding, synchronization before Viterbi decoding based on soft-quantized channel symbols, and synchronization before Viterbi decoding based on hard-quantized channel symbols are all compared. For each scheme, the probability, under certain conditions, of true detection of sync within four 10,000-bit frames is tabulated.
Shooting Free Throws, Probability, and the Golden Ratio
ERIC Educational Resources Information Center
Goodman, Terry
2010-01-01
Part of the power of algebra is that it provides students with tools that they can use to model a variety of problems and applications. Such modeling requires them to understand patterns and choose from a variety of representations--numeric, graphical, symbolic--to construct a model that accurately reflects the relationships found in the original…
Symbolic Model-Based SAR Feature Analysis and Change Detection
1992-02-01
normalization factor described above in the Dempster rule of combination. Another problem is that in certain cases D-S overweights prior probabilities compared...Beaufort Sea data set and the Peru data set. The Phoenix results are described in section 6.2.2 including a partial trace of the operation of the
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
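For a concrete instance of the bound being discussed, the random coding exponent E_r(R) = max over rho in [0,1] of [E_0(rho) - rho*R] can be evaluated for a binary symmetric channel with equiprobable inputs, giving the ensemble-average bound P_e <= 2^(-N*E_r(R)). A short sketch (the BSC is chosen here only as a convenient example channel):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def E0(rho, p):
    """Gallager function for a BSC(p) with equiprobable inputs (base 2):
    E0(rho) = rho - (1+rho) * log2(p^(1/(1+rho)) + (1-p)^(1/(1+rho)))."""
    s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
    return rho - (1 + rho) * np.log2(s)

def random_coding_exponent(R, p):
    """E_r(R) = max_{0 <= rho <= 1} [E0(rho) - rho*R]; the ensemble-average
    block error probability satisfies P_e <= 2^(-N * E_r(R))."""
    res = minimize_scalar(lambda r: -(E0(r, p) - r * R),
                          bounds=(0.0, 1.0), method="bounded")
    return -res.fun

p = 0.05                                  # BSC crossover probability
for R in (0.1, 0.3, 0.5):                 # rates below capacity ~ 0.714
    print(f"R={R}: E_r = {random_coding_exponent(R, p):.4f}")
```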
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... Household Economic Statistics Division at (301) 763-3243. Under the advice of the Census Bureau, HHS...; and (2) Sampling Error, which consists of the error that arises from the use of probability sampling to create the sample...
NASA Astrophysics Data System (ADS)
Trung, Ha Duyen
2017-12-01
In this paper, the end-to-end performance of a free-space optical (FSO) communication system combining Amplify-and-Forward (AF)-assisted, fixed-gain relaying with subcarrier quadrature amplitude modulation (SC-QAM) is studied over weak atmospheric turbulence channels modeled by the log-normal distribution with pointing error impairments. More specifically, unlike previous studies of AF relaying FSO communication systems that neglect pointing errors, the pointing error effect is studied by taking into account the influence of beamwidth, aperture size, and jitter variance. In addition, a combination of these models is used to analyze the joint effect of atmospheric turbulence and pointing errors on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing errors on the performance of AF relaying FSO/SC-QAM systems and how proper values of aperture size and beamwidth can be chosen to improve the performance of such systems. Some analytical results are confirmed by Monte-Carlo simulations.
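A Monte Carlo sketch of the ASER computation for subcarrier 16-QAM over a log-normal turbulence channel with Rayleigh pointing jitter. The channel parameterization (log-amplitude deviation, normalized beam radius, jitter scale) is illustrative and deliberately simplified (single hop, peak pointing gain set to 1), not a reproduction of the paper's closed-form relaying analysis:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def qam_ser(snr, M=16):
    """Exact SER of square M-QAM at symbol SNR `snr`."""
    a = 4 * (1 - 1 / np.sqrt(M))
    q = norm.sf(np.sqrt(3 * snr / (M - 1)))
    return a * q - (a ** 2 / 4) * q ** 2

def aser(avg_snr_db, sigma_x=0.2, w_zr=6.0, sigma_s=1.5, n=500_000):
    """Average SER over h = h_a * h_p: log-normal turbulence h_a (normalized
    so E[h_a] = 1) times a pointing loss h_p = exp(-2 r^2 / w_zr^2) with
    Rayleigh-distributed radial displacement r; electrical SNR ~ h^2."""
    h_a = np.exp(2 * sigma_x * rng.standard_normal(n) - 2 * sigma_x ** 2)
    radial = rng.rayleigh(sigma_s, n)
    h_p = np.exp(-2 * radial ** 2 / w_zr ** 2)
    snr = 10 ** (avg_snr_db / 10) * (h_a * h_p) ** 2
    return qam_ser(snr).mean()

for s in (15, 20, 25, 30):
    print(f"avg SNR {s} dB: ASER = {aser(s):.3e}")
```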
Bix, Laura; Seo, Do Chan; Ladoni, Moslem; Brunk, Eric; Becker, Mark W
2016-01-01
Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Measure the effect of graphic elements (boxing information, grouping information, symbol use, and color coding) to optimize a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials were comprised of two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label along a given criterion (e.g., latex containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and through a targeted e-mail to AST members. Symbol presence, color coding, and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice (P<0.0001; LSM 97.3%; UCL 98.4%, LCL 95.5%), as compared to the two labels we created based on commercial designs (LSM 92.0%; UCL 94.7%, LCL 87.9% and LSM 89.8%; UCL 93.0%, LCL 85.3%), and regarding time to selection. Our study provides data regarding design factors, namely color coding, symbol use, and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ_1, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ_1, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_{j≠k} C(σ_j, σ_k).
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
Multiple trellis coded modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)
1990-01-01
A technique for designing trellis codes to minimize bit error performance for a fading channel. The invention provides a criterion which may be used in the design of such codes which is significantly different from that used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s.sub.i intermediate outputs each, where the summation of all s.sub.i is equal to s and k is equal to at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
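A toy illustration of steps (b)-(d): s coded bits are split into k groups and each group is mapped by its own modulation scheme. The 2-bit QPSK / 3-bit 8-PSK split below is an arbitrary example for concreteness, not the constellation pairing claimed in the patent:

```python
import numpy as np

def psk_map(bits, M):
    """Natural-order PSK mapping of log2(M) bits to one complex symbol."""
    idx = int("".join(map(str, bits)), 2)
    return np.exp(2j * np.pi * idx / M)

def multiple_tcm_map(coded_bits):
    """Map s = 5 intermediate outputs into k = 2 symbols: the first group of
    s_1 = 2 bits goes to QPSK, the second group of s_2 = 3 bits to 8-PSK
    (an illustrative split; s_1 + s_2 = s as required)."""
    return psk_map(coded_bits[:2], 4), psk_map(coded_bits[2:], 8)

print(multiple_tcm_map([1, 0, 1, 1, 0]))   # two output symbols per 5 coded bits
```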
Thermodynamic framework for information in nanoscale systems with memory
NASA Astrophysics Data System (ADS)
Arias-Gonzalez, J. Ricardo
2017-11-01
Information is represented by linear strings of symbols with memory that carry errors as a result of their stochastic nature. Proofreading and edition are assumed to improve certainty although such processes may not be effective. Here, we develop a thermodynamic theory for material chains made up of nanoscopic subunits with symbolic meaning in the presence of memory. This framework is based on the characterization of single sequences of symbols constructed under a protocol and is used to derive the behavior of ensembles of sequences similarly constructed. We then analyze the role of proofreading and edition in the presence of memory finding conditions to make revision an effective process, namely, to decrease the entropy of the chain. Finally, we apply our formalism to DNA replication and RNA transcription finding that Watson and Crick hybridization energies with which nucleotides are branched to the template strand during the copying process are optimal to regulate the fidelity in proofreading. These results are important in applications of information theory to a variety of solid-state physical systems and other biomolecular processes.
The effects of speech output technology in the learning of graphic symbols.
Schlosser, R W; Belfiore, P J; Nigam, R; Blischak, D; Hetzroni, O
1995-01-01
The effects of auditory stimuli in the form of synthetic speech output on the learning of graphic symbols were evaluated. Three adults with severe to profound mental retardation and communication impairments were taught to point to lexigrams when presented with words under two conditions. In the first condition, participants used a voice output communication aid to receive synthetic speech as antecedent and consequent stimuli. In the second condition, with a nonelectronic communication board, participants did not receive synthetic speech. A parallel treatments design was used to evaluate the effects of the synthetic speech output as an added component of the augmentative and alternative communication system. The 3 participants reached criterion when provided with the auditory stimuli. Although 2 participants also reached criterion when not provided with the auditory stimuli, the addition of auditory stimuli resulted in more efficient learning and a decreased error rate. Maintenance results, however, indicated no differences between conditions. Findings suggest that auditory stimuli in the form of synthetic speech contribute to the efficient acquisition of graphic communication symbols. PMID:14743828
Systematic Instruction in Phoneme-Grapheme Correspondence for Students with Reading Disabilities
ERIC Educational Resources Information Center
Earle, Gentry A.; Sayeski, Kristin L.
2017-01-01
Letter-sound knowledge is a strong predictor of a student's ability to decode words. Approximately 50% of English words can be decoded by following a sound-symbol correspondence rule alone and an additional 36% are spelled with only one error. Many students with reading disabilities or who struggle to learn to read have difficulty with phonology,…
Model Checker for Java Programs
NASA Technical Reports Server (NTRS)
Visser, Willem
2007-01-01
Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface to allow the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker, because it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs of less than 10 kLOC, but has been successfully applied to finding errors in concurrent programs up to 100 kLOC. When an error is found, a trace from the initial state to the error is produced to guide the debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (supports Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state-explosion, as well as dynamic partial order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible as it allows for the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.
Spatial Lattice Modulation for MIMO Systems
NASA Astrophysics Data System (ADS)
Choi, Jiwook; Nam, Yunseo; Lee, Namyoon
2018-06-01
This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit the spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and the average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than existing methods.
Higher-order differential phase shift keyed modulation
NASA Astrophysics Data System (ADS)
Vanalphen, Deborah K.; Lindsey, William C.
1994-02-01
Advanced modulation/demodulation techniques which are robust in the presence of phase and frequency uncertainties continue to be of interest to communication engineers. We are particularly interested in techniques which accommodate slow channel phase and frequency variations with minimal performance degradation and which alleviate the need for phase and frequency tracking loops in the receiver. We investigate the performance sensitivity to frequency offsets of a modulation technique known as binary Double Differential Phase Shift Keying (DDPSK) and compare it to that of classical binary Differential Phase Shift Keying (DPSK). We also generalize our analytical results to include nth-order, M-ary DPSK. The DDPSK (n = 2) technique was first introduced in the Russian literature circa 1972 and was studied more thoroughly in the late 1970's by Pent and Okunev. Here, we present an expression for the symbol error probability that is easy to derive and to evaluate numerically. We also present graphical results that establish when, as a function of signal energy-to-noise ratio and normalized frequency offset, binary DDPSK is preferable to binary DPSK with respect to performance in additive white Gaussian noise. Finally, we provide insight into the optimum receiver from a detection theory viewpoint.
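The DPSK baseline referenced here is easy to check numerically. The following sketch simulates binary DPSK with differential detection in AWGN and compares the measured bit error rate against the classical closed form P_b = (1/2)exp(-E_b/N_0); it does not reproduce the paper's DDPSK analysis or its frequency-offset model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dpsk_ber(ebn0_db, n_bits=200_000):
    """Monte Carlo BER of binary DPSK with differential detection in AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    phases = np.cumsum(np.pi * bits)        # differential encoding
    s = np.exp(1j * phases)
    sigma = np.sqrt(1 / (2 * ebn0))         # unit symbol energy
    r = s + sigma * (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits))
    d = r[1:] * np.conj(r[:-1])             # compare successive received symbols
    est = (d.real < 0).astype(int)
    return np.mean(est != bits[1:])

for snr_db in (5, 7, 9):
    # simulated BER vs closed-form 0.5*exp(-Eb/N0)
    print(snr_db, dpsk_ber(snr_db), 0.5 * np.exp(-10 ** (snr_db / 10)))
```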
Diversity Performance Analysis on Multiple HAP Networks.
Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue
2015-06-30
One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques.
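As an illustration of the kind of average-symbol-error-rate computation described, here is a minimal Monte Carlo sketch for BPSK over a single-branch Rician channel without shadowing; the Rician factor K and SNR values are made-up, and the paper's shadowed-Rician model and V-MIMO combining are not reproduced.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

def bpsk_aser_rician(snr_db, K=5.0, n=200_000):
    """Average BPSK error rate over a Rician fading channel with factor K,
    estimated by Monte Carlo (channel power normalized so E[|h|^2] = 1)."""
    snr = 10 ** (snr_db / 10)
    los = np.sqrt(K / (K + 1))            # line-of-sight component
    scat = np.sqrt(1 / (2 * (K + 1)))     # scattered component, per real dimension
    h = los + scat * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    gain = np.abs(h) ** 2
    # conditional error probability Q(sqrt(2*snr*gain)) = 0.5*erfc(sqrt(snr*gain))
    return np.mean(0.5 * erfc(np.sqrt(snr * gain)))

for snr_db in (0, 5, 10):
    print(snr_db, bpsk_aser_rician(snr_db))
```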
Application of grammar-based codes for lossless compression of digital mammograms
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah
2006-01-01
A newly developed grammar-based lossless source coding theory and its implementation were proposed by Yang and Kieffer in 1999 and 2000, respectively. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar G variables. For the first issue, we discovered a feature that can simplify the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes (Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding) in compression ratio, and has error tolerance capabilities similar to LZW coding under similar circumstances.
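For contrast with the grammar code, one of the baselines named here, LZW, fits in a few lines. This is a generic sketch, not the paper's implementation, and the flat 12-bit code-size accounting in the ratio is a simplifying assumption.

```python
def lzw_encode(data: bytes) -> list[int]:
    """Plain LZW: emit dictionary indices for the longest known prefixes."""
    table = {bytes([i]): i for i in range(256)}
    out, w = [], b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)   # grow the string table
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

text = b"ababababab" * 50
codes = lzw_encode(text)
# crude compression ratio: 8-bit input symbols vs assumed 12-bit output codes
print(len(text) * 8 / (len(codes) * 12))
```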
Simpler Alternative to an Optimum FQPSK-B Viterbi Receiver
NASA Technical Reports Server (NTRS)
Lee, Dennis; Simon, Marvin; Yan, Tsun-Yee
2003-01-01
A reduced-complexity alternative to an optimum FQPSK-B Viterbi receiver has been invented. As described, the reduction in complexity is achieved at the cost of only a small reduction in power performance [performance expressed in terms of a bit-energy-to-noise-energy ratio (Eb/N0) for a given bit-error rate (BER)]. The term "FQPSK-B" denotes a baseband-filtered version of Feher quadrature-phase-shift keying, which is a patented, bandwidth-efficient phase-modulation scheme named after its inventor. Heretofore, commercial FQPSK-B receivers have performed symbol-by-symbol detection, in each case using a detection filter (either the proprietary FQPSK-B filter for better BER performance, or a simple integrate-and-dump filter with degraded performance) and a sample-and-hold circuit.
Receiver IQ mismatch estimation in PDM CO-OFDM system using training symbol
NASA Astrophysics Data System (ADS)
Peng, Dandan; Ma, Xiurong; Yao, Xin; Zhang, Haoyuan
2017-07-01
Receiver in-phase/quadrature (IQ) mismatch is hard to mitigate at the receiver with conventional methods in a polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) system. In this paper, a novel training symbol structure is proposed to estimate IQ mismatch and channel distortion. By combining this structure with the Gram-Schmidt orthogonalization procedure (GSOP) algorithm, a lower bit error rate (BER) is obtained. Meanwhile, based on this structure, an estimation method is derived in the frequency domain which estimates IQ mismatch and channel distortion independently and noticeably improves system performance. Numerical simulation shows that the two proposed methods outperform the reference method at 100 Gb/s after 480 km of fiber transmission. The computational complexity is also analyzed.
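The GSOP step mentioned above is simple enough to sketch. Below is a generic Gram-Schmidt orthogonalization of received I/Q rails with made-up imbalance parameters; it illustrates the blind compensation idea, not the paper's training-symbol estimator.

```python
import numpy as np

def gsop(i_sig, q_sig):
    """Gram-Schmidt orthogonalization of received I/Q rails."""
    i0 = i_sig / np.sqrt(np.mean(i_sig ** 2))                        # normalize I
    q1 = q_sig - np.mean(i_sig * q_sig) / np.mean(i_sig ** 2) * i_sig
    q0 = q1 / np.sqrt(np.mean(q1 ** 2))          # remove I leakage, normalize Q
    return i0, q0

# toy test: QPSK-like rails with assumed amplitude and phase imbalance
rng = np.random.default_rng(2)
i_tx = rng.choice([-1.0, 1.0], 10_000)
q_tx = rng.choice([-1.0, 1.0], 10_000)
eps, phi = 0.2, 0.1                              # made-up imbalance parameters
i_rx = (1 + eps) * i_tx
q_rx = np.sin(phi) * i_tx + np.cos(phi) * q_tx
i_c, q_c = gsop(i_rx, q_rx)
print(np.mean(i_c * q_c))                        # ~0: rails re-orthogonalized
```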
Intransparent German number words complicate transcoding - a translingual comparison with Japanese.
Moeller, Korbinian; Zuber, Julia; Olsen, Naoko; Nuerk, Hans-Christoph; Willmes, Klaus
2015-01-01
Superior early numerical competencies of children in several Asian countries have (amongst others) been attributed to the higher transparency of their number word systems. Here, we directly investigated this claim by evaluating whether Japanese children's transcoding performance when writing numbers to dictation (e.g., "twenty five" → 25) was less error prone than that of German-speaking children - both in general as well as when considering language-specific attributes of the German number word system such as the inversion property, in particular. In line with this hypothesis we observed that German-speaking children committed more transcoding errors in general than their Japanese peers. Moreover, their error pattern reflected the specific inversion intransparency of the German number-word system. Inversion errors in transcoding represented the most prominent error category in German-speaking children, but were almost absent in Japanese-speaking children. We conclude that the less transparent German number-word system complicates the acquisition of the correspondence between symbolic Arabic numbers and their respective verbal number words.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect the damage caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and that the error is then detected in a random amount of time after its occurrence. As a remedy for this problem, a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between the occurrence of an error and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
Tutorial on Reed-Solomon error correction coding
NASA Technical Reports Server (NTRS)
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
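As a companion to the tutorial's (15, 9) example, the following sketch builds GF(16) log/antilog tables from the primitive polynomial x^4 + x + 1 and multiplies two field elements; that polynomial choice matches common practice, but the code is illustrative and is not the tutorial's own listing.

```python
# GF(16) arithmetic, the foundation of a (15, 9) Reed-Solomon code.
PRIM = 0b10011                    # x^4 + x + 1
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):               # powers of the primitive element alpha
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b10000:               # reduce modulo the primitive polynomial
        x ^= PRIM
for i in range(15, 30):           # duplicate for index wraparound
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    """Multiply in GF(16): add discrete logs mod 15."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_add(a, b):
    """Addition in GF(2^m) is bitwise XOR."""
    return a ^ b

print(gf_mul(0b0110, 0b0111))     # alpha^5 * alpha^10 = alpha^15 = 1
```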
Wagner, Barry T; Jackson, Heather M
2006-02-01
This study examined the cognitive demands of 2 selection techniques in augmentative and alternative communication (AAC), direct selection, and visual linear scanning, by determining the memory retrieval abilities of typically developing children when presented with fixed communication displays. One hundred twenty typical children from kindergarten, 1st, and 3rd grades were randomly assigned to either a direct selection or visual linear scanning group. Memory retrieval was assessed through word span using Picture Communication Symbols (PCSs). Participants were presented various numbers and arrays of PCSs and asked to retrieve them by placing identical graphic symbols on fixed communication displays with grid layouts. The results revealed that participants were able to retrieve more PCSs during direct selection than scanning. Additionally, 3rd-grade children retrieved more PCSs than kindergarten and 1st-grade children. An analysis on the type of errors during retrieval indicated that children were more successful at retrieving the correct PCSs than the designated location of those symbols on fixed communication displays. AAC practitioners should consider using direct selection over scanning whenever possible and account for anticipatory monitoring and pulses when scanning is used in the service delivery of children with little or no functional speech. Also, researchers should continue to investigate AAC selection techniques in relationship to working memory resources.
Symmetry and the Golden Ratio in the Analysis of a Regular Pentagon
ERIC Educational Resources Information Center
Sparavigna, Amelia Carolina; Baldi, Mauro Maria
2017-01-01
The regular pentagon had a symbolic meaning in the Pythagorean and Platonic philosophies and a subsequent important role in Western thought, appearing also in arts and architecture. A property of regular pentagons, which was probably discovered by the Pythagoreans, is that the ratio between the diagonal and the side of these pentagons is equal to…
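One standard route to the property mentioned, via Ptolemy's theorem, is sketched below; the article may argue differently.

```latex
% Ptolemy's theorem on the cyclic quadrilateral ABCE of a regular pentagon
% ABCDE (side s, diagonal d): AC * BE = AB * CE + BC * EA, i.e. d*d = s*d + s*s.
\[
  d^2 = sd + s^2
  \quad\Longrightarrow\quad
  \left(\frac{d}{s}\right)^2 - \frac{d}{s} - 1 = 0
  \quad\Longrightarrow\quad
  \frac{d}{s} = \frac{1+\sqrt{5}}{2} = \varphi .
\]
```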
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1973-01-01
The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.
Decision feedback equalizer for holographic data storage.
Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo
2018-05-20
Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
Continuous quantum measurements and the action uncertainty principle
NASA Astrophysics Data System (ADS)
Mensky, Michael B.
1992-09-01
The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of a gravitational field. A stronger (and more widely applicable) form of the AUP (for ideal measurements performed in the quantum regime) is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the path [q] stands for the measurement output and [Δq] for the measurement error. It can also be presented in the symbolic form Δ(Equation) × Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.
Performance of cellular frequency-hopped spread-spectrum radio networks
NASA Astrophysics Data System (ADS)
Gluck, Jeffrey W.; Geraniotis, Evaggelos
1989-10-01
Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
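The statistical-significance question in the last sentence can be illustrated with a standard Clopper-Pearson interval on an error probability estimated from a handful of observed decoding errors; this is a generic textbook method, not necessarily the one developed in the report.

```python
from scipy.stats import beta

def ber_interval(errors, trials, conf=0.95):
    """Clopper-Pearson confidence interval for an error probability
    estimated from very few observed decoding errors."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, errors, trials - errors + 1) if errors > 0 else 0.0
    hi = beta.ppf(1 - a, errors + 1, trials - errors) if errors < trials else 1.0
    return lo, hi

# e.g. 3 decoding errors observed in 10 million simulated bits
print(ber_interval(3, 10_000_000))
```

The interval is strikingly wide when only a few errors are seen, which is exactly why so few simulated decoding errors make error-probability claims hard to support.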
ERIC Educational Resources Information Center
Naah, Basil M.
2012-01-01
Students who harbor misconceptions often find chemistry difficult to understand. To improve teaching about the dissolving process, first semester introductory chemistry students were asked to complete a free-response questionnaire on writing balanced equations for dissolving ionic compounds in water. To corroborate errors and misconceptions…
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
Impaired Feedback Processing for Symbolic Reward in Individuals with Internet Game Overuse
Kim, Jinhee; Kim, Hackjin; Kang, Eunjoo
2017-01-01
Reward processing, which plays a critical role in adaptive behavior, is impaired in addiction disorders, which are accompanied by functional abnormalities in brain reward circuits. Internet gaming disorder, like substance addiction, is thought to be associated with impaired reward processing, but little is known about how it affects learning, especially when feedback is conveyed by less-salient motivational events. Here, using both monetary (±500 KRW) and symbolic (Chinese characters "right" or "wrong") rewards and penalties, we investigated whether behavioral performance and feedback-related neural responses are altered in an Internet game overuse (IGO) group. Using functional MRI, brain responses to these two types of reward/penalty feedback were compared between young males with problems of IGO (IGOs, n = 18, mean age = 22.2 ± 2.0 years) and age-matched control subjects (Controls, n = 20, mean age = 21.2 ± 2.1) during a visuomotor association task in which associations were learned between English letters and one of four responses. No group difference was found in the adjustment of error responses following a penalty or in brain responses to penalty, for either monetary or symbolic penalties. The IGO individuals, however, were more likely to fail to choose the response previously reinforced by symbolic (but not monetary) reward. A whole-brain two-way ANOVA for reward revealed reduced activations in the IGO group in the rostral anterior cingulate cortex/ventromedial prefrontal cortex (vmPFC) in response to both reward types, suggesting impaired reward processing. However, the responses to reward in the inferior parietal region and medial orbitofrontal cortex/vmPFC were affected by the type of reward in the IGO group. Unlike the control group, in the IGO group the reward response was reduced only for symbolic reward, suggesting lower attentional and value processing specific to symbolic reward. Furthermore, the more severe the Internet gaming overuse symptoms in the IGO group, the greater the activation of the ventral striatum for monetary relative to symbolic reward. These findings suggest that IGO is associated with a bias toward motivationally salient reward, which would lead to poor goal-directed behavior in everyday life. PMID:29051739
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
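A generic Wald SPRT loop is easy to sketch. The thresholds below follow Wald's classical approximations, and the per-update log-likelihood ratios are made-up numbers rather than the collision-probability form derived in the paper.

```python
import math

def wald_sprt(log_lrs, alpha=1e-3, beta=1e-3):
    """Accumulate log-likelihood ratios and stop at Wald's thresholds.
    Returns 'H1' (e.g. act on collision risk), 'H0', or 'continue'."""
    upper = math.log((1 - beta) / alpha)   # accept H1
    lower = math.log(beta / (1 - alpha))   # accept H0
    s = 0.0
    for llr in log_lrs:
        s += llr
        if s >= upper:
            return "H1"
        if s <= lower:
            return "H0"
    return "continue"

# toy stream of per-update log-likelihood ratios (hypothetical values)
print(wald_sprt([0.4, 0.9, 1.2, 2.1, 1.8, 1.5]))
```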
Eigenpairs of Toeplitz and Disordered Toeplitz Matrices with a Fisher-Hartwig Symbol
NASA Astrophysics Data System (ADS)
Movassagh, Ramis; Kadanoff, Leo P.
2017-05-01
Toeplitz matrices have entries that are constant along diagonals. They model directed transport, are at the heart of correlation function calculations of the two-dimensional Ising model, and have applications in quantum information science. We derive their eigenvalues and eigenvectors when the symbol is singular Fisher-Hartwig. We then add diagonal disorder and study the resulting eigenpairs. We find that there is a "bulk" behavior that is well captured by second-order perturbation theory of non-Hermitian matrices. The non-perturbative behavior is classified into two classes: Runaways type I leave the complex-valued spectrum and become completely real because of eigenvalue attraction. Runaways type II leave the bulk and move very rapidly in response to perturbations. These have high condition numbers and can be predicted. Localization of the eigenvectors is then quantified using entropies and inverse participation ratios. Eigenvectors corresponding to Runaways type II are most localized (i.e., super-exponential), whereas Runaways type I are less localized than their unperturbed counterparts and have most of their probability mass in the interior with algebraic decays. The results are corroborated by applying free probability theory and various other supporting numerical studies.
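A toy version of the disorder experiment is easy to set up numerically. The sketch below uses a generic banded (tridiagonal) non-Hermitian Toeplitz matrix rather than the paper's singular Fisher-Hartwig symbol, and adds Gaussian diagonal disorder of an assumed strength g.

```python
import numpy as np
from scipy.linalg import toeplitz

n = 200
# non-Hermitian Toeplitz matrix: entries constant along diagonals
col = np.zeros(n); col[1] = 1.0     # one subdiagonal
row = np.zeros(n); row[1] = 0.25    # one (unequal) superdiagonal
T = toeplitz(col, row)

g = 0.1                             # assumed disorder strength
D = np.diag(g * np.random.default_rng(3).standard_normal(n))

ev_clean = np.linalg.eigvals(T)
ev_disordered = np.linalg.eigvals(T + D)   # look for "runaway" eigenvalues
print(ev_clean[:3], ev_disordered[:3])
```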
The transformation of aerodynamic stability derivatives by symbolic mathematical computation
NASA Technical Reports Server (NTRS)
Howard, J. C.
1975-01-01
The formulation of mathematical models of aeronautical systems for simulation or other purposes, involves the transformation of aerodynamic stability derivatives. It is shown that these derivatives transform like the components of a second order tensor having one index of covariance and one index of contravariance. Moreover, due to the equivalence of covariant and contravariant transformations in orthogonal Cartesian systems of coordinates, the transformations can be treated as doubly covariant or doubly contravariant, if this simplifies the formulation. It is shown that the tensor properties of these derivatives can be used to facilitate their transformation by symbolic mathematical computation, and the use of digital computers equipped with formula manipulation compilers. When the tensor transformations are mechanised in the manner described, man-hours are saved and the errors to which human operators are prone can be avoided.
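The doubly covariant transformation rule can be mimicked with a modern computer algebra system, in the spirit (though not the implementation) of the paper's formula-manipulation approach; the 2x2 size and rotation angle below are illustrative assumptions.

```python
import sympy as sp

# A doubly covariant second-order tensor S transforms as S' = A S A^T
# under the orthogonal rotation A.
psi = sp.symbols('psi')
A = sp.Matrix([[sp.cos(psi), sp.sin(psi)],
               [-sp.sin(psi), sp.cos(psi)]])
S = sp.Matrix(2, 2, sp.symbols('S11 S12 S21 S22'))  # stability derivatives

S_prime = sp.simplify(A * S * A.T)   # transformed derivatives, simplified symbolically
sp.pprint(S_prime)
```

Because the transformation is mechanized symbolically, the tedious hand algebra (and its attendant errors) that the abstract mentions is avoided.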
Joint symbolic dynamic analysis of cardiorespiratory interactions in patients on weaning trials.
Caminal, P; Giraldo, B; Zabaleta, H; Vallverdu, M; Benito, S; Ballesteros, D; Lopez-Rodriguez, L; Esteban, A; Baumert, M; Voss, A
2005-01-01
Assessing autonomic control provides information about patho-physiological imbalances. Measures of variability of the cardiac interbeat duration RR(n) and the variability of the breath duration T
Symbolic feature detection for image understanding
NASA Astrophysics Data System (ADS)
Aslan, Sinem; Akgül, Ceyhun Burak; Sankur, Bülent
2014-03-01
In this study we propose a model-driven codebook generation method used to assign probability scores to pixels in order to represent the underlying local shapes they reside in. In the first version of the symbol library we limited ourselves to photometric and similarity transformations applied to eight prototypical shapes (flat plateau, ramp, valley, ridge, and circular and elliptic pits and hills) and used a randomized decision forest as the statistical classifier to compute the shape-class ambiguity of each pixel. We achieved 90% accuracy in the identification of known objects from alternate views; however, we could not outperform texture, global, and local shape methods, only the color-based method, in the recognition of unknown objects. We present a plan for future work to improve the proposed approach further.
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
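The two advocated statistics are one-liners over the empirical distribution of unsigned errors. The sketch below uses made-up error values; the threshold eta and confidence level are the user's choices, as in the paper's framing.

```python
import numpy as np

# toy set of signed model errors; take absolute values
errors = np.abs(np.array([-0.8, 0.3, 1.9, -2.4, 0.1, 0.7, 3.2, -0.2]))

def p_below(errors, eta):
    """Empirical probability that a new |error| falls below threshold eta."""
    return np.mean(errors < eta)

def q_high(errors, conf=0.95):
    """Error amplitude not exceeded with the chosen confidence (empirical quantile)."""
    return np.quantile(errors, conf)

print(p_below(errors, 1.0), q_high(errors))
```

With a real benchmark set, one would also resample (e.g., bootstrap) these statistics to attach the standard errors the abstract calls for.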
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
Experimental demonstration of an efficient hybrid equalizer for short-reach optical SSB systems
NASA Astrophysics Data System (ADS)
Zhu, Mingyue; Ying, Hao; Zhang, Jing; Yi, Xingwen; Qiu, Kun
2018-02-01
We propose an efficient enhanced hybrid equalizer combining feed-forward equalization (FFE) with a modified Volterra filter to mitigate linear and nonlinear interference in short-reach optical single-sideband (SSB) systems. The optical SSB signal is generated by a relatively low-cost dual-drive Mach-Zehnder modulator (DDMZM). The two driving signals are a pair of Hilbert signals with Nyquist pulse-shaped four-level pulse amplitude modulation (NPAM-4). After fiber transmission, neighboring received symbols are strongly correlated because of the pulse spreading in the time domain caused by chromatic dispersion (CD). At the receiver equalization stage, the FFE followed by the higher-order terms of a modified Volterra filter, which uses the forward and backward neighboring symbols to construct kernels with strong correlation, serves as an enhanced hybrid equalizer to mitigate the inter-symbol interference (ISI) and the nonlinear distortion due to the interaction of CD and square-law detection. We experimentally demonstrate transmission of a 40 Gb/s optical SSB NPAM-4 signal over 80 km of standard single-mode fiber (SSMF) with a bit-error rate (BER) of 7.59 × 10⁻⁴.
The Mental Number Line in Dyscalculia: Impaired Number Sense or Access From Symbolic Numbers?
Lafay, Anne; St-Pierre, Marie-Catherine; Macoir, Joël
Numbers may be manipulated and represented mentally over a compressible number line oriented from left to right. According to numerous studies, one of the primary reasons for dyscalculia is related to improper understanding of the mental number line. Children with dyscalculia usually show difficulty when they have to place Arabic numbers on a physical number line. However, it remains unclear whether they have a deficit with the mental number line per se or a deficit with accessing it from nonsymbolic and/or symbolic numbers. Quebec French-speaking 8- to 9-year-old children with dyscalculia (n = 24) and without (n = 37) were assessed with transcoding tasks (number-to-position and position-to-number) designed to assess the acuity of the mental number line with Arabic and spoken numbers as well as with analogic numerosities. Results showed that children with dyscalculia produced a larger percentage absolute error than children without mathematics difficulties in every task except the number-to-position transcoding task with analogic numerosities. Hence, these results suggested that children with dyscalculia do not have a general deficit of the mental number line but rather a deficit with accessing it from symbolic numbers.
The fast decoding of Reed-Solomon codes using number theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Welch, L. R.; Truong, T. K.
1976-01-01
It is shown that Reed-Solomon (RS) codes can be encoded and decoded by using a fast Fourier transform (FFT) algorithm over finite fields. The arithmetic utilized to perform these transforms requires only integer additions, circular shifts, and a minimum number of integer multiplications. The computing time of this transform encoder-decoder for RS codes is less than the time of the standard method for RS codes. More generally, the field GF(q) is also considered, where q is a prime of the form K·2^n + 1, with K and n integers. GF(q) can be used to decode very long RS codes by an efficient FFT algorithm with an improvement in the number of symbols. It is shown that a radix-8 FFT algorithm over GF(q²) can be utilized to encode and decode very long RS codes with a large number of symbols. For eight symbols in GF(q²), this transform over GF(q²) can be made simpler than any other known number theoretic transform with a similar capability. Of special interest is the decoding of a 16-tuple RS code with four errors.
16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.
Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S
2012-05-21
We transmit 160 × 100G PDM RZ 16QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (symbol table excerpt, reconstructed)
Symbol | Quantity | Unit | Unit symbol | In terms of SI base units
β | ratio of diameters | meter per meter | m/m | 1
β | atomic oxygen to carbon ratio | mole per mole | mol/mol | 1
S | Sutherland constant | kelvin | K | K
SEE | standard estimate of error | | |
T | absolute temperature | kelvin | K | K
T | Celsius temperature | degree Celsius | °C | K − 273.15
T | torque (moment of force) | newton meter | N·m | m²·kg·s⁻²
t | time | second | s ...
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (symbol table excerpt, reconstructed)
Symbol | Quantity | Unit | Unit symbol | In terms of SI base units
β | ratio of diameters | meter per meter | m/m | 1
β | atomic oxygen to carbon ratio | mole per mole | mol/mol | 1
S | Sutherland constant | kelvin | K | K
SEE | standard estimate of error | | |
T | absolute temperature | kelvin | K | K
T | Celsius temperature | degree Celsius | °C | K − 273.15
T | torque (moment of force) | newton meter | N·m | m²·kg·s⁻²
t | time | second | s ...
Improving Quality Using Architecture Fault Analysis with Confidence Arguments
2015-03-01
CMU/SEI-2015-TR-006, Software Engineering Institute, Carnegie Mellon University. [Front matter: list of figures, including architecture-centric requirements decomposition, a system and its interface with its environment, AADL graphical symbols, a textual AADL example, a textual AADL error model example, and potential hazard sources in the feedback control loop (Leveson 2012).]
NASA Astrophysics Data System (ADS)
Batra, Arun; Zeidler, James R.; Beex, A. A. Louis
2007-12-01
It has previously been shown that a least-mean-square (LMS) decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983). An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE) requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF) as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER) performance of the proposed system is still approximately equal to that attained in steady-state by the LMS DFE-only. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.
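A minimal LMS prediction-error filter, of the kind used as the prefilter here, can be sketched as follows; the filter order, step size, and the sinusoidal interference model are assumed values for the demo, not those of the paper.

```python
import numpy as np

def lms_pef(x, order=8, mu=0.005):
    """LMS prediction-error filter: predict x[n] from past samples and
    output the prediction error, which suppresses narrowband interference."""
    w = np.zeros(order)
    e = np.zeros_like(x)
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]
        e[n] = x[n] - w @ past     # prediction error = filter output
        w += mu * e[n] * past      # LMS weight update
    return e

# toy signal: BPSK symbols plus a strong narrowband (sinusoidal) interferer
rng = np.random.default_rng(4)
n = np.arange(20_000)
x = rng.choice([-1.0, 1.0], n.size) + 5 * np.cos(0.1 * np.pi * n)
print(np.var(x), np.var(lms_pef(x)[1000:]))   # interference power largely removed
```

The predictable (narrowband) component is cancelled while the unpredictable data component passes through, which is why such a prefilter can hand a much cleaner signal to the DFE.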
The use of hazard road signs to improve the perception of severe bends.
Milleville-Pennel, Isabelle; Hoc, Jean-Michel; Jolly, Elise
2007-07-01
Collision analysis indicates that many car accidents occur when negotiating a bend. Excessive speed and steering wheel errors are often given by way of explanation. Nevertheless, the underlying origin of these dramatic errors could be, at least in part, a poor estimation of bend curvature. The aim of this study was to investigate both the assessment of bend curvature by drivers and the impact of symbolic road signs that indicate a hazardous bend on this assessment. Thus, participants first viewed a video recording showing approaching bends of different curvature before being asked to assess the curvature of these bends. This assessment could either be a verbal (symbolic control) estimation of the bend's curvature and risk, or a sensorimotor (subsymbolic control) estimation of the bend's curvature (participants were asked to turn a steering wheel to mimic the position that would be necessary to accurately negotiate the bend). Results show that very severe bends (with a radius of less than 80 m) were actually underestimated. This was associated with an underestimation of risk corresponding to these bends and a poor sensorimotor anticipation of bend curvature. Road signs, which indicate risk significantly improve bend assessment, but this was of no use for sensorimotor anticipation. Thus, other indicators need to be envisaged in order to also improve this level of control.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
Psychological distress and prejudice following terror attacks in France.
Goodwin, Robin; Kaniasty, Krzysztof; Sun, Shaojing; Ben-Ezra, Menachem
2017-08-01
Terrorist attacks have the capacity to threaten our beliefs about the world, cause distress across populations and promote discrimination towards particular groups. We examined the impact of two different types of attacks in the same city and same year on psychological distress and probable posttraumatic stress symptoms, and the moderating effects of religion or media use on distress/posttraumatic symptoms and inter-group relations. Two panel surveys four weeks after the January 2015 Charlie Hebdo attack (N = 1981) and the November 2015 Bataclan concert hall/restaurant attacks (N = 1878), measured intrinsic religiosity, social and traditional media use, psychological distress (K6), probable posttraumatic stress symptoms (proposed ICD-11), symbolic racism and willingness to interact with Muslims by non-Muslims. Prevalence of serious mental illness (K6 score > 18) was higher after November 2015 attacks (7.0% after the first attack, 10.2% the second, χ2 (1) = 5.67, p < 0.02), as were probable posttraumatic stress symptoms (11.9% vs. 14.1%; χ2 (1) = 4.15, p < 0.04). In structural equation analyses, sex, age, geographic proximity, media use and religiosity were associated with distress, as was the interaction between event and religiosity. Distress was then associated with racism symbolism and willingness to interact with Muslims. Implications are considered for managing psychological trauma across populations, and protecting inter-group harmony. Copyright © 2017 Elsevier Ltd. All rights reserved.
Estimation abilities of large numerosities in Kindergartners
Mejias, Sandrine; Schiltz, Christine
2013-01-01
The approximate number system (ANS) is thought to be a building block for the elaboration of formal mathematics. However, little is known about how this core system develops and if it can be influenced by external factors at a young age (before the child enters formal numeracy education). The purpose of this study was to examine numerical magnitude representations of 5–6 year old children at 2 different moments of Kindergarten considering children's early number competence as well as schools' socio-economic index (SEI). This study investigated estimation abilities of large numerosities using symbolic and non-symbolic output formats (8–64). In addition, we assessed symbolic and non-symbolic early number competence (1–12) at the end of the 2nd (N = 42) and the 3rd (N = 32) Kindergarten grade. By letting children freely produce estimates we observed surprising estimation abilities at a very young age (from 5 year on) extending far beyond children's symbolic explicit knowledge. Moreover, the time of testing has an impact on the ANS accuracy since 3rd Kindergarteners were more precise in both estimation tasks. Additionally, children who presented better exact symbolic knowledge were also those with the most refined ANS. However, this was true only for 3rd Kindergarteners who were a few months from receiving math instructions. In a similar vein, higher SEI positively impacted only the oldest children's estimation abilities whereas it played a role for exact early number competences already in 2nd and 3rd graders. Our results support the view that approximate numerical representations are linked to exact number competence in young children before the start of formal math education and might thus serve as building blocks for mathematical knowledge. Since this core number system was also sensitive to external components such as the SEI this implies that it can most probably be targeted and refined through specific educational strategies from preschool on. PMID:24009591
Application of symbolic computations to the constitutive modeling of structural materials
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Tan, H. Q.; Dong, X.
1990-01-01
In applications involving elevated temperatures, the derivation of mathematical expressions (constitutive equations) describing the material behavior can be quite time consuming, involved, and error-prone. Intelligent application of symbolic systems to facilitate this tedious process can therefore be of significant benefit. Presented here is a problem-oriented, self-contained symbolic expert system, named SDICE, which is capable of efficiently deriving potential-based constitutive models in analytical form. This package, running under DOE MACSYMA, has the following features: (1) potential differentiation (chain rule); (2) tensor computations (utilizing index notation), including both algebra and calculus; (3) efficient solution of sparse systems of equations; (4) automatic expression substitution and simplification; (5) back substitution of invariant and tensorial relations; (6) the ability to form the Jacobian and Hessian matrices; and (7) a relational database. Limited aspects of invariant theory were also incorporated into SDICE due to the utilization of potentials as a starting point and the desire for these potentials to be frame-invariant (objective). The uniqueness of SDICE resides in its ability to manipulate expressions in a general yet pre-defined order and to simplify expressions so as to limit expression growth. Results are displayed, when applicable, utilizing index notation. SDICE was designed to aid and complement the human constitutive model developer. A number of examples illustrate the various features contained within SDICE. It is expected that this symbolic package can and will provide a significant incentive to the development of new constitutive theories.
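The kind of potential differentiation SDICE automates can be mimicked with a modern computer algebra system. The sketch below (in Python/SymPy rather than MACSYMA) derives a stress-like tensor from a toy quadratic potential by the chain rule through invariants; the invariants, constants, and 2D setting are made-up illustrations, not SDICE's models.

```python
import sympy as sp

e11, e22, e12 = sp.symbols('e11 e22 e12')      # strain components
E = sp.Matrix([[e11, e12], [e12, e22]])
I1 = E.trace()                                  # first invariant
I2 = (E * E).trace()                            # second invariant
mu_, lam = sp.symbols('mu lambda')
Phi = sp.Rational(1, 2) * lam * I1**2 + mu_ * I2   # assumed quadratic potential

# differentiate the potential with respect to each strain component
stress = sp.Matrix(2, 2, lambda i, j: sp.diff(Phi, E[i, j]))
sp.pprint(sp.simplify(stress))
```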
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre-optic communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
Effects of preparation time and trial type probability on performance of anti- and pro-saccades.
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control optimizes responses to relevant task conditions by balancing bottom-up stimulus processing with top-down goal pursuit. It can be investigated using the ocular motor system by contrasting basic prosaccades (look toward a stimulus) with complex antisaccades (look away from a stimulus). Furthermore, the amount of time allotted between trials, the need to switch task sets, and the time allowed to prepare for an upcoming saccade all impact performance. In this study the relative probabilities of anti- and pro-saccades were manipulated across five blocks of interleaved trials, while the inter-trial interval and trial type cue duration were varied across subjects. Results indicated that inter-trial interval had no significant effect on error rates or reaction times (RTs), while a shorter trial type cue led to more antisaccade errors and faster overall RTs. Responses following a shorter cue duration also showed a stronger effect of trial type probability, with more antisaccade errors in blocks with a low antisaccade probability and slower RTs for each saccade task when its trial type was unlikely. A longer cue duration yielded fewer errors and slower RTs, with a larger switch cost for errors compared to a short cue duration. Findings demonstrated that when the trial type cue duration was shorter, visual motor responsiveness was faster and subjects relied upon the implicit trial probability context to improve performance. When the cue duration was longer, increased fixation-related activity may have delayed saccade motor preparation and slowed responses, guiding subjects to respond in a controlled manner regardless of trial type probability. Copyright © 2016 Elsevier B.V. All rights reserved.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL˜exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Diagnostic reasoning techniques for selective monitoring
NASA Technical Reports Server (NTRS)
Homem-De-mello, L. S.; Doyle, R. J.
1991-01-01
An architecture for using diagnostic reasoning techniques in selective monitoring is presented. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed. These will indicate what components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. Also, the system must be such that changes are slow enough to allow the computation.
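The Bayes-rule revision step can be illustrated with a toy two-component example; all priors and likelihoods below are made-up numbers, and the symbolic solution of the Boolean assertion system is not modeled.

```python
# Revised failure probabilities from an observed assertion violation via
# Bayes' rule. All numbers are hypothetical priors and likelihoods.
prior = {"sensor": 0.01, "actuator": 0.005}
# probability the observed assertion violation occurs given each fault,
# and given no fault at all (false alarm)
likelihood = {"sensor": 0.9, "actuator": 0.3}
p_obs_no_fault = 0.001

def posterior(component):
    """P(component failed | observation), treating each component separately."""
    p_fail = prior[component]
    num = likelihood[component] * p_fail
    den = num + p_obs_no_fault * (1 - p_fail)
    return num / den

for c in prior:
    print(c, round(posterior(c), 3))   # sensor jumps to ~0.9: most likely culprit
```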
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.
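For sources with a known symbol probability ordering, the codeword construction at the heart of such coders is a few lines. The sketch below shows the Golomb-Rice construction with an assumed parameter k; it illustrates the family of codes rather than the specific adaptive algorithms of the paper.

```python
def rice_encode(n, k):
    """Golomb-Rice codeword for a nonnegative integer n with parameter k:
    unary-coded quotient, then k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# symbols ranked by probability (0 = most likely) get monotonically
# longer codewords, matching a known probability ordering
for n in range(6):
    print(n, rice_encode(n, k=2))
```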
Analytic barrage attack model. Final report, January 1986-January 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates the probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
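A Monte Carlo version of the single-weapon building block is a useful cross-check of the closed form for a cookie-cutter damage function under circular Gaussian aiming error; the lethal radius and CEP below are made-up, and the report's barrage, reliability, and distance-damage-sigma extensions are not modeled.

```python
import numpy as np

rng = np.random.default_rng(5)

def damage_prob(lethal_radius, cep, n=1_000_000):
    """Single-shot cookie-cutter damage probability under circular Gaussian
    aiming error with the given CEP (radius of the 50% circle)."""
    sigma = cep / 1.1774                 # CEP = sigma * sqrt(2 ln 2) ~ 1.1774 sigma
    miss = sigma * rng.standard_normal((n, 2))
    hit = np.hypot(miss[:, 0], miss[:, 1]) <= lethal_radius
    return hit.mean()

# closed form for comparison: P = 1 - exp(-R^2 / (2 sigma^2))
R, cep = 300.0, 250.0
sigma = cep / 1.1774
print(damage_prob(R, cep), 1 - np.exp(-R**2 / (2 * sigma**2)))
```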
NASA Astrophysics Data System (ADS)
Zhao, Liang; Ge, Jian-Hua
2012-12-01
Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications with inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow-water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD), together with a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in a lake and the finite-element ray tracing (Bellhop) method, a shallow-water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver significantly improves performance and achieves better data transmission than FD linear and adaptive decision feedback equalizers, especially when fractional-rate sampling is adopted.
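The frequency-domain MMSE step at the heart of such a receiver reduces to one complex tap per frequency bin. Below is a minimal sketch for a toy multipath channel with symbol-rate sampling; the decision feedback loop, SCTCM decoding, and underwater channel model are not included.

```python
import numpy as np

def sc_mmse_fde(rx, H, snr):
    """Single-carrier MMSE frequency-domain equalization: one complex tap
    per bin, then back to the time domain for symbol decisions."""
    R = np.fft.fft(rx)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # MMSE taps
    return np.fft.ifft(W * R)

# toy multipath channel; a cyclic prefix is assumed already removed,
# so the channel acts as a circular convolution
rng = np.random.default_rng(6)
N = 512
syms = rng.choice([-1.0, 1.0], N)
h = np.array([1.0, 0.5, 0.2])                        # assumed impulse response
H = np.fft.fft(h, N)
snr = 100.0                                          # linear SNR (20 dB)
rx = np.fft.ifft(H * np.fft.fft(syms))
rx = rx + np.sqrt(1 / (2 * snr)) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
eq = sc_mmse_fde(rx, H, snr)
print(np.mean(np.sign(eq.real) != syms))             # symbol error rate after FDE
```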
Super high compression of line drawing data
NASA Technical Reports Server (NTRS)
Cooper, D. B.
1976-01-01
Models are described which accurately represent the type of line drawings that occur in teleconferencing and in transmission for remote classrooms, and which permit considerable data compression. The objective was to encode these pictures in binary sequences of shortest length, but such that the pictures can be reconstructed without loss of important structure. It was shown that exploitation of reasonably simple structure permits compression in the range of 30-100 to 1. When dealing with highly stylized material such as electronic or logic circuit schematics, it is unnecessary to reproduce configurations exactly. Rather, the symbols and configurations must be understood and reproduced, but one can use fixed-font symbols for resistors, diodes, capacitors, etc. For pictures of natural phenomena, a similar approach can be taken, or essentially zero-error reproducibility can be achieved, but at a lower level of compression.
Data-Rate Estimation for Autonomous Receiver Operation
NASA Technical Reports Server (NTRS)
Tkacenko, A.; Simon, M. K.
2005-01-01
In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.
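The split-symbol idea can be illustrated with a toy estimator (a simplified normalization, not the flight SSME): the product of the two half-symbol sums isolates signal power, while the squared full-symbol sum adds noise power, so an SNR estimate follows from two sample moments.

```python
import numpy as np

def ssme_snr(samples: np.ndarray, n_per_symbol: int) -> float:
    """Split-symbol moments SNR estimate (simplified normalization).

    With half-symbol sums U, V over N-sample symbols of amplitude A:
    E[UV] = (N*A/2)^2 carries only signal power, while
    E[(U+V)^2] = (N*A)^2 + N*sigma^2 adds the noise power, giving
    SNR = N*A^2/sigma^2 = 4*mp / (m2 - 4*mp).
    """
    blocks = samples.reshape(-1, n_per_symbol)
    half = n_per_symbol // 2
    U = blocks[:, :half].sum(axis=1)
    V = blocks[:, half:].sum(axis=1)
    mp = np.mean(U * V)
    m2 = np.mean((U + V) ** 2)
    return 4.0 * mp / (m2 - 4.0 * mp)

# Toy check: BPSK, 8 samples/symbol, true SNR = N*A^2/sigma^2 = 8.
rng = np.random.default_rng(1)
data = rng.choice([-1.0, 1.0], size=20000).repeat(8)
print(ssme_snr(data + rng.standard_normal(data.size), 8))  # approximately 8
```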
Altenberg, Lee
2016-01-01
The mathematical symbol for the norm, which is heavily overloaded with multiple definitions that have both universal and specific properties, lends itself to confusion. This is manifest in the proof of an important theorem for population dynamics by Schreiber and Li on how dispersal increases population growth in a periodic environment. Here the theorem is placed in context, the proof is clarified, and the confusing but inconsequential errors corrected.
The Road from Foolishness to Fraud
NASA Astrophysics Data System (ADS)
Park, R. L.
2000-12-01
Ancient beliefs in demons and magic still sweep across the modern landscape, but they are now dressed in the language and symbols of science. This is pseudoscience. At least in the beginning, its practitioners may believe it to be science, just as witches and faith healers may truly believe they can call forth supernatural powers. What may begin as honest error, however, has a way of evolving through almost imperceptible steps from self-delusion to fraud.
The Road From Foolishness to Fraud
NASA Astrophysics Data System (ADS)
Park, Bob
2000-03-01
Ancient beliefs in demons and magic still sweep across the modern landscape, but they are now dressed in the language and symbols of science. This is pseudoscience. At least in the beginning, its practitioners may believe it to be science, just as witches and faith healers may truly believe they can call forth supernatural powers. What may begin as honest error, however, has a way of evolving through almost imperceptible steps from self-delusion to fraud.
Proceedings of the Air Force Forum for Intelligent Tutoring Systems
1989-04-01
interface help the students find facts? I recently developed an expert system that is used at the JFK Airport to help workers assign incoming planes to ... of their errors and to make comparisons with optimal solution paths. Chapters 2 and 5: BIP, BIP II (Basic Instructional Program). BIP applied knowledge ...
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2012 CFR
2012-07-01
Excerpt from the symbol table (flattened in extraction; truncated): least squares regression; β, ratio of diameters, meter per meter (m/m); β, atomic oxygen to carbon ratio, mole per mole; specific fuel consumption, gram per kilowatt hour, g/(kW·hr); F, F-test statistic; f, frequency, hertz (Hz, s⁻¹); S, Sutherland constant, kelvin (K); SEE, standard estimate of error; T, absolute temperature ...
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2013 CFR
2013-07-01
Excerpt from the symbol table (flattened in extraction; truncated): least squares regression; β, ratio of diameters, meter per meter (m/m); β, atomic oxygen to carbon ratio, mole per mole; specific fuel consumption, gram per kilowatt hour, g/(kW·hr); F, F-test statistic; f, frequency, hertz (Hz, s⁻¹); S, Sutherland constant, kelvin (K); SEE, standard estimate of error; T, absolute temperature ...
NASA Astrophysics Data System (ADS)
Amir, Amihood; Gotthilf, Zvi; Shalom, B. Riva
The Longest Common Subsequence (LCS) of two strings A and B is a well-studied problem having a wide range of applications. When each symbol of the input strings is assigned a positive weight, the problem becomes the Heaviest Common Subsequence (HCS) problem. In this paper we consider a different version of weighted LCS on Position Weight Matrices (PWM). The Position Weight Matrix was introduced as a tool to handle a set of sequences that are not identical, yet have many local similarities. Such a weighted sequence is a 'statistical image' of this set, where we are given the probability of every symbol's occurrence at every text location. We consider two possible definitions of LCS on PWM. For the first, we solve the weighted LCS problem of z sequences in time O(zn^(z+1)). For the second, we prove NP-hardness and provide an approximation algorithm.
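For context, the recurrence underlying the heaviest common subsequence is the classic LCS dynamic program with a per-symbol weight; the sketch below uses an illustrative weight function and is not the paper's PWM algorithm.

```python
def heaviest_common_subsequence(a: str, b: str, weight) -> float:
    """Dynamic program for HCS: like classic LCS, but a match of symbol s
    scores weight(s) instead of 1.  O(len(a)*len(b)) time and space."""
    m, n = len(a), len(b)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + weight(a[i - 1])
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# With unit weights this reduces to the ordinary LCS length.
print(heaviest_common_subsequence("ACCGGTC", "AGGTCA", weight=lambda s: 1.0))
```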
MaxEnt alternatives to pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
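A minimal numerical sketch of the same principle, using a plain grid and a root-finder rather than the semi-symbolic Mathematica approach: with first- and second-moment constraints, the MaxEnt density has the exponential-family form p(x) ∝ exp(λ₁x + λ₂x²), and the multipliers solve the moment equations. Target moments here are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

x = np.linspace(-5.0, 5.0, 401)   # discrete support grid
dx = x[1] - x[0]

def maxent_pdf(lams):
    """Exponential-family form implied by mean/second-moment constraints."""
    w = np.exp(lams[0] * x + lams[1] * x**2)
    return w / (w.sum() * dx)

def moment_residuals(lams):
    p = maxent_pdf(lams)
    return [(x * p).sum() * dx - 0.5,      # target mean 0.5
            (x**2 * p).sum() * dx - 1.5]   # target second moment 1.5

lams = fsolve(moment_residuals, x0=[0.0, -0.5])
p = maxent_pdf(lams)
print("entropy:", -(p * np.log(p + 1e-300)).sum() * dx)
```

With only these two constraints the solution is Gaussian; prescribing β₁ and β₂ as in the paper adds third- and fourth-moment constraints to the same scheme.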
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
Optical character recognition: an illustrated guide to the frontier
NASA Astrophysics Data System (ADS)
Nagy, George; Nartker, Thomas A.; Rice, Stephen V.
1999-12-01
We offer a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors made by three commercial devices. After discussing briefly the character recognition abilities of humans and computers, we present illustrated examples of recognition errors. The top level of our taxonomy of the causes of errors consists of Imaging Defects, Similar Symbols, Punctuation, and Typography. The analysis of a series of 'snippets' from this perspective provides insight into the strengths and weaknesses of current systems, and perhaps a road map to future progress. The examples were drawn from the large-scale tests conducted by the authors at the Information Science Research Institute of the University of Nevada, Las Vegas. By way of conclusion, we point to possible approaches for improving the accuracy of today's systems. The talk is based on our eponymous monograph, recently published in The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1999.
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
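The EM step for match and non-match probabilities in the classical Fellegi-Sunter setting, which the privacy-preserved estimation above extends, can be sketched as follows; the starting values and the binary field-agreement representation are illustrative.

```python
import numpy as np

def em_match_probs(gamma: np.ndarray, n_iter: int = 50):
    """EM for record-linkage parameters.

    gamma: (n_pairs, n_fields) 0/1 field-agreement matrix.
    Returns m (P[agree | match]), u (P[agree | non-match]) and the
    overall match proportion p.
    """
    n_pairs, n_fields = gamma.shape
    m = np.full(n_fields, 0.9)   # illustrative starting values
    u = np.full(n_fields, 0.1)
    p = 0.05
    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a true match.
        lm = p * np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
        lu = (1 - p) * np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
        w = lm / (lm + lu)
        # M-step: re-estimate parameters from the weighted pairs.
        m = (w[:, None] * gamma).sum(axis=0) / w.sum()
        u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()
        p = w.mean()
    return m, u, p
```

Match weights and threshold cut-offs then follow from the estimated m and u; the paper's contribution is making this estimation work on Bloom-filter-encoded fields.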
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
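A schematic sketch of the reliability classification described above (the function names, error-bound oracle, and event definition are illustrative): surrogate samples whose error bound cannot move them across the limit state are trusted, and only the remainder trigger high-fidelity evaluations.

```python
def event_probability(samples, surrogate, error_bound, high_fidelity,
                      threshold=0.0):
    """Estimate P[f(x) > threshold] using the surrogate where reliable.

    surrogate(x) approximates high_fidelity(x) with |error| <= error_bound(x);
    samples whose surrogate value lies within error_bound of the threshold
    are re-evaluated with the expensive model.
    """
    hits, hifi_calls = 0, 0
    for x in samples:
        s = surrogate(x)
        if abs(s - threshold) > error_bound(x):   # reliable: the bound excludes
            hits += s > threshold                 # crossing the limit state
        else:                                     # unreliable: pay for the
            hifi_calls += 1                       # high-fidelity evaluation
            hits += high_fidelity(x) > threshold
    return hits / len(samples), hifi_calls

# Toy usage with a quadratic model and a crudely biased surrogate.
import random
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(10000)]
prob, calls = event_probability(
    xs,
    surrogate=lambda x: x * x - 1.0 + 0.05,
    error_bound=lambda x: 0.1,
    high_fidelity=lambda x: x * x - 1.0)
print(prob, calls)
```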
Diversity Performance Analysis on Multiple HAP Networks
Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue
2015-01-01
One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
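The ASER computation can be illustrated for plain (non-shadowed) Rician fading by averaging the AWGN BPSK error rate over channel realizations; the shadowed-Rician case in the paper adds a shadowing component but follows the same averaging pattern. Parameter values are illustrative.

```python
import numpy as np
from math import sqrt
from scipy.special import erfc

def aser_bpsk_rician(avg_snr_db: float, k_factor: float,
                     n_trials: int = 200000) -> float:
    """Average BPSK symbol error rate over Rician fading, by averaging
    the AWGN error rate Q(sqrt(2*snr*|h|^2)) over channel realizations."""
    rng = np.random.default_rng(0)
    snr = 10.0 ** (avg_snr_db / 10.0)
    # Rician channel: fixed LOS part plus scattered complex Gaussian part,
    # normalized so that E[|h|^2] = 1.
    los = sqrt(k_factor / (k_factor + 1.0))
    nlos = rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)
    h = los + sqrt(1.0 / (2.0 * (k_factor + 1.0))) * nlos
    inst_snr = snr * np.abs(h) ** 2
    q = 0.5 * erfc(np.sqrt(inst_snr))   # Q(sqrt(2x)) = erfc(sqrt(x))/2
    return float(q.mean())

print(aser_bpsk_rician(avg_snr_db=10.0, k_factor=5.0))
```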
Relation between minimum-error discrimination and optimum unambiguous discrimination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Daowen; SQIG-Instituto de Telecomunicacoes, Departamento de Matematica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Avenida Rovisco Pais PT-1049-001, Lisbon; Li Lvjun
2010-09-15
In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may not hold again, but the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas of the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous discrimination and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m−1))Q_E.
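For two pure states with equal priors, both quantities have closed forms, Q_E from the Helstrom bound and Q_U = |⟨ψ₁|ψ₂⟩|, so the two-state inequality Q_U ≥ 2Q_E can be checked numerically; the states below are illustrative.

```python
import numpy as np

def trace_norm(a: np.ndarray) -> float:
    """Trace norm of a Hermitian matrix: sum of |eigenvalues|."""
    return float(np.abs(np.linalg.eigvalsh(a)).sum())

theta = 0.3                                   # overlap parameter
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
rho1, rho2 = np.outer(psi1, psi1), np.outer(psi2, psi2)

p1 = p2 = 0.5
q_e = 0.5 * (1.0 - trace_norm(p1 * rho1 - p2 * rho2))  # Helstrom bound
q_u = abs(psi1 @ psi2)       # unambiguous failure probability, equal priors
print(q_e, q_u, q_u >= 2 * q_e)
```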
Universal Entropy of Word Ordering Across Linguistic Families
Montemurro, Marcelo A.; Zanette, Damián H.
2011-01-01
Background The language faculty is probably the most distinctive feature of our species, and endows us with a unique ability to exchange highly structured information. In written language, information is encoded by the concatenation of basic symbols under grammatical and semantic constraints. As is also the case in other natural information carriers, the resulting symbolic sequences show a delicate balance between order and disorder. That balance is determined by the interplay between the diversity of symbols and by their specific ordering in the sequences. Here we used entropy to quantify the contribution of different organizational levels to the overall statistical structure of language. Methodology/Principal Findings We computed a relative entropy measure to quantify the degree of ordering in word sequences from languages belonging to several linguistic families. While a direct estimation of the overall entropy of language yielded values that varied for the different families considered, the relative entropy quantifying word ordering presented an almost constant value for all those families. Conclusions/Significance Our results indicate that despite the differences in the structure and vocabulary of the languages analyzed, the impact of word ordering in the structure of language is a statistical linguistic universal. PMID:21603637
Tattersall, Ian
2009-01-01
Our species, Homo sapiens, is highly autapomorphic (uniquely derived) among hominids in the structure of its skull and postcranial skeleton. It is also sharply distinguished from other organisms by its unique symbolic mode of cognition. The fossil and archaeological records combine to show fairly clearly that our physical and cognitive attributes both first appeared in Africa, but at different times. Essentially modern bony conformation was established in that continent by the 200–150 Ka range (a dating in good agreement with dates for the origin of H. sapiens derived from modern molecular diversity). The event concerned was apparently short-term because it is essentially unanticipated in the fossil record. In contrast, the first convincing stirrings of symbolic behavior are not currently detectable until (possibly well) after 100 Ka. The radical reorganization of gene expression that underwrote the distinctive physical appearance of H. sapiens was probably also responsible for the neural substrate that permits symbolic cognition. This exaptively acquired potential lay unexploited until it was “discovered” via a cultural stimulus, plausibly the invention of language. Modern humans appear to have definitively exited Africa to populate the rest of the globe only after both their physical and cognitive peculiarities had been acquired within that continent. PMID:19805256
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and changes in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG-119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance-to-agreement criteria were less effective at detecting the specific commissioning errors implemented here.
2011-01-01
Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with a double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
Prehistory of Zodiac Dating: Three Strata of Upper Paleolithic Constellations
NASA Astrophysics Data System (ADS)
Gurshtein, Alex A.
A pattern of archaic proto-constellations is extracted from the list in Aratus' didactic poem "The Phaenomena" according to a size criterion elaborated earlier, and their symbolism is analyzed. As a result of this approach, three celestial symbolical strata are discovered, probably a reflection of the symbols for the Lower, the Middle, and the Upper Worlds; the Under-World creatures have a water character, the Middle World ones are mostly anthropomorphic, and flying beings stand for the Upper World. The strata excerpted from Aratus' sky seem to be in agreement with the well-known Babylonian division into three god pathways for Ea (Enki), Anu, and Enlil. There is a possibility of dating the pattern discovered, because of precession's strong influence, as far back as 16 thousand years, the result being supported by the comparison of different star group mean sizes. The archaic constellation pattern under consideration is a reasonable background of symbolical meanings for the first Zodiacal generation quartet (7.5 thousand years old) examined by the author previously. The enormous size of the Argo constellation (the ship of Jason and his Argonauts) as well as the large sizes of other southern constellations are explained by the existence of an accumulation zone near the South celestial pole. Some extra correlations between the reconstruction proposed and available cultural data are discussed. The paper is the second part of the investigation "On the Origin of the Zodiacal Constellations" published in Vistas in Astronomy, vol. 36, pp. 171-190, 1993.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
Lee, Chanseok; Lee, Jae Young; Kim, Do-Nyun
2018-02-07
The originally published version of this Article contained an error in Figure 5. In panel f, the right y-axis 'Strain energy (kbT)' was labelled 'Probability' and the left y-axis 'Probability' was labelled 'Strain energy (kbT)'. This error has now been corrected in both the PDF and HTML versions of the Article.
NASA Technical Reports Server (NTRS)
Elyasberg, P. Y.
1979-01-01
The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem was approached with the assumption that the probabilities of error were known, as well as without knowledge of the distribution of the probabilities of error. The advantages of the newer approach are discussed.
Vukovic, Rose K; Lesaux, Nonie K
2013-06-01
This longitudinal study examined how language ability relates to mathematical development in a linguistically and ethnically diverse sample of children from 6 to 9 years of age. Study participants were 75 native English speakers and 92 language minority learners followed from first to fourth grades. Autoregression in a structural equation modeling (SEM) framework was used to evaluate the relation between children's language ability and gains in different domains of mathematical cognition (i.e., arithmetic, data analysis/probability, algebra, and geometry). The results showed that language ability predicts gains in data analysis/probability and geometry, but not in arithmetic or algebra, after controlling for visual-spatial working memory, reading ability, and sex. The effect of language on gains in mathematical cognition did not differ between language minority learners and native English speakers. These findings suggest that language influences how children make meaning of mathematics but is not involved in complex arithmetical procedures whether presented with Arabic symbols as in arithmetic or with abstract symbols as in algebraic reasoning. The findings further indicate that early language experiences are important for later mathematical development regardless of language background, denoting the need for intensive and targeted language opportunities for language minority and native English learners to develop mathematical concepts and representations. Copyright © 2013. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, P_C, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P_C. If error covariance information for only one of the two objects was available, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful P_C upper bound.
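For reference, a sketch of the basic P_C computation that such bounds build on (not the paper's critical-value procedure): with a combined relative-position covariance in the encounter plane, P_C is the Gaussian probability mass inside the combined hard-body circle, estimated here by Monte Carlo with illustrative numbers.

```python
import numpy as np

def collision_probability(miss_vector, combined_cov, hard_body_radius,
                          n_samples=1_000_000):
    """Monte Carlo estimate of P_C in the 2-D encounter plane: the
    probability that the relative position falls within the hard-body
    circle, given a Gaussian relative-position error."""
    rng = np.random.default_rng(0)
    pts = rng.multivariate_normal(miss_vector, combined_cov, size=n_samples)
    return float((np.linalg.norm(pts, axis=1) <= hard_body_radius).mean())

cov = np.array([[2500.0, 400.0],     # combined covariance, m^2
                [400.0, 900.0]])
print(collision_probability([120.0, 40.0], cov, hard_body_radius=20.0))
```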
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
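For the simplest case, a one-sided z-test with effect size δ in σ units, β(n) = Φ(z₁₋α − δ√n), and the Gaussian tail makes the decline exponential in n. A small illustration:

```python
from math import sqrt
from statistics import NormalDist

def type_ii_error(n: int, delta: float = 0.5, alpha: float = 0.05) -> float:
    """Type II error of a one-sided z-test for a mean shift of delta
    (in sigma units) with n observations:
    beta = Phi(z_{1-alpha} - delta*sqrt(n))."""
    z_crit = NormalDist().inv_cdf(1.0 - alpha)
    return NormalDist().cdf(z_crit - delta * sqrt(n))

for n in (10, 20, 40, 80):
    print(n, type_ii_error(n))   # beta roughly squares as n doubles
```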
Metrics for Business Process Models
NASA Astrophysics Data System (ADS)
Mendling, Jan
Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument implies that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores the extent to which certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
Coding gains and error rates from the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 × 10^-8 and a BER of 1.4 × 10^-9. The (15,1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because these codes require higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
Quantum state discrimination bounds for finite sample size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111
2012-12-15
In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
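The asymptotic quantity behind the Chernoff errors can be computed directly for small examples: Q = min over 0 ≤ s ≤ 1 of Tr(ρ^s σ^{1−s}), with the optimal n-copy error probability decaying like Q^n. The density matrices below are illustrative.

```python
import numpy as np

def matrix_power(rho: np.ndarray, s: float) -> np.ndarray:
    """Fractional power of a positive semidefinite matrix via eigh."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)          # guard tiny negative eigenvalues
    return (vecs * vals**s) @ vecs.conj().T

def chernoff_bound(rho: np.ndarray, sigma: np.ndarray, grid: int = 1001):
    """Q = min_{0<=s<=1} Tr(rho^s sigma^(1-s)); the error exponent is -log Q."""
    ss = np.linspace(0.0, 1.0, grid)
    vals = [np.trace(matrix_power(rho, s) @ matrix_power(sigma, 1 - s)).real
            for s in ss]
    i = int(np.argmin(vals))
    return vals[i], ss[i]

rho = np.array([[0.8, 0.1], [0.1, 0.2]])
sigma = np.array([[0.4, -0.2], [-0.2, 0.6]])
print(chernoff_bound(rho, sigma))
```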
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model (WAPABA) is combined with a Bayesian joint probability framework and alternative error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and accounts for no seasonal variations. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
Kramers-Kronig receiver operable without digital upsampling.
Bo, Tianwai; Kim, Hoon
2018-05-28
The Kramers-Kronig (KK) receiver is capable of retrieving the phase information of an optical single-sideband (SSB) signal from the optical intensity when the optical signal satisfies the minimum phase condition. Thus, it is possible to direct-detect the optical SSB signal without suffering from the signal-signal beat interference and linear transmission impairments. However, due to the spectral broadening induced by nonlinear operations in the conventional KK algorithm, it is necessary to employ digital upsampling at the beginning of the digital signal processing (DSP) chain. The increased number of samples in the DSP would hinder real-time implementation of this attractive receiver. Hence, we propose a new DSP algorithm for the KK receiver operable at 2 samples per symbol. We adopt a couple of mathematical approximations to avoid the use of nonlinear operations such as the logarithm and exponential functions. By using the proposed algorithm, we demonstrate the transmission of a 112-Gb/s SSB orthogonal frequency-division-multiplexed signal over an 80-km fiber link. The results show that the proposed algorithm operating at 2 samples per symbol exhibits performance similar to the conventional KK algorithm operating at 6 samples per symbol. We also present an error analysis of the proposed algorithm in comparison with the conventional one.
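For reference, a sketch of the conventional KK phase-retrieval step that the proposed approximations replace: for a minimum-phase signal, the phase is the Hilbert transform of the log-magnitude, which is exactly where the log and exp operations (and the attendant spectral broadening) enter.

```python
import numpy as np
from scipy.signal import hilbert

def kk_phase_retrieval(intensity: np.ndarray) -> np.ndarray:
    """Conventional KK receiver step: reconstruct the complex field from
    detected intensity (assumed strictly positive), given that the
    minimum phase condition holds.

    phase(t) = H{ log|E(t)| }, where H is the Hilbert transform
    (scipy's hilbert returns the analytic signal x + j*H{x}).
    """
    magnitude = np.sqrt(intensity)
    phase = np.imag(hilbert(np.log(magnitude)))
    return magnitude * np.exp(1j * phase)
```

The paper's contribution is to approximate the log and exp operations in this chain so it can run at 2 samples per symbol without digital upsampling.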
Advanced technology satellite demodulator development
NASA Technical Reports Server (NTRS)
Ames, Stephen A.
1989-01-01
Ford Aerospace has developed a proof-of-concept satellite 8-phase shift keying (8-PSK) modulation and coding system operating in the Time Division Multiple Access (TDMA) mode at a data rate of 200 Mbps using rate 5/6 forward error correction coding. The 80-Msps 8-PSK modem was developed in a mostly digital form and is amenable to an ASIC realization in the next phase of development. The codec was developed as a paper design only. The power efficiency goal was to be within 2 dB of theoretical at a bit error rate (BER) of 5×10^-7, while the measured implementation loss was 4.5 dB. The bandwidth efficiency goal was 2 bits/sec/Hz, while the realized bandwidth efficiency was 1.8 bits/sec/Hz. The burst format used a preamble of only 40 8-PSK symbol times, including 32 symbols of all zeros and an eight-symbol unique word. The modem and associated special test equipment (STE) were fabricated mostly on a specially designed stitch-weld board, although a few of the highest rate circuits were built on printed circuit cards. All the digital circuits were ECL to support clock rates of from 80 MHz to 360 MHz. The transmitter and receiver matched filters were square-root Nyquist bandpass filters realized at the 3.37 GHz i.f. The modem operated as a coherent system although no analog phase-locked loop (PLL) was employed. Within the budgetary constraints of the program, the approach to the demodulator has been proven and is eligible to proceed to the next phase of development of a satellite demodulator engineering model. This would entail the development of an ASIC version of the digital portion of the demodulator, an MMIC version of the quadrature detector, and SAW Nyquist filters to realize the bandwidth efficiency.
Rapid Prototyping: A Survey and Evaluation of Methodologies and Models
1990-03-01
possibility of program coding errors or design differences from the actual prototype the user validated. The methodology should result in a production ... behavior within the problem domain to be defined. Each method has a different approach towards developing the set of symbols with which to define the ... investigate prototyping as a viable alternative to the conventional method of software development. By the mid 1980's, it was evident that the traditional ...
1982-10-01
AD-A127993, Modem Signature Analysis, PAR Technology Corp., New Hartford, NY; V. Edwards et al., Oct 82, RADC-TR-82-269, F30602-80-C-0264, Unclassified. ... as an indication of the class clustering and separation between different classes in the modem data base. It is apparent from the projection that the ... that as the clusters disperse, the likelihood of a sample crossing the boundary into an adjacent region and causing a symbol decision error increases.
Relative Loading on Biplane Wings
1933-01-01
1.00, for which F = 0.675 from figure 6 ... partially to experimental errors and partially to the ... The ratios are then multiplied by ... as required by ... plane designers. The definitions have been based on geometrical angles, which may be misleading ... show no change in the value of K. Figure 13 indicates ... (The remainder of this record is OCR residue from the report's nomenclature table of axes, moments about the axes, angles, velocities, and force designation symbols.)
FORTRAN program for induction motor analysis
NASA Technical Reports Server (NTRS)
Bollenbacher, G.
1976-01-01
A FORTRAN program for induction motor analysis is described. The analysis includes calculations of torque-speed characteristics, efficiency, losses, magnetic flux densities, weights, and various electrical parameters. The program is limited to three-phase Y-connected, squirrel-cage motors. Detailed instructions for using the program are given. The analysis equations are documented, and the sources of the equations are referenced. The appendixes include a FORTRAN symbol list, a complete explanation of input requirements, and a list of error messages.
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
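Result (i) can be seen in miniature under the simplest iid assumption (a simplification of the paper's error model): if each base independently suffers an indel with probability p, a single read is reconstructed exactly with probability (1−p)^n, which vanishes exponentially in read length.

```python
def p_exact_single_read(n_bases: int, p_indel: float) -> float:
    """Probability that a single pass is completely indel-free,
    assuming independent per-base errors."""
    return (1.0 - p_indel) ** n_bases

for n in (100, 1000, 10000):
    print(n, p_exact_single_read(n, p_indel=0.01))
```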
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors. This enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom designed mat. These pictures are processed by an optical symbol recognition algorithm which estimates the orientation of the robot and recognizes the landmarks placed on the mat. The data fusion strategy is described to detect misclassifications of the landmarks in order to fuse only reliable information. The orientation given by the optical symbol recognition (OSR) algorithm is used to significantly improve the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
Sensitivity to prediction error in reach adaptation
Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza
2012-01-01
It has been proposed that the brain predicts the sensory consequences of a movement and compares it to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.
On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels
2013-12-01
Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks, which is to analyze and develop ... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system parameters. ... In this thesis, we studied the performance of Discrete Memoryless Channels (DMC) arising in the context of cooperative underwater wireless sensor networks.
Probability of misclassifying biological elements in surface waters.
Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna
2017-11-24
Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
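The Monte Carlo scheme can be sketched in a few lines (the class boundaries and the Gaussian relative-error model here are illustrative, not the WFD class limits): perturb the measured index with random errors and count how often the perturbed value falls outside the class assigned from the actual measurement.

```python
import numpy as np

def misclassification_probability(measured_index: float, rel_error: float,
                                  boundaries=(0.2, 0.4, 0.6, 0.8),
                                  n_draws: int = 100000) -> float:
    """Fraction of error-perturbed index values that land outside the
    class of the measured value (Gaussian relative measurement error)."""
    rng = np.random.default_rng(0)
    classify = lambda v: np.digitize(v, boundaries)
    true_class = classify(measured_index)
    draws = measured_index * (1.0 + rel_error * rng.standard_normal(n_draws))
    return float((classify(draws) != true_class).mean())

# An index measured near a class boundary is misclassified far more often.
print(misclassification_probability(0.62, rel_error=0.10))
```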
UID...Leaving Its Mark on the Universe
NASA Technical Reports Server (NTRS)
Schramm, Harry F., Jr.
2008-01-01
Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.
UID...Now That's Gonna Leave A Mark
NASA Technical Reports Server (NTRS)
Schramm, Harry F., Jr.
2007-01-01
Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.
UID...Now That's Gonna Leave a Mark
NASA Technical Reports Server (NTRS)
Schramm, Harry F.
2008-01-01
Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.
UID....Now That's Gonna Leave A Mark
NASA Technical Reports Server (NTRS)
Schramm, Harry F., Jr.
2008-01-01
Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.
UID...Now That's Gonna Leave A Mark
NASA Technical Reports Server (NTRS)
Schramm, Fred
2006-01-01
Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture, recently trademarked as Nanocodes, that can be converted to Data Matrix information through software. The accompanying intellectual property is protected by ten patents, several of which are licensed. Direct marking Data Matrix on NASA parts dramatically decreases data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection.
NASA Technologies for Product Identification
NASA Technical Reports Server (NTRS)
Schramm, Fred, Jr.
2006-01-01
Since 1975 bar codes on products at the retail counter have been accepted as the standard for entering product identity for price determination. Since the beginning of the 21st century, the Data Matrix symbol has become accepted as the bar code format that is marked directly on a part, assembly or product that is durable enough to identify that item for its lifetime. NASA began the studies for direct part marking Data Matrix symbols on parts during the Return to Flight activities after the Challenger Accident. Over the 20 year period that has elapsed since Challenger, a mountain of studies, analyses and focused problem solutions developed by and for NASA have brought about world changing results. NASA Technical Standard 6002 and NASA Handbook 6003 for Direct Part Marking Data Matrix Symbols on Aerospace Parts have formed the basis for most other standards on part marking internationally. NASA and its commercial partners have developed numerous products and methods that addressed the difficulties of collecting part identification in aerospace operations. These products enabled the marking of Data Matrix symbols in virtually every situation and the reading of symbols at great distances, severe angles, under paint and in the dark without a light. Even unmarkable delicate parts now have a process to apply a chemical mixture called Nanocodes™ that can be converted to a Data Matrix. The accompanying intellectual property is protected by 10 patents, several of which are licensed. Direct marking Data Matrix on NASA parts virtually eliminates data entry errors and the number of parts that go through their life cycle unmarked, two major threats to sound configuration management and flight safety. NASA is said to only have people and stuff with information connecting them. Data Matrix is one of the most significant improvements since Challenger to the safety and reliability of that connection. This presentation highlights the accomplishments of NASA in its efforts to develop technologies for automatic identification, its efforts to implement them and its vision on their role in space.
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve the performance of a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the solution other than the MMSE one has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects of nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ as figures of merit performance in the mean-square error and probability of error senses. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, it is found that the probability of error improvement is more significant for modulation schemes, constant amplitude and multilevel, which require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver has not been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
Quantifying seining detection probability for fishes of Great Plains sand‐bed rivers
Mollenhauer, Robert; Logue, Daniel R.; Brewer, Shannon K.
2018-01-01
Species detection error (i.e., imperfect and variable detection probability) is an essential consideration when investigators map distributions and interpret habitat associations. When fish detection error that is due to highly variable instream environments needs to be addressed, sand‐bed streams of the Great Plains represent a unique challenge. We quantified seining detection probability for diminutive Great Plains fishes across a range of sampling conditions in two sand‐bed rivers in Oklahoma. Imperfect detection resulted in underestimates of species occurrence using naïve estimates, particularly for less common fishes. Seining detection probability also varied among fishes and across sampling conditions. We observed a quadratic relationship between water depth and detection probability, in which the exact nature of the relationship was species‐specific and dependent on water clarity. Similarly, the direction of the relationship between water clarity and detection probability was species‐specific and dependent on differences in water depth. The relationship between water temperature and detection probability was also species dependent, where both the magnitude and direction of the relationship varied among fishes. We showed how ignoring detection error confounded an underlying relationship between species occurrence and water depth. Despite imperfect and heterogeneous detection, our results support that determining species absence can be accomplished with two to six spatially replicated seine hauls per 200‐m reach under average sampling conditions; however, required effort would be higher under certain conditions. Detection probability was low for the Arkansas River Shiner Notropis girardi, which is federally listed as threatened, and more than 10 seine hauls per 200‐m reach would be required to assess presence across sampling conditions. Our model allows scientists to estimate sampling effort to confidently assess species occurrence, which maximizes the use of available resources. Increased implementation of approaches that consider detection error promote ecological advancements and conservation and management decisions that are better informed.
Analysis of MMU FDIR expert system
NASA Technical Reports Server (NTRS)
Landauer, Christopher
1990-01-01
This paper describes the analysis of a rulebase for fault diagnosis, isolation, and recovery for NASA's Manned Maneuvering Unit (MMU). The MMU is used by a human astronaut to move around a spacecraft in space. In order to provide maneuverability, there are several thrusters oriented in various directions, and hand-controlled devices for useful groups of them. The rulebase describes some error detection procedures, and corrective actions that can be applied in a few cases. The approach taken in this paper is to treat rulebases as symbolic objects and compute correctness and 'reasonableness' criteria that use the statistical distribution of various syntactic structures within the rulebase. The criteria should identify awkward situations, and otherwise signal anomalies that may be errors. The rulebase analysis algorithms are derived from mathematical and computational criteria that implement certain principles developed for rulebase evaluation. The principles are Consistency, Completeness, Irredundancy, Connectivity, and finally, Distribution. Several errors were detected in the delivered rulebase. Some of these errors were easily fixed. Some errors could not be fixed with the available information. A geometric model of the thruster arrangement is needed to show how to correct certain other distribution anomalies that are in fact errors. The investigations reported here were partially supported by The Aerospace Corporation's Sponsored Research Program.
A method to compute SEU fault probabilities in memory arrays with error correction
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory locations are read at random times whose distribution is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
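To make the mishap model concrete, the following is a minimal Monte Carlo sketch of the scenario the abstract describes, under assumed illustrative parameters (the per-word SEU and read rates below are hypothetical, and reads are modeled as Poisson rather than the paper's general read distribution): a mishap occurs when a second upset strikes a word before the next read scrubs the first.

import random

def mishap_probability(M=1024, seu_rate=1e-6, read_rate=1e-3, T=10_000.0, trials=200):
    # seu_rate, read_rate: events per word per hour (assumed values)
    mishaps = 0
    for _ in range(trials):
        failed = False
        for _ in range(M):                      # each word evolves independently
            t, dirty = 0.0, False               # dirty: word holds one upset
            while t < T:
                dt_seu = random.expovariate(seu_rate)
                dt_read = random.expovariate(read_rate)
                if dt_seu < dt_read:            # next event is an upset
                    t += dt_seu
                    if dirty:                   # second upset before a read
                        failed = True
                        break
                    dirty = True
                else:                           # next event is a read: scrub
                    t += dt_read
                    dirty = False
            if failed:
                break
        mishaps += failed
    return mishaps / trials

print(mishap_probability())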
Multiple statistical tests: Lessons from a d20.
Madan, Christopher R
2016-01-01
Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (Type-I error) at least once across repeated, independent statistical tests.
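The arithmetic behind this parallel is standard and easy to reproduce; a short sketch (not code from the paper) comparing the d20 case with the inflated family-wise Type-I error:

p20 = 1 / 20                               # probability of any specific d20 outcome
print(1 - (1 - p20) ** 2)                  # at least one natural 20 in two rolls: 0.0975

alpha = 0.05                               # per-test Type-I error
for m in (1, 5, 10, 20):                   # m independent tests
    print(m, round(1 - (1 - alpha) ** m, 4))   # P(at least one false rejection)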
The price of complexity in financial networks
NASA Astrophysics Data System (ADS)
Battiston, Stefano; Caldarelli, Guido; May, Robert M.; Roukny, Tarik; Stiglitz, Joseph E.
2016-09-01
Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises.
Puzzling accretion onto a black hole in the ultraluminous X-ray source M 101 ULX-1.
Liu, Ji-Feng; Bregman, Joel N; Bai, Yu; Justham, Stephen; Crowther, Paul
2013-11-28
There are two proposed explanations for ultraluminous X-ray sources (ULXs) with luminosities in excess of 10^39 erg s^-1. They could be intermediate-mass black holes (more than 100-1,000 solar masses, M⊙) radiating at sub-maximal (sub-Eddington) rates, as in Galactic black-hole X-ray binaries but with larger, cooler accretion disks. Alternatively, they could be stellar-mass black holes radiating at Eddington or super-Eddington rates. On its discovery, M 101 ULX-1 had a luminosity of 3 × 10^39 erg s^-1 and a supersoft thermal disk spectrum with an exceptionally low temperature, uncomplicated by photons energized by a corona of hot electrons, more consistent with the expected appearance of an accreting intermediate-mass black hole. Here we report optical spectroscopic monitoring of M 101 ULX-1. We confirm the previous suggestion that the system contains a Wolf-Rayet star, and reveal that the orbital period is 8.2 days. The black hole has a minimum mass of 5 M⊙, and more probably a mass of 20-30 M⊙, but we argue that it is very unlikely to be an intermediate-mass black hole. Therefore, its exceptionally soft spectra at high Eddington ratios violate the expectations for accretion onto stellar-mass black holes. Accretion must occur from captured stellar wind, which has hitherto been thought to be so inefficient that it could not power an ultraluminous source.
Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
The Diversity ECCM Performance of Frequency-Hopping CPFSK in Partial-Band Noise Jamming
1988-05-25
On the Mean Squared Error of Nonparametric Quantile Estimators under Random Right-Censorship.
1986-09-01
NASA Astrophysics Data System (ADS)
Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras
2017-11-01
A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating inter-symbol interference so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for an FSO MWDM channel 2.6 km in length using feedback equalization. Simulation results comparing performance before and after decision feedback equalization show significant improvement in eye diagrams and bit-error rates.
Word and frame synchronization with verification for PPM optical communications
NASA Technical Reports Server (NTRS)
Marshall, William K.
1986-01-01
A method for obtaining word and frame synchronization in pulse position modulated optical communication systems is described. The method uses a short sync sequence inserted at the beginning of each data frame and a verification procedure to distinguish between inserted and randomly occurring sequences at the receiver. This results in an easy to implement sync system which provides reliable synchronization even at high symbol error rates. Results are given for the application of this approach to a highly energy efficient 256-ary PPM test system.
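A minimal sketch of the verification idea (the sync word, frame length, and verification count below are hypothetical; a real receiver would also tolerate partial matches at high symbol error rates rather than requiring exact ones):

def find_frame_sync(symbols, sync, frame_len, verify=3):
    # Declare lock only after the sync word recurs at the expected frame
    # spacing 'verify' times; random data rarely repeats so regularly.
    n, s = len(symbols), len(sync)
    for start in range(frame_len):
        hits, pos = 0, start
        while pos + s <= n and symbols[pos:pos + s] == sync:
            hits += 1
            if hits >= verify:
                return start
            pos += frame_len
    return None

frame = [7, 7] + [42] * 98                       # 100-symbol frames, 2-symbol sync word
print(find_frame_sync(frame * 5, [7, 7], 100))   # -> 0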
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability.
The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors
NASA Technical Reports Server (NTRS)
Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan
1993-01-01
Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.
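The zero-offset baseline against which such degradations are measured has a classical closed form; a small sketch of that standard result (the paper's contribution, the offset analysis itself, is not reproduced here):

from math import comb, exp

def mfsk_ser(M, es_n0):
    # Symbol error probability of ideal noncoherent orthogonal M-FSK on AWGN
    return sum((-1) ** (k + 1) * comb(M - 1, k) / (k + 1) * exp(-k / (k + 1) * es_n0)
               for k in range(1, M))

def mfsk_ber(M, es_n0):
    return M / (2 * (M - 1)) * mfsk_ser(M, es_n0)   # standard SER-to-BER mapping

print(mfsk_ber(8, 10.0))    # M = 8, Es/N0 = 10 (linear, not dB)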
Advanced Receiver tracking of Voyager 2 near solar conjunction
NASA Technical Reports Server (NTRS)
Brown, D. H.; Hurd, W. J.; Vilnrotter, V. A.; Wiggins, J. D.
1988-01-01
The Advanced Receiver (ARX) was used to track the Voyager 2 spacecraft at low Sun-Earth-Probe (SEP) angles near solar conjunction in December of 1987. The received carrier signal exhibited strong fluctuations in both phase and amplitude. The ARX used spectral estimation and mathematical modeling of the phase and receiver noise processes to set an optimum carrier tracking bandwidth. This minimized the mean square phase error in tracking carrier phase and thus minimized the loss in the telemetry signal-to-noise ratio due to the carrier loop. Recovered symbol SNRs and errors in decoded engineering data for the ARX are compared with those for the current Block 3 telemetry stream. Optimum bandwidths are plotted against SEP angle. Measurements of the power spectral density of the solar phase and amplitude fluctuations are also given.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
Phase ambiguity resolution for offset QPSK modulation systems
NASA Technical Reports Server (NTRS)
Nguyen, Tien M. (Inventor)
1991-01-01
A demodulator for Offset Quaternary Phase Shift Keyed (OQPSK) signals modulated with two words resolves eight possible combinations of phase ambiguity which may produce data error. Received I(sub R) and Q(sub R) data are first processed in an integrated carrier loop/symbol synchronizer using a digital Costas loop with matched filters, correcting four of the eight possible phase lock errors. The remaining four are corrected by a phase ambiguity resolver which detects the words to not only reverse the received I(sub R) and Q(sub R) data channels, but to also invert (complement) the I(sub R) and/or Q(sub R) data, or to at least complement the I(sub R) and Q(sub R) data for systems using nontransparent codes that do not have rotation direction ambiguity.
Logical errors on proving theorem
NASA Astrophysics Data System (ADS)
Sari, C. K.; Waluyo, M.; Ainur, C. M.; Darmaningsih, E. N.
2018-01-01
At the tertiary level, students in mathematics education departments attend abstract courses, such as Introduction to Real Analysis, which require the ability to prove mathematical statements almost all the time. In fact, many students have not mastered this ability appropriately. In their Introduction to Real Analysis tests, even though they completed their proofs of theorems, they achieved unsatisfactory scores. They thought that they had succeeded, but their proofs were not valid. In this study, qualitative research was conducted to describe the logical errors that students made in proving the cluster point theorem. The theorem was given to 54 students. Misconceptions appear to occur in understanding the definitions of cluster point, limit of a function, and limit of a sequence. The habit of using routine symbols might cause these misconceptions. Suggestions for dealing with this condition are described as well.
An exploratory study of cognitive load in diagnosing patient conditions.
Workman, Michael; Lesser, Michael F; Kim, Joonmin
2007-06-01
To determine whether the ways in which information is presented to physicians will improve their ability to respond in a timely and accurate manner to acute care needs. The forms of the presentation compared traditional textual, chart and graph representations with equivalent symbolic language representations. To test this objective, our investigation involved two studies of interpreting patient conditions using two forms of information representation. The first assessed the level of cognitive effort (the outcome variable is known as cognitive load), and the second assessed the time and accuracy outcome variables. Our investigation consisted of two studies, the first study involved 3rd and 4th year medical students, and the second study involved three board certified physicians who worked in an intensive care unit of a metropolitan hospital. The first study utilized an all-within-subjects design with repeated measures, where pretests were utilized as a control covariate for prior learning and individual differences. The second study utilized a random sampling of records analyzed by two physicians and qualitatively evaluated by board-certified intensivists. The first study indicated that the cognitive load to interpret the symbolic representation was less than for the more traditional textual, chart and graphic forms. The second study suggests that experienced physicians may react in a more timely fashion with at least the same accuracy when the symbolic language was used than with traditional charts and graphs. The ways in which information is presented to physicians may affect the quality of acute care, such as in intensive, critical and emergency care units. When information can be presented in symbolic form, it may be cognitively processed more efficiently than when it is presented in the usual textual and chart form, potentially lowering errors in diagnosis and increasing the responsiveness to patient conditions.
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
Thesis presented to the Faculty of the School of Engineering of the Air Force... The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6].
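For orientation, the quantity being approximated is the radius containing half of the impacts; under circular normal errors it has a simple closed form that a Monte Carlo check recovers (a generic sketch, not one of the thesis's techniques):

import math, random

def cep_monte_carlo(sigma=1.0, n=100_000):
    # Radial miss distances for circular normal impact errors
    radii = sorted(math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
                   for _ in range(n))
    return radii[n // 2]                      # median miss distance = CEP

print(cep_monte_carlo())                      # ~1.177 for sigma = 1
print(math.sqrt(2 * math.log(2)))             # exact CEP/sigma for the circular case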
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom
2012-01-01
Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
Signaling on the continuous spectrum of nonlinear optical fiber.
Tavakkolnia, Iman; Safari, Majid
2017-08-07
This paper studies different signaling techniques on the continuous spectrum (CS) of nonlinear optical fiber defined by the nonlinear Fourier transform. Three different signaling techniques are proposed and analyzed based on the statistics of the noise added to the CS after propagation along the nonlinear optical fiber. The proposed methods are compared in terms of error performance, distance reach, and complexity. Furthermore, the effect of chromatic dispersion on the data rate and noise in the nonlinear spectral domain is investigated. It is demonstrated that, for a given sequence of CS symbols, an optimal bandwidth (or symbol rate) can be determined so that the temporal duration of the propagated signal at the end of the fiber is minimized. In effect, the required guard interval between the subsequently transmitted data packets in time is minimized and the effective data rate is significantly enhanced. Moreover, by selecting the proper signaling method and design criteria, a distance reach of 7100 km is reported by only signaling on the CS at a rate of 9.6 Gbps.
Critical interactionism: an upstream-downstream approach to health care reform.
Martins, Diane Cocozza; Burbank, Patricia M
2011-01-01
Currently, per capita health care expenditures in the United States are more than 20% higher than any other country in the world and more than twice the average expenditure for European countries, yet the United States ranks 37th in life expectancy. Clearly, the health care system is not succeeding in improving the health of the US population with its focus on illness care for individuals. A new theoretical approach, critical interactionism, combines symbolic interactionism and critical social theory to provide a guide for addressing health care problems from both an upstream and downstream approach. Concepts of meaning from symbolic interactionism and emancipation from critical perspective move across system levels to inform and reform health care for individuals, organizations, and societies. This provides a powerful approach for health care reform, moving back and forth between the micro and macro levels. Areas of application to nursing practice with several examples (patients with obesity; patients who are lesbian, gay, bisexual, and transgender; workplace bullying and errors), nursing education, and research are also discussed.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at the bit error rate of 1×10^-3, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.
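The property the synchronizer exploits can be checked in a few lines: the aperiodic autocorrelations of a Golay complementary pair sum to zero at every nonzero lag, giving a sidelobe-free correlation peak. A numeric illustration with the standard length-8 pair (illustrative only; the experiment's actual sequences are not given here):

import numpy as np

a = np.array([1, 1, 1, -1, 1, 1, -1, 1])     # standard length-8 Golay pair
b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

def acorr(x):
    return np.correlate(x, x, mode="full")    # aperiodic autocorrelation

print(acorr(a) + acorr(b))                    # 16 at zero lag, 0 everywhere else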
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
Direct bit detection receiver noise performance analysis for 32-PSK and 64-PSK modulated signals
NASA Astrophysics Data System (ADS)
Ahmed, Iftikhar
1987-12-01
Simple two channel receivers for 32-PSK and 64-PSK modulated signals have been proposed which allow digital data (namely bits) to be recovered directly, instead of the traditional approach of symbol detection followed by symbol to bit mappings. This allows for binary rather than M-ary receiver decisions, reduces the amount of signal processing operations and permits parallel recovery of the bits. The noise performance of these receivers, quantified by the Bit Error Rate (BER) under an Additive White Gaussian Noise interference model, is evaluated as a function of Eb/No, the signal-to-noise ratio, and the transmitted phase angles of the signals. The performance results of the direct bit detection receivers (DBDR), when compared to those of conventional phase measurement receivers, demonstrate that DBDRs are optimum in the BER sense. The simplicity of the receiver implementations and the BER of the delivered data make DBDRs attractive for high speed, spectrally efficient digital communication systems.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability.
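A compact sketch of the sampling mechanics (with a hypothetical one-variable logistic model and made-up error magnitudes, since the study's fitted coefficients and GIS layers are not reproduced here): Latin hypercube draws for model and data error are pushed through the logistic link, and the spread of the outputs is the prediction interval.

import numpy as np
from scipy.stats import norm, qmc

u = qmc.LatinHypercube(d=3, seed=1).random(n=1000)      # stratified uniforms
b0 = norm(loc=-2.0, scale=0.3).ppf(u[:, 0])    # intercept, with model error
b1 = norm(loc=0.05, scale=0.01).ppf(u[:, 1])   # slope, with model error
x = norm(loc=40.0, scale=5.0).ppf(u[:, 2])     # explanatory variable, with data error

p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))       # predicted vulnerability probability
print(np.percentile(p, [2.5, 50.0, 97.5]))     # median and 95% prediction interval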
NASA Technical Reports Server (NTRS)
Gejji, Raghvendra R.
1992-01-01
Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
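One way to turn that observation into a working alarm (a sketch with assumed numbers, not the project's fitted model): treat collisions per monitoring interval as approximately Poisson under normal load, and flag counts that random variation would rarely produce.

from math import exp

def poisson_threshold(lam, tail=1e-4):
    # Smallest k such that P(X > k) < tail for X ~ Poisson(lam)
    p_k = exp(-lam)          # P(X = 0)
    cdf, k = p_k, 0
    while 1.0 - cdf >= tail:
        k += 1
        p_k *= lam / k       # recurrence: P(X = k) = P(X = k-1) * lam / k
        cdf += p_k
    return k

print(poisson_threshold(50.0))   # alarm above this count if normal mean load is 50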
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
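As a point of reference for what the continued-fraction method competes against, the brute-force route is to truncate the state space and exponentiate the generator, which is accurate but expensive for large systems; a sketch for a linear birth-death process (rates λn = nλ and μn = nμ; the truncation level is an assumption):

import numpy as np
from scipy.linalg import expm

def transition_matrix(lam, mu, t, n_max=200):
    Q = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        birth = lam * n if n < n_max else 0.0   # truncate at n_max
        death = mu * n
        if n < n_max:
            Q[n, n + 1] = birth
        if n > 0:
            Q[n, n - 1] = death
        Q[n, n] = -(birth + death)
    return expm(Q * t)        # P[i, j] = Pr(X_t = j | X_0 = i)

P = transition_matrix(lam=0.5, mu=0.3, t=2.0)
print(P[10, 8])               # probability of going from 10 to 8 particles in t = 2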
Schubert, Teresa; Badcock, Nicholas; Kohnen, Saskia
2017-10-01
Letter recognition and digit recognition are critical skills for literate adults, yet few studies have considered the development of these skills in children. We conducted a nine-alternative forced-choice (9AFC) partial report task with strings of letters and digits, with typographical symbols (e.g., $, @) as a control, to investigate the development of identity and position processing in children. This task allows for the delineation of identity processing (as overall accuracy) and position coding (as the proportion of position errors). Our participants were students in Grade 1 to Grade 6, allowing us to track the development of these abilities across the primary school years. Our data suggest that although digit processing and letter processing end up with many similarities in adult readers, the developmental trajectories for identity and position processing for the two character types differ. Symbol processing showed little developmental change in terms of identity or position accuracy. We discuss the implications of our results for theories of identity and position coding: modified receptive field, multiple-route model, and lexical tuning. Despite moderate success for some theories, considerable theoretical work is required to explain the developmental trajectories of letter processing and digit processing, which might not be as closely tied in child readers as they are in adult readers. Copyright © 2017 Elsevier Inc. All rights reserved.
Biometric and Emotion Identification: An ECG Compression Based Method.
Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J
2018-01-01
We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart, and it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed to effectively compare ECG records and infer the person identity, as well as emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% of accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems, by the alteration of the templates for training the model.
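A compact sketch of the three-step pipeline (using zlib as a stand-in compressor and synthetic waveforms in place of real ECG records, so only the structure of the method is illustrated):

import zlib
import numpy as np

def quantize(signal, levels=16):
    # Step 1: map the real-valued record to a symbolic (byte) sequence
    lo, hi = signal.min(), signal.max()
    idx = np.clip(((signal - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
    return bytes(idx.tolist())

def conditional_size(query, reference):
    # Step 2: crude conditional compression, i.e. the extra bytes needed
    # for the query when the reference is available as context
    return len(zlib.compress(reference + query)) - len(zlib.compress(reference))

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 4000)
classes = {"person_A": np.sin(2 * np.pi * 1.0 * t),        # synthetic "ECGs"
           "person_B": np.sin(2 * np.pi * 1.3 * t) ** 3}
refs = {k: quantize(v + 0.05 * rng.standard_normal(t.size)) for k, v in classes.items()}
query = quantize(classes["person_B"] + 0.05 * rng.standard_normal(t.size))

# Step 3: 1-NN, assigning the class whose reference compresses the query best
print(min(refs, key=lambda k: conditional_size(query, refs[k])))   # expected: person_B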
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
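The digital half of that job is now routine; a small sketch of estimating both distributions from samples (synthetic Gaussian noise stands in for the random signal, and the 100-sample frame length used to define "peaks" is an assumption):

import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)                 # instantaneous amplitudes
peaks = x.reshape(-1, 100).max(axis=1)             # peak amplitude per 100-sample frame

pdf, edges = np.histogram(x, bins=50, density=True)            # amplitude distribution
peak_pdf, peak_edges = np.histogram(peaks, bins=50, density=True)
print(edges[np.argmax(pdf)], peak_edges[np.argmax(peak_pdf)])  # modes of the two pdfs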
Quantum-state comparison and discrimination
NASA Astrophysics Data System (ADS)
Hayashi, A.; Hashimoto, T.; Horibe, M.
2018-05-01
We investigate the performance of the discrimination strategy in the task of comparing known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is to infer that the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to occur. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates between minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except in the minimum-error case.
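For the two-pure-state setting, the individual measurements in the discrimination strategy are the standard minimum-error (Helstrom) measurements. As a hedged illustration of the quantities involved (the standard bound, not the paper's error-margin interpolation), the Helstrom error probability follows directly from the state overlap:

```python
import numpy as np

def helstrom_error(psi1, psi2, p1=0.5):
    # Minimum-error probability for discriminating two pure states
    # given with prior probabilities p1 and 1 - p1.
    p2 = 1.0 - p1
    overlap = abs(np.vdot(psi1, psi2)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p1 * p2 * overlap))

# Example: |0> vs |+> with equal priors -> error ~ 0.146
psi0 = np.array([1.0, 0.0])
psi_plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(helstrom_error(psi0, psi_plus))
```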
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
Probability Theory, Not the Very Guide of Life
ERIC Educational Resources Information Center
Juslin, Peter; Nilsson, Hakan; Winman, Anders
2009-01-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive…
Development of a Methodology to Optimally Allocate Visual Inspection Time
1989-06-01
The Alternative Model takes into account the costs of inspection errors, including the probability of worker error, the probability of inspector error, and the cost of system error; its purpose is to avoid costly mistakes while meeting inspection requirements. Paired comparisons of error phenomena are elicited from operational personnel.
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
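The spread of the integration error over an ensemble of point sets can be observed empirically. Below is a minimal sketch assuming scipy's quasi-random generators and a toy integrand (both my choices, not the paper's setup):

```python
import numpy as np
from scipy.stats import qmc

def integration_errors(f, dim, n_points, n_sets, exact, rng):
    # Errors over an ensemble of pseudo-random and scrambled Sobol point sets.
    mc, q = [], []
    for _ in range(n_sets):
        x = rng.random((n_points, dim))
        mc.append(f(x).mean() - exact)
        sob = qmc.Sobol(d=dim, scramble=True, seed=rng)
        q.append(f(sob.random(n_points)).mean() - exact)
    return np.array(mc), np.array(q)

rng = np.random.default_rng(1)
f = lambda x: np.prod(3 * x**2, axis=1)   # integral over [0,1]^dim equals 1
mc_err, qmc_err = integration_errors(f, dim=3, n_points=1024,
                                     n_sets=200, exact=1.0, rng=rng)
print(mc_err.std(), qmc_err.std())        # QMC spread is typically much smaller
```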
On the Discriminant Analysis in the 2-Populations Case
NASA Astrophysics Data System (ADS)
Rublík, František
2008-01-01
The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may nevertheless yield high values of the maximum error probability. From this point of view, the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed the WR method, and the choice of its weights is discussed. The mentioned methods are also compared with the K-nearest-neighbours classification rule.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
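For the kernel estimator, the choice of the scaling factor (bandwidth) is the crux. A minimal sketch using scipy's Gaussian KDE with Scott's rule, an automatic sample-based choice in the spirit of the abstract though not its interactive algorithm, measuring integrated squared error against a known density:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(size=500)

# Kernel estimator; the scaling factor is chosen from the sample itself
# via Scott's rule (the scipy default).
kde = gaussian_kde(sample, bw_method="scott")

grid = np.linspace(-4, 4, 401)
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
ise = np.trapz((kde(grid) - true_pdf) ** 2, grid)  # integrated squared error
print(f"ISE of the kernel estimate: {ise:.2e}")
```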
Conflict Probability Estimation for Free Flight
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
1996-01-01
The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction, however, and becomes less certain the farther in advance the prediction is made. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
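A sketch of the covariance-combination step, with the analytical integration replaced by Monte Carlo (which the paper itself uses for validation); the geometry and numbers below are hypothetical:

```python
import numpy as np

def conflict_probability(rel_pos, cov1, cov2, radius, n=200_000, seed=0):
    # Combine the two aircraft prediction-error covariances into a single
    # covariance of the relative position, then estimate the probability
    # that separation falls below `radius` by Monte Carlo.
    rng = np.random.default_rng(seed)
    cov = np.asarray(cov1) + np.asarray(cov2)
    samples = rng.multivariate_normal(rel_pos, cov, size=n)
    return np.mean(np.linalg.norm(samples, axis=1) < radius)

# Two aircraft predicted 6 nm apart, 5 nm separation standard:
p = conflict_probability([6.0, 0.0], np.diag([1.0, 0.25]),
                         np.diag([1.0, 0.25]), radius=5.0)
print(f"estimated conflict probability: {p:.3f}")
```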
Memorabeatlia: a naturalistic study of long-term memory.
Hyman, I E; Rubin, D C
1990-03-01
Seventy-six undergraduates were given the titles and first lines of Beatles' songs and asked to recall the songs. Seven hundred and four different undergraduates were cued with one line from each of 25 Beatles' songs and asked to recall the title. The probability of recalling a line was best predicted by the number of times a line was repeated in the song and how early the line first appeared in the song. The probability of cuing to the title was best predicted by whether the line shared words with the title. Although the subjects recalled only 21% of the lines, there were very few errors in recall, and the errors rarely violated the rhythmic, poetic, or thematic constraints of the songs. Acting together, these constraints can account for the near verbatim recall observed. Fourteen subjects, who transcribed one song, made fewer and different errors than the subjects who had recalled the song, indicating that the errors in recall were not primarily the result of errors in encoding.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios. This test technique gains importance as modern receivers provide soft decision symbol synchronization, as radio links are challenged to push more data and more protocol overhead through noisier channels, and as software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
NASA Technical Reports Server (NTRS)
Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel
2012-01-01
Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with a photon-counting detector has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provided no essential information-theoretic gains over single-pulse PPM.
NASA Astrophysics Data System (ADS)
Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, a frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion to a block of OFDM symbols, which can improve the bit error rate (BER) performance in a severely frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation scheme suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
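A minimal sketch of the pilot-assisted estimate and the one-tap MMSE weights, assuming a cyclic-prefixed block and ignoring the DS-CDMA spreading/despreading stages (simplifications of mine, not the paper's full scheme):

```python
import numpy as np

def mmse_fde(rx_block, pilot_rx, pilot_tx, noise_var):
    # Channel gain per frequency bin, estimated by removing the pilot
    # modulation in the frequency domain.
    H = np.fft.fft(pilot_rx) / np.fft.fft(pilot_tx)
    R = np.fft.fft(rx_block)
    # One-tap MMSE equalizer weights.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)
    return np.fft.ifft(W * R)
```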
SimCheck: An Expressive Type System for Simulink
NASA Technical Reports Server (NTRS)
Roy, Pritam; Shankar, Natarajan
2010-01-01
MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type-checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error correction (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
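The throughput advantage can be checked by counting symbols at the same pulse duty cycle; the arithmetic below follows the abstract's 16C4PPM-versus-QPPM comparison:

```python
from math import comb, log2

# QPPM sends one pulse in 4 slots; 16C4PPM sends 4 pulses in 16 slots,
# so both have a 1-in-4 pulse duty cycle.
qppm_bits = log2(4)            # 2 bits per 4-slot symbol
cppm_bits = log2(comb(16, 4))  # log2(1820) ~ 10.83 bits per 16-slot symbol

# Normalize to bits per slot for a fair throughput comparison.
print(f"QPPM:    {qppm_bits / 4:.3f} bits/slot")
print(f"16C4PPM: {cppm_bits / 16:.3f} bits/slot")  # ~35% higher throughput
```

That extra ~35% of raw throughput is what funds the FEC redundancy while still matching the uncoded QPPM data rate.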
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10(exp -7) at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently in progress.
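The rate cost of the variable-redundancy profile is small, as a quick calculation over one depth-8 interleaved block shows:

```python
# Information symbols per depth-8 interleaved block, from the redundancy
# profile 64,20,20,20,64,20,20,20 applied to 255-symbol codewords:
info = 2 * (191 + 3 * 235)
total = 8 * 255
print(f"variable-redundancy outer code rate: {info / total:.3f}")  # ~0.878

# Baseline (255,223) outer code at constant redundancy 32:
print(f"baseline outer code rate: {8 * 223 / total:.3f}")          # ~0.875
```

The two deep codewords protect the rest of the block (via erasure declarations and constrained redecoding) at essentially no rate penalty.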
Cruz, Gênesis Vivianne; Pereira, Wilza Rocha
2013-01-01
The aim of this study was to investigate the different configurations of violence in pedagogical relations between teachers and students in higher education, from the perspective of the theory of symbolic violence and symbolic power. Twelve interviews were conducted with students from six undergraduate courses at a higher education institution; content analysis was used to interpret the data. It was found that violence takes forms ranging from the most subtle to the most noticeable and that, although violence was present in the pedagogical processes of the context studied, it was not fully perceived, probably because of the reproduction of the symbolic order, socially constructed and internalized by teachers and students. It is considered that teaching practice needs to be improved in order to make classrooms democratic spaces and to have students share responsibility for the pursuit of knowledge. We conclude that violence in pedagogical relationships produces certain effects, changes and consequences, both immediate and delayed, which can be minimized.
Giraldo, Beatriz F; Rodriguez, Javier; Caminal, Pere; Bayes-Genis, Antonio; Voss, Andreas
2015-01-01
Cardiovascular diseases are the leading cause of death in developed countries. Using electrocardiographic (ECG), blood pressure (BP) and respiratory flow signals, we obtained parameters for classifying cardiomyopathy patients. 42 patients with ischemic (ICM) and dilated (DCM) cardiomyopathies were studied. The left ventricular ejection fraction (LVEF) was used to stratify patients into low risk (LR: LVEF > 35%, 14 patients) and high risk (HR: LVEF ≤ 35%, 28 patients) of heart attack. RR, SBP and TTot time series were extracted from the ECG, BP and respiratory flow signals, respectively. The time series were transformed to a binary space and then analyzed using Joint Symbolic Dynamics with a word length of three, characterizing them by the probability of occurrence of the words. Extracted parameters were then reduced using correlation and statistical analysis. Principal component analysis and support vector machine methods were applied to characterize the cardiorespiratory and cardiovascular interactions in ICM and DCM cardiomyopathies, obtaining an accuracy of 85.7%.
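A minimal sketch of the joint symbolic dynamics step, assuming a simple increase/decrease binarization (the exact binary transform used in the paper may differ):

```python
import numpy as np
from collections import Counter

def word_distribution(x, y, word_len=3):
    # Transform each series to a binary symbol sequence (1 = increase),
    # then estimate the probability of occurrence of joint words.
    sx = (np.diff(x) > 0).astype(int)
    sy = (np.diff(y) > 0).astype(int)
    n = min(len(sx), len(sy)) - word_len + 1
    words = Counter(
        (tuple(sx[i:i + word_len]), tuple(sy[i:i + word_len]))
        for i in range(n)
    )
    return {w: c / n for w, c in words.items()}
```

Applied to pairs such as (RR, TTot) or (RR, SBP), the resulting word probabilities are the features that feed the correlation filtering, PCA, and SVM stages.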
Why Do White Americans Oppose Race-Targeted Policies? Clarifying the Impact of Symbolic Racism
Rabinowitz, Joshua L.; Sears, David O.; Sidanius, Jim; Krosnick, Jon A.
2009-01-01
Measures of symbolic racism (SR) have often been used to tap racial prejudice toward Blacks. However, given the wording of questions used for this purpose, some of the apparent effects on attitudes toward policies to help Blacks may instead be due to political conservatism, attitudes toward government, and/or attitudes toward redistributive government policies in general. Using data from national probability sample surveys and an experiment, we explored whether SR has effects even when controlling for these potential confounds and whether its effects are specific to policies involving Blacks. Holding constant conservatism and attitudes toward limited government, SR predicted Whites' opposition to policies designed to help Blacks and more weakly predicted attitudes toward social programs whose beneficiaries were racially ambiguous. An experimental manipulation of policy beneficiaries revealed that SR predicted policy attitudes when Blacks were the beneficiary but not when women were. These findings are consistent with the claim that SR's association with racial policy preferences is not due to these confounds.
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
Regulation of the Two Delta Crystallin Genes during Lens Development in the Chicken Embryo
1991-08-22
New spatial diversity equalizer based on PLL
NASA Astrophysics Data System (ADS)
Rao, Wei
2011-10-01
A new spatial diversity equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE combines an equal-gain combining technique based on the well-known blind equalization constant modulus algorithm (CMA) with a PLL. Compared with a conventional SDE, the proposed SDE has not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.
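The CMA core of such an equalizer is compact; below is a minimal single-channel sketch (the equal-gain diversity combining and the PLL stage are omitted, and the tap count and step size are illustrative):

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3, r2=1.0):
    # Constant modulus algorithm: drive |y|^2 toward the constant r2
    # without any training sequence (blind equalization).
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y = np.zeros(len(x) - n_taps, dtype=complex)
    for k in range(len(y)):
        u = x[k:k + n_taps][::-1]             # regressor (most recent first)
        y[k] = np.dot(w, u)
        e = y[k] * (r2 - np.abs(y[k]) ** 2)   # CMA error term
        w += mu * e * np.conj(u)              # stochastic-gradient update
    return y, w
```

Because the CMA cost is phase-blind, the equalizer output carries an arbitrary rotation, which is exactly the residual the PLL in the proposed SDE is there to remove.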
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1984-01-01
Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory and; (10) performance analyzer for the pisces system.
Too much noise on the dance floor: Intra- and inter-dance angular error in honey bee waggle dances.
Schürch, Roger; Couvillon, Margaret J
2013-01-01
Successful honey bee foragers communicate where they have found a good resource with the waggle dance, a symbolic language that encodes a distance and direction. Both of these components are repeated several times (1 to > 100) within the same dance. Additionally, both these components vary within a dance. Here we discuss some causes and consequences of intra-dance and inter-dance angular variation and advocate revisiting von Frisch and Lindauer's earlier work to gain a better understanding of honey bee foraging ecology.
Fusion of Scores in a Detection Context Based on Alpha Integration.
Soriano, Antonio; Vergara, Luis; Ahmed, Bouziane; Salazar, Addisson
2015-09-01
We present a new method for fusing scores corresponding to different detectors (two-hypotheses case). It is based on alpha integration, which we have adapted to the detection context. Three optimization methods are presented: least mean square error, maximization of the area under the ROC curve, and minimization of the probability of error. Gradient algorithms are proposed for the three methods. Different experiments with simulated and real data are included. Simulated data consider the two-detector case to illustrate the factors influencing alpha integration and demonstrate the improvements obtained by score fusion with respect to individual detector performance. Two real data cases have been considered. In the first, multimodal biometric data have been processed. This case is representative of scenarios in which the probability of detection is to be maximized for a given probability of false alarm. The second case is the automatic analysis of electroencephalogram and electrocardiogram records with the aim of reproducing the medical expert detections of arousal during sleeping. This case is representative of scenarios in which probability of error is to be minimized. The general superior performance of alpha integration verifies the interest of optimizing the fusing parameters.
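Alpha integration fuses the detector scores through a weighted alpha-mean; a minimal sketch of that mean (the gradient optimization of alpha and the weights under the three criteria, which is the paper's contribution, is not shown):

```python
import numpy as np

def alpha_mean(scores, weights, alpha):
    # Weighted alpha-mean of scores in (0, 1].
    s = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    if np.isclose(alpha, 1.0):           # limit case: weighted geometric mean
        return float(np.exp(np.sum(w * np.log(s))))
    p = (1.0 - alpha) / 2.0
    return float(np.sum(w * s ** p) ** (1.0 / p))

# alpha = -1 recovers the weighted arithmetic mean; large alpha
# approaches the minimum of the scores.
print(alpha_mean([0.9, 0.4], [0.5, 0.5], alpha=-1.0))  # 0.65
```

Sweeping alpha thus interpolates between familiar fusion rules, which is what makes it a useful single parameter to optimize against LMSE, AUC, or error probability.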
Learn-as-you-go acceleration of cosmological parameter estimates
NASA Astrophysics Data System (ADS)
Aslanyan, Grigor; Easther, Richard; Price, Layne C.
2015-09-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
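The control flow is easy to sketch: try the emulator, check its own error estimate, and fall back to the exact likelihood while growing the training set. The class below is a toy nearest-neighbor version under stated assumptions, not the Cosmo++ implementation (its emulator and error model are more sophisticated, and it propagates the emulation error into the posteriors):

```python
import numpy as np

class LearnAsYouGo:
    """Surrogate with an exact-likelihood fallback (illustrative sketch)."""

    def __init__(self, exact_loglike, tol, k=5):
        self.exact, self.tol, self.k = exact_loglike, tol, k
        self.X, self.y = [], []           # training set, grown on the fly

    def __call__(self, theta):
        theta = np.asarray(theta, dtype=float)
        if len(self.X) >= self.k:
            d = np.linalg.norm(np.array(self.X) - theta, axis=1)
            idx = np.argsort(d)[: self.k]
            est = np.mean([self.y[i] for i in idx])
            err = np.std([self.y[i] for i in idx])   # crude error estimate
            if err < self.tol:            # emulate only when reliable
                return est
        val = self.exact(theta)           # otherwise: exact call, and learn
        self.X.append(theta)
        self.y.append(val)
        return val
```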
Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team
We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to study the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets, and results show American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets having high asynchrony the entropy values do not decrease with increasing scale factor; sometimes they tend to increase. Thus both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
Symbolics of the constellations of sagittarius and centaurus in russian traditional culture
NASA Astrophysics Data System (ADS)
Bagdasarov, R.
2001-12-01
Centaurus falls into the category of 'imaginary animals'. The Russian tradition used not only the symbol Sgr (a result of its acquaintance with the circle of the Zodiac), but also the symbol Cen, which fact, as we shall demonstrate, is evidence of certain mythological-astronomical conceptions. Both the constellations Sagittarius (Sgr) and Centaurus (Cen) are usually represented as versions of the picture of a fantastic being, a Centaur, shaped as a man from head to waist, and as an animal, mostly a horse, from waist down. 'Centaurus' (from the Greek words for 'kill' and 'bull') means 'bull killer', and is probably related to the opposition of the zodiacal constellations Taurus and Sagittarius: when the latter begins to rise onto the night sky, the former disappears completely from view. Sagittarius is represented at ancient monuments related to astronomy as a centaur holding a bow and pointing at certain stars. The constellation of Centaurus is also symbolised by a centaur, but holding not a bow, but a staff or a spear in one hand and an 'animal of sacrifice' in the other (Higinus, Astronomica, III, 37, 1; Chernetsov, 1975, Figure 1). The attributes stand for the Peliases Spear (The Mythological Dictionary, 1991), depicted in astrological maps as The Spear of Centaurus, The Wolf (Lupus), the Panther or the Beast (Flammarion, 1994).
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all-digital phase-locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements, their functions, and the phase-lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities for the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase error states in modulo-2π form constitute a finite-state Markov chain, which enables the calculation of steady-state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
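For a linear source-receptor relation with Gaussian errors, the posterior over the emission rates is itself Gaussian and can be written in closed form. A minimal sketch, folding measurement and dispersion-model errors into a single noise variance (the paper treats the two error sources more carefully):

```python
import numpy as np

def emission_posterior(A, c, sigma_noise, prior_mean, prior_cov):
    # Linear source-receptor model c = A q + noise, Gaussian prior on the
    # emission rates q: the conjugate posterior is Gaussian.
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    noise_prec = np.eye(len(c)) / sigma_noise**2
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(A.T @ noise_prec @ A + prior_prec)
    post_mean = post_cov @ (A.T @ noise_prec @ c
                            + prior_prec @ np.asarray(prior_mean))
    return post_mean, post_cov
```

Here `A` is the dispersion-model matrix mapping unit emissions from each source to each sensor; the posterior covariance directly quantifies how well a given source-sensor geometry constrains each source.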
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the Cross-Entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that implements a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates to during training, and the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates to after being trained to minimize the Cross-Entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection, maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
A multistate dynamic site occupancy model for spatially aggregated sessile communities
Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi
2017-01-01
Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
On the Determinants of the Conjunction Fallacy: Probability versus Inductive Confirmation
ERIC Educational Resources Information Center
Tentori, Katya; Crupi, Vincenzo; Russo, Selena
2013-01-01
Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such…
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise, in a linearized approximation, the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.
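The underlying computation is an integral of the error-ellipse density over a disk around the point of interest. A Monte Carlo version takes only a few lines (the operational technique performs the integration deterministically; the geometry below is hypothetical):

```python
import numpy as np

def prob_within_radius(stroke_mean, stroke_cov, point, radius,
                       n=1_000_000, seed=7):
    # Integrate the bivariate Gaussian (lightning location error ellipse)
    # over a disk of `radius` around `point` by Monte Carlo sampling.
    rng = np.random.default_rng(seed)
    strokes = rng.multivariate_normal(stroke_mean, stroke_cov, size=n)
    dist = np.linalg.norm(strokes - np.asarray(point), axis=1)
    return np.mean(dist <= radius)

# Ellipse centered 0.8 km from the asset, semi-axes 0.5 and 0.25 km,
# risk radius 1 km:
cov = np.diag([0.5**2, 0.25**2])
print(prob_within_radius([0.8, 0.0], cov, [0.0, 0.0], radius=1.0))
```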
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position and flaw inspection were among the sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation and added error in estimating irradiated properties caused by the trend curve correlation error.
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia S.; Fuchs, Lynn S.
2017-01-01
The three purposes of this study were to (a) describe fraction ordering errors among at-risk fourth grade students, (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors, and (c) examine the effect of students' ability to explain comparing problems on the probability…
Human factors analysis for a 2D enroute moving map application
NASA Astrophysics Data System (ADS)
Pschierer, Christian; Wipplinger, Patrick; Schiefele, Jens; Cromer, Scot; Laurin, John; Haffner, Skip
2005-05-01
The paper describes flight trials performed in Centennial, CO with a Piper Cheyenne from Marinvent. Six pilots flew the Cheyenne in twelve enroute segments between Denver Centennial and Colorado Springs. Two different settings (paper chart, enroute moving map) were evaluated in randomized order. The flight trial goal was to evaluate the objective performance of pilots compared among the different settings. As dependent variables, positional accuracy and situational awareness probe (SAP) were measured. Analysis was conducted by an ANOVA test. In parallel, all pilots answered subjective Cooper-Harper, NASA TLX, situation awareness rating technique (SART), Display Readability Rating and debriefing questionnaires. The tested enroute moving map application has Jeppesen chart compliant symbologies for high-enroute and low-enroute use. It has a briefing mode in which all information found on today's enroute paper chart, together with a loaded flight plan, is displayed in a north-up orientation. The execution mode displays a loaded flight plan routing together with only pertinent flight-route-relevant information in either a track-up or north-up orientation. Depiction of an own-ship symbol is possible in both modes. All text and symbols are deconflicted. Additional information can be obtained by clicking on symbols. Terrain and obstacle data can be displayed for enhanced situation awareness. The results show that pilots flying the 2D enroute moving map display perform no worse than pilots with conventional systems. Flight technical error and workload are equivalent or lower, and situational awareness is higher than with conventional paper charts.
Chaos-based wireless communication resisting multipath effects.
Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso
2017-09-01
In additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieve high bit transmission rate and low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
Objective Analysis of Oceanic Data for Coast Guard Trajectory Models Phase II
1997-12-01
Whether points are marked as outliers depends on the desired probability of false alarm (Pfa), the probability of marking a valid point as an outlier. The estimator is constructed to minimize the mean-squared prediction error of the grid point estimate under the constraint that the estimate is unbiased: the prediction error is minimized subject to unbiasedness constraints on the weights, with the weights applied to the primary variable summing to one and the weights applied to the secondary variable summing to zero.
On the Effects of a Spacecraft Subcarrier Unbalanced Modulator
NASA Technical Reports Server (NTRS)
Nguyen, Tien Manh
1993-01-01
This paper presents mathematical models with associated analysis of the deleterious effects which a spacecraft's subcarrier unbalanced modulator has on the performance of a phase-modulated residual carrier communications link. The undesired spectral components produced by the phase and amplitude imbalances in the subcarrier modulator can cause (1) potential interference to the carrier tracking and (2) degradation in the telemetry bit signal-to-noise ratio (SNR). A suitable model for the unbalanced modulator is developed and the threshold levels of undesired components that fall into the carrier tracking loop are determined. The distribution of the carrier phase error caused by the additive white Gaussian noise (AWGN) and the undesired component at the residual RF carrier is derived for the limiting cases. Further, this paper analyses the telemetry bit signal-to-noise ratio degradations due to undesirable spectral components as well as the carrier tracking phase error induced by phase and amplitude imbalances. Numerical results which indicate the sensitivity of the carrier tracking loop and the telemetry symbol-error rate (SER) to various parameters of the models are also provided as a tool in the design of the subcarrier balanced modulator.
Students’ mathematical representations on secondary school in solving trigonometric problems
NASA Astrophysics Data System (ADS)
Istadi; Kusmayadi, T. A.; Sujadi, I.
2017-06-01
This research aimed to analyze secondary school students' mathematical representations in solving trigonometric problems. The research used a qualitative method. The participants were 4 students with high knowledge competence, taken from 20 students of the 12th natural-science grade at SMAN-1 Kota Besi, Central Kalimantan. Data validation was carried out using time triangulation, and data analysis used the Huberman and Miles stages. The results showed that the students' answers were not only based on the given figure but also used the definition of the trigonometric ratio in their verbal representations. On the other hand, they were able to determine the positions of the objects to be observed. However, they failed to determine the position of the angle of depression in the sketches made in their visual representations, and this failure caused errors in setting up the mathematical equations. As a result, they were unable to use the mathematical equations properly in their symbolic representations. From this research, we recommend attention to translations between mathematical problems and mathematical representations, as well as translations among mathematical representations (verbal, visual, and symbolic), in learning mathematics in the classroom.
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems, using symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
Asarnow, R F; Cromwell, R L; Rennick, P M
1978-10-01
Twenty-four male schizophrenics, 12 (SFH) with schizophrenia in the immediate family and 12 (SNFH) with no evidence of schizophrenia in the family background, and 24 male control subjects, 12 highly educated (HEC), and 12 minimally educated (MEC), were assessed for premorbid social adjustment and were administered the Digit Symbol Substitution Test, a size estimation task, and the EEG average evoked response (AER) at different levels of stimulus intensity. As predicted from the stimulus redundancy formulation, the SFH patients were poorer in premorbid adjustment, were less often paranoid, functioned at a lower level of cognitive efficiency (poor digit symbol and greater absolute error on size estimation), were more chronic, and, in some respects, had size estimation indices of minimal scanning. Contrary to prediction, the SFH group had the strongest and most sustained augmenting response on AER, while the SNFH group shifted from an augmenting to a reducing pattern of response. The relationship between an absence of AER reducing and the presence of cognitive impairment in the SFH group was a major focus of discussion.
Dube, William V.; Wilkinson, Krista M.
2014-01-01
This paper examines the phenomenon of “stimulus overselectivity” or “overselective attention” as it may impact AAC training and use in individuals with intellectual disabilities. Stimulus overselectivity is defined as an atypical limitation in the number of stimuli or stimulus features within an image that are attended to and subsequently learned. Within AAC, the term “stimulus” could refer to symbols or line drawings on speech generating devices, drawings or pictures on low-technology systems, and/or the elements within visual scene displays. In this context, overselective attention may result in unusual or uneven error patterns such as confusion between two symbols that share a single feature or difficulties with transitioning between different types of hardware. We review some of the ways that overselective attention has been studied behaviorally. We then examine how eye tracking technology allows a glimpse into some of the behavioral characteristics of overselective attention. We describe an intervention approach, differential observing responses, that may reduce or eliminate overselectivity, and we consider this type of intervention as it relates to issues of relevance for AAC. PMID:24773053
Effects of structural error on the estimates of parameters of dynamical systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts of each metric (e.g., forecast error, scaled error) are also provided. To compare models, the package provides a generic skill score and a percent-better measure. Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
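To make two of the metric families above concrete, here is a standalone sketch of the underlying formulas; it does not use or reproduce the PyForecastTools API, and the example numbers are arbitrary.

    import numpy as np

    def median_symmetric_accuracy(obs, pred):
        """Median symmetric accuracy (percent), from median |log(pred/obs)|."""
        q = np.median(np.abs(np.log(np.asarray(pred) / np.asarray(obs))))
        return 100.0 * (np.exp(q) - 1.0)

    def table2x2_metrics(hits, misses, false_alarms, correct_negatives):
        """A few binary-classification scores from a 2x2 contingency table."""
        a, b, c, d = hits, false_alarms, misses, correct_negatives
        pod = a / (a + c)                  # probability of detection
        far = b / (a + b)                  # false alarm ratio
        n = a + b + c + d
        expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
        heidke = (a + d - expected) / (n - expected)   # Heidke skill score
        return pod, far, heidke

    print(median_symmetric_accuracy([1.0, 2.0, 4.0], [1.1, 1.8, 5.0]))
    print(table2x2_metrics(hits=50, misses=10, false_alarms=20, correct_negatives=120))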
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
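The Parzen-window entropy criterion at the heart of this approach is easy to sketch. The toy example below, which is not the paper's controller, estimates the Renyi quadratic entropy of the residuals of a linear predictor with a Gaussian kernel and minimizes it numerically; the model, disturbance, and kernel width are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def quadratic_error_entropy(e, sigma=0.5):
        """Parzen estimate of the Renyi quadratic entropy H2(e) = -log V(e)."""
        diff = e[:, None] - e[None, :]
        kernel = np.exp(-diff**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
        return -np.log(kernel.mean())      # kernel.mean() is the information potential

    rng = np.random.default_rng(1)
    x = rng.normal(size=(200, 2))
    w_true = np.array([1.5, -0.7])
    y = x @ w_true + rng.laplace(scale=0.3, size=200)   # non-Gaussian disturbance

    # Choose model parameters by minimizing the entropy of the prediction errors.
    res = minimize(lambda w: quadratic_error_entropy(y - x @ w),
                   x0=np.zeros(2), method="Nelder-Mead")
    print("entropy-minimizing weights:", res.x.round(3), "; true:", w_true)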
Students’ Mathematical Literacy in Solving PISA Problems Based on Keirsey Personality Theory
NASA Astrophysics Data System (ADS)
Masriyah; Firmansyah, M. H.
2018-01-01
This is descriptive qualitative research. The purpose is to describe students' mathematical literacy in solving PISA problems on space and shape content, based on Keirsey personality theory. The subjects were four grade-eight junior high school students with guardian, artisan, rational, or idealist personalities. Data were collected through tests and interviews; the Keirsey personality test, the PISA test, and the interviews were analyzed. The mathematical literacy profile of each subject is described as follows. In formulating, the guardian subject identified the mathematical aspects as the formula for the area of a rectangle and the side lengths, and the significant variables as the terms/conditions in the problem and the formula from previously encountered questions; he translated these into mathematical language as measurements and arithmetic operations. In employing, he devised and implemented strategies using ease of calculation on the area-subtraction principle; he declared the result true, but with a partly incorrect reason, and did not use or switch between different representations. In interpreting, he stated the result as the area of the house floor and judged its reasonableness by estimating the measurements. In formulating, the artisan subject identified the mathematical aspects as the plane figure and side lengths, and the significant variables as the solution procedures for both the daily problem and previously encountered questions; he translated these into mathematical language as measurements, variables, and arithmetic operations, as well as symbolic representation. In employing, he devised and implemented strategies comparing two designs; he declared the result true without giving a reason, and used symbolic representation only. In interpreting, he stated the result as the floor area of the house and judged its reasonableness by estimating the measurements. In formulating, the rational subject identified the mathematical aspects as the scale and side lengths, and the significant variables as the solution strategy from previously encountered questions; he translated these into mathematical language as measurements, variables, and arithmetic operations, as well as symbolic and graphic representations. In employing, he devised and implemented strategies forming an additional plane on the area-subtraction principle; he declared the result true according to the calculation process, and used and switched between symbolic and graphic representations. In interpreting, he stated the result as the area of the house including the terrace and walls and judged its reasonableness by estimating the measurements. In formulating, the idealist subject identified the mathematical aspects as the side lengths, and the significant variables as the terms/conditions in the problem; he translated these into mathematical language as measurements, variables, and arithmetic operations, as well as symbolic and graphic representations. In employing, he devised and implemented strategies using trial and error and two designs in the process of finding solutions; he declared the result true according to the use of the two solution designs, and used and switched between symbolic and graphic representations. In interpreting, he stated the result as the floor area of the house and judged its reasonableness by estimating the measurements.
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by one recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are achieved for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement with boresight included or not, despite the values of P and Q. The performance enhancement owing to the increase of cooperative path (P) is more evident with nonzero boresight than that with zero boresight (jitter only), whereas the performance deterioration because of the increasing hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of ABER and outage probability expressions.
Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls
Chae, Jeongsook; Jin, Yong; Sung, Yunsick
2018-01-01
Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To utilize wearable devices accurately for remote robot control, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device such as the Myo can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional data types. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimate, so recognition accuracy can be improved compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. With the traditional orientation-only approach, the sum of the motion estimation errors is 37.3%; with EMG data alone, the error is likewise 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces the motion estimation error by about 12 percentage points. PMID:29324641
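The weighted Bayesian fusion step can be sketched in a few lines. The motion labels, per-modality posteriors, and weights below are illustrative stand-ins for the orientation and EMG models in the paper.

    import numpy as np

    motions = ["wave_in", "wave_out", "fist", "spread", "rest"]

    # Posterior probability of each motion from each modality (assumed given,
    # e.g. from per-modality Bayesian classifiers trained on labeled data).
    p_orientation = np.array([0.40, 0.25, 0.15, 0.10, 0.10])
    p_emg = np.array([0.20, 0.45, 0.15, 0.10, 0.10])

    w_orientation, w_emg = 0.6, 0.4        # modality weights (assumed)

    # Weighted log-linear combination, renormalized to a distribution.
    combined = p_orientation**w_orientation * p_emg**w_emg
    combined /= combined.sum()

    print(motions[int(np.argmax(combined))], combined.round(3))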
Inomata, Takeshi; Triadan, Daniela; Pinzón, Flory; Burham, Melissa; Ranchos, José Luis; Aoyama, Kazuo; Haraguchi, Tsuyoshi
2018-01-01
Although the application of LiDAR has made significant contributions to archaeology, LiDAR only provides a synchronic view of the current topography. An important challenge for researchers is to extract diachronic information over typically extensive LiDAR-surveyed areas in an efficient manner. By applying an architectural chronology obtained from intensive excavations at the site center and by complementing it with surface collection and test excavations in peripheral zones, we analyze LiDAR data over an area of 470 km2 to trace social changes through time in the Ceibal region, Guatemala, of the Maya lowlands. We refine estimates of structure counts and populations by applying commission and omission error rates calculated from the results of ground-truthing. Although the results of our study need to be tested and refined with additional research in the future, they provide an initial understanding of social processes over a wide area. Ceibal appears to have served as the only ceremonial complex in the region during the transition to sedentism at the beginning of the Middle Preclassic period (c. 1000 BC). As a more sedentary way of life was accepted during the late part of the Middle Preclassic period and the initial Late Preclassic period (600-300 BC), more ceremonial assemblages were constructed outside the Ceibal center, possibly symbolizing the local groups' claim to surrounding agricultural lands. From the middle Late Preclassic to the initial Early Classic period (300 BC-AD 300), a significant number of pyramidal complexes were probably built. Their high concentration in the Ceibal center probably reflects increasing political centralization. After a demographic decline during the rest of the Early Classic period, the population in the Ceibal region reached the highest level during the Late and Terminal Classic periods, when dynastic rule was well established (AD 600-950).
Ka-Band Phased Array System Characterization
NASA Technical Reports Server (NTRS)
Acosta, R.; Johnson, S.; Sands, O.; Lambert, K.
2001-01-01
Phased Array Antennas (PAAs) using patch-radiating elements are projected to transmit data at rates several orders of magnitude higher than currently offered with reflector-based systems. However, there are a number of potential sources of degradation in the Bit Error Rate (BER) performance of the communications link that are unique to PAA-based links. Short spacing of radiating elements can induce mutual coupling between them, long spacing can induce grating lobes, modulo-2-pi phase errors can add to Inter-Symbol Interference (ISI), and the phase shifters and power-divider network introduce losses into the system. This paper describes efforts underway to test and evaluate the effects of the performance-degrading features of phased-array antennas when used in a high-data-rate modulation link. The tests and evaluations described here uncover the interaction between the electrical characteristics of a PAA and the BER performance of a communication link.
Demodulation Algorithms for OFDM Signals in Time- and Frequency-Scattering Channels
NASA Astrophysics Data System (ADS)
Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.
2016-06-01
We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively use the time scattering due to fast fading of the signal are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy in estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure better bit-error-rate performance than the coherent OFDM-signal detectors with absolute phase-shift keying.
QPPM receiver for free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, J. M.; Mohamed, J. H.; Nagy, L. A.; Lizanich, P. J.; Mortensen, D. J.
1994-01-01
A prototype receiver developed at NASA Lewis Research Center for direct detection and demodulation of quaternary pulse position modulated (QPPM) optical carriers is described. The receiver enables dual-channel communications at 325 megabits per second (Mbps) per channel. The optical components of the prototype receiver are briefly described. The electronic components, comprising the analog signal conditioning, slot clock recovery, matched filter, and maximum-likelihood data recovery circuits, are described in more detail. A novel digital symbol clock recovery technique is presented as an alternative to conventional analog methods. Simulated link degradations, including noise and pointing-error-induced amplitude variations, are applied. The bit-error-rate performance of the electronic portion of the prototype receiver under varying optical signal-to-noise power ratios is found to be within 1.5 dB of theory. Implementation of the receiver as a hybrid of analog and digital application-specific integrated circuits is planned.
Modeling habitat dynamics accounting for possible misclassification
Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.
2012-01-01
Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more difficult and more challenging, with error in classification being confused with change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
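The bias described here can be demonstrated with a toy simulation: naive transition estimates computed from misclassified maps inflate habitat turnover. The transition and confusion matrices below are illustrative assumptions, not the study's estimates.

    import numpy as np

    rng = np.random.default_rng(2)
    psi = np.array([[0.9, 0.1],            # true habitat transition probabilities
                    [0.2, 0.8]])
    conf = np.array([[0.85, 0.15],         # P(observed class | true class)
                     [0.15, 0.85]])

    def sample(states, M):
        """Draw one two-state categorical sample per pixel from row-stochastic M."""
        return (rng.random(states.shape) < M[states, 1]).astype(int)

    true = rng.integers(0, 2, size=200000)
    counts = np.zeros((2, 2))
    for _ in range(9):                     # nine annual transitions
        nxt = sample(true, psi)
        np.add.at(counts, (sample(true, conf), sample(nxt, conf)), 1)
        true = nxt

    naive = counts / counts.sum(axis=1, keepdims=True)
    print("true transition matrix:\n", psi)
    print("naive estimate from misclassified maps:\n", naive.round(3))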
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is an important cause of the domino effect in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the overpressure-induced domino effect have been proposed in a previous study. To test the rationality and validity of the models reported in the reference, the two boundary values separating the three reported damage degrees were each treated as random variables on the interval [0, 100%]. Based on the reported overpressure data for damage to equipment and the associated damage states, and on the calculation method reported in the references, the mean square errors of the four categories of overpressure damage-probability models were calculated with random boundary values. This yielded a relationship between the mean square error and the two boundary values, from which the minimum mean square error was obtained; compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
Estimating alarm thresholds and the number of components in mixture distributions
NASA Astrophysics Data System (ADS)
Burr, Tom; Hamada, Michael S.
2012-09-01
Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
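A first step in characterizing such a mixture is choosing the number of components. The sketch below is a generic illustration rather than the paper's analysis: it fits Gaussian mixtures of increasing order to synthetic residuals (a main mode plus an offset mode mimicking event-marking errors) and selects the order by the Bayesian information criterion with scikit-learn.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Synthetic "residuals": a dominant mode plus an offset minor mode.
    data = np.concatenate([rng.normal(0.0, 1.0, 800),
                           rng.normal(4.0, 0.5, 200)])[:, None]

    models = [GaussianMixture(n_components=k, random_state=0).fit(data)
              for k in range(1, 5)]
    bics = [m.bic(data) for m in models]
    best = int(np.argmin(bics)) + 1
    print("BIC per k:", np.round(bics, 1), "-> chosen number of components:", best)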
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
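The constrained maximization described here has a standard closed form; as a sketch in the abstract's notation (E the error function, beta the sensitivity factor, theta the parameter vector, and E_0 our label for the specified expectation value):

    \max_{p}\; S[p] = -\int p(\theta)\,\ln p(\theta)\,d\theta
    \quad\text{subject to}\quad
    \int p(\theta)\,d\theta = 1, \qquad \int p(\theta)\,E(\theta)\,d\theta = E_0.

    % Lagrange multipliers yield the canonical form described in the abstract:
    p(\theta) = \frac{e^{-\beta E(\theta)}}{Z(\beta)}, \qquad
    Z(\beta) = \int e^{-\beta E(\theta)}\,d\theta,

    % with \beta fixed by -\partial \ln Z / \partial \beta = E_0, and marginals
    % obtained by integrating p(\theta) over the remaining parameters.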
Average BER and outage probability of the ground-to-train OWC link in turbulence with rain
NASA Astrophysics Data System (ADS)
Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da
2017-09-01
The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link from the ground to a train on a curved track in turbulence with rain are evaluated. Considering the re-modulation effect of rain fluctuations on an optical signal already modulated by turbulence, we set up models of the average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and by pointing errors, which are induced by beam wander and train vibration. The effect of the rain rate on link performance is more severe than that of the atmospheric turbulence, but the fluctuation owing to the atmospheric turbulence affects the propagating laser beam more strongly than the skewness of the rain distribution. Besides, turbulence-induced beam wander has a more significant impact on the system in heavier rain. The size of the transmitting and receiving apertures can be chosen, and the shockproof performance of the track improved, to optimize the communication performance of the system.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasnor, John; Lee, Dong-U; Vales, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. 
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations showed performance approaching that obtained in the ideal case of perfect timing in the receiver.
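The constraint-node metric at the heart of the first stage can be illustrated with a toy example: count the parity checks satisfied by hard symbol decisions as a function of a candidate timing offset, and pick the peak. The small random code, BPSK channel, integer-sample offsets, and four samples per symbol below are assumptions made for illustration; the receiver described above uses soft constraint metrics and handles fractional delays and Doppler.

    import numpy as np

    rng = np.random.default_rng(4)
    k, m = 60, 60                          # information bits, parity checks
    A = (rng.random((m, k)) < 0.08).astype(int)
    H = np.hstack([A, np.eye(m, dtype=int)])     # H = [A | I] allows easy encoding

    info = rng.integers(0, 2, k)
    parity = A @ info % 2                  # so that H @ codeword = 0 (mod 2)
    code = np.concatenate([info, parity])

    wave = np.repeat(1 - 2.0 * code, 4)    # BPSK, four samples per symbol
    true_offset = 7
    rx = np.roll(wave, true_offset) + 0.4 * rng.normal(size=wave.size)

    def satisfied_checks(offset):
        s = np.roll(rx, -offset)[::4]      # one sample per symbol at this timing
        bits = (s < 0).astype(int)         # hard BPSK decisions
        return int(np.sum(H @ bits % 2 == 0))

    scores = [satisfied_checks(d) for d in range(12)]
    print("satisfied checks per offset:", scores)
    print("estimated offset:", int(np.argmax(scores)))  # peaks near true_offset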
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high-resolution model. The resulting simplified models have provable bounds on error compared to the high-resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud-resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work which is currently underway to analyse a cloud-resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation theory; estimation in dynamic systems is treated as a direct outgrowth of static-system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output-error, filter-error, and equation-error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
Quantum illumination for enhanced detection of Rayleigh-fading targets
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver, which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase, fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801], whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation, outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(M κ̄ N_S/N_B) when M κ̄ N_S/N_B >> 1, with M >> 1 being the QI transmitter's time-bandwidth product, N_S << 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
Some loopholes to save quantum nonlocality
NASA Astrophysics Data System (ADS)
Accardi, Luigi
2005-02-01
The EPR-chameleon experiment has closed a long-standing debate between the supporters of quantum nonlocality and the thesis of quantum probability, according to which the essence of the quantum peculiarity is non-Kolmogorovianity rather than nonlocality. The theory of adaptive systems (symbolized by the chameleon effect) provides a natural intuition for the emergence of non-Kolmogorovian statistics from classical deterministic dynamical systems. These developments are quickly reviewed, and in conclusion some comments are made on recent attempts to "reconstruct history" along the lines described by Orwell in "1984".
Fixation probability in a two-locus intersexual selection model.
Durand, Guillermo; Lessard, Sabin
2016-06-01
We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.
Refractive errors in children with autism in a developing country.
Ezegwui, I R; Lawrence, L; Aghaji, A E; Okoye, O I; Okoye, O; Onwasigwe, E N; Ebigbo, P O
2014-01-01
In a resource-limited country, the visual problems of mentally challenged individuals are often neglected. The present study aims to assess refractive errors in children diagnosed with autism in a developing country. Ophthalmic examinations were carried out on children diagnosed with autism attending a school for the mentally challenged in Enugu, Nigeria, between December 2009 and May 2010. Visual acuity was assessed using Lea symbols. The anterior and posterior segments were examined, and cycloplegic refraction was performed. Data were entered on the protocol prepared for the study and analyzed using Statistical Package for the Social Sciences version 17 (Chicago IL, USA). A total of 21 children with autism were enrolled in the school, 18 of whom were examined, giving a coverage of 85.7%. The age range was 5-15 years, with a mean of 10.28 years (standard deviation ± 3.20). There were 13 boys and 5 girls. One child had bilateral temporal pallor of the disc and one had bilateral maculopathy with diffuse chorioretinal atrophy. Refraction revealed that 4 children (22.2%) had astigmatism and 2 children (11.1%) had hypermetropia. Significant refractive error, mainly astigmatism, was noted in the children with autism. Identifying refractive errors in these children early and providing appropriate corrective lenses may help optimize their visual functioning and impact their activities of daily living in a positive way.
[Efficacy of decoding training for children with difficulty reading hiragana].
Uchiyama, Hitoshi; Tanaka, Daisuke; Seki, Ayumi; Wakamiya, Eiji; Hirasawa, Noriko; Iketani, Naotake; Kato, Ken; Koeda, Tatsuya
2013-05-01
The present study aimed to clarify the efficacy of decoding training focusing on the correspondence between written symbols and their readings for children with difficulty reading hiragana (the Japanese syllabary). Thirty-five children with difficulty reading hiragana were selected from among 367 first-grade elementary school students using a reading-aloud test and were then divided into intervention (n = 15) and control (n = 20) groups. The intervention comprised 5 minutes of decoding training each day for a period of 3 weeks using an original program on a personal computer. Reading time and the number of reading errors in the reading-aloud test were compared between the groups. The intervention group showed a significant shortening of reading time (F(1,33) = 5.40, p < 0.05, two-way ANOVA) compared to the control group. However, no significant difference in the number of errors was observed between the two groups. Ten children in the control group who wished to participate in the decoding training were included in an additional study; as a result, an improvement in the number of reading errors was observed (t = 2.863, p < 0.05, paired t-test), but there was no improvement in reading time. Decoding training was found to be effective for improving both reading time and reading errors in children with difficulty reading hiragana.
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
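The truncation-error bound is easy to see in a one-species example. The sketch below, a minimal illustration rather than the authors' solver, builds the generator of a birth-death process on a truncated state space with a reflecting boundary, solves for the steady state, and reports the probability mass on the boundary state as a proxy for the truncation error; the rates and truncation size are arbitrary.

    import numpy as np

    k_birth, k_death = 8.0, 1.0            # rates for X -> X+1 and X -> X-1 (times n)
    N = 30                                 # truncation: copy numbers 0..N

    A = np.zeros((N + 1, N + 1))           # generator matrix: dp/dt = A p
    for n in range(N + 1):
        if n < N:                          # reflecting boundary: no birth out of n = N
            A[n + 1, n] += k_birth
            A[n, n] -= k_birth
        if n > 0:
            A[n - 1, n] += k_death * n
            A[n, n] -= k_death * n

    # Steady state: A p = 0 with probabilities summing to one (least squares).
    M = np.vstack([A, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)

    print("P(boundary state N): %.2e" % p[N])   # small => truncation adequate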
A Study on Gröbner Basis with Inexact Input
NASA Astrophysics Data System (ADS)
Nagasaka, Kosaku
A Gröbner basis is one of the most important tools in modern symbolic algebraic computation. However, computing a Gröbner basis for a given polynomial ideal is not easy, and it is not numerically stable if the polynomials have inexact coefficients. In this paper, we study what one should expect when computing a Gröbner basis with inexact coefficients, and we introduce a naive method that computes a Gröbner basis via the reduced row echelon form, for the ideal generated by a given polynomial set carrying a priori errors on its coefficients.
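The linear-algebra encoding behind "Gröbner basis by reduced row echelon form" can be sketched in a few lines: write the polynomials as coefficient rows of a Macaulay-style matrix over a monomial basis and row-reduce. The toy below shows only one degree level of that construction (the full method also multiplies the generators by monomials to grow the matrix) and uses exact arithmetic; the input polynomials are made up for illustration.

```python
# Macaulay-matrix view: polynomials as rows over a monomial basis, then RREF.
from sympy import symbols, Matrix, Poly, groebner, S

x, y = symbols('x y')
polys = [x**2 + y**2 - 1, x*y - 1]

# Monomial basis up to total degree 2 (enough for this tiny example).
monos = [x**2, x*y, y**2, x, y, S.One]
M = Matrix([[Poly(p, x, y).coeff_monomial(m) for m in monos] for p in polys])
R, _ = M.rref()
print("RREF rows read back as polynomials:")
for i in range(R.rows):
    print(sum(R[i, j] * monos[j] for j in range(len(monos))))

# Reference: an exact Groebner basis of the same ideal.
print(groebner(polys, x, y, order='lex'))
```

With floating-point coefficients the same row reduction becomes the numerically delicate step the abstract is concerned with: tiny coefficient errors change which entries count as pivots.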
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
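The described integral, a bivariate Gaussian (the location error ellipse) integrated over a disk that need not be centered on the ellipse, is straightforward to reproduce numerically. A hedged sketch, with all ellipse parameters and coordinates invented for illustration:

```python
# Probability that a stroke ~ N(mu, cov) fell within radius r of `point`,
# computed by integrating the Gaussian pdf over the disk in polar coordinates.
import numpy as np
from scipy import integrate

def p_within_radius(mu, cov, point, r):
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    def pdf_polar(rho, theta):
        d = point + rho * np.array([np.cos(theta), np.sin(theta)]) - mu
        return norm * np.exp(-0.5 * d @ inv @ d) * rho  # rho is the Jacobian
    val, _ = integrate.dblquad(pdf_polar, 0.0, 2 * np.pi,
                               lambda _: 0.0, lambda _: r)
    return val

mu = np.array([0.0, 0.0])                   # most likely stroke location
cov = np.array([[0.4, 0.1], [0.1, 0.2]])    # error ellipse covariance (km^2)
print(p_within_radius(mu, cov, point=np.array([0.5, 0.3]), r=1.0))
```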
On the determinants of the conjunction fallacy: probability versus inductive confirmation.
Tentori, Katya; Crupi, Vincenzo; Russo, Selena
2013-02-01
Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities, or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such accounts with a different reading of the phenomenon based on the notion of inductive confirmation as defined by contemporary Bayesian theorists. Averaging rule hypotheses, along with the random error model and many other existing proposals, are shown to imply that conjunction fallacy rates would rise as the perceived probability of the added conjunct does. By contrast, our account predicts that the conjunction fallacy depends on the added conjunct being perceived as inductively confirmed. Four studies are reported in which the judged probability and the confirmation of the added conjunct were systematically manipulated and dissociated. The results consistently favor a confirmation-theoretic account of the conjunction fallacy against competing views. Our proposal is also discussed in connection with related issues in the study of human inductive reasoning.
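The probability/confirmation dissociation the authors exploit is easy to state with toy numbers: an added conjunct can remain improbable even after it has been strongly confirmed by the evidence. The snippet below uses the simple difference measure d = P(h|e) - P(h), one common Bayesian confirmation measure; the numbers are invented.

```python
# An added conjunct h can be low-probability yet strongly confirmed by e.
p_h = 0.05          # prior probability of the added conjunct h
p_h_given_e = 0.30  # its probability in light of the evidence e
d = p_h_given_e - p_h   # difference measure of inductive confirmation
print(f"P(h|e) = {p_h_given_e} (still low), confirmation d = {d:+.2f}")
```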
Legal consequences of the moral duty to report errors.
Hall, Jacqulyn Kay
2003-09-01
Increasingly, clinicians are under a moral duty to report errors to the patients who are injured by such errors. The sources of this duty are identified, and its probable impact on malpractice litigation and criminal law is discussed. The potential consequences of enforcing this new moral duty as a minimum in law are noted. One predicted consequence is that the trend will be accelerated toward government payment of compensation for errors. The effect of truth-telling on individuals is discussed.
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for error patterns consisting of a long burst plus some random errors.
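A standard back-of-envelope argument for why decoding a few symbols beyond (n-k)/2 is safe is the sphere-packing estimate: once the error weight exceeds the design capability, the received word behaves roughly like a random vector, and the chance it lands within distance t of some codeword is about q^k V / q^n, where V is the volume of a radius-t Hamming sphere over GF(q). The sketch below evaluates this heuristic for an illustrative (255,223) code (design bound t = 16); the estimate, not the paper's decoder, is what is shown.

```python
# Sphere-packing estimate of the miscorrection probability when decoding
# radius t beyond the guaranteed bound, for an (n,k) RS code over GF(q).
from math import comb, log10

def log10_p_miscorrect(n=255, k=223, t=16, q=256):
    # V = number of words within Hamming distance t of a fixed codeword.
    V = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    # P ~ q^k * V / q^n; report log10 to avoid overflow.
    return log10(V) + (k - n) * log10(q)

for t in (16, 17, 18, 19):   # design bound for (255,223) is t = 16
    print(f"t={t}: log10 P(miscorrect) ~ {log10_p_miscorrect(t=t):.1f}")
```

The probability stays tiny a few symbols past the bound but climbs quickly, which is why "arbitrarily small" requires choosing the decoding radius with care.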
Schlosser, Ralf W; Koul, Rajinder; Shane, Howard; Sorce, James; Brock, Kristofer; Harmon, Ashley; Moerlein, Dorothy; Hearn, Emilia
2014-10-01
The effects of animation on naming and identification of graphic symbols for verbs and prepositions were studied in 2 graphic symbol sets in preschoolers. Using a 2 × 2 × 2 × 3 completely randomized block design, preschoolers across three age groups were randomly assigned to combinations of symbol set (Autism Language Program [ALP] Animated Graphics or Picture Communication Symbols [PCS]), symbol format (animated or static), and word class (verbs or prepositions). Children were asked to name symbols and to identify a target symbol from an array given the spoken label. Animated symbols were more readily named than static symbols, although this was more pronounced for verbs than for prepositions. ALP symbols were named more accurately than PCS in particular with prepositions. Animation did not facilitate identification. ALP symbols for prepositions were identified better than PCS, but there was no difference for verbs. Finally, older children guessed and identified symbols more effectively than younger children. Animation improves the naming of graphic symbols for verbs. For prepositions, ALP symbols are named more accurately and are more readily identifiable than PCS. Naming and identifying symbols are learned skills that develop over time. Limitations and future research directions are discussed.
A human performance evaluation of graphic symbol-design features.
Samet, M G; Geiselman, R E; Landee, B M
1982-06-01
16 subjects learned each of two tactical display symbol sets (conventional symbols and iconic symbols) in turn and were then shown a series of graphic displays containing various symbol configurations. For each display, the subject was asked questions corresponding to different behavioral processes relating to symbol use (identification, search, comparison, pattern recognition). The results indicated that: (a) conventional symbols yielded faster pattern-recognition performance than iconic symbols, and iconic symbols did not yield faster identification than conventional symbols, and (b) the portrayal of additional feature information (through the use of perimeter density or vector projection coding) slowed processing of the core symbol information in four tasks, but certain symbol-design features created less perceptual interference and had greater correspondence with the portrayal of specific tactical concepts than others. The results were discussed in terms of the complexities involved in the selection of symbol design features for use in graphic tactical displays.
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to peoples' hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
A super-Earth transiting a nearby low-mass star.
Charbonneau, David; Berta, Zachory K; Irwin, Jonathan; Burke, Christopher J; Nutzman, Philip; Buchhave, Lars A; Lovis, Christophe; Bonfils, Xavier; Latham, David W; Udry, Stéphane; Murray-Clay, Ruth A; Holman, Matthew J; Falco, Emilio E; Winn, Joshua N; Queloz, Didier; Pepe, Francesco; Mayor, Michel; Delfosse, Xavier; Forveille, Thierry
2009-12-17
A decade ago, the detection of the first transiting extrasolar planet provided a direct constraint on its composition and opened the door to spectroscopic investigations of extrasolar planetary atmospheres. Because such characterization studies are feasible only for transiting systems that are both nearby and for which the planet-to-star radius ratio is relatively large, nearby small stars have been surveyed intensively. Doppler studies and microlensing have uncovered a population of planets with minimum masses of 1.9-10 times the Earth's mass (M⊕), called super-Earths. The first constraint on the bulk composition of this novel class of planets was afforded by CoRoT-7b (refs 8, 9), but the distance and size of its star preclude atmospheric studies in the foreseeable future. Here we report observations of the transiting planet GJ 1214b, which has a mass of 6.55 M⊕ and a radius 2.68 times Earth's radius (R⊕), indicating that it is intermediate in stature between Earth and the ice giants of the Solar System. We find that the planetary mass and radius are consistent with a composition of primarily water enshrouded by a hydrogen-helium envelope that is only 0.05% of the mass of the planet. The atmosphere is probably escaping hydrodynamically, indicating that it has undergone significant evolution during its history. The star is small and only 13 parsecs away, so the planetary atmosphere is amenable to study with current observatories.
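The quoted mass and radius fix the bulk density directly, which is the arithmetic behind the water-rich interpretation. A quick check using standard Earth constants:

```python
# Mean density of GJ 1214b from M = 6.55 M_Earth and R = 2.68 R_Earth.
import math

M_E, R_E = 5.972e24, 6.371e6            # Earth mass (kg) and radius (m)
M, R = 6.55 * M_E, 2.68 * R_E
rho = M / (4.0 / 3.0 * math.pi * R**3)
print(f"mean density ~ {rho / 1000:.2f} g/cm^3")   # ~1.9 g/cm^3
```

At roughly 1.9 g/cm^3 the planet is far less dense than rock (~5.5 g/cm^3 for Earth), consistent with the water-plus-light-envelope composition described.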
A functional model of sensemaking in a neurocognitive architecture.
Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R
2013-01-01
Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
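The "closed-form functional representation" idea, state probabilities expressed as sums of exponentials in t rather than obtained by numerical integration, can be illustrated on a toy decomposable failure model. This is only a sketch of the concept with sympy, not SHARPE itself, and it assumes distinct rates lambda1 != lambda2 so the generic solution applies.

```python
# Solve a tiny acyclic Markov chain (up -> degraded -> failed) symbolically;
# the state probabilities come out in closed form as exponential mixtures.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
l1, l2 = sp.symbols('lambda1 lambda2', positive=True)
P0, P1, P2 = (sp.Function(f'P{i}') for i in range(3))

odes = [
    sp.Eq(P0(t).diff(t), -l1 * P0(t)),                 # leave "up"
    sp.Eq(P1(t).diff(t), l1 * P0(t) - l2 * P1(t)),     # through "degraded"
    sp.Eq(P2(t).diff(t), l2 * P1(t)),                  # absorb in "failed"
]
sol = sp.dsolve(odes, [P0(t), P1(t), P2(t)],
                ics={P0(0): 1, P1(0): 0, P2(0): 0})
for eq in sol:
    print(sp.simplify(eq))
```

The failure probability P2(t) emerges as a combination of exp(-lambda1 t) and exp(-lambda2 t), exactly the kind of exponomial form that combination rules for decomposed submodels can manipulate without re-solving the full chain.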
NASA Astrophysics Data System (ADS)
Straus, D. M.
2007-12-01
The probability density function (pdf) of errors is followed in identical-twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. Thirty integrations per winter are available, with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results are interpreted in terms of clusters/regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.
Performance Analysis of an Inter-Relay Co-operation in FSO Communication System
NASA Astrophysics Data System (ADS)
Khanna, Himanshu; Aggarwal, Mona; Ahuja, Swaran
2018-04-01
In this work, we analyze the outage and error performance of a one-way inter-relay-assisted free-space optical link. The analysis assumes no direct link between the source and destination nodes, and the feasibility of such a system configuration is studied. We consider the influence of path loss, atmospheric turbulence and pointing-error impairments, and investigate the effect of these parameters on the system performance. The turbulence-induced fading is modeled by independent but not necessarily identically distributed gamma-gamma fading statistics. Closed-form expressions for the outage probability and the probability of error are derived and illustrated by numerical plots. It is concluded that the absence of a line-of-sight path between the source and destination nodes does not lead to significant performance degradation. Moreover, for the system model under consideration, interconnected relaying provides better error performance than non-interconnected relaying and dual-hop serial relaying techniques.
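Where closed forms are unavailable, outage under gamma-gamma turbulence is easy to estimate by Monte Carlo, using the fact that a unit-mean gamma-gamma variate is the product of two independent unit-mean Gamma variates. The sketch below is a generic dual-hop decode-and-forward illustration, not the paper's inter-relay configuration, and every parameter value is invented.

```python
# Monte Carlo outage estimate for a dual-hop FSO link under gamma-gamma fading.
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma(alpha, beta, size):
    x = rng.gamma(alpha, 1.0 / alpha, size)   # large-scale turbulence
    y = rng.gamma(beta, 1.0 / beta, size)     # small-scale turbulence
    return x * y                              # unit-mean irradiance

n = 1_000_000
I1 = gamma_gamma(4.0, 2.0, n)   # source-relay hop irradiance
I2 = gamma_gamma(4.0, 2.0, n)   # relay-destination hop irradiance
snr0, gamma_th = 20.0, 5.0      # mean electrical SNR and threshold (linear)

# IM/DD link: electrical SNR scales with irradiance squared; a DF dual hop
# is in outage if either hop falls below the threshold.
out = (snr0 * I1**2 < gamma_th) | (snr0 * I2**2 < gamma_th)
print("outage probability ~", out.mean())
```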
SEC proton prediction model: verification and analysis.
Balch, C C
1999-06-01
This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.
Solid State Ionics Advanced Materials for Emerging Technologies
NASA Astrophysics Data System (ADS)
Chowdari, B. V. R.; Careem, M. A.; Dissanayake, M. A. K. L.; Rajapakse, R. M. G.; Seneviratne, V. A.
2006-06-01
Keynote lecture. Challenges and opportunities of solid state ionic devices / W. Weppner -- pt. I. Ionically conducting inorganic solids. Invited papers. Multinuclear NMR studies of mass transport of phosphoric acid in water / J. R. P. Jayakody ... [et al.]. Crystalline glassy and polymeric electrolytes: similarities and differences in ionic transport mechanisms / J.-L. Souquet. 30 years of NMR/NQR experiments in solid electrolytes / D. Brinkmann. Analysis of conductivity and NMR measurements in Li[symbol]La[symbol]TiO[symbol] fast Li[symbol] ionic conductor: evidence for correlated Li[symbol] motion / O. Bohnké ... [et al.]. Transport pathways for ions in disordered solids from bond valence mismatch landscapes / S. Adams. Proton conductivity in condensed phases of water: implications on linear and ball lightning / K. Tennakone -- Contributed papers. Proton transport in nanocrystalline bioceramic materials: an investigative study of synthetic bone with that of natural bone / H. Jena, B. Rambabu. Synthesis and properties of the nanostructured fast ionic conductor Li[symbol]La[symbol]TiO[symbol] / Q. N. Pham ... [et al.]. Hydrogen production: ceramic materials for high temperature water electrolysis / A. Hammou. Influence of the sintering temperature on pH sensor ability of Li[symbol]La[symbol]TiO[symbol]. Relationship between potentiometric and impedance spectroscopy measurements / Q. N. Pham ... [et al.]. Microstructure chracterization and ionic conductivity of nano-sized CeO[symbol]-Sm[symbol]O[symbol] system (x=0.05 - 0.2) prepared by combustion route / K. Singh, S. A. Acharya, S. S. Bhoga. Red soil in Northern Sri Lanka is a natural magnetic ceramic / K. Ahilan ... [et al.]. Neutron scattering of LiNiO[symbol] / K. Basar ... [et al.]. Preparation and properties of LiFePO[symbol] nanorods / L. Q. Mai ... [et al.]. Structural and electrochemical properties of monoclinic and othorhombic MoO[symbol] phases / O. M. Hussain ... [et al.]. Preparation of Zircon (ZrSiO[symbol]) ceramics via solid state sintering of Zr)[symbol] and SiO[symbol] and the effect of dopants on the zircon yield / U. Dhanayake, B. S. B. Karunaratne. Preparation and properties of vanadium doped ZnTe cermet thin films / M. S. Hossain, R. Islam, K. A. Khan. Dynamical properties and electronic structure of lithium-ion conductor / M. Kobayashi ... [et al.]. Cuprous ion conducting Montmorillonite-Polypyrrole nanocomposites / D. M. M. Krishantha ... [et al.]. Frequency dependence of conductivity studies on a newly synthesized superionic solid solution/mixed system: [0.75AgI: 0.25AgCl] / R. K. Nagarch, R. Kumar. Diffuse X-ray and neutron scattering from Powder PbS / X. Lian ... [et al.]. Electron affinity and work function of Pyrolytic MnO[symbol] thin films prepared from Mn(C[symbol]H[symbol]O[symbol])[symbol].4H[symbol]) / A. K. M. Farid Ul Islam, R. Islam, K. A. Khan. Crystal structure and heat capacity of Ba[symbol]Ca[symbol]Nb[symbol]O[symbol] / T. Shimoyama ... [et al.]. XPS and impedance investigations on amorphous vanadium oxide thin films / M. Kamalanathan ... [et al.]. Sintering and mixed electronic-ionic conducting properties of La[symbol]Sr[symbol]NiO[symbol] derived from a polyaminocarboxylate complex precursor / D.-P. Huang ... [et al.]. Preparation and characteristics of ball milled MgH[symbol] + M (M= Fe, VF[symbol] and FeF[symbol]) nanocomposites for hydrogen storage / N. W. B. Balasooriya, Ch. Poinsignon. Structural studies of oxysulfide glasses by X-ray diffraction and molecular dynamics simulation / R. 
Prasada Rao, M. Seshasayee, J. Dheepa. Synthesis, sintering and oxygen ionic conducting properties of Bi[symbol]V[symbol]Cu[symbol]O[symbol] / F. Zhang ... [et al.]. Synthesis and transport characteristics of PbI[symbol]-Ag[symbol]O-Cr[symbol]O[symbol] superioninc system / S. A. Suthanthiraraj, V. Mathew. Electronic conductivity of La[symbol]Sr[symbol]Ga[symbol]Mg[symbol]Co[symbol]O[symbol] electrolytes / K. Yamaji ... [et al.] -- pt. II. Electrode materials. Invited papers. Cathodic properties of Al-doped LiCoO[symbol] prepared by molten salt method Li-Ion batteries / M. V. Reddy, G. V. Subba Rao, B. V. R. Chowdari. Layered ion-electron conducting materials / M. A. Santa Ana, E. Benavente, G. González. LiNi[symbol]Co[symbol]O[symbol] cathode thin-film prepared by RF sputtering for all-solid-state rechargeable microbatteries / X. J. Zhu ... [et al.] -- Contributed papers. Contributed papers. Nanocomposite cathode for SOFCs prepared by electrostatic spray deposition / A. Princivalle, E. Djurado. Effect of the addition of nanoporous carbon black on the cycling characteristics of Li[symbol]Co[symbol](MoO[symbol])[symbol] for lithium batteries / K. M. Begam, S. R. S. Prabaharan. Protonic conduction in TiP[symbol]O[symbol] / V. Nalini, T. Norby, A. M. Anuradha. Preparation and electrochemical LiMn[symbol]O[symbol] thin film by a solution deposition method / X. Y. Gan ... [et al.]. Synthesis and characterization LiMPO[symbol] (M = Ni, Co) / T. Savitha, S. Selvasekarapandian, C. S. Ramya. Synthesis and electrical characterization of LiCoO[symbol] LiFeO[symbol] and NiO compositions / A. Wijayasinghe, B. Bergman. Natural Sri Lanka graphite as conducting enhancer in manganese dioxide (Emd type) cathode of alkaline batteries / N. W. B. Balasooriya ... [et al.]. Electrochemical properties of LiNi[symbol]Al[symbol]Zn[symbol]O[symbol] cathode material synthesized by emulsion method / B.-H. Kim ... [et al.]. LiNi[symbol]Co[symbol]O[symbol] cathode materials synthesized by particulate sol-gel method for lithium ion batteries / X. J. Zhu ... [et al.]. Pulsed laser deposition of highly oriented LiCoO[symbol] and LiMn[symbol]O[symbol] thin films for microbattery applications / O. M. Hussain ... [et al.]. Preparation of LiNi[symbol]Co[symbol]O[symbol] thin films by a sol-gel method / X. J. Zhu ... [et al.]. Electrochemical lithium insertion into a manganese dioxide electrode in aqueous solutions / M. Minakshi ... [et al.]. AC impedance spectroscopic analysis of thin film LiNiVO[symbol] prepared by pulsed laser deposition technique / S. Selvasekarapandian ... [et al.]. Synthesis and characterization of LiFePO[symbol] cathode materials by microwave processing / J. Zhou ... [et al.]. Characterization of Nd[symbol]Sr[symbol]CoO[symbol] including Pt second phase as the cathode material for low-temperature SOFCs / J. W. Choi ... [et al.]. Thermodynamic behavior of lithium intercalation into natural vein and synthetic graphite / N. W. B. Balasooriya, P. W. S. K. Bandaranayake, Ph. Touzain -- pt. III. Electroactive polymers. Invited papers. Organised or disorganised? looking at polymer electrolytes from both points of view / Y.-P. Liao ... [et al.]. Polymer electrolytes - simple low permittivity solutions? / I. Albinsson, B.-E. Mellander. Dependence of conductivity enhancement on the dielectric constant of the dispersoid in polymer-ferroelectric composite electrolytes / A. Chandra, P. K. Singh, S. Chandra. Design and application of boron compounds for high-performance polymer electrolytes / T. Fujinami. 
Structural, vibrational and AC impedance analysis of nano composite polymer electrolytes based on PVAC / S. Selvasekarapandian ... [et al.]. Absorption intensity variation with ion association in PEO based electrolytes / J. E. Furneaux ... [et al.]. Study of ion-polymer interactions in cationic and anionic ionomers from the dependence of conductivity on pressure and temperature / M. Duclot ... [et al.]. Triol based polyurethane gel electrolytes for electrochemical devices / A. R. Kulkarni. Contributed papers. Accurate conductivity measurements to solvation energies in nafion / M. Maréchal, J.-L Souquet. Ion conducting behaviour of composite polymer gel electrolyte: PEG-PVA-(NH[symbol]CH[symbol]CO[symbol])[symbol] system / S. L. Agrawal, A. Awadhia, S. K. Patel. Impedance spectroscopy and DSC studies of poly(vinylalcohol)/ silicotungstic acid crosslinked composite membranes / A. Anis, A. K. Banthia. (PEO)[symbol]:Na[symbol]P[symbol]O[symbol]: a report on complex formation / A. Bhide, K. Hariharan. Experimental studies on (PVC+LiClO[symbol]+DMP) polymer electrolyte systems for lithium battery / Ch. V. S. Reddy. Stability of the gel electrolyte, PAN: EC: PC: LiCF[symbol]SO[symbol] towards lithium / K. Perera ... [et al.]. Montmorillonite as a conductivity enhancer in (PEO)[symbol]LiCF[symbol]SO[symbol] polymer electrolyte / C. H. Manoratne ... [et al.]. Polymeric gel electrolytes for electrochemical capacitors / M. Morita ... [et al.]. Electrical conductivity studies on proton conducting polymer electrolytes based on poly (viniyl acetate) / D. Arun Kumar ... [et al.]. Conductivity and thermal studies on plasticized PEO:LiTf-Al[symbol]O[symbol] composite polymer electrolyte / H. M. J. C. Pitawala, M. A. K. L. Dissanayake, V. A. Seneviratne. Investigation of transport properties of a new biomaterials - gum mangosteen / S. S. Pradhan, A. Sarkar. Investigation of ionic conductivity of PEO-MgCl[symbol] based solid polymer electrolyte / M. Sundar ... [et al.]. [symbol]H NMR and Raman analysis of proton conducting polymer electrolytes based on partially hydrolyzed poly (vinyl alcohol) / G. Hirankumar ... [et al.]. Influence of Al[symbol]O[symbol] nanoparticles on the phase matrix of polyethylene oxide-silver triflate polymer electrolytes / S. Austin Suthanthiraraj, D. Joice Sheeba. Effect of different types of ceramic fillers on thermal, dielectric and transport properties of PEO[symbol]LiTf solid polymer electrolyte / K. Vignarooban ... [et al.]. Characterization of PVP based solid polymer electrolytes using spectroscopic techniques / C. S. Ramya ... [et al.]. Electrochemical and structural properties of poly vinylidene fluoride - silver triflate solid polymer electrolyte system / S. Austin Suthanthiraraj, B. Joseph Paul. Micro Raman, Li NMR and AC impedance analysis of PVAC:LiClO[symbol] solid polymer eectrolytes / R. Baskaran ... [et al.].Study of Na+ ion conduction in PVA-NaSCN solid polymer electrolytes / G. M. Brahmanandhan ... [et al.]. Effect of filler addition on plasticized polymer electrolyte systems / M. Sundar, S. Selladurai. Ionic motion in PEDOT and PPy conducting polymer bilayers / U. L. Zainudeen, S. Skaarup, M. A. Careem. Film formation mechanism and electrochemical characterization of V[symbol]O[symbol] xerogel intercalated by polyaniniline / Q. Zhu ... [et al.]. Effect of NH[symbol]NO[symbol] concentration on the conductivity of PVA based solid polymer electrolyte / M. Hema ... [et al.]. Dielectric and conductivity studies of PVA-KSCN based solid polymer electrolytes / J. 
Malathi ... [et al.] -- pt. IV. Emerging applications. Invited papers. The use of solid state ionic materials and devices in medical applications / R. Linford. Development of all-solid-state lithium batteries / V. Thangadurai, J. Schwenzei, W. Weppner. Reversible intermediate temperature solid oxide fuel cells / B.-E. Mellander, I. Albinsson. Nano-size effects in lithium batteries / P. Balaya, Y. Hu, J. Maier. Electrochromics: fundamentals and applications / C. G. Granqvist. Electrochemical CO[symbol] gas sensor / K. Singh. Polypyrrole for artificial muscles: ionic mechanisms / S. Skaarup. Development and characterization of polyfluorene based light emitting diodes and their colour tuning using Forster resonance energy transfer / P. C. Mattur ... [et al.]. Mesoporous and nanoparticulate metal oxides: applications in new photocatalysis / C. Boxall. Proton Conducting (PC) perovskite membranes for hydrogen separation and PC-SOFC electrodes and electrolytes / H. Jena, B. Rambabu. Contributed papers. Electroceramic materials for the development of natural gas fuelled SOFC/GT plant in developing country (Trinidad and Tobogo (T&T)) / R. Saunders, H. Jena, B. Rambabu. Thin film SOFC supported on nano-porous substrate / J. Hoon Joo, G. M. Choi. Characterization and fabrication of silver solid state battery Ag/AGI-AgPO[symbol]/I[symbol], C / E. Kartini ... [et al.]. Performance of lithium polymer cells with polyacrylonitrile based electrolyte / K. Perera ... [et al.]. Hydrothermal synthesis and electrochemical behavior of MoO[symbol] nanobelts for lithium batteries / Y. Qi ... [et al.]. Electrochemical behaviour of a PPy (DBS)/polyacrylonitrile: LiTF:EC:PC/Li cell / K. Vidanapathirana ... [et al.]. Characteristics of thick film CO[symbol] sensors based on NASICON using Li[symbol]CO[symbol]-CaCO[symbol] auxiliary phases / H. J. Kim ... [et al.]. Solid state battery discharge characteristic study on fast silver ion conducting composite system: 0.9[0.75AgI:0.25AgCl]: 0.1TiO[symbol] / R. K. Nagarch, R. Kumar, P. Rawat. Intercalating protonic solid-state batteries with series and parallel combination / K. Singh, S. S. Bhoga, S. M. Bansod. Synthesis and characterization of ZnO fiber by microwave processing / Lin Wang ... [et al.]. Preparation of Sn-Ge alloy coated Ge nanoparticles and Sn-Si alloy coated Si nanoparticles by ball-milling / J. K. D. S. Jayanett, S. M. Heald. Synthesis of ultrafine and crystallized TiO[symbol] by alalkoxied free polymerizable precursor method / M. Vijayakumar ... [et al.]. Development and characterization of polythiophene/fullerene composite solar cells and their degradation studies / P. K. Bhatnagar ... [et al.].
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
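The qualitative difference between the two error models can be caricatured on a single qubit, well short of the paper's fault-tolerant circuit simulation: stochastic error probabilities accumulate roughly linearly over repeated gates, while coherent over-rotation angles add, so the resulting error probability can grow quadratically. A toy illustration:

```python
# Stochastic vs coherent error accumulation over N gate applications.
import numpy as np

p = 1e-4                            # per-gate error probability
theta = np.arcsin(np.sqrt(p))       # coherent over-rotation with sin^2 = p
for N in (1, 10, 100):
    stochastic = 1 - (1 - p) ** N   # probabilities accumulate (~ N p)
    coherent = np.sin(N * theta) ** 2  # amplitudes add, then square (~ N^2 p)
    print(f"N={N:4d}: stochastic ~ {stochastic:.2e}, coherent ~ {coherent:.2e}")
```

This amplitude-versus-probability accumulation is one intuition for why coherent errors can produce the broad, heavy-tailed failure statistics the study reports; the actual circuit-level behavior is what the paper simulates.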
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
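Several of the listed variables (arrival rate, speed mismatch, per-server error probabilities) drop naturally out of a small event-driven simulation. The sketch below is a deliberately simplified loss-system stand-in for the queueing model described, with feedback between human and computer omitted and all rates invented:

```python
# Toy allocation model: an idle computer takes each action-evoking event
# first, otherwise the human does; each server errs with its own probability.
import random

random.seed(1)
rate, mu_c, mu_h = 1.0, 1.5, 0.8    # arrival rate; computer/human service rates
p_err_c, p_err_h = 0.02, 0.05       # computer / human error probabilities

t = 0.0
busy_until = {"computer": 0.0, "human": 0.0}
errors = served = 0
for _ in range(100_000):
    t += random.expovariate(rate)   # next action-evoking event
    server = ("computer" if busy_until["computer"] <= t
              else "human" if busy_until["human"] <= t else None)
    if server:                      # otherwise the event is missed
        busy_until[server] = t + random.expovariate(
            mu_c if server == "computer" else mu_h)
        errors += random.random() < (p_err_c if server == "computer" else p_err_h)
        served += 1
print(f"served {served} of 100000 events, error fraction {errors / served:.3f}")
```

Varying the speed mismatch mu_c/mu_h and the two error probabilities reproduces the kind of trade-off study the abstract describes.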
Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem
NASA Astrophysics Data System (ADS)
Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad
2013-12-01
In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where the source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, the 2nd-order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of the 2nd-order maximum ratio combining (MRC) diversity, if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves, obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.
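The double-integral outage condition also has a direct Monte Carlo counterpart: draw correlated block-fading SNRs, map them to instantaneous achievable rates, and count how often the rate pair leaves the Slepian-Wolf admissible region. The sketch below does this for a symmetric binary source with intra-link flip probability p; rate-normalization constants are elided and all parameters are illustrative.

```python
# Monte Carlo Slepian-Wolf outage for two correlated Rayleigh-fading links.
import numpy as np

rng = np.random.default_rng(2)

def h2(p):  # binary entropy in bits
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

n, rho = 1_000_000, 0.8        # samples; fading correlation between links
snr_bar, p_flip = 10.0, 0.05   # mean SNR; intra-link bit-flip probability

# Correlated Rayleigh fading: correlated Gaussian real and imaginary parts.
cov = [[1, rho], [rho, 1]]
re = rng.multivariate_normal([0, 0], cov, size=n)
im = rng.multivariate_normal([0, 0], cov, size=n)
gamma = snr_bar * (re**2 + im**2) / 2              # per-link instantaneous SNR
c1, c2 = np.log2(1 + gamma[:, 0]), np.log2(1 + gamma[:, 1])

# Admissible region for b2 = b1 xor noise(p_flip):
# c1 >= H(b1|b2), c2 >= H(b2|b1), c1 + c2 >= H(b1, b2) = 1 + h2(p).
outage = ~((c1 >= h2(p_flip)) & (c2 >= h2(p_flip))
           & (c1 + c2 >= 1 + h2(p_flip)))
print("outage probability ~", outage.mean())
```

Sweeping rho toward 1 and p_flip toward 0 reproduces the reported trend: full correlation is needed for 2nd-order diversity.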
Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-06-01
The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose-volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients.
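The gEUD at the center of this analysis is the standard generalized equivalent uniform dose, gEUD = (sum_i v_i D_i^a)^(1/a), computed from a differential DVH with fractional volumes v_i. A minimal sketch follows; the dose bins and the 'a' values are generic textbook-style choices, not those of the study's in-house software.

```python
# gEUD from a differential DVH: negative 'a' for tumours (cold spots dominate),
# large positive 'a' for serial organs at risk (hot spots dominate).
import numpy as np

def geud(doses_gy, frac_volumes, a):
    v = np.asarray(frac_volumes, float)
    v = v / v.sum()                          # normalise fractional volumes
    return float((v @ np.asarray(doses_gy, float) ** a) ** (1.0 / a))

doses = [60, 65, 68, 70]            # DVH bin doses (Gy), illustrative
vols = [0.10, 0.20, 0.30, 0.40]     # fractional volume in each bin
print("tumour-like (a = -10):", round(geud(doses, vols, -10), 2), "Gy")
print("serial OAR  (a = +10):", round(geud(doses, vols, +10), 2), "Gy")
```

Recomputing gEUD with and without the simulated set-up shifts is exactly the kind of ΔgEUD comparison the study reports before feeding the values into TCP/NTCP models.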
Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L
2015-09-15
The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists between patient attribute associations such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis is fundamental to estimating the strength of association between patient and geocode, and patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments were undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP in real data reveal themselves using these methods. Specifically, the goals of this evaluation were to (1) assess if the concept of our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and demonstrate ways in which disease registry specific operations influence the prevalence of AAEP estimates for specific attribute associations. The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the assessment of uncertainty in patient geocodes, relative to existing geocoding related uncertainty metrics.
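The hierarchical idea, confidence in a geocode depends on every attribute association upstream of it, has a simple probabilistic reading: if each link carries its own AAEP, the chance that the final geocode is trustworthy is (under an independence assumption, which the paper's framework refines) the product of the per-link success probabilities. A sketch with invented values:

```python
# Chained attribute-association error probabilities (AAEP-style), assuming
# independent links; every probability below is illustrative only.
aaep = {
    "identity -> record":       0.010,
    "record -> date_of_dx":     0.005,
    "record -> address_at_dx":  0.060,
    "address -> geocode":       0.040,
}
p_ok = 1.0
for link, p_err in aaep.items():
    p_ok *= 1.0 - p_err
    print(f"{link:25s} AAEP={p_err:.3f}  cumulative P(ok)={p_ok:.3f}")
```

Even modest per-link error rates compound: here the geocode is fully trustworthy for only about 89% of cases, which is why link-level AAEP estimates are more informative than a single geocode-quality flag.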
Joy Lo, Chih-Wei; Yien, Huey-Wen; Chen, I-Ping
2016-04-01
To evaluate the effectiveness of universal health symbol usage and to analyze the factors influencing the adoption of those symbols in Taiwan. Universal symbols are an important innovative tool for health facility wayfinding systems. Hablamos Juntos, a universal healthcare symbol system developed in the United States, is a thoughtful, well-designed, and thoroughly tested symbol system that facilitates communication across languages and cultures. We designed a questionnaire to test how well the selected graphic symbols were understood by Taiwanese participants and determined factors related to successful symbol decoding, including participant-related factors, stimulation factors, and the interaction between stimulation and participants. Additionally, we further established a design principle for future development of localized healthcare symbols. (1) Eleven symbols were identified as highly comprehensible and effective symbols that can be directly adopted in Taiwanese healthcare settings. Sixteen symbols were deemed incomprehensible or confusing and thus had to be redesigned. Finally, 14 were identified as relatively incomprehensible and could thus be redesigned and then have their effectiveness evaluated again. (2) Three factors were found to influence the participants' differing levels of comprehension of the Hablamos Juntos symbols. In order to prevent the three aforementioned factors from causing difficulty in interpreting symbols, we suggest that the local symbol designers should (1) use more iconic images, (2) carefully evaluate the indexical and symbolic meaning of graphic symbols, and (3) collect the consensus of Taiwanese people with different educational backgrounds. © The Author(s) 2016.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
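The headline economics reduce to two quantities: the incremental cost-effectiveness ratio (ICER = ΔC/ΔQ) and the net monetary benefit at a willingness-to-pay threshold. Re-expressing the abstract's deterministic point estimates:

```python
# ICER and net monetary benefit from the quoted deterministic results.
delta_cost = -2679.0    # PINCER costs 2,679 GBP less per practice
delta_qaly = 0.81       # and yields 0.81 more QALYs per practice

icer = delta_cost / delta_qaly
print(f"ICER = {icer:,.0f} GBP/QALY (negative: cheaper AND more effective)")
# Note: this gives about -3,307, vs the quoted -3,037; the gap presumably
# reflects rounding in the published point estimates.

wtp = 20_000.0          # willingness-to-pay threshold, GBP per QALY
nmb = wtp * delta_qaly - delta_cost
print(f"net monetary benefit at {wtp:,.0f}/QALY = {nmb:,.0f} GBP per practice")
```

A positive net monetary benefit at the threshold is the deterministic counterpart of the 59% cost-effectiveness probability reported from the probabilistic analysis.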
Gebuis, Titia; Herfs, Inkeri K; Kenemans, J Leon; de Haan, Edward H F; van der Smagt, Maarten J
2009-11-01
Infants can visually detect changes in numerosity, which suggests that a (non-symbolic) numerosity system is already present early in life. This non-symbolic system is hypothesized to serve as the basis for the later-acquired symbolic system. Little is known about the processes underlying the transition from the non-symbolic to the symbolic code. In the current study we investigated the development of automatization of symbolic number processing in children from second (6.0 years) and fourth grade (8.0 years) and in adults, using symbolic and non-symbolic size congruency tasks with event-related potentials (ERPs) as a measure. The comparison between symbolic and non-symbolic size congruency effects (SCEs) allowed us to disentangle processes necessary to perform the task from processes specific to numerosity notation. In contrast to previous studies, second graders already revealed a behavioral symbolic SCE similar to that of adults. In addition, the behavioral SCE increased for symbolic and decreased for non-symbolic notation with increasing age. For all age groups, the ERP data showed that the two magnitudes interfered at a level before selective activation of the response system, for both notations. However, distinct processes were recruited to perform the symbolic size comparison task only in the second graders. This shift in recruited processes, observed for the symbolic task only, might reflect the functional specialization of the parietal cortex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment-beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion-tracking uncertainty. kV imaging may be used continuously for maximal accuracy, or only when the position uncertainty (the probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real time. Methods: We simulated the motion-tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion-tracking uncertainty in real time, which can be used to guide adaptive additional imaging to confirm that the tumor is within the margin, or to initialize motion compensation if it is out of the margin.
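The logistic-regression variant of the third method is simple to prototype. The sketch below classifies time points whose 3D tracking error will exceed a 2.5 mm threshold from the three named features; synthetic data with an invented error model stands in for the patient traces used in the work.

```python
# Classify "tracking error > 2.5 mm" from run-time features (hedged sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 0.6, n),      # previous tracking error (mm)
    rng.uniform(0, 1, n),        # prediction quality score
    rng.uniform(-1, 1, n),       # cos(angle between trajectory and beam)
])
# Synthetic ground truth: error grows with past error and poor geometry.
err3d = (0.8 * X[:, 0] + 1.2 * (1 - X[:, 1])
         + 0.5 * np.abs(X[:, 2]) + rng.normal(0, 0.4, n))
y = (err3d > 2.5).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print("held-out accuracy:", round(clf.score(X[4000:], y[4000:]), 3))
print("P(error > 2.5 mm), first sample:",
      round(clf.predict_proba(X[:1])[0, 1], 3))
```

The predicted probability, rather than the hard label, is what would drive the adaptive decision to fire an extra kV image.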
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors, or success increases the likelihood of subsequent success. Currently, the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' law and addresses a continuous range of dependence. Methods: Under the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B: the initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the second event is smaller than the first event the maximum dependence is less than 1, as defined by Bayes' law. As such, alternative dependence equations are provided, along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
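The attainable range of joint probability for two events is given by the Fréchet inequalities, which is the probability-theoretic backbone of the argument above: the joint must lie between max(0, P(A)+P(B)-1) and min(P(A), P(B)), with anything below P(A)P(B) counting as negative dependence. A small worked example with invented failure probabilities:

```python
# Frechet bounds on P(A and B) versus THERP-style dependence levels.
def frechet_bounds(pa, pb):
    lower = max(0.0, pa + pb - 1.0)   # maximum negative dependence
    upper = min(pa, pb)               # maximum (positive) dependence
    return lower, upper

pa, pb = 0.01, 0.002                  # two rare human failure events
lo, hi = frechet_bounds(pa, pb)
print(f"P(A and B) must lie in [{lo:.6f}, {hi:.6f}]")
print(f"independence point: {pa * pb:.8f}")
print(f"max attainable P(B|A) = {hi / pa:.3f} "
      "(THERP 'complete dependence' would assume 1.0)")
```

The last line illustrates the paper's point that complete dependence and maximum dependence need not agree: when P(B) < P(A), no joint distribution can push P(B|A) to 1.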
Nonlinear Algorithms for Channel Equalization and MAP Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique, are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error-rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
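Of the Bussgang-type blind equalizers studied above, the constant modulus algorithm (CMA) is the canonical example and fits in a few lines. The sketch below adapts an FIR equalizer on a QPSK stream through an assumed ISI channel with no training sequence; the channel taps, step size, and filter length are all illustrative, and this is the generic CMA rather than the dissertation's specific simulation setup.

```python
# Constant modulus algorithm: blind FIR equalization of a QPSK stream.
import numpy as np

rng = np.random.default_rng(4)
N, L = 20_000, 11                        # symbols; equalizer taps
sym = (rng.integers(0, 2, (N, 2)) * 2 - 1) @ np.array([1, 1j]) / np.sqrt(2)
chan = np.array([1.0, 0.4, -0.2])        # ISI channel (unknown to the receiver)
x = (np.convolve(sym, chan, mode='same')
     + 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N)))

w = np.zeros(L, complex); w[L // 2] = 1.0   # centre-spike initialization
mu, R2 = 1e-3, 1.0                          # step size; QPSK modulus target
for n in range(L, N):
    u = x[n - L:n][::-1]                    # regressor, most recent first
    y = np.conj(w) @ u                      # equalizer output
    e = y * (np.abs(y) ** 2 - R2)           # CMA error term
    w -= mu * np.conj(e) * u                # stochastic gradient step

tail = [np.conj(w) @ x[n - L:n][::-1] for n in range(N - 500, N)]
print("post-adaptation modulus spread:", round(float(np.std(np.abs(tail))), 4))
```

The cost being minimized, E[(|y|^2 - R2)^2], penalizes deviations from the constant symbol modulus, which is what lets the filter adapt without any training symbols; the slow convergence of exactly this kind of update is what motivates the dissertation's MAPSD alternative.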