NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Courturier, Servanne; Levy, Yannick; Mills, Diane G.; Perez, Lance C.; Wang, Fu-Quan
1993-01-01
In his seminal 1948 paper 'A Mathematical Theory of Communication,' Claude E. Shannon derived the 'channel coding theorem,' which establishes an explicit upper bound, called the channel capacity, on the rate at which 'information' can be transmitted reliably over a given communication channel. Shannon's result was an existence theorem and did not give specific codes to achieve the bound. Some skeptics have claimed that the dramatic performance improvements predicted by Shannon are not achievable in practice. The advances made in the area of coded modulation in the past decade have made communications engineers optimistic about the possibility of achieving, or at least coming close to, channel capacity. Here we consider that possibility in the light of current research results.
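As a concrete illustration of the quantity discussed above (not taken from the report itself), the sketch below evaluates the textbook AWGN capacity formula C = B log2(1 + S/N); the bandwidth and SNR values are hypothetical.

```python
import math

def awgn_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + S/N) of an ideal band-limited AWGN
    channel. Illustrative only; the parameter values used below are hypothetical."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at 10 dB SNR supports at most ~3.46 Mbit/s of reliable
# transmission; no code, however clever, can operate reliably above this rate.
print(awgn_capacity_bps(1e6, 10.0))
```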
Polar codes for achieving the classical capacity of a quantum channel
NASA Astrophysics Data System (ADS)
Guha, Saikat; Wilde, Mark
2012-02-01
We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by "channel combining" and "channel splitting," such that a fraction of the synthesized channels is perfect for data transmission while the remaining fraction is completely useless for data transmission, with the fraction of good channels equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, with an encoding complexity of O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
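For readers unfamiliar with the classical ingredient, the sketch below implements Arikan's polar transform x = u F^(kron n) over GF(2), i.e., the "channel combining" step; it is a generic classical illustration, not the quantum construction of the paper, and it omits frozen-bit selection and successive cancellation decoding.

```python
import numpy as np

def polar_transform(u):
    """Compute x = u * F^(kron n) over GF(2), where F = [[1, 0], [1, 1]]
    and the block length is N = 2^n (natural order, no bit reversal).

    Recursive butterfly structure: O(N log N) XOR operations in total.
    Classical sketch only; frozen bits and decoding are not shown.
    """
    u = np.asarray(u, dtype=np.uint8) & 1
    n = len(u)
    if n == 1:
        return u
    half = n // 2
    a, b = u[:half], u[half:]
    # (a, b) -> ((a XOR b) * G, b * G), applied recursively to the halves.
    return np.concatenate([polar_transform(a ^ b), polar_transform(b)])

# Example: transform an 8-bit message vector.
print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
```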
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
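As a point of reference for the classical message passing that the paper generalizes, the sketch below shows the standard sum-product check-node update used in BP decoding of classical binary codes; it is generic textbook material, not the quantum-message algorithm of the paper.

```python
import math

def check_node_update(incoming_llrs):
    """Sum-product check-node update for classical binary BP decoding.

    Returns the outgoing log-likelihood ratio toward one edge given the LLRs
    arriving on the other edges of the check node, via the "tanh rule":
    tanh(L_out / 2) = prod_i tanh(L_i / 2).
    """
    prod = 1.0
    for llr in incoming_llrs:
        prod *= math.tanh(llr / 2.0)
    # Clamp to avoid atanh(+/-1) overflow for very confident messages.
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(prod)

# Two moderately confident incoming messages yield a weaker outgoing one.
print(check_node_update([2.0, 3.0]))  # approximately 1.7
```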
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive only if the number of defective items in the queried group exceeds a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
Optimal superdense coding over memory channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadman, Z.; Kampermann, H.; Bruss, D.
2011-10-15
We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.
Gareau, Alexandre; Gaudreau, Patrick
2017-11-01
In previous research, autonomous motivation (AM) has been found to be associated with school achievement, but the relation has been largely heterogeneous across studies. AM has typically been assessed with explicit measures such as self-report questionnaires. Recent self-determination theory (SDT) research has suggested that converging implicit and explicit measures can be taken to characterize the integrative process in SDT. Drawing from dual-process theories, we contended that explicit AM is likely to promote school achievement when it is part of an integrated cognitive system that combines easily accessible mental representations (i.e., implicit AM) and efficient executive functioning. A sample of 272 university students completed a questionnaire and a lexical decision task to assess their explicit and implicit AM, respectively, and they also completed working memory capacity measures. Grades were obtained at the end of the semester to examine the short-term prospective effect of implicit and explicit AM, working memory, and their interaction. Results of moderation analyses provided support for a synergistic interaction in which the association between explicit AM and academic achievement was positive and significant only for individuals with high levels of implicit AM. Moreover, working memory moderated the synergistic effect of explicit and implicit AM. Explicit AM was positively associated with academic achievement for students with average-to-high levels of working memory capacity, but only if their motivation operated synergistically with high implicit AM. The integrative process thus seems to hold better properties for achievement than the sole effect of explicit AM. Implications for SDT are outlined. © 2017 The British Psychological Society.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula yields a new protocol family, the private father protocol, within the resource inequality framework, which includes private classical communication without assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability for classical multi-terminal source coding problems can be given via a unified approach using the channel simulation theorem as a building block. The fully quantum generalization of the problem is also conjectured, with outer and inner bounds on the achievable rate pairs.
Coded Cooperation for Multiway Relaying in Wireless Sensor Networks
Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar
2015-01-01
Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675
Coherent-state constellations and polar codes for thermal Gaussian channels
NASA Astrophysics Data System (ADS)
Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.
2017-06-01
Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.
Developmental Dyslexia and Explicit Long-Term Memory
ERIC Educational Resources Information Center
Menghini, Deny; Carlesimo, Giovanni Augusto; Marotta, Luigi; Finzi, Alessandra; Vicari, Stefano
2010-01-01
The reduced verbal long-term memory capacities often reported in dyslexics are generally interpreted as a consequence of their deficit in phonological coding. The present study was aimed at evaluating whether the learning deficit exhibited by dyslexics was restricted only to the verbal component of the long-term memory abilities or also involved…
Implicit motives, explicit traits, and task and contextual performance at work.
Lang, Jonas W B; Zettler, Ingo; Ewen, Christian; Hülsheger, Ute R
2012-11-01
Personality psychologists have long argued that explicit traits (as measured by questionnaires) channel the expression of implicit motives (as measured by coding imaginative verbal behavior) such that both interact in the prediction of relevant life outcome variables. In the present research, we apply these ideas in the context of industrial and organizational psychology and propose that 2 explicit traits work as channels for the expression of 3 core implicit motives in task and contextual job performance (extraversion for implicit affiliation and implicit power; explicit achievement for implicit achievement). As a test of these theoretical ideas, we report a study in which employees (N = 241) filled out a questionnaire booklet and worked on an improved modern implicit motive measure, the operant motive test. Their supervisors rated their task and contextual performance. Results support 4 of the 6 theoretical predictions and show that interactions between implicit motives and explicit traits increase the explained criterion variance in both task and contextual performance. (c) 2012 APA, all rights reserved.
Capacity of a direct detection optical communication channel
NASA Technical Reports Server (NTRS)
Tan, H. H.
1980-01-01
The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.
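A minimal model of the noiseless case described above (assumptions: no background photons, ideal photon counting, hypothetical parameter values): an M-ary PPM symbol is lost only when the signal slot registers zero Poisson counts, so the channel reduces to an M-ary erasure channel with capacity (1 − e^(−n_s)) log2 M bits per symbol, where n_s is the mean signal photon count per pulse.

```python
import math

def ppm_erasure_capacity(num_slots, mean_signal_photons):
    """Capacity (bits per PPM symbol) of noiseless direct-detection M-ary PPM.

    With no background light, the only failure event is detecting zero photons
    in the signal slot (probability exp(-n_s)), giving an M-ary erasure channel
    with capacity (1 - exp(-n_s)) * log2(M). Illustrative model only.
    """
    p_erasure = math.exp(-mean_signal_photons)
    return (1.0 - p_erasure) * math.log2(num_slots)

# 16-ary PPM with one signal photon per pulse on average: about 2.53 bits/symbol.
print(ppm_erasure_capacity(num_slots=16, mean_signal_photons=1.0))
```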
Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues
NASA Astrophysics Data System (ADS)
Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.
Next-generation wireless communication networks are expected to achieve ever increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for obtaining the expected performance, because it combines the high capacity achievable over MIMO channels with the benefits of space division multiple access. In MU-MIMO systems, the base stations transmit signals to two or more users over the same channel; as a result, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for implementation of the analyzed technique in a possible FPGA-based software defined radio.
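For context, the sketch below shows the textbook zero-forcing precoder (the right pseudo-inverse of the channel), which nulls inter-user interference whenever the number of transmit antennas is at least the number of users; it is a generic illustration with made-up dimensions, not the specific online interference-based algorithm analyzed in the paper.

```python
import numpy as np

def zero_forcing_precoder(H):
    """Textbook zero-forcing precoder W = H^H (H H^H)^{-1}.

    H is the K x Nt downlink channel matrix (K users, Nt transmit antennas,
    K <= Nt). W is scaled to unit total transmit power. Generic sketch only.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # right pseudo-inverse
    return W / np.linalg.norm(W)                     # normalize total power

rng = np.random.default_rng(0)
K, Nt = 2, 4                                         # hypothetical dimensions
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
W = zero_forcing_precoder(H)
# The effective channel H @ W is diagonal: no inter-user interference remains.
print(np.round(np.abs(H @ W), 6))
```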
Delay and death-thought accessibility: a meta-analysis.
Steinman, Christopher T; Updegraff, John A
2015-12-01
The dual-process component of Terror Management Theory (TMT) proposes that different types of threats lead to increases in death-thought accessibility (DTA) after different delay intervals. Experimental studies of terror management threats' effect on DTA were collected and coded for their use of explicitly death-related (vs. not explicitly death-related) threats, and for their use of delay and task-switching during the delay. Results reveal that studies using death-related threats achieved larger DTA effect-sizes when they included more task-switching or a longer delay between the threat and the DTA measurement. In contrast, studies using threats that were not explicitly death-related achieved smaller DTA effect-sizes when they included more task-switching between the threat and the DTA measurement. These findings provide partial support for the dual-process component's predictions regarding delay and DTA. Limitations and future directions are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.
Properties of a certain stochastic dynamical system, channel polarization, and polar codes
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2010-06-01
A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
Accumulate-Repeat-Accumulate-Accumulate-Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
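The serial encoder chain described above can be sketched in a few lines. This is an illustration only: the repetition factor and interleaver are arbitrary choices, and the puncturing and protograph structure of the actual ARAA designs are omitted.

```python
import numpy as np

def accumulate(bits):
    """Running XOR, i.e., a 1/(1+D) convolutional accumulator over GF(2)."""
    return np.bitwise_xor.accumulate(np.asarray(bits, dtype=np.uint8)) & 1

def araa_encode(info_bits, repeat=3, seed=0):
    """Minimal sketch of the Accumulate-Repeat-Accumulate-Accumulate chain:
    accumulate (precoder) -> repeat -> permute -> accumulate -> accumulate.
    Real ARAA codes also puncture the accumulators and follow a specific
    protograph; this only illustrates the serial encoder structure.
    """
    rng = np.random.default_rng(seed)
    precoded = accumulate(info_bits)
    repeated = np.repeat(precoded, repeat)
    interleaved = rng.permutation(repeated)
    return accumulate(accumulate(interleaved))

print(araa_encode([1, 0, 1, 1, 0]))
```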
Error suppression via complementary gauge choices in Reed-Muller codes
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Jochym-O'Connor, Tomas
2017-09-01
Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.
Köllner, Martin G.; Schultheiss, Oliver C.
2014-01-01
The correlation between implicit and explicit motive measures and potential moderators of this relationship were examined meta-analytically, using Hunter and Schmidt's (2004) approach. Studies from a comprehensive search in PsycINFO, data sets of our research group, a literature list compiled by an expert, and the results of a request for gray literature were examined for relevance and coded. Analyses were based on 49 papers, 56 independent samples, 6151 subjects, and 167 correlations. The correlations (ρ) between implicit and explicit measures were 0.130 (CI: 0.077–0.183) for the overall relationship, 0.116 (CI: 0.050–0.182) for affiliation, 0.139 (CI: 0.080–0.198) for achievement, and 0.038 (CI: −0.055–0.131) for power. Participant age did not moderate the size of these relationships. However, a greater proportion of males in the samples and an earlier publication year were associated with larger effect sizes. PMID:25152741
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
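As a concrete instance, and under the assumption that the convention matches the commonly cited smallest member of this family, the code words protecting against a single photon loss are |W_up> = (|0> + |4>)/sqrt(2) and |W_down> = |2>. The sketch below builds the Fock-amplitude vectors and checks that they are orthonormal and have equal mean photon number.

```python
import numpy as np

# Minimal sketch (assumed convention): smallest binomial code for one photon loss,
#   |W_up>   = (|0> + |4>) / sqrt(2),   |W_down> = |2>,
# i.e. amplitudes proportional to square roots of binomial coefficients on every
# second Fock state. We verify orthonormality and equal mean photon number.
dim = 6                                    # truncated Fock space |0> .. |5>
w_up = np.zeros(dim)
w_up[0] = w_up[4] = 1 / np.sqrt(2)
w_down = np.zeros(dim)
w_down[2] = 1.0

n = np.arange(dim)                         # photon-number operator (diagonal)
print(np.vdot(w_up, w_down))               # 0.0  -> orthogonal
print(np.vdot(w_up, w_up), np.vdot(w_down, w_down))   # 1.0, 1.0 -> normalized
print(n @ (w_up ** 2), n @ (w_down ** 2))  # 2.0, 2.0 -> equal mean photon number
```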
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combination of these errors.
Coherent state coding approaches the capacity of non-Gaussian bosonic channels
NASA Astrophysics Data System (ADS)
Huber, Stefan; König, Robert
2018-05-01
The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.
Effective Identification of Similar Patients Through Sequential Matching over ICD Code Embedding.
Nguyen, Dang; Luo, Wei; Venkatesh, Svetha; Phung, Dinh
2018-04-11
Evidence-based medicine often involves the identification of patients with similar conditions, which are often captured in ICD (International Classification of Diseases (World Health Organization 2013)) code sequences. With no satisfying prior solutions for matching ICD-10 code sequences, this paper presents a method which effectively captures the clinical similarity among routine patients who have multiple comorbidities and complex care needs. Our method leverages the recent progress in representation learning of individual ICD-10 codes, and it explicitly uses the sequential order of codes for matching. Empirical evaluation on a state-wide cancer data collection shows that our proposed method achieves significantly higher matching performance compared with state-of-the-art methods ignoring the sequential order. Our method better identifies similar patients in a number of clinical outcomes including readmission and mortality outlook. Although this paper focuses on ICD-10 diagnosis code sequences, our method can be adapted to work with other codified sequence data.
Long distance quantum communication with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team
We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1995-01-01
This report focuses on the results obtained during the PI's recent sabbatical leave at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from January 1, 1995 through June 30, 1995. Two projects investigated various properties of TURBO codes, a new form of concatenated coding that achieves near channel capacity performance at moderate bit error rates. The performance of TURBO codes is explained in terms of the code's distance spectrum. These results explain both the near capacity performance of the TURBO codes and the observed 'error floor' for moderate and high signal-to-noise ratios (SNR's). A semester project, entitled 'The Realization of the Turbo-Coding System,' involved a thorough simulation study of the performance of TURBO codes and verified the results claimed by previous authors. A copy of the final report for this project is included as Appendix A. A diploma project, entitled 'On the Free Distance of Turbo Codes and Related Product Codes,' includes an analysis of TURBO codes and an explanation for their remarkable performance. A copy of the final report for this project is included as Appendix B.
Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.
Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos
2013-11-04
In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance as well as the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over the conventional chip-level OCDMA systems at a coding rate of 1/10.
Bates, Imelda; Taegtmeyer, Miriam; Squire, S Bertel; Ansong, Daniel; Nhlema-Simwaka, Bertha; Baba, Amuda; Theobald, Sally
2011-03-28
Despite substantial investment in health capacity building in developing countries, evaluations of capacity building effectiveness are scarce. By analysing projects in Africa that had successfully built sustainable capacity, we aimed to identify evidence that could indicate that capacity building was likely to be sustainable. Four projects were selected as case studies using pre-determined criteria, including the achievement of sustainable capacity. By mapping the capacity building activities in each case study onto a framework previously used for evaluating health research capacity in Ghana, we were able to identify activities that were common to all projects. We used these activities to derive indicators which could be used in other projects to monitor progress towards building sustainable research capacity. Indicators of sustainable capacity building increased in complexity as projects matured and included: early engagement of stakeholders, explicit plans for scale-up, strategies for influencing policies, and quality assessments (awareness and experiential stages); improved resources, institutionalisation of activities, and innovation (expansion stage); and secured funding for core activities and management and decision-making led by southern partners (consolidation stage). Projects became sustainable after a median of 66 months. The main challenges to achieving sustainability were high turnover of staff and stakeholders, and difficulties in embedding new activities into existing systems, securing funding and influencing policy development. Our indicators of sustainable capacity building need to be tested prospectively in a variety of projects to assess their usefulness. For each project, the evidence required to show that indicators have been achieved should evolve with the project, and the indicators should be determined prospectively in collaboration with stakeholders.
Parallelization of an Object-Oriented Unstructured Aeroacoustics Solver
NASA Technical Reports Server (NTRS)
Baggag, Abdelkader; Atkins, Harold; Oezturan, Can; Keyes, David
1999-01-01
A computational aeroacoustics code based on the discontinuous Galerkin method is ported to several parallel platforms using MPI. The discontinuous Galerkin method is a compact high-order method that retains its accuracy and robustness on non-smooth unstructured meshes. In its semi-discrete form, the discontinuous Galerkin method can be combined with explicit time marching methods, making it well suited to time-accurate computations. The compact nature of the discontinuous Galerkin method also makes it well suited for distributed memory parallel platforms. The original serial code was written using an object-oriented approach and was previously optimized for cache-based machines. The port to parallel platforms was achieved simply by treating partition boundaries as a type of boundary condition. Code modifications were minimal because boundary conditions were abstractions in the original program. Scalability results are presented for the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. Slightly superlinear speedup is achieved on a fixed-size problem on the Origin, due to cache effects.
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Yan, Jerry
1999-01-01
We present an HPF (High Performance Fortran) implementation of the ARC3D code along with profiling and performance data on the SGI Origin 2000. Advantages and limitations of HPF as a parallel programming language for CFD applications are discussed. To achieve good performance, we used data distributions optimized for the implementation of the implicit and explicit operators of the solver and the boundary conditions. We compare the results with MPI and directive-based implementations.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
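To make the protograph ("copy-and-permute") construction concrete, the sketch below lifts a tiny, made-up base matrix into a quasi-cyclic parity-check matrix using circulant permutation blocks; the base matrix, shift values, and lifting size are illustrative and are not the specific protograph designs of the paper.

```python
import numpy as np

def circulant(size, shift):
    """size x size identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def lift_protograph(base, shifts, Z):
    """Generic copy-and-permute lifting of a protograph: each 1 in the base
    matrix becomes a Z x Z circulant permutation, each 0 a Z x Z zero block.
    Base matrix and shifts below are made up for illustration.
    """
    rows = []
    for i, base_row in enumerate(base):
        blocks = [circulant(Z, shifts[i][j]) if b else np.zeros((Z, Z), np.uint8)
                  for j, b in enumerate(base_row)]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base = [[1, 1, 1, 0],
        [0, 1, 1, 1]]          # tiny illustrative protograph: 2 checks, 4 variables
shifts = [[1, 2, 3, 0],
          [0, 4, 1, 2]]
H = lift_protograph(base, shifts, Z=5)
print(H.shape)                  # (10, 20): a length-20 QC-LDPC code of design rate 1/2
```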
Fundamentals of Free-Space Optical Communications
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Moision, Bruce; Erkmen, Baris
2012-01-01
Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.
Systematic network coding for two-hop lossy transmissions
NASA Astrophysics Data System (ADS)
Li, Ye; Blostein, Steven; Chan, Wai-Yip
2015-12-01
In this paper, we consider network transmissions over a single or multiple parallel two-hop lossy paths. These scenarios occur in applications such as sensor networks or WiFi offloading. Random linear network coding (RLNC), where previously received packets are re-encoded at intermediate nodes and forwarded, is known to be a capacity-achieving approach for these networks. However, a major drawback of RLNC is its high encoding and decoding complexity. In this work, a systematic network coding method is proposed. We show through both analysis and simulation that the proposed method achieves higher end-to-end rate as well as lower computational cost than RLNC for finite field sizes and finite-sized packet transmissions.
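For comparison with the systematic scheme proposed above, the sketch below shows plain RLNC encoding over GF(2) (real deployments typically use a larger field such as GF(2^8)); the packet contents and the number of coded packets are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rlnc_encode(packets, num_coded):
    """Random linear network coding over GF(2): every coded packet is a random
    XOR combination of the source packets and carries its coefficient vector,
    so an intermediate node can re-encode without decoding. Illustrative
    sketch only, not the systematic scheme proposed in the paper.
    """
    packets = np.asarray(packets, dtype=np.uint8)             # k x L bit matrix
    coeffs = rng.integers(0, 2, size=(num_coded, len(packets)), dtype=np.uint8)
    coded = (coeffs @ packets) % 2                            # GF(2) combinations
    return coeffs, coded

source = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]           # k = 3 packets of 4 bits
coeffs, coded = rlnc_encode(source, num_coded=5)
print(coeffs, coded, sep="\n")
# A receiver recovers the sources once it collects 3 coded packets whose
# coefficient vectors are linearly independent over GF(2).
```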
On the optimum signal constellation design for high-speed optical transport networks.
Liu, Tao; Djordjevic, Ivan B
2012-08-27
In this paper, we first describe an optimum signal constellation design algorithm, optimum in the MMSE sense and called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) to further improve upon the performance of MMSE-OSCD, motivated by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The coded-modulation scheme so obtained, in combination with polarization-multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^(-7). On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.
Krendl, Anne C
2018-05-21
Although engaging explicit regulatory strategies may reduce negative bias toward outgroup members, these strategies are cognitively demanding and thus may not be effective for older adults (OA) who have reduced cognitive resources. The current study therefore examines whether individual differences in cognitive capacity disrupt OA' ability to explicitly regulate their bias to stigmatized individuals. Young and OA were instructed to explicitly regulate their negative bias toward stigmatized individuals by using an explicit reappraisal strategy. Regulatory success was assessed as a function of age and individual differences in cognitive capacity (Experiment 1). In Experiment 2, the role of executive function in implementing cognitive reappraisal strategies was examined by using a divided attention manipulation. Results from Experiment 1 revealed that individual differences in OA' cognitive capacity disrupted their ability to regulate their negative emotional response to stigma. In Experiment 2, it was found that dividing attention in young adults (YA) significantly reduced their regulatory success as compared to YA' regulatory capacity in the full attention condition. As expected, dividing YA' attention made their performance similar to OA with relatively preserved cognitive capacity. Together, the results from this study demonstrated that individual differences in cognitive capacity predicted OA' ability to explicitly regulate their negative bias to a range of stigmatized individuals.
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in the trellis.
Interleaved concatenated codes: new perspectives on approaching the Shannon limit.
Viterbi, A J; Viterbi, A M; Sindhushayana, N T
1997-09-02
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Rizk, Yehia M.
1999-01-01
The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is more complicated than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones to a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each group is approximately balanced. A proper number of threads is initially allocated to each group, and in subsequent iterations during the run-time, the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in Overflow.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder; thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that, for maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, their thresholds outperform not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
Bojórquez, Edén; Reyes-Salazar, Alfredo; Ruiz, Sonia E; Terán-Gilmore, Amador
2014-01-01
Several studies have been devoted to calibrating damage indices for steel and reinforced concrete members with the purpose of overcoming some of the shortcomings of the parameters currently used during seismic design. Nevertheless, it remains a challenge to study and calibrate the use of such indices for the practical structural evaluation of complex structures. In this paper, an energy-based damage model for multidegree-of-freedom (MDOF) steel framed structures that accounts explicitly for the effects of cumulative plastic deformation demands is used to estimate the cyclic drift capacity of steel structures. To achieve this, seismic hazard curves are used to discuss the limitations of the maximum interstory drift demand as a performance parameter for achieving adequate damage control. Then the concept of cyclic drift capacity, which incorporates information about the influence of cumulative plastic deformation demands, is introduced as an alternative for future applications of seismic design of structures subjected to long duration ground motions.
Error Control Techniques for Satellite and Space Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1996-01-01
In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10(exp -5) at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the Turbo coding scheme comes within 0.7 dB of capacity at a BER of 10(exp -5).
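The 0 dB figure quoted above can be reproduced from the unconstrained-input AWGN bound Eb/N0 >= (2^(2R) - 1)/(2R); the sketch below evaluates it (the binary-input limit is slightly higher, so this is the looser, commonly quoted bound, shown purely as a worked check).

```python
import math

def shannon_limit_ebn0_db(rate):
    """Minimum Eb/N0 (dB) for reliable communication at code rate R bits per
    real channel dimension, from the unconstrained-input AWGN capacity
    R <= 0.5 * log2(1 + 2 * R * Eb/N0), i.e. Eb/N0 >= (2^(2R) - 1) / (2R).
    """
    return 10 * math.log10((2 ** (2 * rate) - 1) / (2 * rate))

print(shannon_limit_ebn0_db(0.5))     # 0.0 dB: the capacity limit cited above
print(shannon_limit_ebn0_db(1 / 3))   # about -0.55 dB for a rate-1/3 code
```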
Explicit and implicit learning: The case of computer programming
NASA Astrophysics Data System (ADS)
Mancy, Rebecca
The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory, where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance, and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.
High-Capacity Communications from Martian Distances
NASA Technical Reports Server (NTRS)
Williams, W. Dan; Collins, Michael; Hodges, Richard; Orr, Richard S.; Sands, O. Scott; Schuchman, Leonard; Vyas, Hemali
2007-01-01
High capacity communications from Martian distances, required for the envisioned human exploration and desirable for data-intensive science missions, is challenging. NASA's Deep Space Network currently requires large antennas to close RF telemetry links operating at kilobit-per-second data rates. To accommodate higher rate communications, NASA is considering means to achieve greater effective aperture at its ground stations. This report, focusing on the return link from Mars to Earth, demonstrates that without excessive research and development expenditure, operational Mars-to-Earth RF communications systems can achieve data rates up to 1 Gbps by 2020 using technology that today is at technology readiness level (TRL) 4-5. Advanced technology to achieve the needed increase in spacecraft power and transmit aperture is feasible at only a moderate increase in spacecraft mass and technology risk. In addition, both power-efficient, near-capacity coding and modulation and greater aperture from the DSN array will be required. In accord with these results and conclusions, investment in the following technologies is recommended: (1) lightweight (1 kg/sq m density) spacecraft antenna systems; (2) a Ka-band receive ground array consisting of relatively small (10-15 m) antennas; (3) coding and modulation technology that reduces spacecraft power by at least 3 dB; and (4) efficient generation of kilowatt-level spacecraft RF power.
Risk Informed Design and Analysis Criteria for Nuclear Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salmon, Michael W.
2015-06-17
Target performance can be achieved by defining the design basis ground motion from the results of a probabilistic seismic hazards assessment and introducing known levels of conservatism in the design above the DBE. ASCE 4, ASCE 43, and DOE-STD-1020 define the DBE at 4x10^(-4) and introduce only slight levels of conservatism in response. ASCE 4, ASCE 43, and DOE-STD-1020 assume code capacities target about 98% NEP. There is a need to have a uniform target (98% NEP) for code developers (ACI, AISC, etc.) to aim for. In considering strengthening options, one must also consider the cost/risk reduction achieved.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
Algorithms for Zonal Methods and Development of Three Dimensional Mesh Generation Procedures.
1984-02-01
... a more complete set of equations is used, but their effect is imposed by means of a right-hand-side forcing function, not by means of a zonal boundary ... Modifications of flow-simulation algorithms are discussed. The explicit finite-difference code of Magnus and ... Computational tests in two dimensions ... used to simplify the task of grid generation without an adverse effect on flow-field algorithms and ... achieve computational efficiency. More recently, ...
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with node degrees of at least 3. The main motivation is to gain linear minimum distance, which yields a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, requiring node degrees of at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
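Constraint combining on a protograph can be pictured as summing rows of the base (proto)matrix: connecting two check nodes through a degree-2 non-transmitted node merges their constraints, raising the code rate. The toy example below uses a hypothetical base matrix purely to illustrate the bookkeeping, not the actual protographs of the paper.

```python
import numpy as np

# Hypothetical base matrix: rows are check nodes, columns are variable nodes,
# entries count parallel edges in the protograph.
B = np.array([[1, 2, 1, 0],
              [2, 1, 0, 3],
              [1, 1, 3, 1]])

def combine_checks(B, i, j):
    """Merge checks i and j, as if linked by a degree-2 non-transmitted node."""
    merged = B[i] + B[j]
    keep = [r for r in range(B.shape[0]) if r not in (i, j)]
    return np.vstack([B[keep], merged])

B_high_rate = combine_checks(B, 0, 1)
# Fewer checks for the same variable nodes -> higher design rate.
print(B.shape, "->", B_high_rate.shape)
```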
Interleaved concatenated codes: New perspectives on approaching the Shannon limit
Viterbi, A. J.; Viterbi, A. M.; Sindhushayana, N. T.
1997-01-01
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit. PMID:11038568
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High-rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high-rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth-efficient signal-space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations, with and without side information, were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
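As a reminder of why the erasure information helps: an (n, k) Reed-Solomon code with minimum distance d = n − k + 1 can correct t errors and e erasures whenever 2t + e ≤ d − 1, so a symbol erased via the inner decoder's reliability flags costs only half as much redundancy as an undetected error. A small sketch of this standard condition (the example code parameters are illustrative):

```python
def rs_decodable(n, k, errors, erasures):
    """Errors-and-erasures decoding succeeds if 2*t + e <= d - 1, with d = n - k + 1."""
    d = n - k + 1
    return 2 * errors + erasures <= d - 1

# Example: a (255, 223) RS code has d = 33, so it tolerates e.g. 10 errors plus 12 erasures.
print(rs_decodable(255, 223, errors=10, erasures=12))   # True
print(rs_decodable(255, 223, errors=17, erasures=0))    # False (beyond t = 16 errors alone)
```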
Jiao, Haisong; Pu, Tao; Zheng, Jilin; Xiang, Peng; Fang, Tao
2017-05-15
The physical-layer security of a quantum-noise randomized cipher (QNRC) system is, for the first time, quantitatively evaluated with secrecy capacity employed as the performance metric. Treating quantum noise as a channel advantage for the legitimate parties over eavesdroppers, specific wiretap models are built for both the key channel and the data channel, with channel outputs yielded by quantum heterodyne measurement; general expressions for the secrecy capacities of both channels are derived, where the matching codes are proved to be uniformly distributed. The maximal achievable secrecy rate of the system is proposed, under which secrecy of both the key and the data is guaranteed. The influences of various system parameters on the secrecy capacities are assessed in detail. The results indicate that QNRC combined with proper channel codes is a promising framework for secure, high-speed, long-distance communication, with rates that can be orders of magnitude higher than the perfect secrecy rates of other encryption systems. Even if the eavesdropper intercepts more signal power than the legitimate receiver, secure communication (up to Gb/s) is still achievable. Moreover, the secrecy of the running key is found to be the main constraint on the system's maximal secrecy rate.
ERIC Educational Resources Information Center
De Nigris, Rosemarie Previti
2017-01-01
The hypothesis of the study was that explicit gradual release of responsibility comprehension instruction (GRR) (Pearson & Gallagher, 1983; Fisher & Frey, 2008) with the researcher-created Story Grammar Code (SGC) strategy would significantly increase third graders' comprehension of narrative fiction and nonfiction text. SGC comprehension…
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
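Shaped input distributions of this kind are typically drawn from the Maxwell-Boltzmann family, which maximizes entropy for a given mean symbol energy. The sketch below is a generic illustration (not the exact distributions of the paper): it computes such a distribution over a 1-D 8-ASK component alphabet, from which a shaped 64-QAM input follows as the product of two independent components.

```python
import numpy as np

def maxwell_boltzmann(amplitudes, nu):
    """P(a) proportional to exp(-nu * a^2) over the given amplitude alphabet."""
    w = np.exp(-nu * amplitudes**2)
    return w / w.sum()

ask8 = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)  # one 64-QAM dimension
p = maxwell_boltzmann(ask8, nu=0.05)                        # nu is a free shaping parameter

entropy = -np.sum(p * np.log2(p))        # bits carried per 1-D symbol
mean_energy = np.sum(p * ask8**2)        # shaping trades entropy for lower mean energy
print(f"H = {entropy:.3f} bit, E[|a|^2] = {mean_energy:.2f}")
```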
Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B
2016-08-08
A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.
NASA Technical Reports Server (NTRS)
Bogert, Philip B.; Satyanarayana, Arunkumar; Chunchu, Prasad B.
2006-01-01
Splitting, ultimate failure load, and the damage path in center-notched composite specimens subjected to in-plane tension loading are predicted using a progressive failure analysis methodology. A 2-D Hashin-Rotem failure criterion is used to determine intra-laminar fiber and matrix failures. This progressive failure methodology has been implemented in the Abaqus/Explicit and Abaqus/Standard finite element codes through the user-written subroutines "VUMAT" and "USDFLD", respectively. A 2-D finite element model is used for predicting the intra-laminar damage. Analysis results obtained from the Abaqus/Explicit and Abaqus/Standard codes show good agreement with experimental results. The importance of modeling delamination in the progressive failure analysis methodology is recognized for future studies. The use of an explicit integration dynamics code for simple specimen geometry and static loading establishes a foundation for future analyses, where complex loading and nonlinear dynamic interactions of damage and structure will necessitate it.
NASA Astrophysics Data System (ADS)
Dao, Thanh Hai
2018-01-01
Network coding techniques are seen as a new dimension for improving network performance thanks to their capability to utilize network resources more efficiently. Indeed, the application of network coding to failure recovery in optical networks marks a major departure from traditional protection schemes, as it can potentially achieve both rapid recovery and capacity improvement, challenging the prevailing wisdom of trading capacity efficiency for recovery speed and vice versa. In this context, the maturing of all-optical XOR technologies is a good match for the need for more efficient protection in transparent optical networks. In addressing this opportunity, we propose to use practical all-optical XOR network coding to leverage conventional 1 + 1 optical path protection in transparent WDM optical networks. The network-coding-assisted protection solution combines the protection flows of two demands sharing the same destination node in supportive conditions, paving the way for reducing the backup capacity. A novel mathematical model taking into account the operation of the new protection scheme for optimal network design is formulated as an integer linear program. Numerical results based on extensive simulations on realistic topologies, the COST239 and NSFNET networks, are presented to highlight the benefits of our proposal compared to the conventional approach in terms of wavelength resource efficiency and network throughput.
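The core of network-coding-assisted 1 + 1 protection is that two protection flows heading to the same destination can be XOR-ed onto a shared backup resource; the destination recovers a failed working signal by XOR-ing the coded backup with the surviving working signal. A minimal bitwise sketch with hypothetical payloads:

```python
# Two demands d1 and d2 share the same destination node.
working_1 = 0b10110010          # working-path copy of demand 1 (hypothetical payload)
working_2 = 0b01101100          # working-path copy of demand 2

backup = working_1 ^ working_2  # a single coded backup flow instead of two

# Suppose the working path of demand 1 fails: the destination still holds
# working_2 and the coded backup, so it recovers demand 1 by XOR.
recovered_1 = backup ^ working_2
assert recovered_1 == working_1
```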
Crashdynamics with DYNA3D: Capabilities and research directions
NASA Technical Reports Server (NTRS)
Whirley, Robert G.; Engelmann, Bruce E.
1993-01-01
The application of the explicit nonlinear finite element analysis code DYNA3D to crashworthiness problems is discussed. Emphasized in the first part of this work are the most important capabilities of an explicit code for crashworthiness analyses. The areas with significant research promise for the computational simulation of crash events are then addressed.
Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results
NASA Astrophysics Data System (ADS)
Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.
2005-12-01
We present performance limits of optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretic capacity of OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) corrupts the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and nonbinary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that, unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate performance.
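The interference-limited analysis described above reduces to evaluating the mutual information of a discrete memoryless channel. A generic numerical helper of the kind one might use for such a study is sketched below; a uniform input is assumed and the transition matrix is purely illustrative, not the OCDMA channel of the paper.

```python
import numpy as np

def mutual_information(P_y_given_x, p_x):
    """I(X;Y) in bits for a DMC with transition matrix P[y|x] (rows = inputs)."""
    p_xy = p_x[:, None] * P_y_given_x            # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0)                       # output marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_xy > 0, p_xy / (p_x[:, None] * p_y[None, :]), 1.0)
    return float(np.sum(p_xy * np.log2(ratio)))  # zero terms contribute log2(1) = 0

# Illustrative binary channel dominated by multiple-access interference:
P = np.array([[0.9, 0.1],    # transmitted 0
              [0.3, 0.7]])   # transmitted 1
print(mutual_information(P, np.array([0.5, 0.5])))  # bits per channel use
```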
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either through explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization; they can easily become part of a larger common library. To conclude, careful measurements are described which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
ERIC Educational Resources Information Center
Simmons, Deborah C.; And Others
1995-01-01
Examined effects of explicit teaching and peer tutoring on reading achievement of learning-disabled students and nondisabled, low-performing readers in academically integrated classrooms. Found that explicit-teaching students did not achieve reliably better than controls; students in the explicit teaching plus peer tutoring condition scored higher…
NASA Astrophysics Data System (ADS)
Damera-Venkata, Niranjan; Yen, Jonathan
2003-01-01
The visually significant two-dimensional barcode (VSB), developed by Shaked et al., is a method for designing an information-carrying two-dimensional barcode that has the appearance of a given graphical entity such as a company logo. The encoding and decoding of information using the VSB uses a base image with very few gray levels (typically only two). This typically requires the image histogram to be bi-modal. For continuous-tone images such as digital photographs of individuals, the representation of tone or "shades of gray" is not only important for obtaining a pleasing rendition of the face; in most cases, the VSB renders these images unrecognizable due to its inability to represent true gray-tone variations. This paper extends the concept of the VSB to an image barcode (IBC). We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images such as those acquired with a digital camera. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. The IBC supports a high information capacity that differentiates it from common hardcopy watermarks. The reason for the improved image quality over the VSB is a joint encoding/halftoning strategy based on a modified version of block error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.
Drug-laden 3D biodegradable label using QR code for anti-counterfeiting of drugs.
Fei, Jie; Liu, Ran
2016-06-01
Wiping out counterfeit drugs is a major task for public health care around the world. The proliferation of these drugs makes treatment potentially harmful or even lethal. In this paper, a biodegradable drug-laden QR code label for anti-counterfeiting of drugs is proposed that provides non-fluorescence recognition and high capacity. It is fabricated by laser cutting, which produces varying roughness over the surface and hence differences in gray levels that form the QR code pattern on the translucent material, and by a micro-mold process to obtain the drug-laden biodegradable label. We screened biomaterials that satisfy the relevant conditions and the further requirements of the package. The drug-laden microlabel is placed on the surface of the troche or the bottom of the capsule and can be read by a simple smartphone QR code reader application. Labeling the pill directly and decoding the information successfully means a simpler and more convenient operation, with non-fluorescence recognition and high capacity, in contrast to traditional methods.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information-theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. In particular, when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
Alternative modeling methods for plasma-based Rf ion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H⁻ source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD); extended, gas-dynamic, and Hall MHD; and two-fluid MHD models. We show recent results on modeling the internal antenna H⁻ ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
Alternative modeling methods for plasma-based Rf ion sources.
Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C
2016-02-01
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD); extended, gas-dynamic, and Hall MHD; and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
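The same workflow of symbolic differentiation of a strain-energy function followed by automatic Fortran code generation can be reproduced today with an open-source computer algebra system; the sketch below uses SymPy and a simple one-term Ogden-like energy purely as a placeholder for the models treated in the paper (kinematic factors are omitted for brevity).

```python
import sympy as sp

# Principal stretches and Ogden-type material parameters (illustrative one-term form).
l1, l2, l3, mu, alpha = sp.symbols("lambda1 lambda2 lambda3 mu alpha", positive=True)
W = (mu / alpha) * (l1**alpha + l2**alpha + l3**alpha - 3)   # strain-energy function

# Symbolic derivative of the energy with respect to a principal stretch.
sigma1 = sp.simplify(sp.diff(W, l1))

# Emit Fortran source for the derived expression, as MACSYMA did in the paper.
print(sp.fcode(sigma1, assign_to="SIGMA1"))
```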
A comparison of two central difference schemes for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.
1990-01-01
Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: an explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.
Capacity of noncoherent MFSK channels
NASA Technical Reports Server (NTRS)
Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.
1974-01-01
Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise with hard-decision, optimal, and soft-decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for the orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased; that the optimum number of signals is not infinite (except for the optimal receiver); and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7, even for the hard-decision receiver.
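For the hard-decision receiver, the channel reduces to an M-ary symmetric channel, whose capacity has a standard closed form. A short sketch follows; the symbol-error probability p is treated as a given input here, not derived from the noncoherent detection statistics of the paper.

```python
import math

def mary_symmetric_capacity(M, p):
    """Capacity (bits/symbol) of an M-ary symmetric channel with symbol error probability p."""
    if p == 0:
        return math.log2(M)
    return (math.log2(M)
            + (1 - p) * math.log2(1 - p)
            + p * math.log2(p / (M - 1)))

# Example: 8-ary signaling with a 5% hard-decision symbol error rate.
print(mary_symmetric_capacity(8, 0.05))   # ~2.57 bits per channel use
```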
Three Dimensional Explicit Model for Cometary Tail Ions Interactions with Solar Wind
NASA Astrophysics Data System (ADS)
Al Bermani, M. J. F.; Alhamed, S. A.; Khalaf, S. Z.; Ali, H. Sh.; Selman, A. A.
2009-06-01
The different interactions between cometary tail and solar wind ions are studied in the present paper based on the three-dimensional explicit Lax method. The model used in this research is based on the continuity equations describing the cometary tail-solar wind interactions, and a three-dimensional system is considered. Simulation of the physical system was carried out using a computer code written in Matlab 7.0. The parameters studied assume a Halley-type comet and include the particle density ρ, the particle velocity v, the magnetic field strength B, the dynamic pressure p, and the internal energy E. The results of the present research show that the interaction near the cometary nucleus is mainly affected by the new ions added to the plasma of the solar wind, which increases the average molecular weight and results in many unique characteristics of the cometary tail. These characteristics were explained in the presence of the interplanetary magnetic field (IMF).
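The explicit Lax (Lax-Friedrichs) update replaces the value at each grid point with the average of its neighbours plus a centred flux difference. A one-dimensional sketch of this update for a generic conserved quantity is shown below; the full model is three-dimensional and multi-variable, so this is only the scheme's skeleton with made-up parameters.

```python
import numpy as np

def lax_step(u, flux, dt, dx):
    """One explicit Lax-Friedrichs step for du/dt + dF(u)/dx = 0 (periodic boundaries)."""
    F = flux(u)
    return (0.5 * (np.roll(u, -1) + np.roll(u, 1))
            - dt / (2.0 * dx) * (np.roll(F, -1) - np.roll(F, 1)))

# Illustrative run: linear advection of a density bump, F(u) = a*u.
a, dx, dt = 1.0, 0.01, 0.005                 # dt chosen so that a*dt/dx <= 1 (CFL condition)
x = np.arange(0.0, 1.0, dx)
rho = np.exp(-((x - 0.5) / 0.05) ** 2)
for _ in range(100):
    rho = lax_step(rho, lambda u: a * u, dt, dx)
```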
ERIC Educational Resources Information Center
Kalish, Michael L.; Newell, Ben R.; Dunn, John C.
2017-01-01
It is sometimes supposed that category learning involves competing explicit and procedural systems, with only the former reliant on working memory capacity (WMC). In 2 experiments participants were trained for 3 blocks on both filtering (often said to be learned explicitly) and condensation (often said to be learned procedurally) category…
ERIC Educational Resources Information Center
Ercetin, Gulcan; Alptekin, Cem
2013-01-01
Following an extensive overview of the subject, this study explores the relationships between second-language (L2) explicit/implicit knowledge sources, embedded in the declarative/procedural memory systems, and L2 working memory (WM) capacity. It further examines the relationships between L2 reading comprehension and L2 WM capacity as well as…
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1991-01-01
Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (E_b/N_0, where E_b is the energy per bit and N_0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which non-linear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
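The capacity bound referred to above can be made concrete for the band-limited AWGN channel: at spectral efficiency η bits/s/Hz, reliable communication requires E_b/N_0 ≥ (2^η − 1)/η. A small calculation of that limit:

```python
import math

def shannon_ebn0_limit_db(eta):
    """Minimum Eb/N0 (dB) for reliable transmission at spectral efficiency eta (bits/s/Hz)."""
    return 10.0 * math.log10((2.0 ** eta - 1.0) / eta)

for eta in (0.5, 1.0, 2.0, 4.0):
    print(f"eta = {eta}: Eb/N0 >= {shannon_ebn0_limit_db(eta):.2f} dB")
# As eta -> 0 the bound approaches the ultimate limit of -1.59 dB;
# at 2 bit/sym the bound is about 1.76 dB.
```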
Semrau, Daniel; Killey, Robert; Bayvel, Polina
2017-06-12
As the bandwidths of optical communication systems are increased to maximize channel capacity, the impact of stimulated Raman scattering (SRS) on the achievable information rates (AIR) in ultra-wideband coherent WDM systems becomes significant, and is investigated in this work, for the first time. By modifying the GN-model to account for SRS, it is possible to derive a closed-form expression that predicts the optical signal-to-noise ratio of all channels at the receiver for bandwidths of up to 15 THz, which is in excellent agreement with numerical calculations. It is shown that, with fixed modulation and coding rate, SRS leads to a drop of approximately 40% in achievable information rates for bandwidths higher than 15 THz. However, if adaptive modulation and coding rates are applied across the entire spectrum, this AIR reduction can be limited to only 10%.
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for the simulation of the incompressible Navier-Stokes equations in channel and duct flows with strong grid stretching near walls. The code leverages the fractional-step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear-flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
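The IMEX idea is to treat stiff terms (such as wall-normal diffusion on a strongly stretched grid) implicitly while advancing the non-stiff terms explicitly. A first-order sketch for du/dt = N(u) + L u, with N explicit and a stiff linear operator L implicit, is shown below; the actual Diablo 2.0 schemes are higher-order low-storage Runge-Kutta variants, and the operators here are stand-ins.

```python
import numpy as np

def imex_euler_step(u, N, L, dt):
    """First-order IMEX step: explicit in N(u), implicit in the stiff linear term L."""
    rhs = u + dt * N(u)                  # explicit contribution
    A = np.eye(len(u)) - dt * L          # solve (I - dt*L) u_next = rhs
    return np.linalg.solve(A, rhs)

# Illustrative example: advection-like nonlinearity plus a stiff diffusion-like term.
n = 64
L = -50.0 * np.eye(n)                    # stand-in for a stiff diffusion operator
N = lambda u: -np.roll(u, -1) * u        # stand-in nonlinear term
u = np.random.rand(n)
u = imex_euler_step(u, N, L, dt=0.01)
```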
Information Theoretic Secret Key Generation: Structured Codes and Tree Packing
ERIC Educational Resources Information Center
Nitinawarat, Sirin
2010-01-01
This dissertation deals with a multiterminal source model for secret key generation by multiple network terminals with prior and privileged access to a set of correlated signals complemented by public discussion among themselves. Emphasis is placed on a characterization of secret key capacity, i.e., the largest rate of an achievable secret key,…
NASA Astrophysics Data System (ADS)
Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian
2017-10-01
Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo, BCA program TRIDYN developed for this purpose that includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than and compares favorably to the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.
Bates, Imelda; Boyd, Alan; Smith, Helen; Cole, Donald C
2014-03-03
Despite increasing investment in health research capacity strengthening efforts in low and middle income countries, published evidence to guide the systematic design and monitoring of such interventions is very limited. Systematic processes are important to underpin capacity strengthening interventions because they provide stepwise guidance and allow for continual improvement. Our objective here was to use evidence to inform the design of a replicable but flexible process to guide health research capacity strengthening that could be customized for different contexts, and to provide a framework for planning, collecting information, making decisions, and improving performance. We used peer-reviewed and grey literature to develop a five-step pathway for designing and evaluating health research capacity strengthening programmes, tested in a variety of contexts in Africa. The five steps are: i) defining the goal of the capacity strengthening effort, ii) describing the optimal capacity needed to achieve the goal, iii) determining the existing capacity gaps compared to the optimum, iv) devising an action plan to fill the gaps and associated indicators of change, and v) adapting the plan and indicators as the programme matures. Our paper describes three contrasting case studies of organisational research capacity strengthening to illustrate how our five-step approach works in practice. Our five-step pathway starts with a clear goal and objectives, making explicit the capacity required to achieve the goal. Strategies for promoting sustainability are agreed with partners and incorporated from the outset. Our pathway for designing capacity strengthening programmes focuses not only on technical, managerial, and financial processes within organisations, but also on the individuals within organisations and the wider system within which organisations are coordinated, financed, and managed. Our five-step approach is flexible enough to generate and utilise ongoing learning. We have tested and critiqued our approach in a variety of organisational settings in the health sector in sub-Saharan Africa, but it needs to be applied and evaluated in other sectors and continents to determine the extent of transferability.
Alternative Formats to Achieve More Efficient Energy Codes for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, David R.; Rosenberg, Michael I.; Halverson, Mark A.
2013-01-26
This paper identifies and examines several formats or structures that could be used to create the next generation of more efficient energy codes and standards for commercial buildings. Pacific Northwest National Laboratory (PNNL) is funded by the U.S. Department of Energy's Building Energy Codes Program (BECP) to provide technical support to the development of ANSI/ASHRAE/IES Standard 90.1. While the majority of PNNL's ASHRAE Standard 90.1 support focuses on developing and evaluating new requirements, a portion of its work involves consideration of the format of energy standards. In its current working plan, the ASHRAE 90.1 committee has approved an energy goal of 50% improvement in Standard 90.1-2013 relative to Standard 90.1-2004, and will likely be considering higher improvement targets for future versions of the standard. To cost-effectively achieve the 50% goal in a manner that can gain stakeholder consensus, formats other than prescriptive must be considered. Alternative formats that reduce the reliance on prescriptive requirements may make it easier to achieve these aggressive efficiency levels in new codes and standards. The focus on energy code and standard formats is meant to explore approaches to presenting the criteria that will foster compliance, enhance verification, and stimulate innovation while saving energy in buildings. New formats may also make it easier for building designers and owners to design and build to the levels of efficiency called for in the new codes and standards. This paper examines a number of potential formats and structures, including prescriptive, performance-based (with sub-formats of performance equivalency and performance targets), capacity-constraint-based, and outcome-based. The paper also discusses the pros and cons of each format from the viewpoint of code users and of code enforcers.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo
2012-02-01
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating-point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
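The fourth-order Hermite scheme mentioned above predicts each particle's position and velocity from its acceleration and jerk before the corrector uses newly evaluated forces. A bare-bones predictor step (per particle, with made-up state vectors) is sketched below; the vectorized AVX kernels of the paper are of course far more elaborate.

```python
import numpy as np

def hermite_predict(x, v, a, j, dt):
    """Hermite predictor: Taylor expansion of position and velocity using acceleration a and jerk j."""
    x_p = x + v * dt + a * dt**2 / 2.0 + j * dt**3 / 6.0
    v_p = v + a * dt + j * dt**2 / 2.0
    return x_p, v_p

# Example with a single particle and illustrative state vectors.
x = np.array([1.0, 0.0, 0.0]); v = np.array([0.0, 1.0, 0.0])
a = np.array([-1.0, 0.0, 0.0]); j = np.array([0.0, -0.5, 0.0])
print(hermite_predict(x, v, a, j, dt=0.01))
```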
Blast and the Consequences on Traumatic Brain Injury-Multiscale Mechanical Modeling of Brain
2011-02-17
... formulation is implemented to model the air-blast simulation. LS-DYNA, an explicit FE code, has been employed to simulate this multi-material fluid–structure interaction problem involving the 3-D head model... Biomechanics study of influencing parameters for the brain under impact... The impact of cerebrospinal fluid...
Sierra/Solid Mechanics 4.48 User's Guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments, enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-01-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-08-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
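The plasticity rule described in the abstract above can be written down almost verbatim: for synapses with active presynaptic input, compare the postsynaptic local field h with three thresholds and potentiate, depress, or leave the weight unchanged. The sketch below is a literal transcription of that description with arbitrary threshold and step values, not the authors' calibrated parameters.

```python
import numpy as np

def three_threshold_update(w, x, h, theta_low, theta_mid, theta_high, dw=0.01):
    """Update weights w for active inputs x (0/1) given the postsynaptic local field h."""
    if h >= theta_high or h <= theta_low:
        return w                               # outside the outer thresholds: no plasticity
    step = dw if h > theta_mid else -dw        # potentiate above, depress below the middle threshold
    return np.clip(w + step * x, 0.0, 1.0)     # only synapses with active inputs change; weights stay bounded

# Toy usage with hypothetical values.
w = np.full(10, 0.5)
x = (np.random.rand(10) < 0.3).astype(float)   # active afferents
h = float(w @ x)                               # local field from the active inputs
w = three_threshold_update(w, x, h, theta_low=0.0, theta_mid=1.0, theta_high=2.5)
```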
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using a Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
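As a back-of-the-envelope view of the claimed capacity doubling: adding colored modules effectively lets each module carry 2 bits instead of 1. The arithmetic below uses a hypothetical module count and assumes four effective color levels per data module, purely for illustration.

```python
modules = 29 * 29            # hypothetical number of data modules in the symbol
bits_per_module_bw  = 1      # black/white QR module carries 1 bit
bits_per_module_cqr = 2      # four effective color levels -> 2 bits per module

print("B/W QR raw capacity :", modules * bits_per_module_bw, "bits")
print("CQR raw capacity    :", modules * bits_per_module_cqr, "bits (2x)")
```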
Assessing the Formation of Experience-Based Gender Expectations in an Implicit Learning Scenario
Öttl, Anton; Behne, Dawn M.
2017-01-01
The present study investigates the formation of new word-referent associations in an implicit learning scenario, using a gender-coded artificial language with spoken words and visual referents. Previous research has shown that when participants are explicitly instructed about the gender-coding system underlying an artificial lexicon, they monitor the frequency of exposure to male vs. female referents within this lexicon, and subsequently use this probabilistic information to predict the gender of an upcoming referent. In an explicit learning scenario, the auditory and visual gender cues are necessarily highlighted prior to acquisition, and the effects previously observed may therefore depend on participants' overt awareness of these cues. To assess whether the formation of experience-based expectations is dependent on explicit awareness of the underlying coding system, we present data from an experiment in which gender-coding was acquired implicitly, thereby reducing the likelihood that visual and auditory gender cues are used strategically during acquisition. Results show that even if the gender coding system was not perfectly mastered (as reflected in the number of gender coding errors), participants develop frequency based expectations comparable to those previously observed in an explicit learning scenario. In line with previous findings, participants are quicker at recognizing a referent whose gender is consistent with an induced expectation than one whose gender is inconsistent with an induced expectation. At the same time however, eyetracking data suggest that these expectations may surface earlier in an implicit learning scenario. These findings suggest that experience-based expectations are robust against manner of acquisition, and contribute to understanding why similar expectations observed in the activation of stereotypes during the processing of natural language stimuli are difficult or impossible to suppress. PMID:28936186
CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics
NASA Astrophysics Data System (ADS)
Owen, John Michael; Raskin, Cody; Frontiere, Nicholas
2018-01-01
The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; lack of obvious symmetries or a simplified spatial geometry to exploit necessitate 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite-volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the κ-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-step predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.
Analysis of automatic repeat request methods for deep-space downlinks
NASA Technical Reports Server (NTRS)
Pollara, F.; Ekroot, L.
1995-01-01
Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
Quantum-capacity-approaching codes for the detected-jump channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng
2010-12-15
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
Working Memory Capacity Limits Motor Learning When Implementing Multiple Instructions
Buszard, Tim; Farrow, Damian; Verswijveren, Simone J. J. M.; Reid, Machar; Williams, Jacqueline; Polman, Remco; Ling, Fiona Chun Man; Masters, Rich S. W.
2017-01-01
Although it is generally accepted that certain practice conditions can place large demands on working memory (WM) when performing and learning a motor skill, the influence that WM capacity has on the acquisition of motor skills remains unsubstantiated. This study examined the role of WM capacity in a motor skill practice context that promoted WM involvement through the provision of explicit instructions. A cohort of 90 children aged 8 to 10 years were assessed on measures of WM capacity and attention. Children who scored in the lowest and highest thirds on the WM tasks were allocated to lower WM capacity (n = 24) and higher WM capacity (n = 24) groups, respectively. The remaining 42 participants did not participate in the motor task. The motor task required children to practice basketball shooting for 240 trials in blocks of 20 shots, with pre- and post-tests occurring before and after the intervention. A retention test was administered 1 week after the post-test. Prior to every practice block, children were provided with five explicit instructions that were specific to the technique of shooting a basketball. Results revealed that the higher WM capacity group displayed consistent improvements from pre- to post-test and through to the retention test, while the opposite effect occurred in the lower WM capacity group. This implies that the explicit instructions had a negative influence on learning by the lower WM capacity children. Results are discussed in relation to strategy selection for dealing with instructions and the role of attention control. PMID:28878701
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Research on this technique has concentrated on the spatial, uncompressed domain, and it is considered more challenging to perform data hiding in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy was used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and higher hiding efficiency than the other four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved a bit rate as low as that of the original BTC algorithm.
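The paper's dynamic-programming-optimized bijective mapping is not reproduced here. As a heavily simplified sketch of the underlying operation only, the snippet below shows plain LSB substitution of three secret bits into an 8-bit quantization (mean) value; function names are illustrative.

```python
def embed_three_bits(mean_value, secret_bits):
    """Replace the three least-significant bits of an 8-bit mean value with three
    secret bits (plain LSB substitution; the paper's DP-optimized mapping is omitted)."""
    assert 0 <= mean_value <= 255 and 0 <= secret_bits <= 0b111
    return (mean_value & ~0b111) | secret_bits

def extract_three_bits(stego_value):
    return stego_value & 0b111

stego = embed_three_bits(150, 0b101)
assert extract_three_bits(stego) == 0b101
```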
GPU-accelerated simulations of isolated black holes
NASA Astrophysics Data System (ADS)
Lewis, Adam G. M.; Pfeiffer, Harald P.
2018-05-01
We present a port of the numerical relativity code SpEC which is capable of running on NVIDIA GPUs. Since this code must be maintained in parallel with SpEC itself, a primary design consideration is to perform as few explicit code changes as possible. We therefore rely on a hierarchy of automated porting strategies. At the highest level we use TLoops, a C++ library of our design, to automatically emit CUDA code equivalent to tensorial expressions written into C++ source using a syntax similar to analytic calculation. Next, we trace out and cache explicit matrix representations of the numerous linear transformations in the SpEC code, which allows these to be performed on the GPU using pre-existing matrix-multiplication libraries. We port the few remaining important modules by hand. In this paper we detail the specifics of our port, and present benchmarks of it simulating isolated black hole spacetimes on several generations of NVIDIA GPU.
Sarriot, Eric G; Kouletio, Michelle; Jahan, Dr Shamim; Rasul, Izaz; Musha, Akm
2014-08-26
Starting in 1999, Concern Worldwide Inc. (Concern) worked with two Bangladeshi municipal health departments to support delivery of maternal and child health preventive services. A mid-term evaluation identified sustainability challenges. Concern relied on systems thinking implicitly to re-prioritize sustainability, but stakeholders also required a method, an explicit set of processes, to guide their decisions and choices during and after the project. Concern chose the Sustainability Framework method to generate creative thinking from stakeholders, create a common vision, and monitor progress. The Framework is based on participatory and iterative steps: defining (mapping) the local system and articulating a long-term vision, describing scenarios for achieving the vision, defining the elements of the model and selecting corresponding indicators, setting and executing an assessment plan, and repeated stakeholder engagement in analysis and decisions. Formal assessments took place up to 5 years post-project (2009). Strategic choices for the project were guided by articulating a collective vision for sustainable health, mapping the system of actors required to effect and sustain change, and defining different components of analysis. Municipal authorities oriented health teams toward equity-oriented service delivery efforts, strengthening of the functionality of Ward Health Committees, resource leveraging between municipalities and the Ministry of Health, and mitigation of contextual risks. Regular reference to a vision and a set of metrics (population health, organizational and community capacity) mitigated political factors. Key structures and processes were maintained following elections and political changes. Post-project achievements included the maintenance or improvement, 5 years post-project (2009), of 9 of the 11 health indicator gains realized during the project (1999-2004). Some elements of performance and capacity weakened, but reductions in the equity gap achieved during the project were largely maintained post-project. Sustainability is dynamic and results from local systems processes, which can be strengthened through both implicit and explicit systems thinking steps applied with constancy of purpose.
An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
The Full Scope of Family Physicians' Work Is Not Reflected by Current Procedural Terminology Codes.
Young, Richard A; Burge, Sandy; Kumar, Kaparaboyna Ashok; Wilson, Jocelyn
2017-01-01
The purpose of this study was to characterize the content of family physician (FP) clinic encounters, and to count the number of visits in which the FPs addressed issues not explicitly reportable by 99211 to 99215 and 99354 Current Procedural Terminology (CPT) codes with current reimbursement methods and based on examples provided in the CPT manual. The data collection instrument was modeled on the National Ambulatory Medical Care Survey. Trained assistants directly observed every other FP-patient encounter and recorded every patient concern, issue addressed by the physician (including care barriers related to health care systems and social determinants), and treatment ordered in clinics affiliated with 10 residencies of the Residency Research Network of Texas. A visit was deemed to include physician work that was not explicitly reportable if the number or nature of issues addressed exceeded the definitions or examples for 99205/99215 or 99214 + 99354 or a preventive service code, included the physician addressing health care system or social determinant issues, or included the care of a family member. In 982 physician-patient encounters, patients raised 517 different reasons for visit (total, 5278; mean, 5.4 per visit; range, 1 to 16) and the FPs addressed 509 different issues (total issues, 3587; mean, 3.7 per visit; range, 1 to 10). FPs managed 425 different medications, 18 supplements, and 11 devices. A mean of 3.9 chronic medications were continued per visit (range, 0 to 21) and 4.6 total medications were managed (range, 0 to 22). In 592 (60.3%) of the visits the FPs did work that was not explicitly reportable with available CPT codes: 582 (59.3%) addressed more numerous issues than explicitly reportable, 64 (6.5%) addressed system barriers, and 13 (1.3%) addressed concerns for other family members. In a majority of their patient encounters, FPs perform cognitive work that is not explicitly reportable, either because the number of issues addressed exceeds the CPT example number of diagnoses per code or because of the type of problems addressed, which has implications for the care of complex multi-morbid patients and the growth of the primary care workforce. To address these limitations, either the CPT codes and their associated rules should be updated to reflect the realities of family physicians' practices or new billing and coding approaches should be developed. © Copyright 2017 by the American Board of Family Medicine.
Limited capacity in US pediatric drug trials: qualitative analysis of expert interviews.
Wasserman, Richard; Bocian, Alison; Harris, Donna; Slora, Eric
2011-04-01
The recently renewed Best Pharmaceuticals for Children and Pediatric Research Equity Acts (BPCA/PREA) have continued industry incentives and opportunities for pediatric drug trials (PDTs). However, there is no current assessment of the capacity to perform PDTs. The aim of this study was to deepen understanding of the capacity for US PDTs by assessing PDT infrastructure, present barriers to PDTs, and potential approaches and solutions to identified issues. Pediatric clinical research experts participated in semi-structured interviews on current US pediatric research capacity (February-July 2007). An initial informant list was developed using purposive sampling, and supplemented and refined to generate a group of respondents to explore emerging themes. Each phone interview included a physician researcher and two health researchers who took notes and recorded the calls. Health researchers produced detailed summaries, which were verified by the physician researcher and informants. We then undertook qualitative analysis of the summaries, employing multiple coding, with the two health researchers and the physician researcher independently coding each summary for themes and subthemes. Coding variations were resolved by physician researcher/health researcher discussion and consensus achieved on themes and subthemes. The 33 informants' primary or secondary roles included academia (n = 21), federal official (5), industry medical officer (8), pediatric research network leader (10), pediatric specialist leader (8), pediatric clinical pharmacologist (5), and practitioner/research site director (9). While most experts noted an increase in PDTs since the initial passage of BPCA/PREA, a dominant theme of insufficient US PDT capacity emerged. Subthemes included (i) lack of systems for finding, incentivizing, and/or maintaining trial sites; (ii) complexity/demands of conducting PDTs in clinical settings; (iii) inadequate numbers of qualified pediatric pharmacologists and clinician investigators trained in FDA Good Clinical Practice; and (iv) poor PDT protocol design resulting in operational and enrollment difficulties in the pediatric population. Suggested potential solutions for insufficient PDT capacity included (i) consensus-building among stakeholders to create PDT systems; (ii) initiatives to train more pediatric pharmacologists and educate clinicians in Good Clinical Practice; (iii) advocacy for PDT protocols designed by individuals sensitive to pediatric issues; and (iv) physician and public education on the importance of PDTs. Insufficient US PDT capacity may hinder the development of new drugs for children and limit studies on the safety and efficacy of drugs presently used to treat pediatric conditions. Further public policy initiatives may be needed to achieve the full promise of BPCA/PREA.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through its throughput improvement. In this work, an algebraic model is given to analyze the performance. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve the video transmission performance when applying network coding.
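The SVC-aware scheme of the paper is not reproduced here. As a generic sketch of random linear network coding over GF(2), the snippet below forms random XOR-combinations of source packets and checks whether a receiver's collected coefficients have full rank, which also makes the rank-deficiency issue mentioned above concrete; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(packets, n_coded):
    """Random linear network coding over GF(2): each coded packet is a random
    XOR-combination of the source packets. Returns (coefficients, coded packets)."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    coded = (coeffs @ np.asarray(packets, dtype=np.uint8)) % 2
    return coeffs, coded

def gf2_rank(m):
    """Rank over GF(2) by Gaussian elimination; decoding requires full rank."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = np.nonzero(m[rank:, col])[0]
        if pivot.size == 0:
            continue
        r = rank + pivot[0]
        m[[rank, r]] = m[[r, rank]]
        rows = np.nonzero(m[:, col])[0]
        rows = rows[rows != rank]
        m[rows] ^= m[rank]
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

packets = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)   # 4 source packets of 8 bits
coeffs, coded = encode(packets, n_coded=6)
print(gf2_rank(coeffs) == 4)   # decodable only if the coefficients span GF(2)^4
```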
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
This study presents a three-dimensional explicit, finite-difference, shock-capturing numerical algorithm applied to viscous hypersonic flows in thermochemical nonequilibrium. The algorithm employs a two-temperature physical model. Equations governing the finite-rate chemical reactions are fully coupled to the gas dynamic equations using a novel coupling technique. The new coupling method maintains stability in the explicit, finite-rate formulation while allowing relatively large global time steps. The code uses flux-vector splitting. Comparisons with experimental data and other numerical computations verify the accuracy of the present method. The code is used to compute the three-dimensional flowfield over the Aeroassist Flight Experiment (AFE) vehicle at one of its trajectory points.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
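The circulant permutations and puncturing used in the actual construction are not reproduced here. As a toy sketch of the accumulate-repeat-accumulate idea only, the snippet below pre-accumulates the information bits, repeats them, interleaves, and accumulates again; function names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulate(bits):
    """Rate-1 accumulator: running XOR (a 1/(1+D) convolutional encoder)."""
    return np.bitwise_xor.accumulate(bits)

def ara_encode(info_bits, repeat=3, perm=None):
    """Toy accumulate-repeat-accumulate chain: pre-accumulate, repeat, interleave,
    accumulate. Real ARA codes add circulant permutations and puncturing (omitted)."""
    pre = accumulate(np.asarray(info_bits, dtype=np.uint8))
    repeated = np.repeat(pre, repeat)
    if perm is None:
        perm = rng.permutation(repeated.size)
    return accumulate(repeated[perm])

codeword = ara_encode([1, 0, 1, 1, 0, 0, 1, 0])
print(codeword)
```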
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purines and pyrimidines to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by pioneering simulation studies. Here, we assessed the performances of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be significantly affected by the setting of the substitution process for sequence simulation relative to that of the non-homogeneous analysis. The performance of a non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
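The simulation study itself is not reproduced here; the RY-recoding step described above is simple enough to sketch directly. The snippet below maps purines (A, G) to R and pyrimidines (C, T/U) to Y, which normalizes base composition before analysis; handling of gaps and ambiguity codes is an assumption.

```python
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_recode(sequence):
    """Recode a nucleotide sequence into purine (R) / pyrimidine (Y) symbols;
    characters outside ACGTU (gaps, ambiguity codes) are left unchanged."""
    return "".join(RY_MAP.get(base, base) for base in sequence.upper())

print(ry_recode("ATGCCGTAA-N"))   # 'RYRYYRYRR-N'
```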
A note on the R0-parameter for discrete memoryless channels
NASA Technical Reports Server (NTRS)
Mceliece, R. J.
1980-01-01
An explicit class of discrete memoryless channels (q-ary erasure channels) is exhibited. Practical and explicit coded systems of rate R with R/R0 as large as desired can be designed for this class.
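The note's construction is not reproduced here. Assuming a uniform input distribution, the cutoff rate of a q-ary erasure channel with erasure probability p works out to R0 = log2 q - log2(1 + (q - 1)p); the sketch below evaluates it alongside the channel capacity (1 - p) log2 q, whose ratio to R0 can be made large by increasing q.

```python
import math

def erasure_channel_r0(q, p):
    """Cutoff rate R0 of a q-ary erasure channel, uniform inputs assumed:
    R0 = log2(q) - log2(1 + (q - 1) * p)  [bits per channel use]."""
    return math.log2(q) - math.log2(1.0 + (q - 1) * p)

def erasure_channel_capacity(q, p):
    """Capacity of the q-ary erasure channel: (1 - p) * log2(q)."""
    return (1.0 - p) * math.log2(q)

q, p = 16, 0.3
print(erasure_channel_r0(q, p), erasure_channel_capacity(q, p))
# For fixed p the ratio capacity/R0 grows with q, consistent with the note's point
# that coded systems with R/R0 as large as desired exist for this class.
```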
Adhesive-bonded double-lap joints. [analytical solutions for static load carrying capacity
NASA Technical Reports Server (NTRS)
Hart-Smith, L. J.
1973-01-01
Explicit analytical solutions are derived for the static load carrying capacity of double-lap adhesive-bonded joints. The analyses extend the elastic solution of Volkersen and cover adhesive plasticity, adherend stiffness imbalance, and thermal mismatch between the adherends. Both elastic-plastic and bi-elastic adhesive representations lead to the explicit result that the influence of the adhesive on the maximum potential bond strength is defined uniquely by the strain energy in shear per unit area of bond. Failures induced by peel stresses at the ends of the joint are examined. This failure mode is particularly important for composite adherends. The explicit solutions are sufficiently simple to be used for design purposes.
Optimization of wood plastic composite decks
NASA Astrophysics Data System (ADS)
Ravivarman, S.; Venkatesh, G. S.; Karmarkar, A.; Shivkumar N., D.; Abhilash R., M.
2018-04-01
Wood Plastic Composite (WPC) is a new class of natural-fibre-based composite material that contains a plastic matrix reinforced with wood fibres or wood flour. In the present work, Wood Plastic Composite was prepared with 70 wt% of wood flour reinforced in a polypropylene matrix. Mechanical characterization of the composite was done by carrying out laboratory tests such as the tensile test and flexural test as per the American Society for Testing and Materials (ASTM) standards. A Computer Aided Design (CAD) model of the laboratory test specimen (tensile test) was created and explicit finite element analysis was carried out on the finite element model in the non-linear explicit FE code LS-DYNA. The piecewise linear plasticity (MAT 24) material model was identified as a suitable model in the LS-DYNA material library, describing the material behavior of the developed composite. The composite structures for decking applications in the construction industry were then optimized for cross sectional area and distance between two successive supports (span length) by carrying out various numerical experiments in LS-DYNA. The optimized WPC deck (Elliptical channel-2 E10) has 45% lower weight than the baseline model (solid cross-section) considered in this study, with its load carrying capacity meeting the acceptance criteria (allowable deflection and stress) for outdoor decking applications.
Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.
2016-01-01
A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
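The FR spatial discretization is not reproduced here; the sketch below only shows the default time integrator named above, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme (Shu-Osher form), applied to a generic semi-discrete residual du/dt = R(u). The advection residual in the usage example is an illustrative assumption.

```python
import numpy as np

def ssp_rk3_step(u, dt, residual):
    """One step of the 3-stage / 3rd-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form) for du/dt = residual(u)."""
    u1 = u + dt * residual(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * residual(u2))

# Usage: linear advection on a periodic grid with central differences.
n, dx, c = 64, 1.0 / 64, 1.0
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)
residual = lambda v: -c * (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
u = ssp_rk3_step(u, dt=0.5 * dx, residual=residual)
```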
Does achievement motivation mediate the semantic achievement priming effect?
Engeser, Stefan; Baumann, Nicola
2014-10-01
The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies.
2014-01-01
Background Despite increasing investment in health research capacity strengthening efforts in low and middle income countries, published evidence to guide the systematic design and monitoring of such interventions is very limited. Systematic processes are important to underpin capacity strengthening interventions because they provide stepwise guidance and allow for continual improvement. Our objective here was to use evidence to inform the design of a replicable but flexible process to guide health research capacity strengthening that could be customized for different contexts, and to provide a framework for planning, collecting information, making decisions, and improving performance. Methods We used peer-reviewed and grey literature to develop a five-step pathway for designing and evaluating health research capacity strengthening programmes, tested in a variety of contexts in Africa. The five steps are: i) defining the goal of the capacity strengthening effort, ii) describing the optimal capacity needed to achieve the goal, iii) determining the existing capacity gaps compared to the optimum, iv) devising an action plan to fill the gaps and associated indicators of change, and v) adapting the plan and indicators as the programme matures. Our paper describes three contrasting case studies of organisational research capacity strengthening to illustrate how our five-step approach works in practice. Results Our five-step pathway starts with a clear goal and objectives, making explicit the capacity required to achieve the goal. Strategies for promoting sustainability are agreed with partners and incorporated from the outset. Our pathway for designing capacity strengthening programmes focuses not only on technical, managerial, and financial processes within organisations, but also on the individuals within organisations and the wider system within which organisations are coordinated, financed, and managed. Conclusions Our five-step approach is flexible enough to generate and utilise ongoing learning. We have tested and critiqued our approach in a variety of organisational settings in the health sector in sub-Saharan Africa, but it needs to be applied and evaluated in other sectors and continents to determine the extent of transferability. PMID:24581148
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massimo, F.; Atzeni, S.
Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
Explicit robust schemes for implementation of general principal value-based constitutive models
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
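The authors' special-purpose symbolic functions are not reproduced here. An analogous SymPy sketch is shown below: it differentiates a placeholder (non-separable) polynomial strain-energy potential with respect to the principal stretches and emits Fortran, mirroring the automatic code generation step; the potential, coefficients, and output names are illustrative assumptions.

```python
import sympy as sp

l1, l2, l3 = sp.symbols("lambda1 lambda2 lambda3", positive=True)

# Illustrative non-separable polynomial strain-energy potential in the principal
# stretches; c10 and c01 are arbitrary placeholder coefficients.
c10, c01 = sp.symbols("c10 c01")
W = c10 * (l1**2 + l2**2 + l3**2 - 3) \
    + c01 * (l1**2 * l2**2 + l2**2 * l3**2 + l3**2 * l1**2 - 3)

# Work-conjugate "stress function": derivative of W with respect to each stretch.
dW = [sp.simplify(sp.diff(W, l)) for l in (l1, l2, l3)]

# Emit Fortran for the first component; tangent stiffness terms would use second derivatives.
print(sp.fcode(dW[0], assign_to="dW_dlambda1"))
```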
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1986-01-01
An explicit-implicit and an implicit two-dimensional Navier-Stokes code along with various grid generation capabilities were developed. A series of classical benchmark cases were simulated using these codes.
Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.
1980-01-01
Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their application to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasi-time relaxation. This scheme is referred to as quasiparabolic although it applies equally well to hyperbolic supersonic inviscid flows. Second-order windward differences are used in the marching coordinate, and either explicit or linear block implicit time relaxation can be incorporated.
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
Quantum Dense Coding About a Two-Qubit Heisenberg XYZ Model
NASA Astrophysics Data System (ADS)
Xu, Hui-Yun; Yang, Guo-Hui
2017-09-01
By taking into account the nonuniform magnetic field, the quantum dense coding with thermal entangled states of a two-qubit anisotropic Heisenberg XYZ chain is investigated in detail. We mainly show how the dense coding capacity (χ) behaves as different parameters are changed. It is found that the dense coding capacity χ can be enhanced by decreasing the magnetic field B, the degree of inhomogeneity b, and the temperature T, or by increasing the coupling constant along the z-axis, Jz. In addition, we find that χ remains stable as the anisotropy of the XY plane, Δ, changes, under certain temperature conditions. By studying the effect of the different parameters on χ, we show that the values of B, b, Jz, and Δ can be tuned appropriately, or the temperature T adjusted, to obtain a valid dense coding capacity (χ > 1). Moreover, the temperature plays a key role in adjusting the value of the dense coding capacity χ. A valid dense coding capacity can always be obtained in the low-temperature limit.
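The Heisenberg XYZ thermal state itself is not reconstructed here. Assuming the usual expression for the optimal dense coding capacity with a shared two-qubit state ρ_AB in which qubit A is sent, χ = 1 + S(ρ_B) - S(ρ_AB) with S the von Neumann entropy in bits, the sketch below evaluates χ for an arbitrary two-qubit density matrix; χ > 1 marks a valid dense coding advantage.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def dense_coding_capacity(rho_ab):
    """chi = 1 + S(rho_B) - S(rho_AB) for a two-qubit resource state (qubit A is
    the one sent through the channel); chi > 1 beats classical transmission."""
    rho4 = rho_ab.reshape(2, 2, 2, 2)       # indices (a, b, a', b')
    rho_b = np.einsum("abac->bc", rho4)     # partial trace over qubit A
    return 1.0 + von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5   # |Phi+><Phi+|
print(dense_coding_capacity(bell))   # -> 2.0 for a maximally entangled state
```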
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2013-01-01
A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailly's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, and 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
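The WRLES solver is not reproduced here; the sketch below only builds the standard Taylor-Green vortex initial velocity and pressure fields on a periodic box, the usual starting condition for this problem. The grid size and reference constants are illustrative assumptions.

```python
import numpy as np

def taylor_green_ic(n=64, v0=1.0, rho0=1.0, p0=100.0):
    """Standard Taylor-Green vortex initial condition on a [0, 2*pi)^3 periodic box."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    u = v0 * np.sin(X) * np.cos(Y) * np.cos(Z)
    v = -v0 * np.cos(X) * np.sin(Y) * np.cos(Z)
    w = np.zeros_like(X)
    p = p0 + rho0 * v0**2 / 16.0 * (np.cos(2 * X) + np.cos(2 * Y)) * (np.cos(2 * Z) + 2.0)
    return u, v, w, p

u, v, w, p = taylor_green_ic(n=32)
print(float(np.mean(0.5 * (u**2 + v**2 + w**2))))   # initial mean kinetic energy per unit mass
```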
Meng, Xianwei; Murakami, Taro; Hashiya, Kazuhide
2017-01-01
Understanding the referent of another's utterance by referring to the contextual information helps in smooth communication. Although this pragmatic referential process can be observed even in infants, its underlying mechanism and related abilities remain unclear. This study aimed to comprehend the background of the referential process by investigating whether the phonological loop affected referent assignment. A total of 76 children (43 girls) aged 3-5 years participated in a reference assignment task in which an experimenter asked them to answer explicit (e.g., "What color is this?") and ambiguous (e.g., "What about this?") questions about colorful objects. The phonological loop capacity was measured by using the forward digit span task, in which children were required to repeat numbers as an experimenter uttered them. The results showed that the scores of the forward digit span task positively predicted correct responses to explicit questions and to part of the ambiguous questions. That is, the phonological loop capacity did not have effects on referent assignment in response to ambiguous questions that were asked after a topic shift of the explicit questions and thus required a backward reference to the preceding explicit questions to detect the intent of the current ambiguous questions. These results suggest that although the phonological loop capacity could overtly enhance the storage of verbal information, it does not seem to directly contribute to the pragmatic referential process, which might require further social cognitive processes.
Accurate solutions for transonic viscous flow over finite wings
NASA Technical Reports Server (NTRS)
Vatsa, V. N.
1986-01-01
An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of the complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through usage of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with the experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of salient features of such flows.
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
MATLAB for laser speckle contrast analysis (LASCA): a practice-based approach
NASA Astrophysics Data System (ADS)
Postnikov, Eugene B.; Tsoy, Maria O.; Postnov, Dmitry E.
2018-04-01
Laser Speckle Contrast Analysis (LASCA) is one of the most powerful modern methods for revealing blood dynamics. The experimental design and theory for this method are well established, and the computational recipe is often regarded as trivial. However, the achieved performance and spatial resolution may differ considerably between implementations. We present a minireview of known approaches to spatial laser speckle contrast data processing and their realization in MATLAB code, providing an explicit correspondence to the mathematical representation and a discussion of available implementations. We also present an algorithm based on the 2D Haar wavelet transform, likewise supplied with program code. This new method provides an opportunity to introduce horizontal, vertical, and diagonal speckle contrasts; it may be used for processing highly anisotropic images of vascular trees. We provide a comparative analysis of the accuracy of vascular pattern detection and of the processing times, with special attention to the details of the MATLAB procedures used.
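The MATLAB listings of the paper are not reproduced here. As an equivalent Python sketch of the basic spatial LASCA step only, the snippet below computes the local contrast K = sigma/mean over a sliding window; the window size and the use of scipy.ndimage are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window."""
    image = image.astype(np.float64)
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image**2, size=window)
    variance = np.clip(mean_sq - mean**2, 0.0, None)
    return np.sqrt(variance) / (mean + 1e-12)

# Fully developed static speckle has K close to 1; blurring by moving scatterers lowers K.
rng = np.random.default_rng(0)
speckle = rng.exponential(scale=1.0, size=(256, 256))
print(float(np.median(spatial_speckle_contrast(speckle))))   # ~1 for static speckle
```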
Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models
NASA Technical Reports Server (NTRS)
Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.
1996-01-01
An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Highly parallel implementation of non-adiabatic Ehrenfest molecular dynamics
NASA Astrophysics Data System (ADS)
Kanai, Yosuke; Schleife, Andre; Draeger, Erik; Anisimov, Victor; Correa, Alfredo
2014-03-01
While the adiabatic Born-Oppenheimer approximation tremendously lowers computational effort, many questions in modern physics, chemistry, and materials science require an explicit description of coupled non-adiabatic electron-ion dynamics. Electronic stopping, i.e. the energy transfer of a fast projectile atom to the electronic system of the target material, is a notorious example. We recently implemented real-time time-dependent density functional theory based on the plane-wave pseudopotential formalism in the Qbox/qb@ll codes. We demonstrate that explicit integration using a fourth-order Runge-Kutta scheme is very suitable for modern highly parallelized supercomputers. Applying the new implementation to systems with hundreds of atoms and thousands of electrons, we achieved excellent performance and scalability on a large number of nodes both on the BlueGene-based "Sequoia" system at LLNL as well as on the Cray architecture of "Blue Waters" at NCSA. As an example, we discuss our work on computing the electronic stopping power of aluminum and gold for hydrogen projectiles, showing excellent agreement with experiment. These first-principles calculations allow us to gain important insight into the fundamental physics of electronic stopping.
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because of the high efficiency that the OCDMA technique can achieve; hence the fibre bandwidth is fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results revealed that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN), and orthogonality between users in the system. From the results, SeQ shows good BER performance and is capable of accommodating 190 simultaneous users, in contrast with existing codes. Thus, the SeQ code enhanced the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, the SeQ code has a good BER performance of 10^-25 at 155 Mbps in comparison with the 622 Mbps, 1 Gbps, and 2 Gbps bit rates. From the plotted results, a 155 Mbps bit rate is a suitable speed for FTTH and LAN networks. Conclusions can be drawn based on the superior performance of the SeQ code. Thus, these codes offer an opportunity in OCDMA systems for better quality of service in optical access networks for future generations.
Perceptual scale expansion: an efficient angular coding strategy for locomotor space.
Durgin, Frank H; Li, Zhi
2011-08-01
Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
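The optimized constellations themselves are not reproduced here. The sketch below estimates, by Monte Carlo, the quantity whose gap to the Gaussian capacity is discussed above: the mutual information of an equiprobable constellation over the AWGN channel, shown for a real PAM example; the normalization and sample count are illustrative assumptions.

```python
import numpy as np

def awgn_constellation_mi(points, snr_db, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the mutual information (bits/symbol) of an
    equiprobable real constellation over the AWGN channel."""
    rng = np.random.default_rng(seed)
    x = np.asarray(points, dtype=float)
    x = x / np.sqrt(np.mean(x**2))                 # normalize to unit average symbol energy
    m = x.size
    sigma2 = 10.0 ** (-snr_db / 10.0)              # noise variance for Es = 1
    tx = rng.choice(x, size=n_samples)
    y = tx + rng.normal(scale=np.sqrt(sigma2), size=n_samples)
    d_all = (y[:, None] - x[None, :]) ** 2         # distances to every constellation point
    d_tx = (y - tx) ** 2
    log_sum = np.log2(np.exp(-(d_all - d_tx[:, None]) / (2 * sigma2)).sum(axis=1))
    return np.log2(m) - log_sum.mean()

pam4 = [-3, -1, 1, 3]
print(awgn_constellation_mi(pam4, snr_db=10.0))    # below the Gaussian capacity 0.5*log2(1+SNR)
```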
Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine
2018-01-01
Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting in faster reaction times to small/large numbers with the left/right hand respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals’ ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control respectively indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant and potentially interferes with parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations also in young adults. It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra-individual differences in number-space associations. PMID:29881363
Explicitness in Science Discourse: A Gricean Account of Income-Related Differences
ERIC Educational Resources Information Center
Avenia-Tapper, Brianna; Isacoff, Nora M.
2016-01-01
Highly explicit language use is prized in scientific discourse, and greater explicitness is hypothesized to facilitate academic achievement. Studies in the mid-twentieth century reported controversial findings that the explicitness of text differs by the income and education levels of authors' families. If income-related differences in…
NASA Astrophysics Data System (ADS)
Navon, I. M.; Yu, Jian
A FORTRAN computer program is presented and documented that applies the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then proceed in this paper to detail the algorithms embodied in the code EXSHALL, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented in the code, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
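The EXSHALL FORTRAN listing is not reproduced here. The Python sketch below illustrates two of the filters named above: a second-order (1-2-1) Shapiro smoothing applied along one axis of a periodic field, and the Robert (Asselin) filter applied to leapfrog time levels; the filter coefficient alpha is an illustrative value, not the one used in the code.

```python
import numpy as np

def shapiro_filter(field, axis=-1):
    """Second-order (1-2-1) Shapiro smoothing along one axis of a periodic field."""
    plus = np.roll(field, -1, axis=axis)
    minus = np.roll(field, 1, axis=axis)
    return 0.25 * (minus + 2.0 * field + plus)

def robert_filter(f_prev, f_now, f_next, alpha=0.05):
    """Robert (Asselin) time filter used with leapfrog stepping: replaces the
    middle time level to damp the computational mode."""
    return f_now + alpha * (f_next - 2.0 * f_now + f_prev)

h = np.random.default_rng(0).random((16, 32))
h_smooth = shapiro_filter(h, axis=1)
```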
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
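The phenotypic graphs of the genetic code are not reconstructed here. The sketch below computes the quantity the optimality criterion above minimizes, the algebraic connectivity (second-smallest Laplacian eigenvalue, the Fiedler value), for any undirected graph given by its adjacency matrix; the example graph is arbitrary.

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A
    (the Fiedler value) for an undirected graph."""
    a = np.asarray(adjacency, dtype=float)
    laplacian = np.diag(a.sum(axis=1)) - a
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return float(eigenvalues[1])

path4 = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
print(algebraic_connectivity(path4))   # ~0.586 for the 4-node path graph
```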
Relating quantum discord with the quantum dense coding capacity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xin; Qiu, Liang, E-mail: lqiu@cumt.edu.cn; Li, Song
2015-01-15
We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.
Development of 1D Liner Compression Code for IDL
NASA Astrophysics Data System (ADS)
Shimazu, Akihisa; Slough, John; Pancotti, Anthony
2015-11-01
A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), in which an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
Individual differences in non-verbal number acuity correlate with maths achievement.
Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa
2008-10-02
Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.
Bergeron, Kim; Abdi, Samiya; DeCorby, Kara; Mensah, Gloria; Rempel, Benjamin; Manson, Heather
2017-11-28
There is limited research on capacity building interventions that include theoretical foundations. The purpose of this systematic review is to identify underlying theories, models and frameworks used to support capacity building interventions relevant to public health practice. The aim is to inform and improve capacity building practices and services offered by public health organizations. Four search strategies were used: 1) electronic database searching; 2) reference lists of included papers; 3) key informant consultation; and 4) grey literature searching. Inclusion and exclusion criteria are outlined; included papers focus on capacity building, learning plans, or professional development plans in combination with tools, resources, processes, procedures, steps, models, frameworks, or guidelines, are described in a public health or healthcare setting, or in non-government, government, or community organizations as they relate to healthcare, and explicitly or implicitly mention a theory, model and/or framework that grounds the type of capacity building approach developed. Quality assessments were performed on all included articles. Data analysis included a process for synthesizing, analyzing and presenting descriptive summaries, and categorizing theoretical foundations according to which theory, model and/or framework was used and whether the theory, model or framework was implied or explicitly identified. Nineteen articles were included in this review. A total of 28 theories, models and frameworks were identified. Of this number, two theories (Diffusion of Innovations and Transformational Learning), two models (Ecological and Interactive Systems Framework for Dissemination and Implementation) and one framework (Bloom's Taxonomy of Learning) were identified as the most frequently cited. This review identifies specific theories, models and frameworks to support capacity building interventions relevant to public health organizations. It provides public health practitioners with a menu of potentially usable theories, models and frameworks to support capacity building efforts. The findings also support the need for the use of theories, models or frameworks to be intentional, explicitly identified and referenced, and for it to be clearly outlined how they were applied to the capacity building intervention.
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are solved successively until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparisons of the results with experimental data for three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
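Purely as an illustration of the quasi-steady coupling strategy described above (not the paper's actual formulation), the Python sketch below iterates an inviscid solve and a boundary-layer solve within a single time step until the displacement thickness stops changing, using an under-relaxed explicit update; solve_inviscid and solve_boundary_layer are hypothetical placeholder callables standing in for the two flow solvers.

    import numpy as np

    def advance_one_time_step(state, dt, solve_inviscid, solve_boundary_layer,
                              relax=0.7, tol=1e-6, max_iters=50):
        # Quasi-steady viscous-inviscid coupling over one time step.
        # solve_inviscid(state, delta_star) -> surface pressure distribution
        # solve_boundary_layer(state, pressure) -> displacement thickness
        delta_star = np.asarray(state["delta_star"], dtype=float)
        for _ in range(max_iters):
            pressure = solve_inviscid(state, delta_star)
            delta_new = np.asarray(solve_boundary_layer(state, pressure), dtype=float)
            correction = delta_new - delta_star
            delta_star = delta_star + relax * correction   # explicit, under-relaxed update
            if np.max(np.abs(correction)) < tol:
                break
        state["delta_star"] = delta_star
        state["pressure"] = pressure
        state["time"] = state.get("time", 0.0) + dt
        return state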
NASA Astrophysics Data System (ADS)
Isaac, Aboagye Adjaye; Yongsheng, Cao; Fushen, Chen
2018-05-01
We present and compare the outcomes of implicit and explicit labels using intensity modulation (IM), differential quadrature phase shift keying (DQPSK), and polarization division multiplexed DQPSK (PDM-DQPSK). Payload bit rates of 1, 2, and 5 Gb/s are considered for IM implicit labels, while payloads of 40, 80, and 112 Gb/s are considered for DQPSK and PDM-DQPSK explicit labels by simulating a 4-code 156-Mb/s SAC label. The generated label and payloads are evaluated by assessing the eye diagram, received optical power (ROP), and optical signal-to-noise ratio (OSNR).
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Statistical mechanics of neocortical interactions: Path-integral evolution of short-term memory
NASA Astrophysics Data System (ADS)
Ingber, Lester
1994-05-01
Previous papers in this series on the statistical mechanics of neocortical interactions (SMNI) have detailed a development from the relatively microscopic scales of neurons up to the macroscopic scales recorded by electroencephalography (EEG), requiring an intermediate mesocolumnar scale to be developed at the scale of minicolumns (~10^2 neurons) and macrocolumns (~10^5 neurons). Opportunity was taken to view SMNI as sets of statistical constraints, not necessarily describing specific synaptic or neuronal mechanisms, on neuronal interactions and on some aspects of short-term memory (STM), e.g., its capacity, stability, and duration. A recently developed C-language code, pathint, provides a non-Monte Carlo technique for calculating the dynamic evolution of arbitrary-dimension (subject to computer resources) nonlinear Lagrangians, such as those derived for the two-variable SMNI problem. Here, pathint is used to explicitly detail the evolution of the SMNI constraints on STM.
Reliability and throughput issues for optical wireless and RF wireless systems
NASA Astrophysics Data System (ADS)
Yu, Meng
The fast development of wireless communication technologies has two main trends. On one hand, in point-to-point communications, the demand for higher throughput has called for the emergence of wireless broadband techniques including optical wireless (OW). On the other hand, wireless networks are becoming pervasive. New applications of wireless networks ask for more flexible system infrastructures beyond the point-to-point prototype to achieve better performance. This dissertation investigates two topics on the reliability and throughput issues of new wireless technologies. The first topic is the capacity of, and practical forward error control strategies for, OW systems. We investigate the performance of OW systems under weak atmospheric turbulence. We first investigate the capacity and power allocation for multi-laser and multi-detector systems. Our results show that uniform power allocation is a practically optimal solution for parallel channels. We also investigate the performance of Reed-Solomon (RS) codes and turbo codes for OW systems and present RS codes as good candidates for OW systems. The second topic targets user cooperation in wireless networks. We evaluate the relative merits of amplify-forward (AF) and decode-forward (DF) in practical scenarios. Both analysis and simulations show that the overall system performance is critically affected by the quality of the inter-user channel. Following this result, we investigate two schemes to improve the overall system performance. We first investigate the impact of the relay location on the overall system performance and determine the optimal location of the relay. A best-selective single-relay system is proposed and evaluated. Through analysis of the average capacity and outage, we show that a small candidate pool of 3 to 5 relays suffices to reap most of the "geometric" gain available to a selective system. Second, we propose a new user cooperation scheme to provide an effectively better inter-user channel. Most user cooperation protocols work in a time-sharing manner, where a node forwards others' messages and sends its own message in different sections of a provisioned time slot. In the proposed scheme, the two messages are encoded together into a single codeword using network coding and transmitted in the given time slot. We also propose a general multiple-user cooperation framework. Under this framework, we show that network coding can achieve better diversity and provide effectively better inter-user channels than time sharing. The last part of the dissertation focuses on multi-relay packet transmission. We propose an adaptive and distributive coding scheme for the relay nodes to adaptively cooperate and forward messages. The adaptive scheme shows a performance gain over fixed schemes. We then shift our viewpoint and represent the network as consisting partly of encoders and partly of decoders.
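As an aside that is not part of the dissertation abstract, the toy Python snippet below illustrates the basic network-coding idea it mentions: instead of time-sharing a cooperative slot between two users' messages, the slot carries their bitwise XOR, and each user removes its own known message to recover the partner's. The message contents and equal lengths are assumptions made only for the example.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        # combine (or strip off) two equal-length messages bit by bit
        return bytes(x ^ y for x, y in zip(a, b))

    msg_a = b"hello from A"            # user A's message
    msg_b = b"hi back to A"            # user B's message (same length for the toy)
    coded = xor_bytes(msg_a, msg_b)    # single codeword sent in the shared slot

    # each side XORs out what it already knows to recover the other message
    recovered_b_at_a = xor_bytes(coded, msg_a)
    recovered_a_at_b = xor_bytes(coded, msg_b)
    assert recovered_b_at_a == msg_b and recovered_a_at_b == msg_a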
Enhancing L2 Vocabulary Acquisition through Implicit Reading Support Cues in E-books
ERIC Educational Resources Information Center
Liu, Yeu-Ting; Leveridge, Aubrey Neil
2017-01-01
Various explicit reading support cues, such as gloss, QR codes and hypertext annotation, have been embedded in e-books designed specifically for fostering various aspects of language development. However, explicit visual cues are not always reliably perceived as salient or effective by language learners. The current study explored the efficacy of…
Efficiency Study of Implicit and Explicit Time Integration Operators for Finite Element Applications
1977-07-01
efficiency, wherein Beta = 0 provides an explicit algorithm, while Beta > 0 provides an implicit algorithm. Both algorithms are used in the same...
Convergence studies of deterministic methods for LWR explicit reflector methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canepa, S.; Hursin, M.; Ferroukhi, H.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to the geometrical requirements of deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
The Role of Explicit Need Strength for Emotions during Learning
ERIC Educational Resources Information Center
Flunger, Barbara; Pretsch, Johanna; Schmitt, Manfred; Ludwig, Peter
2013-01-01
According to self-determination theory, the satisfaction of the basic needs for autonomy, competence, and relatedness influences achievement emotions and situational interest. The present study investigated whether domain-specific explicit need strength moderated the impact of need satisfaction/dissatisfaction on the outcomes achievement emotions…
Inactivation of Medial Prefrontal Cortex or Acute Stress Impairs Odor Span in Rats
ERIC Educational Resources Information Center
Davies, Don A.; Molder, Joel J.; Greba, Quentin; Howland, John G.
2013-01-01
The capacity of working memory is limited and is altered in brain disorders including schizophrenia. In rodent working memory tasks, capacity is typically not measured (at least not explicitly). One task that does measure working memory capacity is the odor span task (OST) developed by Dudchenko and colleagues. In separate experiments, the effects…
PharmARTS: terminology web services for drug safety data coding and retrieval.
Alecu, Iulian; Bousquet, Cédric; Degoulet, Patrice; Jaulent, Marie-Christine
2007-01-01
MedDRA and WHO-ART are the terminologies used to encode drug safety reports. The standardisation achieved with these terminologies facilitates: 1) The sharing of safety databases; 2) Data mining for the continuous reassessment of benefit-risk ratio at national or international level or in the pharmaceutical industry. There is some debate about the capacity of these terminologies for retrieving case reports related to similar medical conditions. We have developed a resource that allows grouping similar medical conditions more effectively than WHO-ART and MedDRA. We describe here a software tool facilitating the use of this terminological resource thanks to an RDF framework with support for RDF Schema inferencing and querying. This tool eases coding and data retrieval in drug safety.
Experimental realization of the analogy of quantum dense coding in classical optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhenwei; Sun, Yifan; Li, Pengyun
2016-06-15
We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding, which uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. An LDPC code of any size can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
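As an illustration of the "copy the protograph N times and permute the edges" construction mentioned above (with an arbitrary toy protograph, not the code family described in the article), the Python sketch below lifts a small base matrix into a binary parity-check matrix by replacing each base entry with a mod-2 sum of random permutation matrices.

    import numpy as np

    def lift_protograph(base, N, seed=0):
        # Expand a protograph base matrix into an LDPC parity-check matrix:
        # each entry base[i, j] = k becomes the mod-2 sum of k random N x N
        # permutation matrices (a copy-and-permute lifting of factor N).
        rng = np.random.default_rng(seed)
        rows, cols = base.shape
        H = np.zeros((rows * N, cols * N), dtype=np.uint8)
        for i in range(rows):
            for j in range(cols):
                block = np.zeros((N, N), dtype=np.uint8)
                for _ in range(int(base[i, j])):
                    block[np.arange(N), rng.permutation(N)] ^= 1
                H[i * N:(i + 1) * N, j * N:(j + 1) * N] = block
        return H

    proto = np.array([[1, 2, 1, 0],      # illustrative base matrix only
                      [0, 1, 2, 1]])
    H = lift_protograph(proto, N=8)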
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach in which the system first generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. FHN model simulations showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
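Not part of the abstract above, but as a hand-written stand-in for the kind of update a generated simulation code would contain, the Python snippet below expresses FitzHugh-Nagumo (FHN) model values at t + Δt in terms of values at t with an explicit Euler scheme; the parameter values are common textbook choices, not those of the cited study.

    def fhn_step(v, w, dt, I=0.5, a=0.7, b=0.8, eps=0.08):
        # one explicit-Euler update of the FitzHugh-Nagumo model:
        # values at t + dt written in terms of values at t
        dv = v - v ** 3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        return v + dt * dv, w + dt * dw

    v, w, dt = -1.0, 1.0, 0.01
    trace = []
    for _ in range(20000):
        v, w = fhn_step(v, w, dt)
        trace.append(v)                 # membrane-potential-like variable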
NASA Technical Reports Server (NTRS)
DeGaudenzi, Riccardo; Giannetti, Filippo
1995-01-01
The downlink of a satellite-mobile personal communication system employing power-controlled Direct Sequence Code Division Multiple Access (DS-CDMA) and exploiting satellite diversity is analyzed, and its performance is compared with that of a more traditional communication system utilizing single-satellite reception. The analytical model developed has been thoroughly validated by means of extensive Monte Carlo computer simulations. It is shown how the capacity gain provided by diversity reception shrinks considerably in the presence of increasing traffic or under light shadowing conditions. Moreover, the quantitative results indicate that, to combat the system capacity reduction due to intra-system interference, no more than two satellites should be active over the same region. To achieve higher system capacity, unlike in terrestrial cellular systems, Multi-User Detection (MUD) techniques are likely to be required in the mobile user terminal, considerably increasing its complexity.
High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.
Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei
2017-07-01
Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
Challenges of Achieving 2012 IECC Air Sealing Requirements in Multifamily Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klocke, S.; Faakye, O.; Puttagunta, S.
2014-10-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient by itself. In addition, the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, which is a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement is for all residential buildings, which includes low-rise multifamily dwellings. While this air leakage rate requirement is an important component of achieving an efficient building thermal envelope, currently, the code language doesn't explicitly address differences between single family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings for larger multifamily buildings, so compliance would have to be verified on every unit. With compliance with the 2012 IECC air leakage requirements on the horizon, several of the Consortium for Advanced Residential Building's (CARB's) multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research that was conducted to assess the feasibility of meeting the 2012 IECC air leakage requirements in 3 multifamily buildings.
Preliminary SAGE Simulations of Volcanic Jets Into a Stratified Atmosphere
NASA Astrophysics Data System (ADS)
Peterson, A. H.; Wohletz, K. H.; Ogden, D. E.; Gisler, G. R.; Glatzmaier, G. A.
2007-12-01
The SAGE (SAIC Adaptive Grid Eulerian) code employs adaptive mesh refinement in solving the Eulerian equations of complex fluid flow, which is desirable for simulation of volcanic eruptions. The goal of modeling volcanic eruptions is to better develop a code's predictive capabilities in order to understand the dynamics that govern the overall behavior of real eruption columns. To achieve this goal, we focus on the dynamics of underexpanded jets, one of the fundamental physical processes important to explosive eruptions. Previous simulations of laboratory jets modeled in cylindrical coordinates were benchmarked against simulations in CFDLib (Los Alamos National Laboratory), which solves the full Navier-Stokes equations (including the viscous stress tensor), and showed close agreement, indicating that the adaptive mesh refinement used in SAGE may offset the need for explicit calculation of viscous dissipation. We compare gas density contours from these previous simulations, with the same initial conditions in cylindrical and Cartesian geometries, to laboratory experiments to determine both the validity of the model and the robustness of the code. The SAGE results in both geometries are within several percent of the experiments for the position and density of the incident (intercepting) and reflected shocks, slip lines, shear layers, and Mach disk. To expand our study into a volcanic regime, we simulate large-scale jets in a stratified atmosphere to establish the code's ability to model a sustained jet into a stable atmosphere.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in developing a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional, time-dependent Euler equations are solved using a finite-volume, explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
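As an illustration of the patch-based balancing idea only (not PSC's actual algorithm or curve), the Python sketch below orders 2-D patches along a Morton (Z-order) space-filling curve and cuts the ordered list into contiguous chunks of roughly equal estimated work; patch indices and per-patch loads are made up for the example.

    def morton_key(ix, iy, bits=10):
        # interleave the bits of the 2-D patch indices to get a Z-order key
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (2 * b)
            key |= ((iy >> b) & 1) << (2 * b + 1)
        return key

    def balance_patches(patches, loads, n_ranks):
        # patches: list of (ix, iy); loads: per-patch work estimates
        order = sorted(range(len(patches)), key=lambda p: morton_key(*patches[p]))
        target = sum(loads) / n_ranks
        assignment, rank, acc = {}, 0, 0.0
        for p in order:
            assignment[p] = rank
            acc += loads[p]
            if acc >= target * (rank + 1) and rank < n_ranks - 1:
                rank += 1            # start filling the next rank
        return assignment

    patches = [(ix, iy) for ix in range(8) for iy in range(8)]
    loads = [1.0 + (ix * iy) % 3 for ix, iy in patches]   # fake particle counts
    owner = balance_patches(patches, loads, n_ranks=4)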
NASA Technical Reports Server (NTRS)
Kumar, A.
1984-01-01
A computer program, NASCRIN, has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user-oriented and is structured in such a way that, for most supersonic flow problems, the user has to make only a few changes. Although the code is written primarily for supersonic internal flow, it can be used, with suitable changes in the boundary conditions, for a variety of other problems.
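For orientation only, the Python sketch below shows a generic explicit two-step (predictor-corrector, MacCormack-type) finite-difference update applied to 1-D linear advection on a periodic grid; it is an assumed stand-in meant to illustrate what "explicit, two-step" means, not the actual NASCRIN discretization.

    import numpy as np

    def two_step_explicit(u, c, dt, dx):
        # predictor (forward difference), then corrector (backward difference
        # on the predicted values), averaged: a MacCormack-type update
        u_star = u - c * dt / dx * (np.roll(u, -1) - u)
        return 0.5 * (u + u_star - c * dt / dx * (u_star - np.roll(u_star, 1)))

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)
    c, dx = 1.0, x[1] - x[0]
    dt = 0.5 * dx / c                 # respects the explicit CFL limit
    for _ in range(400):
        u = two_step_explicit(u, c, dt, dx)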
DOT National Transportation Integrated Search
2012-08-06
We study multi-item inventory problems that explicitly account for realistic : transportation cost structures and constraints, including a per-truck capacity and per-truck cost. : We analyze shipment consolidation and coordination policies under thes...
Explicit and implicit motor learning in children with unilateral cerebral palsy.
van der Kamp, John; Steenbergen, Bert; Masters, Rich S W
2017-07-30
The current study aimed to investigate the capacity for explicit and implicit learning in children with unilateral cerebral palsy. Children with left and right unilateral cerebral palsy and typically developing children shuffled disks toward a target. A prism-adaptation design was implemented, consisting of pre-exposure, prism exposure, and post-exposure phases. Half of the participants were instructed about the function of the prism glasses, while the other half were not. For each trial, the distance between the target and the shuffled disk was determined. Explicit learning was indicated by the rate of adaptation during the prism exposure phase, whereas implicit learning was indicated by the magnitude of the negative after-effect at the start of the post-exposure phase. Results: No significant effects were revealed between typically developing participants and participants with unilateral cerebral palsy. Comparison of participants with left and right unilateral cerebral palsy demonstrated that participants with right unilateral cerebral palsy had a significantly lower rate of adaptation than participants with left unilateral cerebral palsy, but only when no instructions were provided. The magnitude of the negative after-effects did not differ significantly between participants with right and left unilateral cerebral palsy. The capacity for explicit motor learning is reduced among individuals with right unilateral cerebral palsy when the accumulation of declarative knowledge is unguided (i.e., discovery learning). In contrast, the capacity for implicit learning appears to remain intact among individuals with left as well as right unilateral cerebral palsy. Implications for rehabilitation: Implicit motor learning interventions are recommended for individuals with cerebral palsy, particularly for individuals with right unilateral cerebral palsy. Explicit motor learning interventions for individuals with cerebral palsy, if used, should ideally consist of a single verbal instruction.
A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alex, Arne; Delft, Jan von; Kalus, Matthias
2011-02-15
We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
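The article covers general SU(N) and SL(N,C) via the Gelfand-Tsetlin pattern calculus; as a point of orientation only, the snippet below evaluates the familiar SU(2) special case of a Clebsch-Gordan coefficient with SymPy (a separate, widely available library, not the code in the article's appendix).

    from sympy import S
    from sympy.physics.quantum.cg import CG

    # SU(2) Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3>
    coeff = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0).doit()
    print(coeff)    # sqrt(2)/2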
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
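As a toy illustration of the trade-off discussed above (not the Versatile Advection Code's own schemes), the Python sketch below advances the 1-D heat equation with an explicit forward-Euler update, which is bound by a stability limit on the time step, and with an implicit backward-Euler update, which stays stable at much larger time steps at the cost of a linear solve per step.

    import numpy as np

    def explicit_step(u, D, dt, dx):
        # forward Euler in time, central differences in space;
        # stable only for dt <= dx**2 / (2*D)
        return u + D * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))

    def implicit_step(u, D, dt, dx):
        # backward Euler: solve (I - dt*D*L) u_new = u with a periodic Laplacian
        n, r = u.size, D * dt / dx**2
        A = (1 + 2 * r) * np.eye(n)
        A -= r * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
        return np.linalg.solve(A, u)

    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)
    dx, D = x[1] - x[0], 1.0
    dt_large = 10.0 * dx**2 / (2 * D)       # violates the explicit limit
    for _ in range(50):
        u = implicit_step(u, D, dt_large, dx)   # remains stable anyway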
NASA Astrophysics Data System (ADS)
Chung, Hye Won; Guha, Saikat; Zheng, Lizhong
2017-07-01
We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul
2002-07-29
Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yongxi
We propose an integrated modeling framework to optimally locate wireless charging facilities along a highway corridor to provide sufficient in-motion charging. The integrated model consists of a master Infrastructure Planning Model that determines the best locations, together with two integrated sub-models that explicitly capture energy consumption and charging, and the interactions among electric vehicle and wireless charging technologies, the geometrics of highway corridors, speed, and the auxiliary system. The model is applied in an illustrative case study of a highway corridor of Interstate 5 in Oregon. We found that the cost of establishing the charging lane is sensitive to, and increases with, the speed to be achieved. Through sensitivity analyses, we gain a better understanding of the extent of the impacts of the geometric characteristics of highways and battery capacity on the charging lane design.
Biasing spatial attention with semantic information: an event coding approach.
Amer, Tarek; Gozli, Davood G; Pratt, Jay
2017-04-21
We investigated the influence of conceptual processing on visual attention from the standpoint of Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on processing event 2 is whether features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task, in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found relevant explicit cues (which featurally overlap to a greater extent with the target) and implicit cues (which featurally overlap to a lesser extent), respectively, facilitated and interfered with target processing at compatible locations. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis-coded modulation or block-coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good, short, bandwidth-efficient modulation code is used as the inner code, and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
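To make the inner/outer structure concrete, here is a deliberately simple, self-contained Python toy: a Hamming(7,4) outer code wrapped around a repetition-3 inner code over a binary symmetric channel. It only illustrates how the two encoders and decoders nest; the scheme proposed above uses a bandwidth-efficient modulation code as the inner code and a Reed-Solomon outer code, neither of which is reproduced here.

    import random

    def hamming74_encode(d):                      # d: four data bits
        d1, d2, d3, d4 = d
        p1, p2, p4 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
        return [p1, p2, d1, p4, d2, d3, d4]       # bit positions 1..7

    def hamming74_decode(c):                      # corrects any single bit error
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s4
        c = c[:]
        if syndrome:
            c[syndrome - 1] ^= 1                  # flip the bit the syndrome points at
        return [c[2], c[4], c[5], c[6]]

    inner_encode = lambda bit: [bit] * 3                       # toy inner code: repetition-3
    inner_decode = lambda bits: int(sum(bits) >= 2)            # majority vote
    channel = lambda bit, p=0.1: bit ^ (random.random() < p)   # BSC(p)

    random.seed(1)
    data = [1, 0, 1, 1]
    tx = [b for c in hamming74_encode(data) for b in inner_encode(c)]
    rx = [channel(b) for b in tx]
    inner_out = [inner_decode(rx[i:i + 3]) for i in range(0, len(rx), 3)]
    decoded = hamming74_decode(inner_out)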
Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices
NASA Astrophysics Data System (ADS)
Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando
2017-10-01
We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse approximate inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix ("mass" matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a second-order vector-wave (curl-curl) equation but instead utilizes the standard coupled first-order Maxwell system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and in axisymmetric vacuum electronic devices.
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general and can be adopted for any video or image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences without sacrificing coding efficiency.
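As a loose, illustrative analogue of the "minimize the distortion variance" objective (not the paper's rho-domain model or its closed-form solution), the Python sketch below drives all multiplexed sequences toward a common distortion level by bisection, under assumed exponential rate-distortion models D_i(R_i) = a_i * exp(-b_i * R_i); the model parameters and bit budget are made up for the example.

    import math

    models = [(60.0, 0.40), (45.0, 0.30), (80.0, 0.55)]   # assumed (a_i, b_i) per stream
    R_total = 12.0                                         # total bit budget per time slot

    def rate_for_distortion(D, a, b):
        # invert D = a * exp(-b * R); never allocate a negative rate
        return max(0.0, math.log(a / D) / b)

    # bisect on the common distortion D so the per-stream rates use the whole budget
    lo, hi = 1e-6, max(a for a, _ in models)
    for _ in range(100):
        D = 0.5 * (lo + hi)
        total = sum(rate_for_distortion(D, a, b) for a, b in models)
        if total > R_total:
            lo = D            # budget exceeded: allow more distortion
        else:
            hi = D
    rates = [rate_for_distortion(D, a, b) for a, b in models]   # near-equal distortion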
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forest, E.; Bengtsson, J.; Reusch, M.F.
1991-04-01
The full power of Yoshida's technique is exploited to produce an arbitrary-order implicit symplectic integrator and a multi-map explicit integrator. This implicit integrator uses a characteristic function involving the force term alone. We also point out the usefulness of the plain Ruth algorithm in computing Taylor series maps using the techniques first introduced by Berz in his 'COSY-INFINITY' code.
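For illustration, the Python snippet below shows the best-known instance of Yoshida's composition idea: building a fourth-order explicit symplectic integrator by composing three second-order leapfrog steps with specific weights. It is a generic textbook construction offered for orientation, not the arbitrary-order implicit integrator of the report.

    def leapfrog(q, p, dt, force):
        # second-order kick-drift-kick step for H = p^2/2 + V(q)
        p = p + 0.5 * dt * force(q)
        q = q + dt * p
        p = p + 0.5 * dt * force(q)
        return q, p

    def yoshida4(q, p, dt, force):
        # fourth-order integrator obtained by composing leapfrog steps
        w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
        w0 = -(2.0 ** (1.0 / 3.0)) * w1
        for w in (w1, w0, w1):
            q, p = leapfrog(q, p, w * dt, force)
        return q, p

    # toy usage: harmonic oscillator, V(q) = q^2/2, force(q) = -q
    q, p = 1.0, 0.0
    for _ in range(1000):
        q, p = yoshida4(q, p, 0.1, lambda x: -x)
    energy = 0.5 * p ** 2 + 0.5 * q ** 2   # stays very close to the initial 0.5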
PERI - Auto-tuning Memory Intensive Kernels for Multicore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H; Williams, Samuel; Datta, Kaushik
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
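As a minimal caricature of the search-based tuning idea (not the paper's code generators, kernels, or platforms), the Python sketch below times a simple blocked stencil for several candidate block sizes on the current machine and keeps the fastest; the kernel, candidate list, and array size are arbitrary.

    import time
    import numpy as np

    def blocked_stencil(u, block):
        # 3-point averaging stencil applied one row-block at a time
        out = np.empty_like(u)
        n = u.shape[0]
        for start in range(1, n - 1, block):
            stop = min(start + block, n - 1)
            out[start:stop] = (u[start - 1:stop - 1] + u[start:stop]
                               + u[start + 1:stop + 1]) / 3.0
        out[0], out[-1] = u[0], u[-1]
        return out

    def autotune(u, candidates, repeats=5):
        # empirically pick the fastest block size for this array on this machine
        best, best_time = None, float("inf")
        for block in candidates:
            t0 = time.perf_counter()
            for _ in range(repeats):
                blocked_stencil(u, block)
            elapsed = time.perf_counter() - t0
            if elapsed < best_time:
                best, best_time = block, elapsed
        return best

    u = np.random.rand(1_000_000)
    best_block = autotune(u, candidates=[256, 1024, 4096, 16384, 65536])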
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
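The sketch below is a schematic of the spatial-parallelism idea only: each MPI rank owns a slab of a 1-D domain, moves its particles with a random displacement standing in for a Monte Carlo transport step, and hands particles that leave its slab to the neighboring rank with explicit messages (mpi4py). It is not MONACO code; the domain, step count, and particle model are assumptions for the example.

    # run, for example, as: mpirun -n 4 python migrate.py
    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # each rank owns the slab [rank, rank + 1) of the global domain [0, size)
    particles = [rank + random.random() for _ in range(1000)]

    for step in range(10):
        # crude stand-in for a transport step: random displacement, then keep
        # particles inside the global domain
        particles = [min(max(x + random.uniform(-0.1, 0.1), 0.0), size - 1e-9)
                     for x in particles]

        to_left = [x for x in particles if x < rank]
        to_right = [x for x in particles if x >= rank + 1]
        particles = [x for x in particles if rank <= x < rank + 1]

        left, right = (rank - 1) % size, (rank + 1) % size
        # pairwise exchange with both neighbours (edge ranks just send empty lists)
        from_right = comm.sendrecv(to_left, dest=left, source=right)
        from_left = comm.sendrecv(to_right, dest=right, source=left)
        particles += from_left + from_right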
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Slater, John W.; Henderson, Todd L.; Bidwell, Colin S.; Braun, Donald C.; Chung, Joongkee
1998-01-01
TURBO-GRD is a software system for interactive two-dimensional boundary/field grid generation, modification, and refinement. Its features allow users to explicitly control grid quality locally and globally. The grid control can be achieved interactively by using control points that the user picks and moves on the workstation monitor, or by direct stretching and refining. The techniques used in the code are the control point form of algebraic grid generation, a damped cubic spline for edge meshing, and parametric mapping between physical and computational domains. It also performs elliptic grid smoothing and free-form boundary control for boundary geometry manipulation. Internal block boundaries are constructed and shaped by using Bezier curves. Because TURBO-GRD is a highly interactive code, users can read in an initial solution, display its solution contour in the background of the grid and control net, and exercise grid modification using the solution contour as a guide. This process can be called interactive solution-adaptive grid generation.
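As a small illustration of one ingredient mentioned above, the Python snippet below evaluates a Bezier curve with the de Casteljau algorithm, the standard way such a curve can shape an internal block boundary from a handful of movable control points; the control-point coordinates are arbitrary and nothing here reflects TURBO-GRD's data structures.

    def de_casteljau(control_points, t):
        # evaluate a Bezier curve at parameter t in [0, 1] by repeated
        # linear interpolation of the control polygon
        pts = [tuple(p) for p in control_points]
        while len(pts) > 1:
            pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts[0]

    # a block boundary shaped by four control points; moving an interior
    # control point reshapes the curve while its end points stay fixed
    ctrl = [(0.0, 0.0), (0.3, 0.4), (0.7, -0.2), (1.0, 0.0)]
    boundary = [de_casteljau(ctrl, i / 50.0) for i in range(51)]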
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Kathleen; Lopez, Hugo; Cairns, Julie
An overview of the main North American codes and standards associated with hydrogen safety sensors is provided. The distinction between a code and a standard is defined, and the relationship between standards and codes is clarified, especially for those circumstances where a standard or a certification requirement is explicitly referenced within a code. The report identifies three main types of standards commonly applied to hydrogen sensors (interface and controls standards, shock and hazard standards, and performance-based standards). The certification process and a list and description of the main standards and model codes associated with the use of hydrogen safety sensors in hydrogen infrastructure are presented.
ERIC Educational Resources Information Center
Doabler, Christian T.; Baker, Scott K.; Kosty, Derek B.; Smolkowski, Keith; Clarke, Ben; Miller, Saralyn J.; Fien, Hank
2015-01-01
Explicit instruction is a systematic instructional approach that facilitates frequent and meaningful instructional interactions between teachers and students around critical academic content. This study examined the relationship between student mathematics outcomes and the rate and quality of explicit instructional interactions that occur during…
Voting Intention and Choices: Are Voters Always Rational and Deliberative?
Lee, I-Ching; Chen, Eva E.; Tsai, Chia-Hung; Yen, Nai-Shing; Chen, Arbee L. P.; Lin, Wei-Chieh
2016-01-01
Human rationality–the ability to behave in order to maximize the achievement of their presumed goals (i.e., their optimal choices)–is the foundation for democracy. Research evidence has suggested that voters may not make decisions after exhaustively processing relevant information; instead, our decision-making capacity may be restricted by our own biases and the environment. In this paper, we investigate the extent to which humans in a democratic society can be rational when making decisions in a serious, complex situation–voting in a local political election. We believe examining human rationality in a political election is important, because a well-functioning democracy rests largely upon the rational choices of individual voters. Previous research has shown that explicit political attitudes predict voting intention and choices (i.e., actual votes) in democratic societies, indicating that people are able to reason comprehensively when making voting decisions. Other work, though, has demonstrated that the attitudes of which we may not be aware, such as our implicit (e.g., subconscious) preferences, can predict voting choices, which may question the well-functioning democracy. In this study, we systematically examined predictors on voting intention and choices in the 2014 mayoral election in Taipei, Taiwan. Results indicate that explicit political party preferences had the largest impact on voting intention and choices. Moreover, implicit political party preferences interacted with explicit political party preferences in accounting for voting intention, and in turn predicted voting choices. Ethnic identity and perceived voting intention of significant others were found to predict voting choices, but not voting intention. In sum, to the comfort of democracy, voters appeared to engage mainly explicit, controlled processes in making their decisions; but findings on ethnic identity and perceived voting intention of significant others may suggest otherwise. PMID:26886266
Dittrich, Peter
2018-02-01
The organic code concept and its operationalization by molecular codes have been introduced to study the semiotic nature of living systems. This contribution develops further the idea that the semantic capacity of a physical medium can be measured by assessing its ability to implement a code as a contingent mapping. For demonstration and evaluation, the approach is applied to a formal medium: elementary cellular automata (ECA). The semantic capacity is measured by counting the number of ways codes can be implemented. Additionally, a link to information theory is established by taking multivariate mutual information for quantifying contingency. It is shown how ECAs differ in their semantic capacities, how this is related to various ECA classifications, and how this depends on how a meaning is defined. Interestingly, if the meaning should persist for a certain while, the highest semantic capacity is found in CAs with apparently simple behavior, i.e., the fixed-point and two-cycle class. Synergy as a predictor for a CA's ability to implement codes can only be used if contexts implementing codes are common. For large context spaces with sparse coding contexts, synergy is a weak predictor. Concluding, the approach presented here can distinguish CA-like systems with respect to their ability to implement contingent mappings. Applying this to physical systems appears straightforward and might lead to a novel physical property indicating how suitable a physical medium is to implement a semiotic system. Copyright © 2017 Elsevier B.V. All rights reserved.
Preliminary user's manuals for DYNA3D and DYNAP. [In FORTRAN IV for CDC 7600 and Cray-1]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallquist, J. O.
1979-10-01
This report provides a user's manual for DYNA3D, an explicit three-dimensional finite-element code for analyzing the large deformation dynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node solid elements, and the equations of motion are integrated by the central difference method. Post-processors for DYNA3D include GRAPE for plotting deformed shapes and stress contours and DYNAP for plotting time histories. A user's manual for DYNAP is also provided. 23 figures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, A.S.; Sidener, S.E.; Hamilton, M.L.
1999-10-01
Dynamic finite element modeling of the fracture behavior of fatigue-precracked Charpy specimens in both unirradiated and irradiated conditions was performed using a computer code, ABAQUS Explicit, to predict the upper shelf energy of precracked specimens of a given size from experimental data obtained for a different size. A tensile fracture-strain based method for modeling crack extension and propagation was used. It was found that the predicted upper shelf energies of full and half size precracked specimens based on third size data were in reasonable agreement with their respective experimental values. Similar success was achieved for predicting the upper shelf energy of subsize precracked specimens based on full size data.
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used in modelling the rotor-bearing-stator structures common to the turbine industry. Engine dynamic simulation is achieved by developing strategies which enable the use of available finite element codes. The elements developed are benchmarked by incorporation into a general purpose code (ADINA); the numerical characteristics of finite element type rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is also improved.
KEWPIE: A dynamical cascade code for decaying excited compound nuclei
NASA Astrophysics Data System (ADS)
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2004-05-01
A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. For such a case, the channel of interest is the tiny fraction that decays through particle emission, so the code avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows one to evaluate both statistical and dynamical observables. Results are successfully compared to experimental data.
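A toy illustration of handling a non-constant decay rate by direct time integration rather than Monte Carlo sampling is sketched below; the rate function and step size are assumptions and do not reflect KEWPIE's actual numerical scheme.

```python
import math

def survival(rate, t_end, dt=1e-3):
    """Integrate dP/dt = -rate(t) * P with small explicit time steps."""
    p, t = 1.0, 0.0
    while t < t_end:
        p -= rate(t) * p * dt
        t += dt
    return p

# Hypothetical time-dependent decay rate that builds up toward a constant value.
rate = lambda t: 0.8 * (1.0 - math.exp(-t / 0.5))
print(survival(rate, 5.0))
```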
ERIC Educational Resources Information Center
Hammer, David; Berland, Leema K.
2014-01-01
We question widely accepted practices of publishing articles that present quantified analyses of qualitative data. First, articles are often published that provide only very brief excerpts of the qualitative data themselves to illustrate the coding scheme, tacitly or explicitly treating the coding results as data. Second, articles are often…
Cardiac ultrasonography over 4G wireless networks using a tele-operated robot
Panayides, Andreas S.; Jossif, Antonis P.; Christoforou, Eftychios G.; Vieyres, Pierre; Novales, Cyril; Voskarides, Sotos; Pattichis, Constantinos S.
2016-01-01
This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. Performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough video coding standards comparison for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessment demonstrate that H.264/AVC and high efficiency video coding standards can achieve diagnostically-lossless video quality at bitrates well within the LTE supported data rates. Most importantly, reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide for reliable, remote diagnosis, achieving comparable quality of experience levels with in-hospital ultrasound examinations. PMID:27733929
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-11-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient; the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, which is a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement is for all residential buildings, which includes low-rise multifamily dwellings. While this air leakage rate requirement is an important component to achieving an efficient building thermal envelope, currently, the code language doesn't explicitly address differences between single family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings for larger multifamily buildings, so compliance would have to be verified on every unit. With compliance with the 2012 IECC air leakage requirements on the horizon, several of Building America team Consortium for Advanced Residential Building's (CARB) multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research that was conducted to assess the feasibility of meeting the 2012 IECC air leakage requirements in three multifamily buildings.
Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing
2006-09-01
tanks is presented. The semi-discrete combined solid and fluid equations of motion are integrated using a time-accurate parallel explicit solver... Incompressible fluid flow in a moving/deforming container including accurate modeling of the free-surface, turbulence, and viscous effects... paper, a single computational code which uses a time-accurate explicit solution procedure is used to solve both the solid and fluid equations of
Impact of Explicit Vocabulary Instruction on Writing Achievement of Upper-Intermediate EFL Learners
ERIC Educational Resources Information Center
Solati-Dehkordi, Seyed Amir; Salehi, Hadi
2016-01-01
Studying explicit vocabulary instruction effects on improving L2 learners' writing skill and their short and long-term retention is the purpose of the present study. To achieve the mentioned goal, a fill-in-the-blank test including 36 single words and 60 lexical phrases were administrated to 30 female upper-intermediate EFL learners. The EFL…
Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints
NASA Astrophysics Data System (ADS)
Dongnan, L.; Wan, W.; Zhao, J.
2017-12-01
Multi-objective operation of reservoirs is increasingly asked to consider the environmental flow to support ecosystem health. Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, which makes it difficult to direct reservoir releases. In an attempt to incorporate the benefit of environmental flow into economic achievement, a two-objective reservoir optimization model is developed and all 33 hydrologic parameters of IHA are explicitly formulated into constraints. The economic benefit is defined by Hydropower Production (HP), while the benefit of environmental flow is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by the principal component analysis method. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of Jing Hong reservoir, located in the upstream of the Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing on only the HP objective in scenario A and on only the EI objective in scenario B. 2. Scenario D, using IHA parameters as constraints, obtains the optimal benefits of both economics and ecology. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI capture acceptable values. At last, to make this model more conveniently applied to everyday practice, a simplified operation rule curve is extracted.
When cognition kicks in: working memory and speech understanding in noise.
Rönnberg, Jerker; Rudner, Mary; Lunner, Thomas; Zekveld, Adriana A
2010-01-01
Perceptual load and cognitive load can be separately manipulated and dissociated in their effects on speech understanding in noise. The Ease of Language Understanding model assumes a theoretical position where perceptual task characteristics interact with the individual's implicit capacities to extract the phonological elements of speech. Phonological precision and speed of lexical access are important determinants for listening in adverse conditions. If there are mismatches between the phonological elements perceived and phonological representations in long-term memory, explicit working memory (WM)-related capacities will be continually invoked to reconstruct and infer the contents of the ongoing discourse. Whether this induces a high cognitive load or not will in turn depend on the individual's storage and processing capacities in WM. Data suggest that modulated noise maskers may serve as triggers for speech maskers and therefore induce a WM, explicit mode of processing. Individuals with high WM capacity benefit more than low WM-capacity individuals from fast amplitude compression at low or negative input speech-to-noise ratios. The general conclusion is that there is an overarching interaction between the focal purpose of processing in the primary listening task and the extent to which a secondary, distracting task taps into these processes.
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs the Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to the several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement, over uncoded case, is found.
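For readers unfamiliar with the Gamma-Gamma turbulence model referenced above, a minimal sketch of its irradiance probability density follows; the alpha and beta values are hypothetical, and the formula is the commonly quoted form rather than necessarily the paper's exact parameterization.

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(irradiance, alpha, beta):
    """Commonly quoted Gamma-Gamma pdf of normalized irradiance I > 0."""
    n = (alpha + beta) / 2.0
    coef = 2.0 * (alpha * beta) ** n / (gamma(alpha) * gamma(beta))
    arg = 2.0 * np.sqrt(alpha * beta * irradiance)
    return coef * irradiance ** (n - 1.0) * kv(alpha - beta, arg)

I = np.linspace(0.01, 3.0, 300)                  # normalized irradiance samples
pdf = gamma_gamma_pdf(I, alpha=4.0, beta=1.9)    # hypothetical turbulence parameters
```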
ERIC Educational Resources Information Center
Sanz, Cristina; Lin, Hui-Ju; Lado, Beatriz; Stafford, Catherine A.; Bowden, Harriet W.
2016-01-01
The article summarizes results from two experimental studies (N = 23, N = 21) investigating the extent to which working memory capacity (WMC) intervenes in "ab initio" language development under two pedagogical conditions [± grammar lesson + input-based practice + explicit feedback]. The linguistic target is the use of morphosyntax to…
Parallel Semi-Implicit Spectral Element Atmospheric Model
NASA Astrophysics Data System (ADS)
Fournier, A.; Thomas, S.; Loft, R.
2001-05-01
The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model to solve the SWE (SEAM, Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000). SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicholson schemes for the nonlinear and linear terms respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor computer architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) for T42 resolutions. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. An efficient SI solver is essential to substantially increase this rate. Parallel preconditioning for an iterative conjugate-gradient elliptic solver is described. We are building a GCM dycore capable of 200 GFLOPS sustained performance on clustered RISC/cache architectures using hybrid MPI/OpenMP programming.
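A schematic form of the semi-implicit step described above, written for a generic splitting du/dt = N(u) + Lu into nonlinear and linear parts, is the textbook combination (assumed here for illustration, not quoted from SEAM):

\[
\frac{u^{n+1}-u^{n-1}}{2\Delta t} = N(u^{n}) + L\,\frac{u^{n+1}+u^{n-1}}{2},
\qquad
(I-\Delta t\,L)\,u^{n+1} = (I+\Delta t\,L)\,u^{n-1} + 2\Delta t\,N(u^{n}),
\]

where the second form is the symmetric positive-definite Helmholtz-type solve the abstract refers to.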
Moderate Deviation Analysis for Classical Communication over Quantum Channels
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altŭg and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
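In the classical setting, the moderate deviation regime referred to above is often summarized as follows (a schematic statement under the usual conditions on the back-off sequence, given here as background; the paper's quantum statement is analogous but not identical):

\[
R_n = C - \rho_n, \qquad \rho_n \to 0, \qquad n\rho_n^{2} \to \infty
\quad\Longrightarrow\quad
\varepsilon_n = \exp\!\left(-\frac{n\rho_n^{2}}{2V}\,\bigl(1+o(1)\bigr)\right),
\]

where C is the channel capacity and V the channel dispersion, so the error probability still vanishes even as the rate approaches capacity.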
Modeling the Capacity of Riverscapes to Support Dam-Building Beaver
NASA Astrophysics Data System (ADS)
Macfarlane, W.; Wheaton, J. M.
2012-12-01
Beaver (Castor canadensis) dam-building activities lead to a cascade of aquatic and riparian effects that increase the complexity of streams. As a result, beaver are increasingly being used as a critical component of passive stream and riparian restoration strategies. We developed the spatially-explicit Beaver Assessment and Restoration Tool (BRAT) to assess the capacity of the landscape in and around streams and rivers to support dam-building activity for beaver. Capacity was assessed in terms of readily available nation-wide GIS datasets to assess key habitat capacity indicators: water availability, relative abundance of preferred food/building materials, and stream power. Beaver capacity was further refined by: 1) ungulate grazing capacity, 2) proximity to human conflicts (e.g., irrigation diversions, settlements), 3) conservation/management objectives (endangered fish habitat), and 4) projected benefits related to beaver re-introductions (e.g., repair incisions). Fuzzy inference systems were used to assess the relative importance of these inputs, which allowed explicit incorporation of uncertainty resulting from categorical ambiguity of inputs into the capacity model. Results indicate that beaver capacity varies widely within the study area, but follows predictable spatial patterns that correspond to distinct River Styles and landscape units. We present a case study application and verification/validation data from the Escalante River Watershed in southern Utah, and show how the models can be used to help resource managers develop and implement restoration and conservation strategies employing beaver that will have the greatest potential to yield increases in biodiversity and ecosystem services.
Ultrasonic modeling of an embedded elliptic crack
NASA Astrophysics Data System (ADS)
Fradkin, Larissa Ju.; Zalipaev, Victor
2000-05-01
Experiments indicate that the radiating near zone of a compressional circular transducer directly coupled to a homogeneous and isotropic solid has the following structure: there are geometrical zones where one can distinguish a plane compressional wave and toroidal waves, both compressional and shear, radiated by the transducer rim. As has been shown previously, modern diffraction theory allows these to be described explicitly. It also gives an explicit asymptotic description of the waves present in the transition zones. In the case of normal incidence of a plane compressional wave, explicit expressions have been obtained by Achenbach and co-authors for the fields diffracted by a penny-shaped crack. We build on the above work by applying the uniform GTD to model oblique incidence of a plane compressional wave on an elliptical crack. We compare our asymptotic results with numerical results based on the boundary integral code developed by Glushkovs, Krasnodar University, Russia. The asymptotic formulas form the basis of a code for high-frequency simulation of ultrasonic scattering by elliptical cracks situated in the vicinity of a compressional circular transducer, currently under development at our Center.
2013-01-01
Based Micropolar Single Crystal Plasticity: Comparison of Multi- and Single Criterion Theories. J. Mech. Phys. Solids 2011, 59, 398–422. ALE3D ... element boundaries in a multi-step constitutive evaluation (Becker, 2011). The results showed the desired effects of smoothing the deformation field... The model was implemented in the large-scale parallel, explicit finite element code ALE3D (2012). The crystal plasticity
The Space Telescope SI C&DH system. [Scientific Instrument Control and Data Handling Subsystem
NASA Technical Reports Server (NTRS)
Gadwal, Govind R.; Barasch, Ronald S.
1990-01-01
The Hubble Space Telescope Scientific Instrument Control and Data Handling Subsystem (SI C&DH) is designed to interface with five scientific instruments of the Space Telescope to provide ground and autonomous control and collect health and status information using the Standard Telemetry and Command Components (STACC) multiplex data bus. It also formats high throughput science data into packets. The packetized data is interleaved and Reed-Solomon encoded for error correction and Pseudo Random encoded. An inner convolutional coding with the outer Reed-Solomon coding provides excellent error correction capability. The subsystem is designed with the capacity for orbital replacement in order to meet a mission life of fifteen years. The spacecraft computer and the SI C&DH computer coordinate the activities of the spacecraft and the scientific instruments to achieve the mission objectives.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
Discrepancies between implicit and explicit motivation and unhealthy eating behavior.
Job, Veronika; Oertig, Daniela; Brandstätter, Veronika; Allemand, Mathias
2010-08-01
Many people change their eating behavior as a consequence of stress. One source of stress is intrapersonal psychological conflict as caused by discrepancies between implicit and explicit motives. In the present research, we examined whether eating behavior is related to this form of stress. Study 1 (N=53), a quasi-experimental study in the lab, showed that the interaction between the implicit achievement motive disposition and explicit commitment toward an achievement task significantly predicts the number of snacks consumed in a consecutive taste test. In cross-sectional Study 2 (N=100), with a sample of middle-aged women, overall motive discrepancy was significantly related to diverse indices of unsettled eating. Regression analyses revealed interaction effects specifically for power and achievement motivation and not for affiliation. Emotional distress further partially mediated the relationship between the overall motive discrepancy and eating behavior.
Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook
2012-11-20
A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained lots of importance due to their capacity achieving property and excellent performance in the noisy channel. Belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between the hardware complexity and the decoding throughput is a critical factor in the implementation of the practical decoder. This article presents introduction to LDPC codes and its various decoding algorithms followed by realisation of LDPC decoder by using simplified message passing algorithm and partially parallel decoder architecture. Simplified message passing algorithm has been proposed for trade-off between low decoding complexity and decoder performance. It greatly reduces the routing and check node complexity of the decoder. Partially parallel decoder architecture possesses high speed and reduced complexity. The improved design of the decoder possesses a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents implementation of 9216 bits, rate-1/2, (3, 6) LDPC decoder on Xilinx XC3D3400A device from Spartan-3A DSP family.
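As an illustration of the min-sum family of simplified message passing mentioned above, a check-node update can be sketched as follows; this is a generic textbook rule, not the article's specific simplification.

```python
import numpy as np

def check_node_update(llr_in):
    """Min-sum check-node rule: each outgoing message is the product of the signs
    and the minimum magnitude of all the *other* incoming LLRs."""
    llr_in = np.asarray(llr_in, dtype=float)
    out = np.empty_like(llr_in)
    for i in range(len(llr_in)):
        others = np.delete(llr_in, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(check_node_update([1.2, -0.4, 2.5, -3.1]))
```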
Teaching for Civic Capacity and Engagement: How Faculty Members Align Teaching and Purpose
ERIC Educational Resources Information Center
Domagal-Goldman, Jennifer M.
2010-01-01
Although higher education institutions in the United States have long claimed to teach for civic purposes, only recently have explicit goals related to the development of civic capacity and engagement been included in college and university curricula. The purpose of the study was to advance theoretical and practical understanding of the role of…
Implicit Coupling Approach for Simulation of Charring Carbon Ablators
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Gokcen, Tahir
2013-01-01
This study demonstrates that coupling of a material thermal response code and a flow solver with nonequilibrium gas/surface interaction for simulation of charring carbon ablators can be performed using an implicit approach. The material thermal response code used in this study is the three-dimensional version of the Fully Implicit Ablation and Thermal response program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation method. Coupling between the material response and flow codes is performed by solving the surface mass balance in the flow solver and the surface energy balance in the material response code. Thus, the material surface recession is predicted in the flow code, and the surface temperature and pyrolysis gas injection rate are computed in the material response code. It is demonstrated that the time-lagged explicit approach is sufficient for simulations at low surface heating conditions, in which the surface ablation rate is not a strong function of the surface temperature. At elevated surface heating conditions, the implicit approach has to be taken, because the carbon ablation rate becomes a stiff function of the surface temperature, and thus the explicit approach appears to be inappropriate, resulting in severe numerical oscillations of predicted surface temperature. Implicit coupling for simulation of arc-jet models is performed, and the predictions are compared with measured data. Implicit coupling for trajectory based simulation of the Stardust fore-body heat shield is also conducted. The predicted stagnation point total recession is compared with that predicted using the chemical equilibrium surface assumption.
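A generic sketch of the difference between time-lagged (explicit) and implicit coupling of two single-physics solvers is given below; the solver callables are placeholders, not the actual flow and material-response codes.

```python
def coupled_step(flow_solve, material_solve, t_surface, tol=1e-3, max_iter=20):
    """Implicitly coupled step: sub-iterate the two solvers until the exchanged
    surface temperature stops changing.  flow_solve(T) -> (q_wall, mdot) and
    material_solve(q_wall, mdot) -> T are placeholders for the real codes."""
    for _ in range(max_iter):
        q_wall, mdot = flow_solve(t_surface)      # surface mass balance side
        t_new = material_solve(q_wall, mdot)      # surface energy balance side
        if abs(t_new - t_surface) < tol:
            return t_new
        t_surface = t_new
    return t_surface
# A time-lagged (explicit) coupling would perform the exchange only once per step.
```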
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
Han, Dahai; Gu, Yanjie; Zhang, Min
2017-08-10
An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM) without the use of a complex decoding algorithm in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss with a larger channel capacity, and a higher diversity gain and coding gain with a simple decoding algorithm will be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
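One way to realize the preprocessor/JIT technique mentioned above is to inject platform-specific macros as build options at run time; the sketch below uses pyopencl with hypothetical macro names and tile choices, and assumes an OpenCL device is available.

```python
import pyopencl as cl

kernel_src = """
__kernel void scale(__global float *x) {
    int i = get_global_id(0);
#ifdef USE_FMA
    x[i] = fma(x[i], (float)ALPHA, 0.0f);
#else
    x[i] = x[i] * (float)ALPHA;
#endif
}
"""

ctx = cl.create_some_context()
# Platform-specific choices injected at JIT time via preprocessor macros.
build_opts = ["-DALPHA=2.0f", "-DUSE_FMA"]   # hypothetical per-platform option set
program = cl.Program(ctx, kernel_src).build(options=build_opts)
```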
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de
2014-12-15
This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers the recent question whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver in the affirmative. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.
MINIVER upgrade for the AVID system. Volume 2: LANMIN input guide
NASA Technical Reports Server (NTRS)
Engel, C. D.; Schmitz, C. P.
1983-01-01
In order to effectively incorporate MINIVER into the AVID system, several changes to MINIVER were made. The thermal conduction options in MINIVER were removed and a new Explicit Interactive Thermal Structures (EXITS) code was developed. Many upgrades to the MINIVER code were made and a new Langley version of MINIVER called LANMIN was created. A user input guide for LANMIN is provided.
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
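A small sketch of the Hamming-distance construction on the two-bit representations is shown below; the normalization step simply illustrates the doubly stochastic property reported in the abstract.

```python
codes = {"C": "00", "U": "10", "G": "11", "A": "01"}   # Gray-code assignment from the paper

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

bases = "CUGA"
D = [[hamming(codes[r], codes[c]) for c in bases] for r in bases]
for base, row in zip(bases, D):
    print(base, row)
# Each row and column sums to the same value (4 here), so D/4 is doubly stochastic,
# and the matrix is symmetric, matching the properties reported in the abstract.
print([sum(row) for row in D])
```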
Implicit Knowledge, Explicit Knowledge, and Achievement in Second Language (L2) Spanish
ERIC Educational Resources Information Center
Gutierrez, Xavier
2012-01-01
Implicit and explicit knowledge of the second language (L2) are two central constructs in the field of second language acquisition (SLA). In recent years, there has been a renewed interest in obtaining valid and reliable measures of L2 learners' implicit and explicit knowledge (e.g., Bowles, 2011; R. Ellis, 2005). The purpose of the present study…
Two related numerical codes, 3DFEMWATER and 3DLEWASTE, are presented and used to delineate wellhead protection areas in agricultural regions using the assimilative capacity criterion. 3DFEMWATER (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media) ...
Application of the Hughes-LIU algorithm to the 2-dimensional heat equation
NASA Technical Reports Server (NTRS)
Malkus, D. S.; Reichmann, P. I.; Haftka, R. T.
1982-01-01
An implicit-explicit algorithm for the solution of transient problems in structural dynamics is described. The method involved dividing the finite elements into implicit and explicit groups while automatically satisfying the conditions. This algorithm is applied to the solution of the linear, transient, two dimensional heat equation subject to an initial condition derived from the solution of a steady state problem over an L-shaped region made up of a good conductor and an insulating material. Using the IIT/PRIME computer with virtual memory, a FORTRAN computer code was developed to make accuracy, stability, and cost comparisons among the fully explicit Euler, the Hughes-Liu, and the fully implicit Crank-Nicholson algorithms. The Hughes-Liu claim that the explicit group governs the stability of the entire region while maintaining the unconditional stability of the implicit group is illustrated.
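To make the stability contrast concrete, the sketch below applies a fully explicit Euler step and a Crank-Nicolson step to a 1D heat equation; this is a minimal illustration with arbitrary grid and parameters, not the Hughes-Liu element partition itself.

```python
import numpy as np

def explicit_step(u, r):
    """Forward-Euler update of u_t = u_xx; stable only for r = dt/dx**2 <= 0.5."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def crank_nicolson_step(u, r):
    """Unconditionally stable Crank-Nicolson update (dense solve, fixed end values)."""
    n = len(u)
    A = np.eye(n)
    B = np.eye(n)
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] += [-r / 2.0, r, -r / 2.0]
        B[i, i - 1:i + 2] += [r / 2.0, -r, r / 2.0]
    return np.linalg.solve(A, B @ u)

u0 = np.zeros(21)
u0[10] = 1.0                                  # hypothetical initial temperature spike
u_exp = explicit_step(u0, r=0.4)              # within the explicit stability limit
u_imp = crank_nicolson_step(u0, r=2.0)        # stable even with a large time step
```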
Communications terminal breadboard
NASA Technical Reports Server (NTRS)
1972-01-01
A baseline design is presented of a digital communications link between an advanced manned spacecraft (AMS) and an earth terminal via an Intelsat 4 type communications satellite used as a geosynchronous orbiting relay station. The fabrication, integration, and testing of terminal elements at each end of the link are discussed. In the baseline link design, the information carrying capacity of the link was estimated for both the forward direction (earth terminal to AMS) and the return direction, based upon orbital geometry, relay satellite characteristics, terminal characteristics, and the improvement that can be achieved by the use of convolutional coding/Viterbi decoding techniques.
Capacity, cutoff rate, and coding for a direct-detection optical channel
NASA Technical Reports Server (NTRS)
Massey, J. L.
1980-01-01
It is shown that Pierce's pulse position modulation scheme with 2 to the L pulse positions used on a self-noise-limited direct detection optical communication channel results in a 2 to the L-ary erasure channel that is equivalent to the parallel combination of L completely correlated binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint-length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.
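The capacity and cutoff-rate comparison can be reproduced with standard uniform-input expressions for erasure channels; the formulas and numbers below are stated as assumptions for illustration rather than taken from the paper.

```python
import math

def capacity_bits(M, p):
    """Capacity of an M-ary erasure channel with erasure probability p (bits/use)."""
    return (1 - p) * math.log2(M)

def cutoff_rate_bits(M, p):
    """Cutoff rate R0 of an M-ary erasure channel with a uniform input (bits/use)."""
    return math.log2(M) - math.log2(1 + (M - 1) * p)

L, p = 4, 0.5                      # hypothetical number of binary positions, erasure prob.
print(capacity_bits(2 ** L, p), L * capacity_bits(2, p))        # capacities agree
print(cutoff_rate_bits(2 ** L, p), L * cutoff_rate_bits(2, p))  # full-channel R0 is smaller
```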
Population planning: a well co-ordinated approach required.
1984-01-01
This discussion combines information obtained from 5 countries in the Economic and Social Commission for Asia and the Pacific (ESCAP) region on the role of population planning in the context of integrated policies and programs. The countries were asked what specific aspects of the present population policy and program would require concentrated inputs in order to achieve stated goals and targets. In Indonesia 2 program areas are identified for intensification: the organized transmigration scheme which aims at a balanced distribution of population and exploitation of potential resources throughout the country, including islands outside Java and Bali; and the national family planning program as a whole, in order to achieve the target of 60% prevalence rate of contraceptive use among eligible couples in 1990 and a decline of the crude birthrate from 33/1000 to 23/1000 by that date. Both programs are receiving high priority. Nepal policy and programs are aimed at achieving replacement level fertility by 2000. Steps that have been initiated in Bangladesh include intensive motivation activities with strong media inputs, the maintenance of a regular and adequate supply of contraceptives at the doorstep of clients, and strengthening the multisectoral program. The Philippines National Population Program advocates and promotes 4 norms in order to achieve a population growth rate of 2%, a prevalence rate of 54%, and contraceptive effectiveness of 80% by 1987: small family size; birth spacing; delayed marriages; and reduced incidence of teenage pregnancies. The goals envisaged for India are a reduction in the crude birthrate to not more than 21/1000, crude death rate of not more than 9/1000, and an infant mortality rate of less than 60/1000 live births by 2000. Concentrated efforts will be needed in the use of mass media and interpersonal communication strategies with services and supplies being provided as close to the doorstep of the acceptor as possible. In most countries of the region explicit or implicit incentives and/or disincentives are included in the population/family planning program. In the Philippine Population Program, incentives are explicitly given only to volunteer program workers. Disincentives are incorporated in the Internal Revenue Code and the Woman and Child Labor Code. In Indonesia preference is given to incentives rather than to disincentives. The government of Bangladesh is seriously considering the introduction of a package deal of incentives and disincentives in an all-out effort to reach desired demographic objectives. In Nepal such a package is already in operation. The more recent innovative measures to encourage the two-child family norm in India include: increased compensation money to acceptors of sterilization and IUD and giving lottery tickets to acceptors of sterilization. There is recognition in these countries of the need for an integrated approach to population and development programs.
Bidirectional holographic codes and sub-AdS locality
NASA Astrophysics Data System (ADS)
Yang, Zhao; Hayden, Patrick; Qi, Xiao-Liang
2016-01-01
Tensor networks implementing quantum error correcting codes have recently been used to construct toy models of holographic duality explicitly realizing some of the more puzzling features of the AdS/CFT correspondence. These models reproduce the Ryu-Takayanagi entropy formula for boundary intervals, and allow bulk operators to be mapped to the boundary in a redundant fashion. These exactly solvable, explicit models have provided valuable insight but nonetheless suffer from many deficiencies, some of which we attempt to address in this article. We propose a new class of tensor network models that subsume the earlier advances and, in addition, incorporate additional features of holographic duality, including: (1) a holographic interpretation of all boundary states, not just those in a "code" subspace, (2) a set of bulk states playing the role of "classical geometries" which reproduce the Ryu-Takayanagi formula for boundary intervals, (3) a bulk gauge symmetry analogous to diffeomorphism invariance in gravitational theories, (4) emergent bulk locality for sufficiently sparse excitations, and (5) the ability to describe geometry at sub-AdS resolutions or even flat space.
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
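The quoted operating point can be related to a required carrier-to-noise-density ratio through the standard relation C/N0 = Eb/N0 + 10 log10(Rb); the short computation below is illustrative only and ignores coding overhead and implementation margins.

```python
import math

def required_cn0_dbhz(ebno_db, bit_rate_bps):
    """Carrier-to-noise-density ratio needed for a given Eb/N0 at the info bit rate."""
    return ebno_db + 10.0 * math.log10(bit_rate_bps)

# Decoded 20.5 kbit/s audio at the quoted Eb/N0 of 3.75 dB (margins not included).
print(required_cn0_dbhz(3.75, 20500))
```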
Direct and reverse secret-key capacities of a quantum channel.
Pirandola, Stefano; García-Patrón, Raul; Braunstein, Samuel L; Lloyd, Seth
2009-02-06
We define the direct and reverse secret-key capacities of a memoryless quantum channel as the optimal rates that entanglement-based quantum-key-distribution protocols can reach by using a single forward classical communication (direct reconciliation) or a single feedback classical communication (reverse reconciliation). In particular, the reverse secret-key capacity can be positive for antidegradable channels, where no forward strategy is known to be secure. This property is explicitly shown in the continuous variable framework by considering arbitrary one-mode Gaussian channels.
ERIC Educational Resources Information Center
Wall, Candace A.; Rafferty, Lisa A.; Camizzi, Mariya A.; Max, Caroline A.; Van Blargan, David M.
2016-01-01
Many students who struggle to obtain the alphabetic principle are at risk for being identified as having a reading disability and would benefit from additional explicit phonics instruction as a remedial measure. In this action research case study, the research team conducted two experiments to investigate the effects of a color-coded, onset-rime,…
Lung volumes: measurement, clinical use, and coding.
Flesch, Judd D; Dine, C Jessica
2012-08-01
Measurement of lung volumes is an integral part of complete pulmonary function testing. Some lung volumes can be measured during spirometry; however, measurement of the residual volume (RV), functional residual capacity (FRC), and total lung capacity (TLC) requires special techniques. FRC is typically measured by one of three methods. Body plethysmography uses Boyle's Law to determine lung volumes, whereas inert gas dilution and nitrogen washout use dilution properties of gases. After determination of FRC, expiratory reserve volume and inspiratory vital capacity are measured, which allows the calculation of the RV and TLC. Lung volumes are commonly used for the diagnosis of restriction. In obstructive lung disease, they are used to assess for hyperinflation. Changes in lung volumes can also be seen in a number of other clinical conditions. Reimbursement for measurement of lung volumes requires knowledge of current procedural terminology (CPT) codes, relevant indications, and an appropriate level of physician supervision. Because of recent efforts to eliminate payment inefficiencies, the 10 previous CPT codes for lung volumes, airway resistance, and diffusing capacity have been bundled into four new CPT codes.
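In standard notation, the derived volumes described above follow from the measured FRC, expiratory reserve volume (ERV), and inspiratory vital capacity (IVC); this is the textbook relation implied by the abstract rather than a quotation from the article:

    \[ RV = FRC - ERV, \qquad TLC = RV + IVC \quad (\text{equivalently } TLC = FRC + IC), \]

where IC is the inspiratory capacity.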
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
Development based on carrying capacity. A strategy for environmental protection
Carey, D.I.
1993-01-01
Environmental degradation has accelerated in recent years because economic development activities have been inconsistent with a sustainable environment. In human ecology, the concept of 'carrying capacity' implies an optimum level of development and population size based on a complex of interacting factors - physical, institutional, social, and psychological. Development studies which have explicitly recognized carrying capacity have shown that this approach can be used to promote economic activities which are consistent with a sustainable social and physical environment. The concept of carrying capacity provides a framework for integrating physical, socioeconomic, and environmental systems into planning for a sustainable environment. © 1993.
Modelling explicit fracture of nuclear fuel pellets using peridynamics
NASA Astrophysics Data System (ADS)
Mella, R.; Wenman, M. R.
2015-12-01
Three dimensional models of explicit cracking of nuclear fuel pellets for a variety of power ratings have been explored with peridynamics, a non-local, mesh free, fracture mechanics method. These models were implemented in the explicitly integrated molecular dynamics code LAMMPS, which was modified to include thermal strains in solid bodies. The models of fuel fracture, during initial power transients, are shown to correlate with the mean number of cracks observed on the inner and outer edges of the pellet, by experimental post irradiation examination of fuel, for power ratings of 10 and 15 W g-1 UO2. The models of the pellet show the ability to predict expected features such as the mid-height pellet crack, the correct number of radial cracks and initiation and coalescence of radial cracks. This work presents a modelling alternative to empirical fracture data found in many fuel performance codes and requires just one parameter of fracture strain. Weibull distributions of crack numbers were fitted to both numerical and experimental data using maximum likelihood estimation so that statistical comparison could be made. The findings show P-values of less than 0.5% suggesting an excellent agreement between model and experimental distributions.
Numerical Studies of Impurities in Fusion Plasmas
DOE R&D Accomplishments Database
Hulse, R. A.
1982-09-01
The coupled partial differential equations used to describe the behavior of impurity ions in magnetically confined controlled fusion plasmas require numerical solution for cases of practical interest. Computer codes developed for impurity modeling at the Princeton Plasma Physics Laboratory are used as examples of the types of codes employed for this purpose. These codes solve for the impurity ionization state densities and associated radiation rates using atomic physics appropriate for these low-density, high-temperature plasmas. The simpler codes solve local equations in zero spatial dimensions while more complex cases require codes which explicitly include transport of the impurity ions simultaneously with the atomic processes of ionization and recombination. Typical applications are discussed and computational results are presented for selected cases of interest.
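A common schematic form of the local (zero-dimensional) equations solved by such codes, for the density n_Z of ions in charge state Z, is the ionization-recombination balance below; the notation (S_Z for ionization and α_Z for recombination rate coefficients) is generic rather than taken from the Princeton codes, and the transport codes described above add a spatial flux divergence term to each equation:

    \[ \frac{d n_{Z}}{dt} = n_{e}\left[ S_{Z-1}\,n_{Z-1} - \left(S_{Z} + \alpha_{Z}\right) n_{Z} + \alpha_{Z+1}\,n_{Z+1} \right] \;(-\,\nabla\cdot\Gamma_{Z}\ \text{with transport}). \]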
Children exhibit different performance patterns in explicit and implicit theory of mind tasks.
Oktay-Gür, Nese; Schulz, Alexandra; Rakoczy, Hannes
2018-04-01
Three studies tested scope and limits of children's implicit and explicit theory of mind. In Studies 1 and 2, three- to six-year-olds (N = 84) were presented with closely matched explicit false belief tasks that differed in whether or not they required an understanding of aspectuality. Results revealed that children performed equally well in the different tasks, and performance was strongly correlated. Study 3 tested two-year-olds (N = 81) in implicit interactive versions of these tasks and found evidence for dis-unity: children performed competently only in those tasks that did not require an understanding of aspectuality. Taken together, the present findings suggest that early implicit and later explicit theory of mind tasks may tap different forms of cognitive capacities. Copyright © 2018 Elsevier B.V. All rights reserved.
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
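The Cummins time-domain impulse-response formulation mentioned above is commonly written, per degree of freedom and in schematic form, as shown below; the symbol names are generic rather than WEC-Sim's internal notation, and the right-hand side simply collects excitation, power-take-off, and any other external forces:

    \[ (M + A_{\infty})\,\ddot{X}(t) + \int_{0}^{t} K(t-\tau)\,\dot{X}(\tau)\,\mathrm{d}\tau + C_{hs}\,X(t) = F_{exc}(t) + F_{PTO}(t) + F_{ext}(t). \]

Here A_∞ is the infinite-frequency added mass, K(t) is the radiation impulse-response (retardation) kernel that the state-space radiation feature approximates, and C_hs is the hydrostatic stiffness.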
Mood induction effects on motor sequence learning and stop signal reaction time.
Greeley, Brian; Seidler, Rachael D
2017-01-01
The neurobiological theory of positive affect proposes that positive mood states may benefit cognitive performance due to an increase of dopamine throughout the brain. However, the results of many positive affect studies are inconsistent; this may be due to individual differences. The relationship between dopamine and performance is not linear, but instead follows an inverted "U" shape. Given this, we hypothesized that individuals with high working memory capacity, a proxy measure for dopaminergic transmission, would not benefit from positive mood induction and in fact performance in dopamine-mediated tasks would decline. In contrast, we predicted that individuals with low working memory capacities would receive the most benefit after positive mood induction. Here, we explored the effect of positive affect on two dopamine-mediated tasks, an explicit serial reaction time sequence learning task and the stop signal task, predicting that an individual's performance is modulated not only by working memory capacity, but also on the type of mood. Improvements in explicit sequence learning from pre- to post-positive mood induction were associated with working memory capacity; performance declined in individuals with higher working memory capacities following positive mood induction, but improved in individuals with lower working memory capacities. This was not the case for negative or neutral mood induction. Moreover, there was no relationship between the change in stop signal reaction time with any of the mood inductions and individual differences in working memory capacity. These results provide partial support for the neurobiological theory of positive affect and highlight the importance of taking into account individual differences in working memory when examining the effects of positive mood induction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A; Cole, Wesley J; Sun, Yinong
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with the integration of variable generation resources.
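The two quantities discussed above can be illustrated with a deliberately simplified hourly calculation: capacity value as the mean VG output during the highest net-load hours, and curtailment as the share of VG energy the system cannot absorb. The Python sketch below is not the ReEDS implementation; the profiles, the must-run level, and the top-100-hour window are hypothetical choices for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    hours = 8760
    load = 1000 + 200 * rng.random(hours)   # MW, hypothetical hourly load
    vg = 400 * rng.random(hours)            # MW, hypothetical hourly VG output
    must_run = 300.0                        # MW, hypothetical must-run generation

    # (1) Capacity value: mean VG output during the highest net-load hours.
    net_load = load - vg
    top_hours = np.argsort(net_load)[-100:]
    capacity_value_mw = vg[top_hours].mean()

    # (2) Curtailment: VG energy above what load minus must-run can absorb.
    usable_vg = np.minimum(vg, np.maximum(load - must_run, 0.0))
    curtailment_fraction = 1.0 - usable_vg.sum() / vg.sum()

    print(capacity_value_mw, curtailment_fraction)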
A roadmap for acute care training of frontline Healthcare workers in LMICs.
Shah, Nirupa; Bhagwanjee, Satish; Diaz, Janet; Gopalan, P D; Appiah, John Adabie
2017-10-01
This 10-step roadmap outlines explicit procedures for developing, implementing and evaluating short focused training programs for acute care in low and middle income countries (LMICs). A roadmap is necessary to develop resilient training programs that achieve equivalent outcomes despite regional variability in human capacity and infrastructure. Programs based on the roadmap should address shortfalls in human capacity and access to care in the short term and establish the ground work for health systems strengthening in the long term. The primary targets for acute care training are frontline healthcare workers at the clinic level. The programs will differ from others currently available with respect to the timelines, triage method, therapeutic interventions and potential for secondary prevention. The roadmap encompasses multiple iterative cycles of the Plan-Do-Study-Act framework. Core features are integration of frontline trainees with the referral system while promoting research, quality improvement and evaluation from the bottom-up. Training programs must be evidence based, developed along action timelines and use adaptive training methods. A systems approach is essential because training programs that take cognizance of all factors that influence health care delivery have the potential to produce health systems strengthening (HSS). Copyright © 2017 Elsevier Inc. All rights reserved.
[The significance of quality of life from a socio-legal perspective].
Axer, Peter
2014-01-01
Only rarely is the term quality of life explicitly mentioned in the Social Security Code (Sozialgesetzbuch, SGB). In the statutory health insurance law (Book V of the Social Security Code, SGB V), the term is explicitly regulated within the context of the entitlement to pharmaceuticals. While there are pharmaceuticals that have the priority to increase the quality of life but are excluded from the provision of healthcare (Section 34 (1) Sentence 7 SGB V), the improvement of the quality of life has to be taken into account for the cost-benefit assessment (Section 35b SGB V) as well as for the early pharmaceutical benefit assessment (Section 35a SGB V) and for the formation of reference price groups (Section 35 SGB V) for and in the case of an entitlement to benefits in the event of illness. Copyright © 2014. Published by Elsevier GmbH.
Babin, Volodymyr; Roland, Christopher; Darden, Thomas A.; Sagui, Celeste
2007-01-01
There is considerable interest in developing methodologies for the accurate evaluation of free energies, especially in the context of biomolecular simulations. Here, we report on a reexamination of the recently developed metadynamics method, which is explicitly designed to probe “rare events” and areas of phase space that are typically difficult to access with a molecular dynamics simulation. Specifically, we show that the accuracy of the free energy landscape calculated with the metadynamics method may be considerably improved when combined with umbrella sampling techniques. As test cases, we have studied the folding free energy landscape of two prototypical peptides: Ace-(Gly)2-Pro-(Gly)3-Nme in vacuo and trialanine solvated by both implicit and explicit water. The method has been implemented in the classical biomolecular code AMBER and is to be distributed in the next scheduled release of the code. © 2006 American Institute of Physics. PMID:17144742
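For reference, the history-dependent bias deposited by metadynamics along a collective variable s is usually written as a sum of Gaussians, with the free energy estimated from its long-time limit; the hill height w, width σ, and deposition interval are method parameters, not values taken from this paper:

    \[ V_{\mathrm{bias}}(s,t) = \sum_{t' < t} w\,\exp\!\left(-\frac{\left(s - s(t')\right)^{2}}{2\sigma^{2}}\right), \qquad F(s) \approx -\lim_{t \to \infty} V_{\mathrm{bias}}(s,t) + \text{const}. \]

Combining the method with umbrella sampling, as the abstract describes, amounts to refining this estimate with additional restrained sampling in the regions of interest.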
A proto-code of ethics and conduct for European nurse directors.
Stievano, Alessandro; De Marinis, Maria Grazia; Kelly, Denise; Filkins, Jacqueline; Meyenburg-Altwarg, Iris; Petrangeli, Mauro; Tschudin, Verena
2012-03-01
The proto-code of ethics and conduct for European nurse directors was developed as a strategic and dynamic document for nurse managers in Europe. It invites critical dialogue, reflective thinking about different situations, and the development of specific codes of ethics and conduct by nursing associations in different countries. The term proto-code is used for this document so that specifically country-orientated or organization-based and practical codes can be developed from it to guide professionals in more particular or situation-explicit reflection and values. The proto-code of ethics and conduct for European nurse directors was designed and developed by the European Nurse Directors Association's (ENDA) advisory team. This article gives short explanations of the code' s preamble and two main parts: Nurse directors' ethical basis, and Principles of professional practice, which is divided into six specific points: competence, care, safety, staff, life-long learning and multi-sectorial working.
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approach these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Topological entanglement entropy with a twist.
Brown, Benjamin J; Bartlett, Stephen D; Doherty, Andrew C; Barrett, Sean D
2013-11-27
Defects in topologically ordered models have interesting properties that are reminiscent of the anyonic excitations of the models themselves. For example, dislocations in the toric code model are known as twists and possess properties that are analogous to Ising anyons. We strengthen this analogy by using the topological entanglement entropy as a diagnostic tool to identify properties of both defects and excitations in the toric code. Specifically, we show, through explicit calculation, that the toric code model including twists and dyon excitations has the same quantum dimensions, the same total quantum dimension, and the same fusion rules as an Ising anyon model.
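As a reminder of the diagnostic being used, the topological entanglement entropy γ is the constant correction to the boundary-law entanglement entropy of a region A and is fixed by the total quantum dimension; for both the toric code anyons and the Ising anyon model (d_1 = d_ψ = 1, d_σ = √2) the total quantum dimension is 2, consistent with the correspondence drawn above:

    \[ S(A) \simeq \alpha\,|\partial A| - \gamma, \qquad \gamma = \log \mathcal{D}, \qquad \mathcal{D} = \sqrt{\sum_{a} d_{a}^{2}}. \]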
Treatment of isomers in nucleosynthesis codes
NASA Astrophysics Data System (ADS)
Reifarth, René; Fiebiger, Stefan; Göbel, Kathrin; Heftrich, Tanja; Kausch, Tanja; Köppchen, Christoph; Kurtulgil, Deniz; Langer, Christoph; Thomas, Benedikt; Weigand, Mario
2018-03-01
The decay properties of long-lived excited states (isomers) can have a significant impact on the destruction channels of isotopes under stellar conditions. In sufficiently hot environments, the population of isomers can be altered via thermal excitation or de-excitation. If the corresponding lifetimes are of the same order of magnitude as the typical time scales of the environment, the isomers have to be treated explicitly. We present a general approach to the treatment of isomers in stellar nucleosynthesis codes and discuss a few illustrative examples. The corresponding code is available online at http://exp-astro.de/isomers/.
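In the limit where thermal excitation and de-excitation are fast compared with the environmental timescale, the isomer population follows the thermal-equilibrium ratio below; when the relevant lifetimes are comparable to that timescale, as discussed above, the ground state and the isomer must instead be carried as separate species with explicit coupling and decay rates:

    \[ \frac{n_{m}}{n_{g}} = \frac{2J_{m}+1}{2J_{g}+1}\,\exp\!\left(-\frac{E_{m}}{kT}\right), \]

where E_m is the isomer excitation energy and J_m, J_g are the spins of the isomer and the ground state.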
NASA Astrophysics Data System (ADS)
Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.
2017-10-01
In the present work, a computer code RCFS for the numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves the 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data are distributed between GPUs via a domain decomposition technique. The developed code is verified on a number of test cases, including supersonic flow over a cylinder.
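The abstract does not state which member of the explicit Runge-Kutta TVD family is used; a widely used example is the third-order scheme of Shu and Osher, shown here schematically for du/dt = L(u):

    \[ u^{(1)} = u^{n} + \Delta t\,L(u^{n}), \quad u^{(2)} = \tfrac{3}{4}u^{n} + \tfrac{1}{4}\left[u^{(1)} + \Delta t\,L(u^{(1)})\right], \quad u^{n+1} = \tfrac{1}{3}u^{n} + \tfrac{2}{3}\left[u^{(2)} + \Delta t\,L(u^{(2)})\right]. \]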
Visual Search Elicits the Electrophysiological Marker of Visual Working Memory
Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne
2009-01-01
Background Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663
NASA Technical Reports Server (NTRS)
Mendoza, John Cadiz
1995-01-01
The computational fluid dynamics code, PARC3D, is tested to see if its use of non-physical artificial dissipation affects the accuracy of its results. This is accomplished by simulating a shock-laminar boundary layer interaction and several hypersonic flight conditions of the Pegasus(TM) launch vehicle using full artificial dissipation, low artificial dissipation, and the Engquist filter. Before the filter is applied to the PARC3D code, it is validated in one-dimensional and two-dimensional form in a MacCormack scheme against the Riemann and convergent duct problem. For this explicit scheme, the filter shows great improvements in accuracy and computational time as opposed to the nonfiltered solutions. However, for the implicit PARC3D code it is found that the best estimate of the Pegasus experimental heat fluxes and surface pressures is the simulation utilizing low artificial dissipation and no filter. The filter does improve accuracy over the artificially dissipative case but at a computational expense greater than that achieved by the low artificial dissipation case which has no computational time penalty and shows better results. For the shock-boundary layer simulation, the filter does well in terms of accuracy for a strong impingement shock but not as well for weaker shock strengths. Furthermore, for the latter problem the filter reduces the required computational time to convergence by 18.7 percent.
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
ERIC Educational Resources Information Center
Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey
2017-01-01
A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…
JEWEL 2.0.0: directions for use
NASA Astrophysics Data System (ADS)
Zapp, Korinna
2014-02-01
In this publication, the first official release of the Jewel 2.0.0 code [the first version, Jewel 1 (Zapp et al. in Eur Phys J C 60:617, 2009), could only treat elastic scattering explicitly and was never published; the code can be downloaded from the official Jewel homepage http://jewel.hepforge.org] is presented. Jewel is a Monte Carlo event generator simulating QCD jet evolution in heavy-ion collisions. It treats the interplay of QCD radiation and re-scattering in a medium with fully microscopic dynamics in a consistent perturbative framework with minimal assumptions. After a qualitative introduction into the physics of Jewel, detailed information about the practical aspects of using the code is given. The code is available from the official Jewel homepage http://jewel.hepforge.org.
NASA Astrophysics Data System (ADS)
Firdausi, N.; Prabawa, H. W.; Sutarno, H.
2017-02-01
In an effort to maximize students' academic growth, one of the tools available to educators is explicit instruction. Explicit instruction is marked by a series of supports or scaffolds, in which students are guided through the learning process with a clear statement of purpose and rationale for learning the new skill, a clear explanation and demonstration of the learning target, and supported practice with feedback until independent mastery has been achieved. Today's technology development trends require a corresponding adjustment in the design of learning objects that support the achievement of explicit instruction targets. This is where gamification comes in. As a pedagogical strategy, the use of gamification in the classroom is still relatively new. Gamification not only applies game elements and game design techniques in non-game contexts, but also empowers and engages learners by motivating their approach to learning while maintaining a relaxed atmosphere. Using Research and Development methods, this paper presents the integration of technology (in this case, the concept of gamification) into explicit instruction settings and its impact on the improvement of students' understanding.
Test-Case Generation using an Explicit State Model Checker Final Report
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Gao, Jimin
2003-01-01
In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML^-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.
Young Adults' Implicit and Explicit Attitudes towards the Sexuality of Older Adults.
Thompson, Ashley E; O'Sullivan, Lucia F; Byers, E Sandra; Shaughnessy, Krystelle
2014-09-01
Sexual interest and capacity can extend far into later life and result in many positive health outcomes. Yet there is little support for sexual expression in later life, particularly among young adults. This study assessed and compared young adults' explicit and implicit attitudes towards older adult sexuality. A sample of 120 participants (18-24 years; 58% female) completed a self-report (explicit) measure and a series of Implicit Association Tests capturing attitudes towards sexuality among older adults. Despite reporting positive explicit attitudes, young people revealed an implicit bias against the sexual lives of older adults. In particular, young adults demonstrated implicit biases favouring general, as compared to sexual, activities and young adults as compared to older adults. Moreover, the bias favouring general activities was amplified with regard to older adults as compared to younger adults. Our findings challenge the validity of research relying on self-reports of attitudes about older adult sexuality.
Marshall, Charles R.; Quental, Tiago B.
2016-01-01
There is no agreement among palaeobiologists or biologists as to whether, or to what extent, there are limits on diversification and species numbers. Here, we posit that part of the disagreement stems from: (i) the lack of explicit criteria for defining the relevant species pools, which may be defined phylogenetically, ecologically or geographically; (ii) assumptions that must be made when extrapolating from population-level logistic growth to macro-evolutionary diversification; and (iii) too much emphasis being placed on fixed carrying capacities, rather than taking into account the opportunities for increased species richness on evolutionary timescales, for example, owing to increased biologically available energy, increased habitat complexity and the ability of many clades to better extract resources from the environment, or to broaden their resource base. Thus, we argue that a more effective way of assessing the evidence for and against the ideas of bound versus unbound diversification is through appropriate definition of the relevant species pools, and through explicit modelling of diversity-dependent diversification with time-varying carrying capacities. Here, we show that time-varying carrying capacities, either increases or decreases, can be accommodated through changing intrinsic diversification rates (diversity-independent effects), or changing the effects of crowding (diversity-dependent effects). PMID:26977059
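A minimal way to write the diversity-dependent diversification with a time-varying carrying capacity advocated above is the logistic form below; as the abstract notes, a change in K(t) can equivalently be expressed through the intrinsic (diversity-independent) rates or through the strength of the crowding (diversity-dependent) term:

    \[ \frac{dN}{dt} = r_{0}\,N\left(1 - \frac{N}{K(t)}\right). \]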
Scaling participation in payments for ecosystem services programs
Donlan, C. Josh; Boyle, Kevin J.; Xu, Weibin; Gelcich, Stefan
2018-01-01
Payments for ecosystem services programs have become common tools but most have failed to achieve wide-ranging conservation outcomes. The capacity for scale and impact increases when PES programs are designed through the lens of the potential participants, yet this has received little attention in research or practice. Our work with small-scale marine fisheries integrates the social science of PES programs and provides a framework for designing programs that focus a priori on scaling. In addition to payments, desirable non-monetary program attributes and ecological feedbacks attract a wider range of potential participants into PES programs, including those who have more negative attitudes and lower trust. Designing programs that draw individuals into participating in PES programs is likely the most strategic path to reaching scale. Research should engage in new models of participatory research to understand these dynamics and to design programs that explicitly integrate a broad range of needs, values, and modes of implementation. PMID:29522554
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
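The step from identified Markov parameters to an explicit state-space model via the Eigensystem Realization Algorithm can be sketched as follows. This is a generic textbook ERA in Python/NumPy, not the specific implementation used in the paper, and the Hankel block sizes p and q are illustrative defaults.

    import numpy as np

    def era(markov, order, p=None, q=None):
        """Eigensystem Realization Algorithm sketch.

        markov: list of Markov parameters Y_1, Y_2, ... (each ny-by-nu,
        Y_k = C A^(k-1) B).  Returns a discrete-time realization (A, B, C)
        of the requested order.
        """
        ny, nu = markov[0].shape
        p = p or (len(markov) - 1) // 2
        q = q or (len(markov) - 1) // 2
        # Block Hankel matrices: H0 from Y_1, Y_2, ... and H1 shifted by one step.
        H0 = np.block([[markov[i + j] for j in range(q)] for i in range(p)])
        H1 = np.block([[markov[i + j + 1] for j in range(q)] for i in range(p)])
        U, s, Vt = np.linalg.svd(H0, full_matrices=False)
        U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
        S_sqrt = np.diag(np.sqrt(s))
        S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
        A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt
        B = (S_sqrt @ Vt)[:, :nu]     # first input-block column
        C = (U @ S_sqrt)[:ny, :]      # first output-block row
        return A, B, C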
Chiu, Ming-Jang; Liu, Kristina; Hsieh, Ming H; Hwu, Hai-Gwo
2005-12-12
Implicit learning was reported to be intact in schizophrenia using artificial grammar learning. However, emerging evidence indicates that artificial grammar learning is not a unitary process. The authors used dual coding stimuli and schizophrenia clinical symptom dimensions to re-evaluate the effect of schizophrenia on various components of artificial grammar learning. Letter string and color pattern artificial grammar learning performances were compared between 63 schizophrenic patients and 27 comparison subjects. Four symptom dimensions derived from a Chinese Positive and Negative Symptom Scale ratings were correlated with patients' artificial grammar implicit learning performances along the two stimulus dimensions. Patients' explicit memory performances were assessed by verbal paired associates and visual reproduction subtests of the Wechsler Memory Scales Revised Version to provide a contrast to their implicit memory function. Schizophrenia severely hindered color pattern artificial grammar learning while the disease affected lexical string artificial grammar learning to a lesser degree after correcting the influences from age, education and the performance of explicit memory function of both verbal and visual modalities. Both learning performances correlated significantly with the severity of patients' schizophrenic clinical symptom dimensions that reflect poor abstract thinking, disorganized thinking, and stereotyped thinking. The results of this study suggested that schizophrenia affects various mechanisms of artificial grammar learning differently. Implicit learning, knowledge acquisition in the absence of conscious awareness, is not entirely intact in patients with schizophrenia. Schizophrenia affects implicit learning through an impairment of the ability of making abstractions from rules and at least in part decreasing the capacity for perceptual learning.
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced by rapid flows undergoes several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between the geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict the global erosion phenomena accurately, from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using a Newtonian and a non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-threaded serial code. A 3-D simulation of a dam break over a non-cohesive erodible bed with over 4 million particles yields close agreement with experimental scour and water surface profiles.
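One common scalar form of the Herschel-Bulkley-Papanastasiou constitutive law referred to above expresses the effective viscosity as shown below; the consistency k, flow index n, yield stress τ_y and regularization parameter m are model parameters, and the paper's tensorial implementation may group the terms differently:

    \[ \tau = \mu_{\mathrm{eff}}(\dot{\gamma})\,\dot{\gamma}, \qquad \mu_{\mathrm{eff}}(\dot{\gamma}) = k\,\dot{\gamma}^{\,n-1} + \frac{\tau_{y}}{\dot{\gamma}}\left(1 - e^{-m\dot{\gamma}}\right). \]

Setting n = 1 recovers a Bingham-Papanastasiou fluid, and τ_y = 0 recovers a power-law (Newtonian for n = 1) fluid.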
Communication skills training: describing a new conceptual model.
Brown, Richard F; Bylund, Carma L
2008-01-01
Current research in communication in physician-patient consultations is multidisciplinary and multimethodological. As this research has progressed, a considerable body of evidence on the best practices in physician-patient communication has been amassed. This evidence provides a foundation for communication skills training (CST) at all levels of medical education. Although the CST literature has demonstrated that communication skills can be taught, one critique of this literature is that it is not always clear which skills are being taught and whether those skills are matched with those being assessed. The Memorial Sloan-Kettering Cancer Center Comskil Model for CST seeks to answer those critiques by explicitly defining the important components of a consultation, based on Goals, Plans, and Actions theories and sociolinguistic theory. Sequenced guidelines as a mechanism for teaching about particular communication challenges are adapted from these other methods. The authors propose that consultation communication can be guided by an overarching goal, which is achieved through the use of a set of predetermined strategies. Strategies are common in CST; however, strategies often contain embedded communication skills. These skills can exist across strategies, and the Comskil Model seeks to make them explicit in these contexts. Separate from the skills are process tasks and cognitive appraisals that need to be addressed in teaching. The authors also describe how assessment practices foster concordance between skills taught and those assessed through careful coding of trainees' communication encounters and direct feedback.
NASA Astrophysics Data System (ADS)
Chiroux, Robert Charles
The objective of this research was to produce a three dimensional, non-linear, dynamic simulation of the interaction between a hyperelastic wheel rolling over compactable soil. The finite element models developed to produce the simulation utilized the ABAQUS/Explicit computer code. Within the simulation two separate bodies were modeled, the hyperelastic wheel and a compactable soil-bed. Interaction between the bodies was achieved by allowing them to come in contact but not to penetrate the contact surface. The simulation included dynamic loading of a hyperelastic, rubber tire in contact with compactable soil with an applied constant angular velocity or torque, including a tow load, applied to the wheel hub. The constraints on the wheel model produced a straight and curved path. In addition the simulation included a shear limit between the tire and soil allowing for the introduction of slip. Soil properties were simulated using the Drucker-Prager, Cap Plasticity model available within the ABAQUS/Explicit program. Numerical results obtained from the three dimensional model were compared with related experimental data and showed good correlation for similar conditions. Numerical and experimental data compared well for both stress and wheel rut formation depth under a weight of 5.8 kN and a constant angular velocity applied to the wheel hub. The simulation results provided a demonstration of the benefit of three-dimensional simulation in comparison to previous two-dimensional, plane strain simulations.
[INVITED] Luminescent QR codes for smart labelling and sensing
NASA Astrophysics Data System (ADS)
Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.
2018-05-01
QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correct errors and physical damages, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges encompass the increase of the storage capacity limit, even though they can store approximately 350 times more information than common barcodes, and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating the increase of storage capacity per unit area by a factor of two by using the colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold where a decision level is calculated through a maximum-likelihood criteria to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables the simultaneously QR code colour-multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of applications for QR codes as smart labels for sensing.
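The colour-separation threshold described above can be illustrated with a maximum-likelihood decision level between two module classes in a single colour channel. The Python sketch below models each class intensity as Gaussian and finds the intensity at which the two likelihoods are equal; the class means and widths are hypothetical and this is an illustration of the idea, not the authors' exact procedure.

    import numpy as np
    from scipy.stats import norm

    def ml_threshold(mu0, sig0, mu1, sig1):
        """Intensity level where the two class likelihoods are equal."""
        grid = np.linspace(min(mu0, mu1), max(mu0, mu1), 10001)
        diff = norm.pdf(grid, mu0, sig0) - norm.pdf(grid, mu1, sig1)
        return grid[int(np.argmin(np.abs(diff)))]

    # Hypothetical class statistics for "non-emitting" vs "emitting" modules.
    t = ml_threshold(mu0=0.2, sig0=0.05, mu1=0.7, sig1=0.08)
    # A module with measured intensity x is assigned to class 1 if x > t, else 0.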
NASA Astrophysics Data System (ADS)
Lach, Adeline; Boulahya, Faïza; André, Laurent; Lassin, Arnault; Azaroual, Mohamed; Serin, Jean-Paul; Cézac, Pierre
2016-07-01
The thermal and volumetric properties of complex aqueous solutions are described according to the Pitzer equation, explicitly taking into account the speciation in the aqueous solutions. The thermal properties are the apparent relative molar enthalpy (Lϕ) and the apparent molar heat capacity (Cp,ϕ). The volumetric property is the apparent molar volume (Vϕ). Equations describing these properties are obtained from the temperature or pressure derivatives of the excess Gibbs energy and make it possible to calculate the dilution enthalpy (∆HD), the heat capacity (cp) and the density (ρ) of aqueous solutions up to high concentrations. Their implementation in PHREEQC V.3 (Parkhurst and Appelo, 2013) is described and has led to a new numerical tool, called PhreeSCALE. It was tested first, using a set of parameters (specific interaction parameters and standard properties) from the literature for two binary systems (Na2SO4-H2O and MgSO4-H2O), for the quaternary K-Na-Cl-SO4 system (heat capacity only) and for the Na-K-Ca-Mg-Cl-SO4-HCO3 system (density only). The results obtained with PhreeSCALE are in agreement with the literature data when the same standard solution heat capacity (Cp0) and volume (V0) values are used. For further applications of this improved computation tool, these standard solution properties were calculated independently, using the Helgeson-Kirkham-Flowers (HKF) equations. By using this kind of approach, most of the Pitzer interaction parameters coming from literature become obsolete since they are not coherent with the standard properties calculated according to the HKF formalism. Consequently a new set of interaction parameters must be determined. This approach was successfully applied to the Na2SO4-H2O and MgSO4-H2O binary systems, providing a new set of optimized interaction parameters, consistent with the standard solution properties derived from the HKF equations.
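In schematic form, and per mole of solute n_s, the three apparent molar properties named above follow from temperature and pressure derivatives of the excess Gibbs energy together with the standard-state contributions; the working equations in the paper expand these derivatives in terms of the Pitzer interaction parameters:

    \[ L_{\phi} = -\frac{T^{2}}{n_{s}}\left[\frac{\partial\left(G^{ex}/T\right)}{\partial T}\right]_{P}, \qquad C_{p,\phi} = C_{p}^{0} + \left(\frac{\partial L_{\phi}}{\partial T}\right)_{P}, \qquad V_{\phi} = V^{0} + \frac{1}{n_{s}}\left(\frac{\partial G^{ex}}{\partial P}\right)_{T}. \]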
NASA Astrophysics Data System (ADS)
Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.
2017-11-01
Intensity-modulated radiation therapy is an effective treatment modality for nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need for an accurate dose algorithm dealing with the complex air/bone/tissue interfaces in the head-neck region to achieve cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for tissue heterogeneities such as lungs, bone, air, and soft tissues in the treatment field receiving radiotherapy. With single-beam setups in phantoms, this algorithm has already been demonstrated to achieve accuracy comparable to Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy were examined for their dose distributions calculated using the Acuros XB in the planning target volume and the organs at risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of dose distributions in terms of the clinical indices indicated that the Acuros XB achieved accuracy comparable to Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.
Climate Change Impacts on Freshwater Recreational Fishing in the United States
Using a geographic information system, a spatially explicit modeling framework was developed consisting of grid cells organized into 2,099 eight-digit hydrologic unit code (HUC-8) polygons for the coterminous United States. Projected temperature and precipitation changes associated...
NASA Astrophysics Data System (ADS)
Lertwiram, Namzilp; Tran, Gia Khanh; Mizutani, Keiichi; Sakaguchi, Kei; Araki, Kiyomichi
Setting relays can address the shadowing problem between a transmitter (Tx) and a receiver (Rx). Moreover, the Multiple-Input Multiple-Output (MIMO) technique has been introduced to improve wireless link capacity. The MIMO technique can be applied in relay network to enhance system performance. However, the efficiency of relaying schemes and relay placement have not been well investigated with experiment-based study. This paper provides a propagation measurement campaign of a MIMO two-hop relay network in 5GHz band in an L-shaped corridor environment with various relay locations. Furthermore, this paper proposes a Relay Placement Estimation (RPE) scheme to identify the optimum relay location, i.e. the point at which the network performance is highest. Analysis results of channel capacity show that relaying technique is beneficial over direct transmission in strong shadowing environment while it is ineffective in non-shadowing environment. In addition, the optimum relay location estimated with the RPE scheme also agrees with the location where the network achieves the highest performance as identified by network capacity. Finally, the capacity analysis shows that two-way MIMO relay employing network coding has the best performance while cooperative relaying scheme is not effective due to shadowing effect weakening the signal strength of the direct link.
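For context, the link capacities compared in such an analysis are typically of the standard MIMO form below, with the two-hop decode-and-forward rate limited by the weaker hop and halved by the two-phase protocol; two-way relaying with network coding recovers part of that pre-log loss. These are generic expressions, not necessarily the exact metric used in the paper:

    \[ C = \log_{2}\det\!\left(\mathbf{I}_{N_{r}} + \frac{\rho}{N_{t}}\,\mathbf{H}\mathbf{H}^{\dagger}\right)\ \text{bit/s/Hz}, \qquad C_{\mathrm{2hop}} \approx \tfrac{1}{2}\,\min\!\left(C_{\mathrm{Tx\to relay}},\, C_{\mathrm{relay\to Rx}}\right). \]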
Effects of strategy on visual working memory capacity
Bengson, Jesse J.; Luck, Steven J.
2015-01-01
Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity. PMID:26139356
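The capacity estimates referred to above are commonly obtained from change-detection accuracy using Cowan's K, where N is the set size, H the hit rate, and FA the false-alarm rate; the abstract does not state the exact estimator used, so this is given only as the standard formula:

    \[ K = N \times (H - FA). \]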
Patterns in clinicians' responses to patient emotion in cancer care.
Finset, Arnstein; Heyn, Lena; Ruland, Cornelia
2013-10-01
To investigate how patient, clinician and relationship characteristics may predict how oncologists and nurses respond to patients' emotional expressions. Observational study of audiotapes of 196 consultations in cancer care. The consultations were coded according to Verona Coding Definitions of Emotional Sequences (VR-CoDES). Associations were tested in multi-level analyzes. There were 471 cues and 109 concerns with a mean number of 3.0 (SD=3.2) cues and concerns per consultation. Nurses in admittance interviews were five times more likely to provide space for further disclosure of cues and concerns (according to VR-CoDES definitions) than oncologists in out-patient follow-up consultations. Oncologists gave more room for disclosure to the first cue or concern in the consultation, to more explicit and doctor initiated cues/concerns and when the doctor and/or patient was female. Nurses gave room for further disclosure to explicit and nurse initiated cues/concerns, but the effects were smaller than for oncologists. Responses of clinicians which provide room for further disclosure do not occur at random and are systematically dependent on the source, explicitness and timing of the cue or concern. Knowledge on which factors influence responses to cues and concerns may be useful in communication skills training. Copyright © 2013. Published by Elsevier Ireland Ltd.
Morality constrains the default representation of what is possible.
Phillips, Jonathan; Cushman, Fiery
2017-05-02
The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.
ERIC Educational Resources Information Center
Harris, Dira D.
2017-01-01
Growing gaps in reading and vocabulary achievement between minority and majority student subgroups have led to an intense focus on implementing effective classroom instructional strategies. Prior research concerning teachers' perceptions of using explicit instructional strategies to teach vocabulary to underperforming students has been…
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
Recoverable fragments of the report describe the concatenation of a tilted-QAM inner code with an LDPC outer code using a two-component iterative soft-decision decoder: capacity-approaching coding for AWGN channels has long been studied, with well-known soft-decision codes such as turbo codes and LDPC codes able to approach capacity; in the described scheme, the information bits are encoded with a low density parity check (LDPC) code, and the coded bits are randomly interleaved so that nearby bits pass through different sub-channels.
On the evolution of primitive genetic codes.
Weberndorfer, Günter; Hofacker, Ivo L; Stadler, Peter F
2003-10-01
The primordial genetic code was probably a drastically simplified ancestor of the canonical code that is used by contemporary cells. In order to understand how the present-day code came about, we first need to explain how the language of the building plan can change without destroying the encoded information. In this work we introduce a minimal organism model that is based on biophysically reasonable descriptions of RNA and protein, namely secondary structure folding and knowledge-based potentials. The evolution of a population of such organisms under competition for a common resource is simulated explicitly at the level of individual replication events. Starting with very simple codes, and hence greatly reduced amino acid alphabets, we observe a diversification of the codes in most simulation runs. The driving force behind this effect is the possibility of producing fitter proteins when the repertoire of amino acids is enlarged.
29 CFR 553.30 - Occasional or sporadic employment-section 7(p)(2).
Code of Federal Regulations, 2011 CFR
2011-07-01
... capacity must be made freely and without coercion, implicit or explicit, by the employer. An employer may.... Public safety employees taking on any kind of security or safety function within the same local...
29 CFR 553.30 - Occasional or sporadic employment-section 7(p)(2).
Code of Federal Regulations, 2010 CFR
2010-07-01
... capacity must be made freely and without coercion, implicit or explicit, by the employer. An employer may.... Public safety employees taking on any kind of security or safety function within the same local...
An efficient decoding for low density parity check codes
NASA Astrophysics Data System (ADS)
Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie
2009-12-01
Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended LDPC codes for deep-space and near-Earth communications. It is clear that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is therefore of great interest. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the check matrix of the LDPC code is analyzed to determine the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. The CNU and VNU are subsequently rearranged and divided into several smaller parts; with the help of some assistant logic, these smaller parts can be grouped into CNUs during check node updating and into VNUs during variable node updating. These smaller parts are called node update kernel units (NKU) and the assistant logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which yields a substantial reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize hardware overhead for parallel processing. The method applies not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC code decoder is described in the Verilog hardware description language and implemented on an Altera Stratix II EP2S130 field programmable gate array (FPGA). The implementation results show that over 20% of the logic core area can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. With a 100 MHz decoding clock, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
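The belief-propagation decoding that the above architecture parallelizes in hardware can be sketched in software. The following minimal min-sum variant in Python illustrates the roles of the check-node (CNU) and variable-node (VNU) updates; the toy parity-check matrix and channel LLRs are illustrative and unrelated to the (7493, 6096) code in the paper.

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=20):
    """Generic min-sum belief-propagation decoder for a binary LDPC code.

    H   : (m, n) parity-check matrix with 0/1 entries
    llr : length-n channel log-likelihood ratios (positive favours bit 0)
    """
    m, n = H.shape
    msg_v2c = H * llr                                   # variable-to-check messages, seeded with channel LLRs
    for _ in range(max_iters):
        # Check-node update (CNU): sign product and minimum magnitude of the other incoming messages.
        msg_c2v = np.zeros_like(msg_v2c)
        for i in range(m):
            cols = np.flatnonzero(H[i])
            vals = msg_v2c[i, cols]
            signs = np.sign(vals) + (vals == 0)         # treat a zero message as +1
            for k, j in enumerate(cols):
                others = np.delete(vals, k)
                msg_c2v[i, j] = np.prod(np.delete(signs, k)) * np.min(np.abs(others))
        # Variable-node update (VNU): channel LLR plus all incoming check messages.
        total = llr + msg_c2v.sum(axis=0)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):                    # all parity checks satisfied -> stop
            return hard
        msg_v2c = H * (total - msg_c2v)                 # exclude the message being replied to
    return hard

# Toy example, purely to show the message flow.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
noisy_llr = np.array([2.1, -0.4, 1.5, 3.0, 1.2, 0.8, 2.5])
print(min_sum_decode(H, noisy_llr))
```

The two inner updates correspond to the CNU and VNU stages that the proposed hardware shares and regroups (as NKUs) across the two halves of each iteration.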
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
Modeling the Effect of Fluid-Structure Interaction on the Impact Dynamics of Pressurized Tank Cars
DOT National Transportation Integrated Search
2009-11-13
This paper presents a computational framework that analyzes the effect of fluid-structure interaction (FSI) on the impact dynamics of pressurized commodity tank cars using the nonlinear dynamic finite element code ABAQUS/Explicit. There exist...
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
NASA Astrophysics Data System (ADS)
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in soil/structure interaction problems. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software package, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Tan, Qing; Evans, Meredydd
India is expected to add 40 billion m2 of new buildings by 2050. Buildings are responsible for one third of India's total energy consumption today, and building energy use is expected to continue growing, driven by rapid income and population growth. The implementation of the Energy Conservation Building Code (ECBC) is one of the measures to improve building energy efficiency. Using the Global Change Assessment Model, this study assesses growth in the buildings sector and the impacts of building energy policies in Gujarat, which would help the state adopt ECBC and expand building energy efficiency programs. Without building energy policies, building energy use in Gujarat would grow by 15 times in commercial buildings and 4 times in urban residential buildings between 2010 and 2050. ECBC improves energy efficiency in commercial buildings and could reduce building electricity use in Gujarat by 20% in 2050, compared to the no-policy scenario. Having energy codes for both commercial and residential buildings could result in an additional 10% savings in electricity use. To achieve these intended savings, it is critical to build capacity and institutions for robust code implementation.
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
NASA Astrophysics Data System (ADS)
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform the product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g., integer, floating-point) operations of general purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help project the full capacity and performance of an FPGA-based coprocessor.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce a method to check for errors in the input nodes of the decoder using the solutions of these equations. To further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. In addition, to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than some existing decoding algorithms with the same code length. PMID:25540813
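For background, the frozen nodes on which the error-checking equations above are built belong to the basic polar (Arikan) transform. The sketch below shows standard polar encoding in Python, with an illustrative (not reliability-ordered) frozen set; the paper's parallel checking-and-correcting decoder itself is not reproduced here.

```python
def polar_transform(u):
    """Arikan polar transform x = u F^{(x)n} over GF(2) (bit-reversal permutation omitted)."""
    if len(u) == 1:
        return u
    half = len(u) // 2
    upper = [a ^ b for a, b in zip(u[:half], u[half:])]   # combine the two halves
    return polar_transform(upper) + polar_transform(u[half:])

def polar_encode(info_bits, frozen_positions, n):
    """Place information bits in non-frozen positions, zeros in frozen ones, then transform."""
    u = [0] * n
    it = iter(info_bits)
    for i in range(n):
        if i not in frozen_positions:
            u[i] = next(it)
    return polar_transform(u)

# Toy N=8 example; the frozen set is illustrative, not a reliability-ordered construction.
print(polar_encode([1, 0, 1, 1], frozen_positions={0, 1, 2, 4}, n=8))
```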
Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing
Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong
2018-01-01
The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313
Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing.
Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong
2018-03-22
The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination.
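As a toy illustration of the caching side of the joint problem (not the paper's optimal or heuristic algorithms), a density-greedy placement can rank contents by expected QoE gain per unit of storage and fill the BS cache up to its budget; all field names and numbers below are hypothetical.

```python
def greedy_cache(contents, capacity):
    """Toy density-greedy cache placement for a single BS.

    contents: list of dicts with illustrative fields 'name', 'size', 'popularity', 'qoe_gain'
    capacity: cache storage budget in the same units as 'size'
    """
    ranked = sorted(contents,
                    key=lambda c: c['popularity'] * c['qoe_gain'] / c['size'],
                    reverse=True)                      # expected QoE gain per unit of storage
    cached, used = [], 0.0
    for c in ranked:
        if used + c['size'] <= capacity:               # fill until the storage budget is exhausted
            cached.append(c['name'])
            used += c['size']
    return cached

catalog = [
    {'name': 'video_a', 'size': 4.0, 'popularity': 0.50, 'qoe_gain': 1.0},
    {'name': 'video_b', 'size': 2.0, 'popularity': 0.30, 'qoe_gain': 0.8},
    {'name': 'map_c',   'size': 0.5, 'popularity': 0.15, 'qoe_gain': 0.6},
    {'name': 'doc_d',   'size': 0.2, 'popularity': 0.05, 'qoe_gain': 0.3},
]
print(greedy_cache(catalog, capacity=3.0))             # which contents fit under a 3-unit budget
```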
Scalable Computing of the Mesh Size Effect on Modeling Damage Mechanics in Woven Armor Composites
2008-12-01
... manner of a user-defined material subroutine to provide overall stress increments to the parallel LS-DYNA3D, a Lagrangian explicit code used in ... finite element code, as a user-defined material subroutine. The ability of this subroutine to model the effect of the progressions of a select number ... is added as a user-defined material subroutine to parallel LS-DYNA3D. The computations of the global mesh are handled by LS-DYNA3D and are spread ...
NASA Technical Reports Server (NTRS)
Hofmann, R.
1980-01-01
The STEALTH code system, which solves large strain, nonlinear continuum mechanics problems, was rigorously structured in both overall design and programming standards. The design is based on the theoretical elements of analysis while the programming standards attempt to establish a parallelism between physical theory, programming structure, and documentation. These features have made it easy to maintain, modify, and transport the codes. It has also guaranteed users a high level of quality control and quality assurance.
The journey from forensic to predictive materials science using density functional theory
Schultz, Peter A.
2017-09-12
Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.
The journey from forensic to predictive materials science using density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultz, Peter A.
Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.
Time Resolved Phonon Spectroscopy, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goett, Johnny; Zhu, Brian
The TRPS code was developed for the project "Time Resolved Phonon Spectroscopy". Routines contained in this piece of software were specially created to model phonon generation and tracking within materials that interact with ionizing radiation, and are particularly applicable to the modeling of cryogenic radiation detectors for dark matter and neutrino research. These routines were created to link seamlessly with the open source Geant4 framework for modeling radiation transport in matter, with the explicit intent of open-sourcing them for eventual integration into that code base.
NASA Technical Reports Server (NTRS)
Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.
1996-01-01
The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code is proposed for high-speed satellite communications. The RM subcode can be used either alone or as the inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth-efficient coded modulation system to achieve reliable, bandwidth-efficient data transmission. The progress made toward achieving the goal of implementing a decoder system based upon this code is summarized. The development of the integrated circuit prototype sub-trellis IC, particularly focusing on the design methodology, is addressed.
Pelicand, Julie; Fournier, Cécile; Le Rhun, Anne; Aujoulat, Isabelle
2015-06-01
This study examines how the term 'self-care' imported from health promotion has been used in the context of patient education interventions for paediatric patients with type 1 diabetes. Thirty articles over the last decade were analysed, using a qualitative method of thematic coding and categorizing. The term 'self-care' has been mainly used as a synonym for self-management of one's condition and treatment. Indeed, the activities performed by paediatric patients independently or with the help of their parents under the term 'self-care' fail to explicitly take into account the general health and life dimensions of self-care, as defined in health promotion. Although such dimensions are implicitly present when it comes to define the parents' and health-care providers' roles in supporting the children's emerging self-care capacity, their importance is acknowledged as a way of strengthening the children's and their families' capacity to respond to illness demands, rather than in relation to their general well-being. The discourse on self-care in the field of paediatric diabetes therefore appears to be oriented more towards disease and prevention, rather than health promotion. The psychosocial dimension of self-care should be particularly investigated, as young patients need to be supported in their efforts to gain autonomy not only in relation to the management of their condition, but in their lives in general. © 2013 Blackwell Publishing Ltd.
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic-scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic-scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
EXTINCTION DEBT OF PROTECTED AREAS IN DEVELOPING LANDSCAPES
To conserve biological diversity, protected-area networks must be based not only upon current species distributions but also the landscape's long-term capacity to support populations. We used spatially-explicit population models requiring detailed habitat and demographic data to ...
Kinetic turbulence simulations at extreme scale on leadership-class systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bei; Ethier, Stephane; Tang, William
2013-01-01
Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).
Game-theoretic approach to joint transmitter adaptation and power control in wireless systems.
Popescu, Dimitrie C; Rawat, Danda B; Popescu, Otilia; Saquib, Mohamad
2010-06-01
Game theory has emerged as a new mathematical tool in the analysis and design of wireless communication systems, being particularly useful in studying the interactions among adaptive transmitters that attempt to achieve specific objectives without cooperation. In this paper, we present a game-theoretic approach to the problem of joint transmitter adaptation and power control in wireless systems, where users' transmissions are subject to quality-of-service requirements specified in terms of target signal-to-interference-plus-noise ratios (SINRs) and nonideal vector channels between transmitters and receivers are explicitly considered. Our approach is based on application of separable games, which are a specific class of noncooperative games where the players' cost is a separable function of their strategic choices. We formally state a joint codeword and power adaptation game, which is separable, and we study its properties in terms of its subgames, namely, the codeword adaptation subgame and the power adaptation subgame. We investigate the necessary conditions for an optimal Nash equilibrium and show that this corresponds to an ensemble of user codewords and powers, which maximizes the sum capacity of the corresponding multiaccess vector channel model, and for which the specified target SINRs are achieved with minimum transmitted power.
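A minimal sketch of the kind of best-response dynamics used in the power adaptation subgame is the classic distributed target-SINR power-control iteration (Foschini-Miljanic style), in which each user scales its power by the ratio of its target SINR to its current SINR. The channel gains and targets below are illustrative, and the codeword adaptation subgame is not modeled.

```python
import numpy as np

def target_sinr_power_control(G, noise, gamma_target, iters=50):
    """Distributed power-control iteration: each user applies p_k <- p_k * (target SINR / current SINR).

    G[i, j] : channel gain from transmitter j to receiver i (illustrative values)
    """
    n = len(gamma_target)
    p = np.ones(n)                                            # initial transmit powers
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p + noise         # other users' power plus noise
        sinr = np.diag(G) * p / interference
        p = p * gamma_target / sinr                           # best response to current interference
    return p, sinr

G = np.array([[1.00, 0.10, 0.05],
              [0.08, 0.90, 0.07],
              [0.06, 0.12, 1.10]])
p, sinr = target_sinr_power_control(G, noise=0.01, gamma_target=np.array([2.0, 2.0, 2.0]))
print(np.round(p, 4), np.round(sinr, 3))                      # SINRs converge to the targets when feasible
```

When the target SINRs are feasible, this iteration converges to the minimum-power solution, which is the fixed point the power adaptation subgame is built around.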
Trajectory Specification for Automation of Terminal Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
2016-01-01
"Trajectory specification" is the explicit bounding and control of aircraft tra- jectories such that the position at each point in time is constrained to a precisely defined volume of space. The bounding space is defined by cross-track, along-track, and vertical tolerances relative to a reference trajectory that specifies position as a function of time. The tolerances are dynamic and will be based on the aircraft nav- igation capabilities and the current traffic situation. A standard language will be developed to represent these specifications and to communicate them by datalink. Assuming conformance, trajectory specification can guarantee safe separation for an arbitrary period of time even in the event of an air traffic control (ATC) sys- tem or datalink failure, hence it can help to achieve the high level of safety and reliability needed for ATC automation. As a more proactive form of ATC, it can also maximize airspace capacity and reduce the reliance on tactical backup systems during normal operation. It applies to both enroute airspace and the terminal area around airports, but this paper focuses on arrival spacing in the terminal area and presents ATC algorithms and software for achieving a specified delay of runway arrival time.
Do Explicit Number Names Accelerate Pre-Kindergarteners' Numeracy and Place Value Acquisition?
ERIC Educational Resources Information Center
Magargee, Suzanne D.; Beauford, Judith E.
2016-01-01
The purpose of this longitudinal study is to investigate whether an early childhood intervention using an explicit and transparent number naming system will have a lasting benefit to English and Spanish speaking children in their mathematics achievement related to number sense by accelerating their acquisition of concepts of numeracy and place…
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
Radiative transfer code SHARM for atmospheric and terrestrial applications
NASA Astrophysics Data System (ADS)
Lyapustin, A. I.
2005-12-01
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
Radiative transfer code SHARM for atmospheric and terrestrial applications.
Lyapustin, A I
2005-12-20
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Delta-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
Recent improvements of reactor physics codes in MHI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki
2015-12-31
This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
Implementing TCP/IP and a socket interface as a server in a message-passing operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hipp, E.; Wiltzius, D.
1990-03-01
The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.
Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.
2008-01-01
Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, a space radiation transport code currently used by NASA. It performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises as to what the contribution of these particles to space radiation is. The pion has a production kinetic energy threshold of about 280 MeV. The Galactic cosmic ray (GCR) spectrum, coincidentally, reaches its flux maximum in the hundreds of MeV range, corresponding to the pion production threshold. We present results from the Monte Carlo code MCNPX showing the effect of lepton and meson physics when produced and transported explicitly in a GCR environment.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
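A minimal sketch of the underlying idea, using the exponential of the local linearization in place of a polynomial interpolant (a first-order exponentially fitted step), is shown below; CREK1D's three-parameter fit and step-size control are more elaborate, and the test problem here is purely illustrative.

```python
import numpy as np

def exponential_euler(f, dfdy, y0, t0, t1, h):
    """First-order exponentially fitted step for a scalar stiff ODE y' = f(t, y).

    Each step linearizes about the current state (lam = df/dy) and advances with the exact
    exponential of that linear part instead of a polynomial interpolant, keeping the step
    A-stable even for very large |lam| * h.
    """
    t, y = t0, y0
    while t < t1 - 1e-12:
        lam = dfdy(t, y)                                  # local stiffness (scalar Jacobian)
        phi = np.expm1(lam * h) / (lam * h) if lam != 0.0 else 1.0
        y = y + h * phi * f(t, y)                         # exact for y' = lam * (y - const)
        t += h
    return y

# Stiff test problem y' = -1000*(y - cos(t)); a forward-Euler step of this size would blow up.
f = lambda t, y: -1000.0 * (y - np.cos(t))
dfdy = lambda t, y: -1000.0
print(exponential_euler(f, dfdy, y0=0.0, t0=0.0, t1=1.0, h=0.05))
```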
A comparison of the effects of a secondary task and lorazepam on cognitive performance.
File, S E
1992-01-01
In order to test whether the lorazepam-induced impairments in a variety of cognitive tasks were similar to those of divided attention, the effects of lorazepam (2.5 mg) in healthy volunteers were compared with those of a condition requiring subjects to perform an additional task (detecting silences superimposed onto classical music). Neither treatment impaired implicit memory or judgements of frequency. Both treatments impaired performance in tests of speed, with lorazepam having the greatest effect on number cancellation and the additional task having the greatest effect on simple reaction time. Both treatments impaired performance in a coding task, in a test of explicit episodic memory and in judgements of recency (indicating impaired coding of contextual information). Lorazepam significantly reduced performance in a word completion task, but this was unimpaired in the group performing the additional task. In general, the pattern of results suggests that there are similarities between the effects of divided attention and lorazepam treatment, and that lorazepam-induced cognitive impairments are not restricted to explicit tests of episodic memory.
A multiblock multigrid three-dimensional Euler equation solver
NASA Technical Reports Server (NTRS)
Cannizzaro, Frank E.; Elmiligui, Alaa; Melson, N. Duane; Vonlavante, E.
1990-01-01
Current aerodynamic designs are often geometrically quite complex. Flexible computational tools are needed for the analysis of a wide range of configurations with both internal and external flows. In the past, geometrically dissimilar configurations required different analysis codes with different grid topologies in each. This duplication of codes can be avoided with the use of a general multiblock formulation which can handle any grid topology. Rather than hard-wiring the grid topology into the program, it is instead dictated by input to the program. In this work, the compressible Euler equations, written in a body-fitted finite-volume formulation, are solved using a pseudo-time-marching approach. Two upwind methods (van Leer's flux-vector splitting and Roe's flux differencing) were investigated. Two types of explicit solvers (a two-step predictor-corrector and a modified multistage Runge-Kutta) were used with multigrid acceleration to enhance convergence. A multiblock strategy is used to allow greater geometric flexibility. A report on simple explicit upwind schemes for solving compressible flows is included.
NASA Technical Reports Server (NTRS)
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.
Light Steel-Timber Frame with Composite and Plaster Bracing Panels
Scotta, Roberto; Trutalli, Davide; Fiorin, Laura; Pozza, Luca; Marchi, Luca; De Stefani, Lorenzo
2015-01-01
The proposed light-frame structure comprises steel columns for vertical loads and an innovative bracing system to efficiently resist seismic actions. This seismic force resisting system consists of a light timber frame braced with an Oriented Strand Board (OSB) sheet and an external technoprene plaster-infilled slab. Steel brackets are used as foundation and floor connections. Experimental cyclic-loading tests were conducted to study the seismic response of two shear-wall specimens. A numerical model was calibrated on experimental results and the dynamic non-linear behavior of a case-study building was assessed. Numerical results were then used to estimate the proper behavior factor value, according to European seismic codes. The obtained results demonstrate that this innovative system is suitable for use in seismic-prone areas thanks to the high ductility and dissipative capacity achieved by the bracing system. This favorable behavior is mainly due to the fasteners and materials used and to the correct application of the capacity design approach. PMID:28793642
Light Steel-Timber Frame with Composite and Plaster Bracing Panels.
Scotta, Roberto; Trutalli, Davide; Fiorin, Laura; Pozza, Luca; Marchi, Luca; De Stefani, Lorenzo
2015-11-03
The proposed light-frame structure comprises steel columns for vertical loads and an innovative bracing system to efficiently resist seismic actions. This seismic force resisting system consists of a light timber frame braced with an Oriented Strand Board (OSB) sheet and an external technoprene plaster-infilled slab. Steel brackets are used as foundation and floor connections. Experimental cyclic-loading tests were conducted to study the seismic response of two shear-wall specimens. A numerical model was calibrated on experimental results and the dynamic non-linear behavior of a case-study building was assessed. Numerical results were then used to estimate the proper behavior factor value, according to European seismic codes. The obtained results demonstrate that this innovative system is suitable for use in seismic-prone areas thanks to the high ductility and dissipative capacity achieved by the bracing system. This favorable behavior is mainly due to the fasteners and materials used and to the correct application of the capacity design approach.
Safe working hours--doctors in training a best practice issue.
Lewis, Andrew
2002-01-01
In 1995, the Australian Medical Association launched its Safe Working Hours campaign. By 1998, this had been developed into a National Code of Conduct that continues to resonate in the Australian public health system. However, particularly in respect of Doctors in Training (DITs), who continue to work long hours, there are levels of resistance to proposals that seek to re-organise work or change prevailing professional and cultural expectations. Long working hours have substantial impacts on a DIT's capacity to consistently deliver high quality patient care, dilute the effectiveness of their training regime and have negative consequences for their health, social life and family responsibilities. While public hospitals often maintain the view that minimal budget flexibility restricts their capacity to effect change in a positive way, in fact productivity and efficiency gains can be achieved by reducing working hours. Further, the medical profession needs to consider whether long hours provide an optimal environment for quality learning and performance.
Plaie, Thierry; Thomas, Delphine
2008-06-01
Our study specifies the contributions of image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The variance analysis indicates a greater decrease with age in the concreteness effect. The major contribution of our study rests on the finding that the age-related decline of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
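The adaptive switching idea can be illustrated with a toy one-dimensional spring-slider model: load quasi-statically until a Coulomb failure criterion is met, then release the stress drop as a "dynamic" slip event. This is only a conceptual sketch with made-up parameters, not Defmod's hybrid solver.

```python
def stick_slip(mu_s=0.6, mu_d=0.4, k=1.0, sigma_n=1.0, v_plate=1.0e-3, dt_qs=1.0, steps=20000):
    """Toy 1-D spring-slider producing episodic rupture under quasi-static loading.

    mu_s, mu_d : static and dynamic friction coefficients (illustrative values)
    k          : elastic stiffness of the loading spring
    sigma_n    : normal stress on the fault
    v_plate    : quasi-static loading rate
    """
    tau, slip, events = 0.0, 0.0, []
    for n in range(steps):
        tau += k * v_plate * dt_qs                  # quasi-static loading phase (implicit regime)
        if tau >= mu_s * sigma_n:                   # Coulomb failure criterion triggers the dynamic phase
            d_slip = (tau - mu_d * sigma_n) / k     # slip needed to relax stress to the dynamic level
            slip += d_slip
            tau = mu_d * sigma_n
            events.append((n * dt_qs, d_slip))
    return events

for t, d in stick_slip()[:5]:
    print(f"rupture at t={t:8.1f}, slip={d:.3f}")   # regular stick-slip cycles emerge
```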
NASA Astrophysics Data System (ADS)
Kalnins, L. M.
2015-12-01
Over the last year we implemented a complete restructuring of a second-year Matlab-based course on numerical modelling of Earth processes, with changes aimed at 1) strengthening students' independence as programmers, 2) addressing student concerns about support in developing coding skills, and 3) improving key modelling skills such as choosing boundary conditions. To address these aims, we designed a mastery-based approach where students progress through a series of small programming projects at their own pace. As part of this, all lectures are `flipped' into short videos, allowing all contact hours to be spent on programming. The projects themselves are structured using a `bottlenecks to learning' approach, explicitly separating out the steps of learning new commands and code structures, creating a conceptual and mathematical model of the problem, and developing more generic programming skills such as debugging, before asking the students to combine all of the above to build a numerical model of an Earth Sciences problem. Compared with the previous, traditionally taught cohort, student questionnaires show a strong improvement in overall satisfaction. Free-text responses show a focus on learning for understanding, and that students particularly valued the encouragement to slow down and work towards understanding when they encountered a difficult topic, rather than being pressured by a set timetable to move on. Quantitatively, exam performance improved on key conceptual questions, such as boundary conditions and discretisation, and overall achievement also rose, with 25% of students achieving an `A+' standard of work. Many of the final projects also demonstrated programming and modelling skills that had not been directly taught, ranging from use of new commands to extension of techniques taught in 1D to the 2D case: strong confirmation of the independent skills we aimed to foster with this new approach.
A Comparative Study on Safe Pile Capacity as Shown in Table 1 of IS 2911 (Part III): 1980
NASA Astrophysics Data System (ADS)
Pakrashi, Somdev
2017-06-01
The code of practice for design and construction of under-reamed pile foundations, IS 2911 (Part III)-1980, presents a table of safe loads for bored cast in situ under-reamed piles in sandy and clayey soils, including black cotton soils, with pile stem diameters ranging from 20 to 50 cm and an effective length of 3.50 m. A comparative study was taken up by working out the safe pile capacity for one 400 mm dia., 3.5 m long bored cast in situ under-reamed pile based on subsoil properties obtained from soil investigation work, as well as subsoil properties of different magnitudes for clayey and sandy soils, and comparing the results with the safe pile capacity shown in Table 1 of that IS code. The study reveals that the safe pile capacity computed from subsoil properties, barring a very few cases, differs considerably from that shown in the aforesaid code, and calls for more research work and study to find a conclusive explanation of this probable anomaly.
Cook, Thomas D; Herman, Melissa R; Phillips, Meredith; Settersten, Richard A
2002-01-01
This study assessed some ways in which schools, neighborhoods, nuclear families, and friendship groups jointly contribute to positive change during early adolescence. For each context, existing theory was used to develop a multiattribute index that should promote successful development. Descriptive analyses showed that the four resulting context indices were only modestly intercorrelated at the individual student level (N = 12,398), but clustered more tightly at the school and neighborhood levels (N = 23 and 151 respectively). Only for aggregated units did knowing the developmental capacity of any one context strongly predict the corresponding capacity of the other contexts. Analyses also revealed that each context facilitated individual change in a success index that tapped into student academic performance, mental health, and social behavior. However, individual context effects were only modest in size over the 19 months studied and did not vary much by context. The joint influence of all four contexts was cumulatively large, however, and because it was generally additive in form, no constellation of contexts was identified whose total effect reliably surpassed the sum of its individual context main effects. These results suggest that achieving significant population changes in multidimensional student growth during early adolescence most likely requires both theory and interventions that are explicitly pan-contextual.
Development of a cryogenic mixed fluid J-T cooling computer code, 'JTMIX'
NASA Technical Reports Server (NTRS)
Jones, Jack A.
1991-01-01
An initial study was performed to analyze and predict the temperatures and cooling capacities when mixtures of fluids are used in Joule-Thomson coolers and in heat pipes. A computer code, JTMIX, was developed for mixed-gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer code DDMIX, it has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal-boiling-point depressions to as low as 60 K when neon is added.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lougovski, P.; Uskov, D. B.
Entanglement can effectively increase communication channel capacity, as evidenced by dense coding, which predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states, and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.
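For reference, the 0.585-bit figure follows from the standard linear-optics limitation that only two of the four Bell states can be identified unambiguously, so at best three message symbols are distinguishable. A back-of-envelope version of this bound, assuming the conventional grouping of the two indistinguishable states into a single outcome, is:

```latex
% Hedged back-of-envelope for the 0.585-bit figure quoted above, assuming that only
% |psi+> and |psi-> are identified unambiguously with linear optics, while |phi+> and
% |phi-> are lumped into a single third measurement outcome.
\[
  C_{\text{ideal dense coding}} = \log_2 4 = 2~\text{bits}, \qquad
  C_{\text{linear optics}} = \log_2 3 \approx 1.585~\text{bits},
\]
\[
  \Delta C = C_{\text{linear optics}} - 1~\text{bit} \approx 0.585~\text{bits over the entanglement-free rate.}
\]
```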
A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code
NASA Technical Reports Server (NTRS)
Chen, H. C.; Su, T. Y.; Kao, T. J.
1991-01-01
This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.
Program optimizations: The interplay between power, performance, and energy
Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...
2016-05-16
Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using LULESH, an explicit hydrodynamics proxy application from the U.S. Department of Energy, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
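As a small illustration of one of the optimizations studied, loop fusion merges two sweeps over the same arrays into one, cutting memory traffic and improving locality. The sketch below uses hypothetical field names (p, e, q) and is not taken from LULESH.

```python
import numpy as np

def update_unfused(p, e, q, dt):
    """Two separate sweeps over the mesh arrays: each pass re-reads the data from memory."""
    for i in range(len(p)):
        p[i] = p[i] + dt * q[i]
    for i in range(len(e)):
        e[i] = e[i] + dt * p[i] * q[i]
    return p, e

def update_fused(p, e, q, dt):
    """Loop-fused version: one sweep touches p, e, q once per element, reducing memory traffic.
    The results are identical because the e-update only needs the already-updated p[i]."""
    for i in range(len(p)):
        p[i] = p[i] + dt * q[i]
        e[i] = e[i] + dt * p[i] * q[i]
    return p, e

n = 100_000
p, e, q = np.ones(n), np.zeros(n), np.full(n, 0.5)
update_fused(p, e, q, dt=0.01)   # same result as the unfused version, one pass instead of two
```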
Hiding message into DNA sequence through DNA coding and chaotic maps.
Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman
2014-09-01
The paper proposes an improved reversible substitution method to hide data in a deoxyribonucleic acid (DNA) sequence. Four measures are taken to enhance robustness and enlarge the hiding capacity: encoding the secret message by DNA coding, encrypting it with a pseudo-random sequence, generating the relative hiding locations with a piecewise linear chaotic map, and embedding the encoded and encrypted message into a randomly selected DNA sequence using the complementary rule. The key space and the hiding capacity are analyzed. Experimental results indicate that the proposed method performs better than competing methods with respect to robustness and capacity.
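A toy end-to-end version of the hiding pipeline described above can be sketched as follows; the bit-to-base mapping, keystream, chaotic-map parameters, and insertion rule are illustrative stand-ins rather than the paper's exact scheme.

```python
import random

BITS_TO_BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
COMPLEMENT   = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def pwlcm(x, p=0.4):
    """Piecewise linear chaotic map on (0, 1), used here to generate embedding positions."""
    if x >= 0.5:
        x = 1.0 - x
    return x / p if x < p else (x - p) / (0.5 - p)

def hide(message_bits, carrier, key_seed=0.37, prng_seed=1234):
    """Toy hiding pipeline: DNA-code the bits, encrypt with a pseudo-random keystream,
    pick positions from the chaotic map, and insert complemented bases into the carrier."""
    rng = random.Random(prng_seed)
    keystream = [rng.randint(0, 1) for _ in message_bits]
    cipher = [b ^ k for b, k in zip(message_bits, keystream)]               # encrypt
    bases = [BITS_TO_BASE[f"{cipher[i]}{cipher[i + 1]}"]                    # 2 bits -> 1 base
             for i in range(0, len(cipher), 2)]
    dna, x, positions = list(carrier), key_seed, []
    for base in bases:
        x = pwlcm(x)                                                        # chaotic position
        pos = int(x * len(dna))
        positions.append(pos)
        dna.insert(pos, COMPLEMENT[base])                                   # complementary rule
    return ''.join(dna), positions

stego, locs = hide([1, 0, 1, 1, 0, 0, 1, 1], carrier="ATGCCGTAAGCTTACG")
print(stego, locs)   # positions (and the key seeds) are what a receiver would need to extract
```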
Decision or no decision: how do patient-physician interactions end and what matters?
Tai-Seale, Ming; Bramson, Rachel; Bao, Xiaoming
2007-03-01
A clearly stated clinical decision can induce cognitive closure in patients and is an important investment in the end of patient-physician communication. Little is known about how often explicit decisions are made in primary care visits. Our aim was to use an innovative videotape analysis approach to assess physicians' propensity to state decisions explicitly, and to examine the factors influencing decision patterns. We coded topics discussed in 395 videotapes of primary care visits, noting the number of instances and the length of discussions on each topic, and how discussions ended. A regression analysis tested the relationship between explicit decisions and visit factors such as the nature of topics under discussion, instances of discussion, the amount of time the patient spoke, and competing demands from other topics. About 77% of topics ended with explicit decisions. Patients spoke for an average of 58 seconds total per topic. Patients spoke more during topics that ended with an explicit decision (67 seconds), compared with 36 seconds otherwise. The number of instances of a topic was associated with higher odds of an explicit decision (OR = 1.73, p < 0.01). Increases in the number of topics discussed in visits (OR = 0.95, p < .05) and topics on lifestyle and habits (OR = 0.60, p < .01) were associated with lower odds of explicit decisions. Although discussions often ended with explicit decisions, there were variations related to the content and dynamics of the interactions. We recommend strengthening patients' voice and developing clinical tools, e.g., an "exit prescription," to improve decision making.
Lougovski, P.; Uskov, D. B.
2015-08-04
Entanglement can effectively increase communication channel capacity, as evidenced by dense coding, which predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states, and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.
ERIC Educational Resources Information Center
Amtmann, Dagmar; Abbott, Robert D.; Berninger, Virginia W.
2008-01-01
After explicit spelling instruction, low achieving second grade spellers increased the number of correctly spelled words during composing but differed in response trajectories. Class 1 (low initial and slow growth) had the lowest initial performance and improved at a relatively slow rate. Class 2 (high initial and fast growth) started higher than…
Construction of optimal resources for concatenated quantum protocols
NASA Astrophysics Data System (ADS)
Pirker, A.; Wallnöfer, J.; Briegel, H. J.; Dür, W.
2017-06-01
We consider the explicit construction of resource states for measurement-based quantum information processing. We concentrate on special-purpose resource states that are capable of performing a certain operation or task, where we consider unitary Clifford circuits as well as non-trace-preserving completely positive maps, more specifically probabilistic operations including Clifford operations and Pauli measurements. We concentrate on 1→m and m→1 operations, i.e., operations that map one input qubit to m output qubits or vice versa. Examples of such operations include encoding and decoding in quantum error correction, entanglement purification, or entanglement swapping. We provide a general framework to construct optimal resource states for complex tasks that are combinations of these elementary building blocks. All resource states only contain input and output qubits, and are hence of minimal size. We obtain a stabilizer description of the resulting resource states, which we also translate into a circuit pattern to experimentally generate these states. In particular, we derive recurrence relations at the level of stabilizers as a key analytical tool to generate explicit (graph) descriptions of families of resource states. This allows us to explicitly construct resource states for encoding, decoding, and syndrome readout for concatenated quantum error correction codes, code switchers, multiple rounds of entanglement purification, quantum repeaters, and combinations thereof (such as resource states for entanglement purification of encoded states).
Explicit finite-difference simulation of optical integrated devices on massive parallel computers.
Sterkenburgh, T; Michels, R M; Dress, P; Franke, H
1997-02-20
An explicit method for the numerical simulation of optical integrated circuits by means of the finite-difference time-domain (FDTD) method is presented. This method, based on an explicit solution of Maxwell's equations, is well established in microwave technology. Although the simulation areas are small, we verified the behavior of three interesting problems, especially nonparaxial problems, with typical aspects of integrated optical devices. Because numerical losses are within acceptable limits, we suggest the use of the FDTD method to achieve promising quantitative simulation results.
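As a reminder of how such an explicit FDTD update looks, here is a minimal 1-D Yee-scheme sketch in normalized units; the grid size, Courant number, and source are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch: explicit leapfrog update of Maxwell's equations.
nx, nt = 400, 1000
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                 # explicit scheme: dt limited by the Courant condition
ez = np.zeros(nx)                 # electric field on integer grid points
hy = np.zeros(nx - 1)             # magnetic field, staggered half a cell

for n in range(nt):
    hy += (dt / dx) * (ez[1:] - ez[:-1])            # update H from the curl of E
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])      # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```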
Effect of Color-Coded Notation on Music Achievement of Elementary Instrumental Students.
ERIC Educational Resources Information Center
Rogers, George L.
1991-01-01
Presents results of a study of color-coded notation to teach music reading to instrumental students. Finds no clear evidence that color-coded notation enhances achievement on performing by memory, sight-reading, or note naming. Suggests that some students depended on the color-coding and were unable to read uncolored notation well. (DK)
A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics
NASA Astrophysics Data System (ADS)
Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.
2016-02-01
The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
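A minimal sketch of the kind of smooth, well-posed initial condition advocated above (tanh shear layers, a small single-mode perturbation, and explicit diffusion); the parameter values are illustrative, not those used in the paper.

```python
import numpy as np

# Smooth Kelvin-Helmholtz initial condition: double tanh shear layer plus a
# small sinusoidal vertical-velocity perturbation (illustrative parameters).
ny, Ly, a = 512, 1.0, 0.05           # resolution, domain height, shear-layer width
y = np.linspace(0.0, Ly, ny)

U, amp = 1.0, 0.01
# Horizontal velocity: +U in the middle strip, -U outside, smooth transitions.
vx = U * (np.tanh((y - 0.25 * Ly) / a) - np.tanh((y - 0.75 * Ly) / a) - 1.0)

def vy_perturbation(x, y):
    """Single-mode vertical velocity perturbation localized at both layers."""
    return amp * np.sin(2 * np.pi * x) * (
        np.exp(-((y - 0.25) / a) ** 2) + np.exp(-((y - 0.75) / a) ** 2))

nu = 1e-4                            # explicit viscosity so the problem has a converged solution
print(vx[:3], vy_perturbation(0.1, 0.25), nu)
```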
Realistic mass ratio magnetic reconnection simulations with the Multi Level Multi Domain method
NASA Astrophysics Data System (ADS)
Innocenti, Maria Elena; Beck, Arnaud; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
Space physics simulations with the ambition of realistically representing both ion and electron dynamics have to be able to cope with the huge scale separation between the electron and ion parameters while respecting the stability constraints of the numerical method of choice. Explicit Particle In Cell (PIC) simulations with realistic mass ratio are limited in the size of the problems they can tackle by the restrictive stability constraints of the explicit method (Birdsall and Langdon, 2004). Many alternatives are available to reduce such computation costs. Reduced mass ratios can be used, with the caveats highlighted in Bret and Dieckmann (2010). Fully implicit (Chen et al., 2011a; Markidis and Lapenta, 2011) or semi implicit (Vu and Brackbill, 1992; Lapenta et al., 2006; Cohen et al., 1989) methods can bypass the strict stability constraints of explicit PIC codes. Adaptive Mesh Refinement (AMR) techniques (Vay et al., 2004; Fujimoto and Sydora, 2008) can be employed to change locally the simulation resolution. We focus here on the Multi Level Multi Domain (MLMD) method introduced in Innocenti et al. (2013) and Beck et al. (2013). The method combines the advantages of implicit algorithms and adaptivity. Two levels are fully simulated with fields and particles. The so called "refined level" simulates a fraction of the "coarse level" with a resolution RF times bigger than the coarse level resolution, where RF is the Refinement Factor between the levels. This method is particularly suitable for magnetic reconnection simulations (Biskamp, 2005), where the characteristic Ion and Electron Diffusion Regions (IDR and EDR) develop at the ion and electron scales respectively (Daughton et al., 2006). In Innocenti et al. (2013) we showed that basic wave and instability processes are correctly reproduced by MLMD simulations. In Beck et al. (2013) we applied the technique to plasma expansion and magnetic reconnection problems. We showed that notable computational time savings can be achieved. More importantly, we were able to correctly reproduce EDR features, such as the inversion layer of the electric field observed in Chen et al. (2011b), with a MLMD simulation at a significantly lower cost. Here, we present recent results on EDR dynamics achieved with the MLMD method and a realistic mass ratio.
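For context, the stability constraints that make explicit PIC expensive are quick to evaluate; the sketch below computes the electron plasma frequency, Debye length, and the implied time-step limit for illustrative (solar-wind-like) parameters that are assumptions, not values from the paper.

```python
import math

# Explicit-PIC stability check: omega_pe * dt <~ 2 and dx of order the Debye length.
def plasma_frequency(n_e, q=1.602e-19, m_e=9.109e-31, eps0=8.854e-12):
    """Electron plasma frequency in rad/s for density n_e in m^-3."""
    return math.sqrt(n_e * q * q / (eps0 * m_e))

def debye_length(T_e_eV, n_e, q=1.602e-19, eps0=8.854e-12):
    """Electron Debye length in m for temperature in eV and density in m^-3."""
    return math.sqrt(eps0 * T_e_eV / (n_e * q))

n_e, T_e = 1.0e7 * 1e6, 10.0        # illustrative density (m^-3) and temperature (eV)
w_pe = plasma_frequency(n_e)
l_De = debye_length(T_e, n_e)
dt_max = 2.0 / w_pe                  # explicit stability limit on the time step
print(f"omega_pe = {w_pe:.3e} rad/s, lambda_De = {l_De:.3e} m, dt_max ~ {dt_max:.3e} s")
```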
Geospatial Analysis of Near-Term Technical Potential of BECCS in the U.S.
NASA Astrophysics Data System (ADS)
Baik, E.; Sanchez, D.; Turner, P. A.; Mach, K. J.; Field, C. B.; Benson, S. M.
2017-12-01
Atmospheric carbon dioxide (CO2) removal using bioenergy with carbon capture and storage (BECCS) is crucial for achieving stringent climate change mitigation targets. To date, previous work discussing the feasibility of BECCS has largely focused on land availability and bioenergy potential, while CCS components - including capacity, injectivity, and location of potential storage sites - have not been thoroughly considered in the context of BECCS. A high-resolution geospatial analysis of both biomass production and potential geologic storage sites is conducted to consider the near-term deployment potential of BECCS in the U.S. The analysis quantifies the overlap between the biomass resource and CO2 storage locations within the context of storage capacity and injectivity. This analysis leverages county-level biomass production data from the U.S. Department of Energy's Billion Ton Report alongside potential CO2 geologic storage sites as provided by the USGS Assessment of Geologic Carbon Dioxide Storage Resources. Various types of lignocellulosic biomass (agricultural residues, dedicated energy crops, and woody biomass) result in a potential 370-400 Mt CO2 /yr of negative emissions in 2020. Of that CO2, only 30-31% of the produced biomass (110-120 Mt CO2 /yr) is co-located with a potential storage site. While large potential exists, there would need to be more than 250 50-MW biomass power plants fitted with CCS to capture all the co-located CO2 capacity in 2020. Neither absolute injectivity nor absolute storage capacity is likely to limit BECCS, but the results show regional capacity and injectivity constraints in the U.S. that had not been identified in previous BECCS analysis studies. The state of Illinois, the Gulf region, and western North Dakota emerge as the best locations for near-term deployment of BECCS with abundant biomass, sufficient storage capacity and injectivity, and the co-location of the two resources. Future studies assessing BECCS potential should employ higher-resolution spatial datasets to identify near-term deployment opportunities, explicitly including the availability of co-located storage, regional capacity limitations, and integration of electricity produced with BECCS into local electricity grids.
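A back-of-envelope check of the "more than 250 plants" figure, using an assumed capacity factor and biomass CO2 intensity that are illustrative only and not taken from the study:

```python
# Rough consistency check of the plant-count figure (assumptions: 85% capacity
# factor and ~1.2 t CO2 per MWh of biomass electricity; both are illustrative).
plant_mw, capacity_factor, t_co2_per_mwh = 50.0, 0.85, 1.2
mwh_per_year = plant_mw * capacity_factor * 8760.0
mt_co2_per_plant = mwh_per_year * t_co2_per_mwh / 1e6     # Mt CO2 captured per plant-year

colocated_mt = 115.0                                      # midpoint of the 110-120 Mt range
plants_needed = colocated_mt / mt_co2_per_plant
print(f"~{mt_co2_per_plant:.2f} Mt/plant/yr -> about {plants_needed:.0f} plants")
```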
Processing of Visual--Action Codes by Deaf and Hearing Children: Coding Orientation or "M"-Capacity?
ERIC Educational Resources Information Center
Todman, John; Cowdy, Natascha
1993-01-01
Results from a study in which 25 deaf children and 25 hearing children completed a vocabulary test and a compound stimulus visual information task support the hypothesis that performance on cognitive tasks is dependent on compatibility of task demands with a coding orientation. (SLD)
A Mixed Multi-Field Finite Element Formulation for Thermopiezoelectric Composite Shells
NASA Technical Reports Server (NTRS)
Lee, Ho-Jun; Saravanos, Dimitris A.
1999-01-01
Analytical formulations are presented which account for the coupled mechanical, electrical, and thermal response of piezoelectric composite shell structures. A new mixed multi-field laminate theory is developed which combines "single layer" assumptions for the displacements along with layerwise fields for the electric potential and temperature. This laminate theory is formulated using curvilinear coordinates and is based on the principles of linear thermopiezoelectricity. The mechanics have the inherent capability to explicitly model both the active and sensory responses of piezoelectric composite shells in thermal environment. Finite element equations are derived and implemented for an eight-noded shell element. Numerical studies are conducted to investigate both the sensory and active responses of piezoelectric composite shell structures subjected to thermal loads. Results for a cantilevered plate with an attached piezoelectric layer are compared with corresponding results from a commercial finite element code and a previously developed program. Additional studies are conducted on a cylindrical shell with an attached piezoelectric layer to demonstrate capabilities to achieve thermal shape control on curved piezoelectric structures.
Two-Stream Transformer Networks for Video-based Face Alignment.
Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a two-stream transformer network (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches, which cannot explicitly model the temporal dependency in videos, and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency across frames. To achieve this, we develop a two-stream architecture, which decomposes video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on benchmarking video-based face alignment datasets show very competitive performance of our method in comparison to the state of the art.
Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
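For reference, the quoted speedups translate directly into parallel efficiencies on the four Cray-2 processors:

```python
# Parallel efficiency implied by the reported speedups (no additional assumptions).
def efficiency(speedup, n_proc):
    return speedup / n_proc

for label, s in [("best task group", 3.54), ("full 3-D solver", 2.67)]:
    print(f"{label}: speedup {s:.2f} on 4 CPUs -> efficiency {efficiency(s, 4):.0%}")
```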
Dysmorphometrics: the modelling of morphological abnormalities.
Claes, Peter; Daniels, Katleen; Walters, Mark; Clement, John; Vandermeulen, Dirk; Suetens, Paul
2012-02-06
The study of typical morphological variations using quantitative, morphometric descriptors has always interested biologists in general. However, unusual examples of form, such as abnormalities are often encountered in biomedical sciences. Despite the long history of morphometrics, the means to identify and quantify such unusual form differences remains limited. A theoretical concept, called dysmorphometrics, is introduced augmenting current geometric morphometrics with a focus on identifying and modelling form abnormalities. Dysmorphometrics applies the paradigm of detecting form differences as outliers compared to an appropriate norm. To achieve this, the likelihood formulation of landmark superimpositions is extended with outlier processes explicitly introducing a latent variable coding for abnormalities. A tractable solution to this augmented superimposition problem is obtained using Expectation-Maximization. The topography of detected abnormalities is encoded in a dysmorphogram. We demonstrate the use of dysmorphometrics to measure abrupt changes in time, asymmetry and discordancy in a set of human faces presenting with facial abnormalities. The results clearly illustrate the unique power to reveal unusual form differences given only normative data with clear applications in both biomedical practice & research.
Watt, S; Shores, E A; Kinoshita, S
1999-07-01
Implicit and explicit memory were examined in individuals with severe traumatic brain injury (TBI) under conditions of full and divided attention. Participants included 12 individuals with severe TBI and 12 matched controls. In Experiment 1, participants carried out an implicit test of word-stem completion and an explicit test of cued recall. Results demonstrated that TBI participants exhibited impaired explicit memory but preserved implicit memory. In Experiment 2, a significant reduction in the explicit memory performance of both TBI and control participants, as well as a significant decrease in the implicit memory performance of TBI participants, was achieved by reducing attentional resources at encoding. These results indicated that performance on an implicit task of word-stem completion may require the availability of additional attentional resources that are not preserved after severe TBI.
Asymmetric information capacities of reciprocal pairs of quantum channels
NASA Astrophysics Data System (ADS)
Rosati, Matteo; Giovannetti, Vittorio
2018-05-01
Reciprocal pairs of quantum channels are defined as completely positive transformations which admit a rigid, distance-preserving, yet not completely positive transformation that allows one to reproduce the outcome of one from the corresponding outcome of the other. From a classical perspective these transmission lines should exhibit the same communication efficiency. This is no longer the case in the quantum setting: explicit asymmetric behaviors are reported studying the classical communication capacities of reciprocal pairs of depolarizing and Weyl-covariant channels.
What Do Beginning Special Educators Need to Know about Intensive Reading Interventions?
ERIC Educational Resources Information Center
Coyne, Michael D.; Koriakin, Taylor A.
2017-01-01
Evidence based reading instruction and intervention are essential for students with disabilities. The authors recommend that elementary special education teachers emphasize both code-based and meaning-based skills as part of delivering intensive reading interventions, including providing explicit and systematic decoding and vocabulary instruction.…
Promoting Election-Related Policy Practice among Social Work Students
ERIC Educational Resources Information Center
Pritzker, Suzanne; Burwell, Christianna
2016-01-01
Political involvement is an integral component of the social work profession, yet there is no explicit reference to social work participation in election-related activities in either the National Association of Social Workers Code of Ethics or the Council on Social Work Education Educational Policy and Accreditation Standards. Social work…
False Belief and Language Comprehension in Cantonese-Speaking Children
ERIC Educational Resources Information Center
Cheung, Him
2006-01-01
The current research compared two accounts of the relation between language and false belief in children, namely that (a) language is generally related to false belief because both require secondary representation in a social-interactional context and that (b) specific language structures that explicitly code meta representation contribute…
How Does Ethics Institutionalization Reduce Academic Cheating?
ERIC Educational Resources Information Center
Popoola, Ifeoluwa; Garner, Bart; Ammeter, Anthony; Krey, Nina; Beu Ammeter, Danielle; Schafer, Stuart
2017-01-01
Extant research on academic cheating primarily focuses on the impact of honor codes on academic cheating. However, the influence of ethics institutionalization is curiously missing in past research. The authors developed and validated a structural equations model in the R programming language to examine the impact of formal (explicit) and informal…
A Comparison of Schools: Teacher Knowledge of Explicit Code-Based Reading Instruction
ERIC Educational Resources Information Center
Cohen, Rebecca A.; Mather, Nancy; Schneider, Deborah A.; White, Jennifer M.
2017-01-01
One-hundred-fourteen kindergarten through third-grade teachers from seven different schools were surveyed using "The Survey of Preparedness and Knowledge of Language Structure Related to Teaching Reading to Struggling Students." The purpose was to compare their definitions and application knowledge of language structure, phonics, and…
Educational Research Ethics: A Discussion Paper.
ERIC Educational Resources Information Center
Lafleur, Clay
Educational researchers should be aware of the general consensus of the research community as to what is proper and improper in the conduct of educational research, since there is as yet no explicit code of ethics. Accordingly, this paper examines existing ethical standards to initiate discussion of issues specific to the educational research…
Partial Picture Effects on Children's Memory for Sentences Containing Implicit Information.
ERIC Educational Resources Information Center
Miller, Gloria E.; Pressley, Michael
1987-01-01
Two experiments were conducted examining the effects of partial picture adjuncts on young children's coding of information implied in sentences. Developmental differences were found in whether (l) partial pictures facilitated inferencing and (2) pictures containing information not explicitly stated in sentences promoted cue recall of the…
75 FR 62257 - Women-Owned Small Business Federal Contract Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-07
... that, in industries in which WOSBs are underrepresented, two or more EDWOSBs will submit offers for the contract or, in industries where WOSBs are substantially underrepresented, two or more WOSBs will submit... view of the statute's explicit requirements, SBA cannot simply deem a NAICS code eligible under the...
Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme
NASA Technical Reports Server (NTRS)
Vatsa, V. N.; Wedan, B. W.
1988-01-01
A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.
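As an illustration of the time-integration strategy mentioned above, here is a generic multistage Runge-Kutta step of the kind used in explicit flow solvers; the stage coefficients and the placeholder residual are illustrative assumptions, not those of the actual code.

```python
import numpy as np

# Generic m-stage Runge-Kutta time stepping for dq/dt = R(q); each stage restarts
# from the stage-0 state, as in typical explicit multistage flow solvers.
alphas = [1.0 / 4.0, 1.0 / 3.0, 1.0 / 2.0, 1.0]   # illustrative 4-stage coefficients

def residual(q):
    """Placeholder spatial residual R(q); a real solver evaluates fluxes here."""
    return -0.1 * q

def rk_multistage_step(q, dt):
    q0, qk = q.copy(), q.copy()
    for a in alphas:
        qk = q0 + a * dt * residual(qk)
    return qk

q = np.ones(10)
for _ in range(100):
    q = rk_multistage_step(q, dt=0.05)
print(q[0])   # smooth exponential decay of the model residual
```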
Summary of EASM Turbulence Models in CFL3D With Validation Test Cases
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.
2003-01-01
This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.
Parallelizing a peanut butter sandwich
NASA Astrophysics Data System (ADS)
Quenette, S. M.
2005-12-01
This poster aims to demonstrate, in a novel way, why contemporary computational code development seems hard to a geodynamics modeler (i.e. a non-computer-scientist). For example, to utilise contemporary computer hardware, parallelisation is required. But why do we choose the explicit approach (MPI) over an implicit (OpenMP) one? How does this relate to typical geodynamics codes? And do we face the same style of problem in everyday life? We aim to demonstrate that the little bit of complexity, forethought and effort is worthwhile.
A Fast and Scalable Algorithm for Calculating the Achievable Capacity of a Wireless Mesh Network
Kuperman, Greg; Sun, Jun; Narula-Tam, Aradhana (MIT)
2016-04-10
We present a fast and scalable algorithm for calculating the maximum achievable capacity of a multi-hop wireless mesh network subject to interference constraints. The capacity calculation accounts for interference from a given transmission, and the algorithm is then used to perform a network capacity analysis comparing different wireless technologies.
One-way quantum repeaters with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang
2018-05-01
We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d -level systems for large dimension d . We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
Educated Parents, Educated Children: Toward a Multiple Life Cycles Education Policy
ERIC Educational Resources Information Center
Sticht, Thomas G.
2010-01-01
Given the important intergenerational effects of parents' education level on the achievement of their children, education policies should shift from a focus on one life cycle to a focus on "multiple life cycles". Such a policy would explicitly recognize that adults transfer their educational achievements to the achievement of their…
ERIC Educational Resources Information Center
Mason, Andrew J.; Bertram, Charles A.
2018-01-01
When considering performing an Introductory Physics for Life Sciences course transformation for one's own institution, life science majors' achievement goals are a necessary consideration to ensure the pedagogical transformation will be effective. However, achievement goals are rarely an explicit consideration in physics education research topics…
Achievement Motivation Development Project. Final Report. Appendix IV, Part 2.
ERIC Educational Resources Information Center
McClelland, David C.; Alschuler, Alfred S.
The Achievement Motivation Development Project is described. The Project has culminated in the development of course materials designed explicitly to promote aspects of psychological growth. As such, it is viewed as but one thrust in an emerging psychological education movement. Achievement motivation is defined as a way of planning, a set of…
New coding advances for deep space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.
1987-01-01
Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code, with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10 to the -6th, at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.
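For reference, the 2.11 dB figure is simply the difference in required bit SNR at the 10^-6 bit-error-rate operating point; the Voyager operating point implied by the abstract follows from one line of arithmetic using only the two numbers quoted above.

```python
# Coding gain expressed as a difference in required Eb/N0 (dB) at BER = 1e-6.
new_code_db = 0.42            # (15, 1/6) convolutional + 10-bit Reed-Solomon
gain_db = 2.11
voyager_db = new_code_db + gain_db
print(f"implied Voyager operating point: {voyager_db:.2f} dB Eb/N0")
```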
Jump conditions in transonic equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guazzotto, L.; Betti, R.; Jardin, S. C.
2013-04-15
In the present paper, the numerical calculation of transonic equilibria, first introduced with the FLOW code in Guazzotto et al. [Phys. Plasmas 11, 604 (2004)], is critically reviewed. In particular, the necessity and effect of imposing explicit jump conditions at the transonic discontinuity are investigated. It is found that "standard" (low-β, large-aspect-ratio) transonic equilibria satisfy the correct jump condition to very good approximation even if the jump condition is not explicitly imposed. On the other hand, it is also found that high-β, low-aspect-ratio equilibria require the correct jump condition to be explicitly imposed. Various numerical approaches are described to modify FLOW to include the jump condition. It is proved that the new methods converge to the correct solution even in extreme cases of very large β, while they agree with the results obtained with the old implementation of FLOW in lower-β equilibria.
NASA Technical Reports Server (NTRS)
Yee, H. C.
1995-01-01
Two classes of explicit compact high-resolution shock-capturing methods for the multidimensional compressible Euler equations for fluid dynamics are constructed. Some of these schemes can be fourth-order accurate away from discontinuities. For the semi-discrete case their shock-capturing properties are of the total variation diminishing (TVD), total variation bounded (TVB), total variation diminishing in the mean (TVDM), essentially nonoscillatory (ENO), or positive type of scheme for 1-D scalar hyperbolic conservation laws and are positive schemes in more than one dimension. These fourth-order schemes require the same grid stencil as their second-order non-compact cousins. One class does not require the standard matrix inversion or a special numerical boundary condition treatment associated with typical compact schemes. Due to the construction, these schemes can be viewed as approximations to genuinely multidimensional schemes in the sense that they might produce less distortion in spherical type shocks and are more accurate in vortex type flows than schemes based purely on one-dimensional extensions. However, one class has a more desirable high-resolution shock-capturing property and a smaller operation count in 3-D than the other class. The extension of these schemes to coupled nonlinear systems can be accomplished using the Roe approximate Riemann solver, the generalized Steger and Warming flux-vector splitting or the van Leer type flux-vector splitting. Modification to existing high-resolution second- or third-order non-compact shock-capturing computer codes is minimal. High-resolution shock-capturing properties can also be achieved via a variant of the second-order Lax-Friedrichs numerical flux without the use of Riemann solvers for coupled nonlinear systems with comparable operations count to their classical shock-capturing counterparts. The simplest extension to viscous flows can be achieved by using the standard fourth-order compact or non-compact formula for the viscous terms.
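As a reminder of the building block mentioned in the last sentence, here is a minimal sketch of a local Lax-Friedrichs numerical flux applied to a 1-D scalar conservation law (Burgers' equation); it is a generic illustration, not the compact high-resolution scheme constructed in the report.

```python
import numpy as np

# Local Lax-Friedrichs flux for u_t + f(u)_x = 0 with f(u) = u^2/2 (Burgers).
def f(u):
    return 0.5 * u * u

def lax_friedrichs_flux(uL, uR, alpha):
    """Numerical flux at an interface; alpha bounds |f'(u)| locally."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)

nx, dx, dt = 200, 1.0 / 200, 0.002
x = (np.arange(nx) + 0.5) * dx
u = np.sin(2 * np.pi * x)                      # smooth data that steepens into a shock

for _ in range(100):
    up = np.roll(u, -1)                        # periodic right neighbour
    alpha = np.maximum(np.abs(u), np.abs(up))  # local wave-speed bound
    flux = lax_friedrichs_flux(u, up, alpha)   # flux at interface i+1/2
    u -= dt / dx * (flux - np.roll(flux, 1))   # conservative update
print(f"min/max after 100 steps: {u.min():.3f} {u.max():.3f}")
```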
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.
1980-01-01
A user's guide is provided for a computer code which calculates the laminar and turbulent hypersonic flows about blunt axisymmetric bodies, such as spherically blunted cones, hyperboloids, etc., at zero and small angles of attack. The code is written in STAR FORTRAN language for the CDC-STAR-100 computer. Time-dependent, viscous-shock-layer-type equations are used to describe the flow field. These equations are solved by an explicit, two-step, time asymptotic, finite-difference method. For the turbulent flow, a two-layer, eddy-viscosity model is used. The code provides complete flow-field properties including shock location, surface pressure distribution, surface heating rates, and skin-friction coefficients. This report contains descriptions of the input and output, the listing of the program, and a sample flow-field solution.
Topological order and memory time in marginally-self-correcting quantum memory
NASA Astrophysics Data System (ADS)
Siva, Karthik; Yoshida, Beni
2017-03-01
We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they are absent of topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.
Bobrova, E V; Liakhovetskiĭ, V A; Borshchevskaia, E R
2011-01-01
The dependence of errors during reproduction of a sequence of hand movements without visual feedback on the previous right- and left-hand performance ("prehistory") and on positions in space of sequence elements (random or ordered by the explicit rule) was analyzed. It was shown that the preceding information about the ordered positions of the sequence elements was used during right-hand movements, whereas left-hand movements were performed with involvement of the information about the random sequence. The data testify to a central mechanism of the analysis of spatial structure of sequence elements. This mechanism activates movement coding specific for the left hemisphere (vector coding) in case of an ordered sequence structure and positional coding specific for the right hemisphere in case of a random sequence structure.
NASA Astrophysics Data System (ADS)
Williams, Theresa
In order to achieve academic success, students must be able to comprehend written material in content-area textbooks. However, a large number of high school students struggle to comprehend science content. Research findings have demonstrated that students make measurable gains in comprehending content-area textbooks when provided with quality reading comprehension instruction. The purpose of this study was to gain an understanding of how high school science teachers perceived their responsibility to provide content-related comprehension instruction; 10 high school science teachers were interviewed for this study. Data analysis consisted of open, axial, and selective coding. The findings revealed that 8 of the 10 participants believed that it is their responsibility to provide reading comprehension instruction. However, the findings also revealed that the participants provided varying levels of reading comprehension instruction as an integral part of their science instruction. The potential for positive social change could be realized by teachers and administrators. Teachers may use the findings to reflect upon their own personal feelings and beliefs about providing explicit reading comprehension instruction. In addition to teachers' commitment to reading comprehension instruction, administrators could deliberate about professional development opportunities that might improve necessary skills, eventually leading to better comprehension skills for students and success in their education.
Seismic Safety Of Simple Masonry Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guadagnuolo, Mariateresa; Faella, Giuseppe
2008-07-08
Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings explicit safety verifications are not compulsory if specific code rules are fulfilled: it is assumed that fulfilling them ensures a suitable seismic behaviour and thus adequate safety under earthquakes. Italian and European seismic codes differ in their requirements for simple masonry buildings, mostly concerning the building typology, the building geometry and the acceleration at the site. Obviously, a wide percentage of buildings deemed simple by codes should satisfy the numerical safety verification, so that no confusion or uncertainty arises for designers who must use the codes. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings of different geometry are analysed, and results from nonlinear static analyses performed by varying the acceleration at the site are presented and discussed. Indications on the congruence between code rules and the results of numerical analyses performed according to the code itself are supplied and, in this context, the obtained results can provide a contribution towards improving the seismic code requirements.
Optimal Near-Hitless Network Failure Recovery Using Diversity Coding
ERIC Educational Resources Information Center
Avci, Serhat Nazim
2013-01-01
Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.
2006-12-01
Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
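To illustrate the "implicit time step with a direct inner solve" strategy described above, here is a minimal Crank-Nicolson step for the 1-D heat equation using a dense direct solve; the equation and parameters are illustrative stand-ins, not the extended MHD system integrated by the code.

```python
import numpy as np

# Crank-Nicolson step for u_t = D u_xx with a direct solve of the implicit system.
nx, dx, dt, D = 50, 1.0 / 50, 1e-3, 1.0
r = D * dt / dx**2

# Second-difference operator with homogeneous Dirichlet boundaries.
L = (np.diag(-2.0 * np.ones(nx)) +
     np.diag(np.ones(nx - 1), 1) +
     np.diag(np.ones(nx - 1), -1))

A = np.eye(nx) - 0.5 * r * L        # left-hand (implicit) matrix
B = np.eye(nx) + 0.5 * r * L        # right-hand (explicit) matrix

u = np.sin(np.pi * np.linspace(0, 1, nx))
for _ in range(100):
    u = np.linalg.solve(A, B @ u)   # direct solve each step, mirroring the direct inner solver
print(f"max u after 100 steps: {u.max():.4f}")
```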
Nalkur, Priya G; Jamieson, Patrick E; Romer, Daniel
2010-11-01
Youth exposure to explicit film violence and sex is linked to adverse health outcomes and is a serious public health concern. The Motion Picture Association of America's (MPAA's) rating system's effectiveness in reducing youth exposure to harmful content has been questioned. The objectives were to determine the rating system's effectiveness in screening explicit violence and sex since the system's initiation (1968) and the introduction of the PG-13 category (1984), and to examine evidence of less restrictive ratings over time ("ratings creep"). Top-grossing movies from 1950 to 2006 (N = 855) were coded for explicitness of violent and sexual content. Trends in rating assignments and in the content of different rating categories since 1968 were assessed. The explicitness of violent and sexual content significantly increased following the rating system's initiation. The system did not differentiate violent content as well as sexual content, and ratings creep was only evident for violent films. Explicit violence in R-rated films increased, while films that would previously have been rated R were increasingly assigned to PG-13. This pattern was not evident for sex; only R-rated films exhibited higher levels of explicit sex compared to the pre-ratings period. While relatively effective for screening explicit sex, the rating system has allowed increasingly violent content into PG-13 films, thereby increasing youth access to more harmful content. Assignment of films in the current rating system should be more sensitive to the link between violent media exposure and youth violence.
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for colored quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences amount to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms previous schemes in the literature in terms of embedding capacity.
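To illustrate the idea classically, the sketch below embeds one 4-bit segment into a single RGB pixel using a reflected Gray code; the exact bit routing is an assumption for illustration, since the paper's scheme operates on quantum image representations rather than classical pixels.

```python
# Classical sketch of reflected-Gray-code LSB embedding for one RGB pixel
# (illustrative mapping only; not the paper's quantum-circuit construction).
def gray(n):
    """Reflected Gray code of an integer."""
    return n ^ (n >> 1)

def embed_segment(pixel, seg):
    """Embed a 4-bit segment (list of 0/1) into one (R, G, B) pixel."""
    r, g, b = pixel
    b = (b & ~0b10) | (seg[0] << 1)           # bit 1 -> second LSB of the B channel
    # remaining three bits -> LSBs of R, G, B, routed through the Gray code
    code = gray(seg[1] * 4 + seg[2] * 2 + seg[3])
    r = (r & ~1) | ((code >> 2) & 1)
    g = (g & ~1) | ((code >> 1) & 1)
    b = (b & ~1) | (code & 1)
    return (r, g, b)

print(embed_segment((200, 117, 46), [1, 0, 1, 1]))   # stego pixel with modified low bits
```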
The properties of retrieval cues constrain the picture superiority effect.
Weldon, M S; Roediger, H L; Challis, B H
1989-01-01
In three experiments, we examined why pictures are remembered better than words on explicit memory tests like recall and recognition, whereas words produce more priming than pictures on some implicit tests, such as word-fragment and word-stem completion (e.g., completing -l-ph-nt or ele----- as elephant). One possibility is that pictures are always more accessible than words if subjects are given explicit retrieval instructions. An alternative possibility is that the properties of the retrieval cues themselves constrain the retrieval processes engaged; word fragments might induce data-driven (perceptually based) retrieval, which favors words regardless of the retrieval instructions. Experiment 1 demonstrated that words were remembered better than pictures on both the word-fragment and word-stem completion tasks under both implicit and explicit retrieval conditions. In Experiment 2, pictures were recalled better than words with semantically related extralist cues. In Experiment 3, when semantic cues were combined with word fragments, pictures and words were recalled equally well under explicit retrieval conditions, but words were superior to pictures under implicit instructions. Thus, the inherently data-limited properties of fragmented words limit their use in accessing conceptual codes. Overall, the results indicate that retrieval operations are largely determined by properties of the retrieval cues under both implicit and explicit retrieval conditions.
White, Jaclyn M; Dunham, Emilia; Rowley, Blake; Reisner, Sari L; Mimiaga, Matthew J
2015-01-01
Sexually explicit media may perpetuate racial and sexual norms among men who have sex with men. While men may be exposed to sexually explicit media in the online settings where they seek sex with other men, no studies to our knowledge have explored the relationship between the racial and sexual content of advertisements appearing in these spaces. In 2011, using a detailed codebook, 217 sexually explicit advertisements on a male sex-seeking website were coded for themes, actor characteristics and sexual acts depicted. Multivariable logistic regression models examined the association between skin colour, theme, sexual acts and condomless sex acts. Nearly half (45%) featured a 'thug' theme (a style emphasising Black masculinity/hip-hop culture), 21% featured a college theme and 44% featured condomless sex. Advertisements featuring only Black men, advertisements featuring Black men with men of other skin tones and advertisements depicting a thug theme were positively associated with depictions of condomless sex. Online sexually explicit advertisements featuring Black themes and actors more frequently depicted condomless sex than advertisements with White men alone. Future research should examine whether depictions of Black men engaging in condomless sex in online advertisements influence the sexual norms and cognitions of Black men who have sex with men and their partners.
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
Furze, Jennifer; Kenyon, Lisa K; Jensen, Gail M
2015-01-01
Clinical reasoning is an essential skill in pediatric physical therapist (PT) practice. As such, explicit instruction in clinical reasoning should be emphasized in PT education. This article provides academic faculty and clinical instructors with an overview of strategies to develop and expand the clinical reasoning capacity of PT students within the scope of pediatric PT practice. Achieving a balance between deductive reasoning strategies that provide a framework for thinking and inductive reasoning strategies that emphasize patient factors and the context of the clinical situation is an important variable in educational pedagogy. Consideration should be given to implementing various teaching and learning approaches across the curriculum that reflect the developmental level of the student(s). Deductive strategies may be helpful early in the curriculum, whereas inductive strategies are often advantageous after patient interactions; however, exposure to both is necessary to fully develop the learner's clinical reasoning abilities. For more insights from the authors, see Supplemental Digital Content 1, available at http://links.lww.com/PPT/A87.
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineering effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic-scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic-scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784
Balancing bioethics by sensing the aesthetic.
Macneill, Paul
2017-10-01
This article is critical of "bioethics" as it is widely understood and taught, noting in particular the emphasis given to philosophical justification, reason and rationality. It is proposed that "balancing" bioethics be achieved by giving greater weight to practice and the aesthetic, defined in terms of sensory perception, emotion and feeling. Each of those three elements is elaborated as a non-cognitive capacity and, taken together, they comprise aesthetic sensitivity and responsiveness. This is to recognise the aesthetic as a productive element in bioethics as practice. Contributions from the philosophy of art and aesthetics are drawn into the discussion to bring depth to an understanding of "the aesthetic". This approach is buttressed by philosophers, including Foucault and 18th-century German philosophers (in particular Kant), who recognized a link between ethics and aesthetics. The article aims to give substance to a claim that bioethics necessarily comprises a cognitive component, relating to reason, and a non-cognitive component that draws on aesthetic sensibility and relates to practice. A number of advantages of bioethics understood to explicitly acknowledge the aesthetic are proffered. Having defined bioethics in conventional terms, there is discussion of the extent to which other approaches to bioethics (including casuistry, virtue ethics, and narrative ethics) recognize aesthetic sensitivity in their practice. It is apparent that they do so to varying extents, although not always explicitly. By examining this aspect of applied ethics, the paper aims to draw attention to aesthetic sensitivity and responsiveness as integral to ethical and effective health care.
Best interests of adults who lack capacity part 2: key considerations.
Griffith, Richard
Last month's article discussed the key concepts underpinning the notion of best interests. In this article the author discusses the requirements for determining the best interests of an adult who lacks capacity under the provisions of the Mental Capacity Act 2005 and its code of practice (Department for Constitutional Affairs 2007).
The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation
Schenke, Kimberley C.; Wyer, Natalie A.; Bach, Patric
2016-01-01
Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported. PMID:27434265
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)
2002-01-01
A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.
Gerstenberg, Friederike X. R.; Imhoff, Roland; Banse, Rainer; Schmitt, Manfred
2014-01-01
Previous research has shown that different configurations of the implicit self-concept of intelligence (iSCI) and the explicit self-concept of intelligence (eSCI) are consistently related to individuals' performance on different intelligence tests (Dislich et al., 2012). The results indicated that any discrepant configuration between the iSCI and the eSCI impairs performance. The present study investigated how correspondence between the iSCI and the eSCI is related to intelligence test performance as well as to the personality traits of modesty (low eSCI, high iSCI), narcissism (high eSCI, low iSCI), and achievement motivation. Furthermore, a moderated mediation analysis showed that the relation between the iSCI-eSCI configurations and intelligence test performance was mediated by achievement motivation for modest individuals. PMID:24575063
Application of Monte Carlo codes to radiation therapy with low-LET radiation
NASA Astrophysics Data System (ADS)
Marcié, S.
1998-04-01
In radiation therapy there is a variety of low-LET radiation: photons from cobalt-60, photons and electrons from 4 to 25 MV generated in linear accelerators, and photons from caesium-137, iridium-192 and iodine-125. To know as exactly as possible the dose delivered to tissue by these radiations, software and measurements are used. With the growth in the power and capacity of computers, the application of Monte Carlo codes has extended to radiation therapy, which has made it possible to better determine the effects of the radiation, to determine spectra, to specify the values of the parameters used in dosimetric calculations, to verify algorithms, to study the measurement systems and phantoms used, to calculate the dose at points inaccessible to measurement, and to consider the use of new radionuclides.
Beyond Labeling: The Role of Maternal Input in the Acquisition of Richly Structured Categories.
ERIC Educational Resources Information Center
Gelman, Susan A.; Coley, John D.; Rosengren, Karl S.; Hartman, Erin; Pappas, Athina
1998-01-01
Explored how mothers convey information about category structure during naturalistic interactions. Videotaped reading-aloud sessions between mothers and toddlers; coded their interactions for explicit and implicit discussion of animal and artifact categories. Found that mothers provided a rich array of information beyond simple labeling routines,…
ERIC Educational Resources Information Center
Jiang, Yuhong V.; Swallow, Khena M.; Sun, Liwei
2014-01-01
Visuospatial attention prioritizes regions of space for perceptual processing. Knowing how attended locations are represented is critical for understanding the architecture of attention. We examined the spatial reference frame of incidentally learned attention and asked how it is influenced by explicit, top-down knowledge. Participants performed a…
ERIC Educational Resources Information Center
Jozwik, Sara L.; Douglas, Karen H.
2017-01-01
This study integrated technology tools into a reading comprehension intervention that used explicit instruction to teach strategies (i.e., asking questions, making connections, and coding the text to monitor for meaning) to mixed-ability small groups, which included four English Learners with learning disabilities in a fourth-grade general…
Composing for Digital Publication: Rhetoric, Design, Code
ERIC Educational Resources Information Center
Eyman, Douglas; Ball, Cheryl E.
2014-01-01
The authors discuss the state of digital publication with the claim that, at this historical moment, nearly all composition is digital composition. But, as a field, composition studies has not yet made that shift completely explicit in the discussions of composing processes and writing pedagogies. A deeper engagement with this very rapid shift in…
A Semblance of Sense: Kristeva's and Gertrude Stein's Analysis of Language.
ERIC Educational Resources Information Center
Tate, Alison
1995-01-01
Examines the limits of Julia Kristeva's approach to modernist language. The article argues that Kristeva draws on a structuralist model of language and the unconscious, utilizing a code and deviation framework, thereby restricting her ability to elucidate explicit effects of modernist dislocation of language. The article also probes the problems…
NASA Technical Reports Server (NTRS)
Gilbertsen, Noreen D.; Belytschko, Ted
1990-01-01
The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.
NASA Technical Reports Server (NTRS)
Wang, John T.; Pineda, Evan J.; Ranatunga, Vipul; Smeltzer, Stanley S.
2015-01-01
A simple continuum damage mechanics (CDM) based 3D progressive damage analysis (PDA) tool for laminated composites was developed and implemented as a user-defined material subroutine to link with a commercially available explicit finite element code. This PDA tool uses linear lamina properties from standard tests, predicts damage initiation with an easy-to-implement Hashin-Rotem failure criterion, and, in the damage evolution phase, evaluates the degradation of material properties based on crack band theory and traction-separation cohesive laws. It follows Matzenmiller et al.'s formulation to incorporate the degrading material properties into the damaged stiffness matrix. Since nonlinear shear and matrix stress-strain relations are not implemented, correction factors are used to slow the reduction of the damaged shear stiffness terms and thereby reflect the effect of these nonlinearities on the laminate strength predictions. This CDM-based PDA tool is implemented as a user-defined material (VUMAT) to link with the Abaqus/Explicit code. Strength predictions obtained using this VUMAT are correlated with test data for a set of notched specimens under tension and compression loads.
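A minimal sketch of the kind of Hashin-Rotem-style initiation check such a damage-initiation step performs (plane-stress form; the stresses and strength values are placeholders, and this is not the authors' VUMAT):

    # Hashin-Rotem-type failure initiation check for a unidirectional lamina (plane stress).
    # Inputs are lamina stresses (MPa) and strengths (MPa); values here are placeholders.

    def hashin_rotem(s11, s22, s12, Xt, Xc, Yt, Yc, S):
        fiber = s11 / Xt if s11 >= 0.0 else -s11 / Xc
        if s22 >= 0.0:                       # matrix tension mode
            matrix = (s22 / Yt) ** 2 + (s12 / S) ** 2
        else:                                # matrix compression mode
            matrix = (s22 / Yc) ** 2 + (s12 / S) ** 2
        return {"fiber": fiber, "matrix": matrix}   # initiation indicated when an index >= 1

    indices = hashin_rotem(s11=1200.0, s22=30.0, s12=40.0,
                           Xt=2000.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0)
    print(indices)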
Attending to space within and between objects: Implications from a patient with Balint’s syndrome
Robertson, Lynn C.; Treisman, Anne
2007-01-01
Neuropsychological conditions such as Balint’s syndrome have shown that perceptual organization of parts into a perceptual unit can be dissociated from the ability to localize objects relative to each other. Neural mechanisms that code the spatial structure within individual objects or words may seem to be intact, while between-object structure is compromised. Here we investigate the nature of within-object spatial processing in a patient with Balint’s syndrome (RM). We suggest that within-object spatial structure can be determined (a) directly by explicit spatial processing of between-part relations, mediated by the same dorsal pathway as between-object spatial relations; or (b) indirectly by the discrimination of object identities, which may involve implicit processing of between-part relations and which is probably mediated by the ventral system. When this route is ruled out, by testing discrimination of differences in part location that do not change the identity of the object, we find no evidence of explicit within-object spatial coding in a patient without functioning parietal lobes. PMID:21049339
Review of Punching Shear Behaviour of Flat Slabs Reinforced with FRP Bars
NASA Astrophysics Data System (ADS)
Mohamed, Osama A.; Khattab, Rania
2017-10-01
Using Fibre Reinforced Polymer (FRP) bars to reinforce two-way concrete slabs can extend the service life, reduce maintenance cost and improve life-cycle cost efficiency. FRP reinforcing bars are a more environmentally friendly alternative to traditional reinforcing steel. Shear behaviour of reinforced concrete structural members is a complex phenomenon that relies on the development of internal load-carrying mechanisms, the magnitude and combination of which are still a subject of research. Many building codes and design standards provide design formulas for estimating the punching shear capacity of FRP-reinforced flat slabs. Building code formulas take into account the effects of the axial stiffness of the main reinforcement bars, the ratio of the perimeter of the critical section to the slab effective depth, and the slab thickness on the punching shear capacity of two-way slabs reinforced with FRP bars or grids. The goal of this paper is to compare experimental data published in the literature to the equations offered by building codes for the estimation of punching shear capacity of concrete flat slabs reinforced with FRP bars. Emphasis in this paper is on two North American codes, namely ACI 440.1R-15 and CSA S806-12. The experimental data covered in this paper include flat slabs reinforced with GFRP, BFRP, and CFRP bars. Both ACI 440.1R-15 and CSA S806-12 are shown to be in good agreement with test results in terms of predicting the punching shear capacity.
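For orientation, a hedged sketch of the kind of calculation such provisions require, built around the cracked-section neutral-axis factor that reflects the axial stiffness of the FRP reinforcement; every numerical input is a placeholder, the 4/5 coefficient only mirrors the general ACI 440.1R-style SI expression for illustration, and the governing code itself should be consulted for the actual equations, units, and safety factors:

    import math

    # Hedged illustration of an ACI 440.1R-style punching shear estimate for an
    # FRP-reinforced slab (SI units: MPa, mm, kN). All inputs are placeholders.

    def punching_shear_estimate(fc_MPa, d_mm, b0_mm, rho_f, Ef_MPa, Ec_MPa):
        n_f = Ef_MPa / Ec_MPa                        # modular ratio of FRP to concrete
        k = math.sqrt(2 * rho_f * n_f + (rho_f * n_f) ** 2) - rho_f * n_f
        c = k * d_mm                                 # cracked-section neutral axis depth
        # General form V_c ~ coefficient * sqrt(f'c) * b0 * c; the 4/5 factor is
        # used here only as an illustrative stand-in for the code expression.
        return (4.0 / 5.0) * math.sqrt(fc_MPa) * b0_mm * c / 1000.0   # kN

    Vc = punching_shear_estimate(fc_MPa=35.0, d_mm=150.0, b0_mm=2000.0,
                                 rho_f=0.01, Ef_MPa=45000.0, Ec_MPa=28000.0)
    print(f"Illustrative punching shear estimate: {Vc:.0f} kN")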
The Role of Ontologies in Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.
2004-01-01
Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A back-end then compiles this further down into a concrete target programming language of choice. A core engine applies schemas to the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100 kLoC of Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which makes it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the dual role ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked.
Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Az = b.
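A heavily simplified sketch of the schema idea described above (this is not the AUTOBAYES/AUTOFILTER or AUTOLINEAR Prolog machinery; the schema record, its applicability test, and the specification fields are invented for illustration):

    # Toy illustration of schema-based synthesis: a schema bundles an applicability
    # condition with a code template, and the engine applies matching schemas to a
    # specification until (abstract) code is produced. Names are hypothetical.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Schema:
        name: str
        applies: Callable[[dict], bool]        # applicability condition on the spec
        instantiate: Callable[[dict], str]     # emits (abstract) code for the spec

    gaussian_elimination = Schema(
        name="gaussian-elimination",
        applies=lambda spec: spec["problem"] == "linear-system" and spec["square"],
        instantiate=lambda spec: f"solve {spec['lhs']} * x = {spec['rhs']} by LU factorization",
    )

    least_squares = Schema(
        name="least-squares",
        applies=lambda spec: spec["problem"] == "linear-system" and not spec["square"],
        instantiate=lambda spec: f"solve {spec['lhs']} * x = {spec['rhs']} by normal equations",
    )

    def synthesize(spec, schemas):
        # Apply every applicable schema; each branch is one node of the solution tree.
        return [s.instantiate(spec) for s in schemas if s.applies(spec)]

    spec = {"problem": "linear-system", "square": True, "lhs": "A", "rhs": "b"}
    print(synthesize(spec, [gaussian_elimination, least_squares]))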
Game-theoretic equilibrium analysis applications to deregulated electricity markets
NASA Astrophysics Data System (ADS)
Joung, Manho
This dissertation examines game-theoretic equilibrium analysis applications to deregulated electricity markets. In particular, three specific applications are discussed: analyzing the competitive effects of ownership of financial transmission rights, developing a dynamic game model considering the ramp rate constraints of generators, and analyzing strategic behavior in electricity capacity markets. In the financial transmission right application, an investigation is made of how generators' ownership of financial transmission rights may influence the effects of the transmission lines on competition. In the second application, the ramp rate constraints of generators are explicitly modeled using a dynamic game framework, and the equilibrium is characterized as the Markov perfect equilibrium. Finally, the strategic behavior of market participants in electricity capacity markets is analyzed and it is shown that the market participants may exaggerate their available capacity in a Nash equilibrium. It is also shown that the more conservative the independent system operator's capacity procurement, the higher the risk of exaggerated capacity offers.
DNA barcode goes two-dimensions: DNA QR code web server.
Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
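The idea of wrapping a barcode marker sequence in a QR symbol can be sketched with the widely used Python qrcode package; this sketch is independent of the qrfordna.dnsalias.org server's actual implementation, and the rbcL-like fragment and header are made up:

    # Encode a (made-up) DNA barcode fragment as a QR code image.
    # Requires: pip install qrcode[pil]
    import qrcode

    sequence = "ATGTCACCACAAACAGAGACTAAAGCAAGTGTTGGATTCAAAGCTGGTGTTAAAGAT"  # illustrative fragment
    header = ">sample_species|rbcL\n"

    img = qrcode.make(header + sequence)   # QR version and error correction chosen automatically
    img.save("dna_barcode_qr.png")
    print("Encoded", len(sequence), "bases into dna_barcode_qr.png")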
Beta Regression Finite Mixture Models of Polarization and Priming
ERIC Educational Resources Information Center
Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay
2011-01-01
This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…
NASA Astrophysics Data System (ADS)
Krank, Benjamin; Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-11-01
We present an efficient discontinuous Galerkin scheme for simulation of the incompressible Navier-Stokes equations including laminar and turbulent flow. We consider a semi-explicit high-order velocity-correction method for time integration as well as nodal equal-order discretizations for velocity and pressure. The non-linear convective term is treated explicitly while a linear system is solved for the pressure Poisson equation and the viscous term. The key feature of our solver is a consistent penalty term reducing the local divergence error in order to overcome recently reported instabilities in spatially under-resolved high-Reynolds-number flows as well as small time steps. This penalty method is similar to the grad-div stabilization widely used in continuous finite elements. We further review and compare our method to several other techniques recently proposed in literature to stabilize the method for such flow configurations. The solver is specifically designed for large-scale computations through matrix-free linear solvers including efficient preconditioning strategies and tensor-product elements, which have allowed us to scale this code up to 34.4 billion degrees of freedom and 147,456 CPU cores. We validate our code and demonstrate optimal convergence rates with laminar flows present in a vortex problem and flow past a cylinder and show applicability of our solver to direct numerical simulation as well as implicit large-eddy simulation of turbulent channel flow at Reτ = 180 as well as 590.
Subsumption principles underlying medical concept systems and their formal reconstruction.
Bernauer, J.
1994-01-01
Conventional medical concept systems represent generic concept relations by hierarchical coding principles. Often, these coding principles constrain the concept system and reduce the potential for automatic derivation of subsumption. Formal reconstruction of medical concept systems is an approach that is based on the conceptual representation of meanings and that allows for the application of formal criteria for subsumption. Those criteria must reflect the intuitive principles of subordination which underlie conventional medical concept systems. In particular, these are: the subordinate concept results (1) from adding a specializing criterion to the superordinate concept, (2) from refining the primary category, or a criterion of the superordinate concept, by a concept that is less general, (3) from adding a partitive criterion to a criterion of the superordinate, (4) from refining a criterion by a concept that is less comprehensive, and finally (5) from coordinating the superordinate concept, or one of its criteria. This paper introduces a formalism called BERNWARD that aims at the formal reconstruction of medical concept systems according to these intuitive principles. The automatic derivation of hierarchical relations is supported primarily by explicit generic and explicit partitive hierarchies of concepts and, secondly, by two formal criteria that are based on the structure of concept descriptions and on explicit hierarchical relations between their elements, namely formal subsumption and part-sensitive subsumption. Formal subsumption takes only generic relations into account; part-sensitive subsumption additionally regards partitive relations between criteria. This approach seems to be flexible enough to cope with unforeseeable effects of partitive criteria on subsumption. PMID:7949907
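A heavily simplified sketch of the formal-subsumption idea (this is not the BERNWARD formalism; the concept representation, the tiny generic hierarchy, and the example concepts are invented):

    # Toy formal subsumption: a concept is a primary category plus a set of criteria,
    # and concept A subsumes B if B's category is at or below A's in the generic
    # hierarchy and every criterion of A is matched (at or below) by one of B's.

    IS_A = {                      # invented generic hierarchy: child -> parent
        "fracture": "lesion",
        "femur": "bone",
        "bone": "anatomical-structure",
    }

    def at_or_below(child, ancestor):
        while child is not None:
            if child == ancestor:
                return True
            child = IS_A.get(child)
        return False

    def subsumes(general, specific):
        if not at_or_below(specific["category"], general["category"]):
            return False
        return all(any(at_or_below(c, g) for c in specific["criteria"])
                   for g in general["criteria"])

    bone_lesion = {"category": "lesion", "criteria": {"bone"}}
    femur_fracture = {"category": "fracture", "criteria": {"femur"}}
    print(subsumes(bone_lesion, femur_fracture))   # True: a femur fracture is a bone lesion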
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate the minimum DGPV hosting capacity for the contiguous United States using traditional inverters of approximately 170 GW without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very-high deployment levels envisioned by SunShot.
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, for example, uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission of 32 bits per cycle. In the new method, at every 1/16 of a cycle one of 124 wavelengths is transmitted, each lasting one full cycle (so that at any time instant no more than 16 different wavelengths are present) and carrying 4 bits, giving an average power of 15 conventional units and a rate of 64 bits per cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at the different wavelengths. The time redundancy (forward error correction, FEC) is about 7% and allows a coding gain of about 6 dB to be achieved by detecting and removing deletions and errors simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Lei
Magnetic confinement fusion is one of the most promising approaches to achieving fusion energy. With the rapid increase of computational power over the past decades, numerical simulations have become an important tool for studying fusion plasmas. Eventually, the numerical models will be used to predict the performance of future devices, such as the International Thermonuclear Experimental Reactor (ITER) or DEMO. However, the reliability of these models needs to be carefully validated against experiments before the results can be trusted. The validation between simulations and measurements is hard particularly because the quantities directly available from both sides are different. While the simulations have the information of the plasma quantities calculated explicitly, the measurements are usually in the form of diagnostic signals. The traditional way of making the comparison relies on the diagnosticians to interpret the measured signals as plasma quantities. The interpretation is in general very complicated and sometimes not even unique. In contrast, given the plasma quantities from the plasma simulations, we can unambiguously calculate the generation and propagation of the diagnostic signals. These calculations are called synthetic diagnostics, and they enable an alternate way to compare the simulation results with the measurements. In this dissertation, we present a platform for developing and applying synthetic diagnostic codes. Three diagnostics on the platform are introduced. The reflectometry and beam emission spectroscopy diagnostics measure the electron density, and the electron cyclotron emission diagnostic measures the electron temperature. The theoretical derivation and numerical implementation of a new two-dimensional Electron Cyclotron Emission Imaging code are discussed in detail. This new code has shown the potential to address many challenging aspects of present ECE measurements, such as runaway electron effects and detection of the cross phase between the electron temperature and density fluctuations.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...
2015-09-28
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
Explicit Oral Narrative Intervention for Students with Williams Syndrome
Diez-Itza, Eliseo; Martínez, Verónica; Pérez, Vanesa; Fernández-Urquiza, Maite
2018-01-01
Narrative skills play a crucial role in organizing experience, facilitating social interaction and building academic discourse and literacy. They are at the interface of cognitive, social, and linguistic abilities related to school engagement. Despite their relative strengths in social and grammatical skills, students with Williams syndrome (WS) do not show parallel cognitive and pragmatic performance in narrative generation tasks. The aim of the present study was to assess retelling of a TV cartoon tale and the effect of an individualized explicit instruction of the narrative structure. Participants included eight students with WS who attended different special education levels. Narratives were elicited in two sessions (pre and post intervention), and were transcribed, coded and analyzed using the tools of the CHILDES Project. Narratives were coded for productivity and complexity at the microstructure and macrostructure levels. Microstructure productivity (i.e., length of narratives) included number of utterances, clauses, and tokens. Microstructure complexity included mean length of utterances, lexical diversity and use of discourse markers as cohesive devices. Narrative macrostructure was assessed for textual coherence through the Pragmatic Evaluation Protocol for Speech Corpora (PREP-CORP). Macrostructure productivity and complexity included, respectively, the recall and sequential order of scenarios, episodes, events and characters. A total of four intervention sessions, lasting approximately 20 min, were delivered individually once a week. This brief intervention addressed explicit instruction about the narrative structure and the use of specific discourse markers to improve cohesion of story retellings. Intervention strategies included verbal scaffolding and modeling, conversational context for retelling the story and visual support with pictures printed from the cartoon. Results showed significant changes in WS students’ retelling of the story, both at macro- and microstructure levels, when assessed following a 2-week interval. Outcomes were better in microstructure than in macrostructure, where sequential order (i.e., complexity) did not show significant improvement. These findings are consistent with previous research supporting the use of explicit oral narrative intervention with participants who are at risk of school failure due to communication impairments. Discussion focuses on how assessment and explicit instruction of narrative skills might contribute to effective intervention programs enhancing school engagement in WS students. PMID:29379455
Hippocampal Remapping Is Constrained by Sparseness rather than Capacity
Kammerer, Axel; Leibold, Christian
2014-01-01
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodical entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570
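A hedged toy sketch of the feed-forward picture described above, with one-dimensional grid modules whose phases are re-drawn for each environment so that a fixed readout yields a remapped place field; all parameters, the cosine tuning, and the readout rule are illustrative and are not the paper's model:

    # Toy grid-to-place transformation with remapping: each environment assigns new
    # random phases to the grid modules, so the same fixed readout weights yield a
    # different place field per environment. Entirely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)                 # 1D track (arbitrary units)
    periods = np.array([0.3, 0.42, 0.59, 0.83])    # grid spacings of four modules

    def grid_population(phases):
        # One cosine-tuned grid cell per module (rectified to keep rates positive).
        rates = np.cos(2 * np.pi * (x[None, :] - phases[:, None]) / periods[:, None])
        return np.clip(rates, 0.0, None)

    weights = rng.random(len(periods))             # fixed feed-forward weights

    for env in range(3):
        phases = rng.random(len(periods)) * periods    # remapping: new random phases
        drive = weights @ grid_population(phases)
        field_center = x[np.argmax(drive)]
        print(f"environment {env}: place field peak at x = {field_center:.2f}")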
NASA Astrophysics Data System (ADS)
Phelan, Julie E.
This research investigated the role of implicit science beliefs in the gender gap in science aspirations and achievement, with the goal of testing identification with a female role model as a potential intervention strategy for increasing women's representation in science careers. At Time 1, women's implicit science stereotyping (i.e., associating men more than women with science) was linked to more negative (implicit and explicit) attitudes towards science and less identification with science. For men, stereotypes were either non-significantly or positively related to science attitudes and identification. Time 2 examined the influence of implicit and explicit science cognitions on students' science aspirations and achievement, and found that implicit stereotyping, attitudes, and identification were all unique predictors of science aspirations, but not achievement. Of more importance, Time 2 examined the influence of science role models, and found that identification with a role model of either gender reduced women's implicit science stereotyping and increased their positive attitudes toward science. Implications for decreasing the gender gap in advanced science achievement are discussed.
NASA Astrophysics Data System (ADS)
Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.
2002-04-01
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
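A hedged sketch of adaptive LSB embedding in the crude sense discussed above, hiding message bits only in pixels whose local variance exceeds a threshold on the assumption that busy regions are harder for a steganalyzer to model; the threshold, image, and payload are arbitrary, and this is neither the authors' embedding scheme nor their steganalysis method:

    # Toy content-adaptive LSB embedding. Parameters are illustrative.
    import numpy as np

    def embed_adaptive_lsb(cover, bits, var_threshold=25.0):
        stego = cover.copy()
        # 3x3 local variance via a sliding window (edges handled by padding).
        padded = np.pad(cover.astype(float), 1, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
        local_var = windows.var(axis=(-1, -2))
        usable = np.argwhere(local_var > var_threshold)     # candidate pixel coordinates
        if len(usable) < len(bits):
            raise ValueError("cover too smooth for this payload")
        for (r, c), b in zip(usable, bits):
            stego[r, c] = (stego[r, c] & 0xFE) | b          # overwrite the LSB
        return stego

    rng = np.random.default_rng(1)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    payload = rng.integers(0, 2, size=200)
    stego = embed_adaptive_lsb(cover, payload)
    print("pixels changed:", int(np.count_nonzero(cover != stego)))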
High-capacity quantum secure direct communication using hyper-entanglement of photonic qubits
NASA Astrophysics Data System (ADS)
Cai, Jiarui; Pan, Ziwen; Wang, Tie-Jun; Wang, Sihai; Wang, Chuan
2016-11-01
Hyper-entanglement is a system constituted by photons entangled in multiple degrees of freedom (DOF), considered a promising way of increasing channel capacity and providing a strong safeguard against eavesdropping. In this work, we propose a coding scheme based on a 3-particle hyper-entanglement of polarization and orbital angular momentum (OAM) system and its application as a quantum secure direct communication (QSDC) protocol. The OAM values are specially encoded by the Fibonacci sequence, and the polarization carries information by defined unitary operations. The internal relations within the secret message enhance security owing to the principles of quantum mechanics and the Fibonacci sequence. We also discuss the coding capacity and security property along with some simulation results to show its superiority and extensibility.
Fundamental Limits of Delay and Security in Device-to-Device Communication
2013-01-01
Systematic MDS (maximum distance separable) codes and random binning strategies are shown to achieve a Pareto-optimal delay-reconstruction tradeoff. A coding scheme based on erasure compression and Slepian-Wolf binning is presented and shown to provide a Pareto-optimal delay-reconstruction tradeoff. The erasure MD setup is then used to propose a ...
Edge-relevant plasma simulations with the continuum code COGENT
NASA Astrophysics Data System (ADS)
Dorf, M.; Dorr, M.; Ghosh, D.; Hittinger, J.; Rognlien, T.; Cohen, R.; Lee, W.; Schwartz, P.
2016-10-01
We describe recent advances in cross-separatrix and other edge-relevant plasma simulations with COGENT, a continuum gyro-kinetic code being developed by the Edge Simulation Laboratory (ESL) collaboration. The distinguishing feature of the COGENT code is its high-order finite-volume discretization methods, which employ arbitrary mapped multiblock grid technology (nearly field-aligned on blocks) to handle the complexity of tokamak divertor geometry with high accuracy. This paper discusses the 4D (axisymmetric) electrostatic version of the code, and the presented topics include: (a) initial simulations with kinetic electrons and development of reduced fluid models; (b) development and application of implicit-explicit (IMEX) time integration schemes; and (c) conservative modeling of drift-waves and the universal instability. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344 and at LBNL under contract DE-AC02-05CH11231.
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, as a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the RA code with puncturing.
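A minimal sketch of the plain repeat-accumulate structure these codes build on: repetition, a fixed pseudorandom interleaver, then an accumulator; the punctured precoder that turns an RA code into an ARA code is omitted, and the block length, repetition factor, and interleaver seed are arbitrary:

    # Toy repeat-accumulate (RA) encoder: repeat each information bit q times,
    # permute with a fixed interleaver, then pass through a 1/(1+D) accumulator
    # (mod-2 running sum). Parameters are illustrative.
    import numpy as np

    def ra_encode(info_bits, q=3, seed=0):
        rng = np.random.default_rng(seed)            # fixed interleaver shared with the decoder
        repeated = np.repeat(info_bits, q)
        interleaved = repeated[rng.permutation(repeated.size)]
        return np.cumsum(interleaved) % 2            # accumulator: running XOR

    info = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    code = ra_encode(info)
    print("rate:", info.size / code.size, "codeword:", code.tolist())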
Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2010-01-01
Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
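For orientation, a hedged sketch of the classical polar transform that underlies the channel-combining step: encoding as x = u·F^(⊗n) over GF(2), written here without the bit-reversal permutation; the frozen-bit positions shown are arbitrary and do not come from a real reliability construction:

    # Classical polar encoding sketch: build G as the n-fold Kronecker power of the
    # 2x2 kernel F and map the input vector u, with frozen positions set to 0, to
    # the codeword x = u G over GF(2). The frozen set below is arbitrary.
    import numpy as np

    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)

    def polar_generator(n):
        G = np.array([[1]], dtype=np.uint8)
        for _ in range(n):
            G = np.kron(G, F)
        return G

    def polar_encode(message_bits, frozen, n):
        N = 2 ** n
        u = np.zeros(N, dtype=np.uint8)
        info_positions = [i for i in range(N) if i not in frozen]
        u[info_positions] = message_bits
        return (u @ polar_generator(n)) % 2

    frozen = {0, 1, 2, 4}                       # illustrative frozen-bit positions
    codeword = polar_encode(np.array([1, 0, 1, 1], dtype=np.uint8), frozen, n=3)
    print(codeword.tolist())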
Examining the cognitive demands of analogy instructions compared to explicit instructions.
Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich
2016-10-01
In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
Visual Working Memory Capacity and Proactive Interference
Hartshorne, Joshua K.
2008-01-01
Background Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Methodology/Principal Findings Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. Conclusions/Significance This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals. PMID:18648493
Visual working memory capacity and proactive interference.
Hartshorne, Joshua K
2008-07-23
Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.
McCalman, Janya; Tsey, Komla; Baird, Bradley; Connolly, Brian; Baird, Leslie; Jackson, Rita
2009-08-01
This case study describes the efforts of an Aboriginal men's group to facilitate and support the empowerment of young people in their community. It is part of a broader participatory action research (PAR) study of men's groups. Data was derived from quarterly reflective PAR sessions with men's and youth workers and steering committee members, interviews with workers, and focus groups with young people. The data was coded and categorized, with five themes identified. Key opportunities and challenges related to building staff capacity, engaging young people, giving voice to young people and reconciling diverse community views. Emerging outcomes included young people's definition of vision and values, social cohesion, personal achievements and recognition. The youth projects also resulted in local employment, improvements in workforce capacity and proposals to extend the empowerment model in Yarrabah and transfer it to another community. PAR frameworks provide a useful tool for facilitating and sustaining empowerment outcomes. They can be used to support the transfer of knowledge and skills from one Aboriginal community group to another.
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Solutions of conformal Israel-Stewart relativistic viscous fluid dynamics
NASA Astrophysics Data System (ADS)
Marrochio, Hugo; Noronha, Jorge; Denicol, Gabriel S.; Luzum, Matthew; Jeon, Sangyong; Gale, Charles
2015-01-01
We use symmetry arguments developed by Gubser to construct the first radially expanding explicit solutions of the Israel-Stewart formulation of hydrodynamics. Along with a general semi-analytical solution, an exact analytical solution is given which is valid in the cold plasma limit where viscous effects from shear viscosity and the relaxation time coefficient are important. The radially expanding solutions presented in this paper can be used as nontrivial checks of numerical algorithms employed in hydrodynamic simulations of the quark-gluon plasma formed in ultrarelativistic heavy ion collisions. We show this explicitly by comparing such analytic and semi-analytic solutions with the corresponding numerical solutions obtained using the music viscous hydrodynamics simulation code.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
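A hedged one-dimensional illustration of the implicit-explicit idea for transient conduction: the stiff diffusion term is advanced with backward Euler while a source term is treated explicitly, so the step remains stable well beyond the explicit diffusion limit. This is a generic IMEX split on a single mesh, not the element-partitioned mixed method of the paper, and all parameters are arbitrary:

    # IMEX time step for the 1D heat equation dT/dt = alpha * T_xx + s(x):
    # implicit (backward Euler) diffusion, explicit source. Parameters are illustrative.
    import numpy as np

    nx, alpha, dx, dt, nsteps = 51, 1.0e-5, 0.01, 50.0, 20
    x = np.linspace(0.0, (nx - 1) * dx, nx)
    T = np.zeros(nx)

    # Backward-Euler diffusion operator (Dirichlet T = 0 at both ends).
    r = alpha * dt / dx**2      # r = 5 here, far beyond the explicit limit of 0.5
    A = np.eye(nx) * (1.0 + 2.0 * r)
    for i in range(1, nx):
        A[i, i - 1] = A[i - 1, i] = -r
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0

    source = np.exp(-((x - x.mean()) ** 2) / 0.0005)   # fixed localized heat source

    for _ in range(nsteps):
        rhs = T + dt * source          # explicit source contribution
        rhs[0] = rhs[-1] = 0.0         # boundary conditions
        T = np.linalg.solve(A, rhs)

    print(f"peak temperature after {nsteps*dt:.0f} s: {T.max():.2f}")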
An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1988-01-01
An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
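A hedged scalar illustration of the flux-vector-splitting idea used for the inviscid fluxes: linear advection split into right- and left-running parts, each differenced one-sidedly. This is not the real-gas Euler solver described above, and the grid, CFL number, and initial profile are arbitrary:

    # Flux-vector splitting for scalar linear advection u_t + a u_x = 0 on a
    # periodic domain. Purely illustrative of the splitting idea.
    import numpy as np

    nx, a, cfl, nsteps = 200, 1.0, 0.8, 120
    dx = 1.0 / nx
    dt = cfl * dx / abs(a)
    x = (np.arange(nx) + 0.5) * dx
    u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)        # square pulse

    a_plus, a_minus = max(a, 0.0), min(a, 0.0)           # split wave speeds
    for _ in range(nsteps):
        f_plus = a_plus * u
        f_minus = a_minus * u
        # Backward difference for the right-running flux, forward for the left-running.
        dudt = -(f_plus - np.roll(f_plus, 1)) / dx - (np.roll(f_minus, -1) - f_minus) / dx
        u = u + dt * dudt

    print(f"pulse peak near x ~ {x[np.argmax(u)]:.2f} after t = {nsteps*dt:.2f}")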
NASA Technical Reports Server (NTRS)
Campbell, W.
1981-01-01
A theoretical evaluation of the stability of an explicit finite difference solution of the transient temperature field in a composite medium is presented. The grid points of the field are assumed uniformly spaced, and media interfaces are either vertical or horizontal and pass through grid points. In addition, perfect contact between different media (infinite interfacial conductance) is assumed. A finite difference form of the conduction equation is not valid at media interfaces; therefore, heat balance forms are derived. These equations were subjected to stability analysis, and a computer graphics code was developed that permitted determination of a maximum time step for a given grid spacing.
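A small sketch of the practical consequence of such a stability analysis: the classical explicit conduction limit dt <= dx^2/(2*alpha) evaluated per medium, keeping the most restrictive value. The diffusivities and grid spacing below are placeholders, and the interface heat-balance forms derived in the report (which can tighten the limit further) are not reproduced here:

    # Maximum stable explicit time step for 1D conduction in a composite medium:
    # evaluate dt <= dx^2 / (2*alpha) for each material and keep the minimum.
    # Values are placeholders; interface effects are ignored in this sketch.

    def max_stable_dt(dx, diffusivities):
        return min(dx**2 / (2.0 * alpha) for alpha in diffusivities)

    materials = {"aluminum": 9.7e-5, "steel": 1.2e-5, "insulation": 3.0e-7}  # m^2/s
    dx = 0.002                                                                # m
    dt = max_stable_dt(dx, materials.values())
    print(f"grid spacing {dx*1000:.1f} mm -> max explicit time step {dt:.4f} s")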
ERIC Educational Resources Information Center
Busch, Holger; Hofer, Jan; Chasiotis, Athanasios; Campos, Domingo
2013-01-01
Human behavior is directed by an implicit and an explicit motivational system. The intrinsic form of the implicit achievement motive has been demonstrated to predict the experience of flow. Thus, this achievement flow motive can be considered an integral component of the autotelic personality, posited in Flow Theory as dispositional difference in…
Fractal Viscous Fingering in Fracture Networks
NASA Astrophysics Data System (ADS)
Boyle, E.; Sams, W.; Ferer, M.; Smith, D. H.
2007-12-01
We have used two very different physical models and computer codes to study miscible injection of a low-viscosity fluid into a simple fracture network, where it displaces a much more viscous "defending" fluid through "rock" that is otherwise impermeable. The first code (NETfLow) is a standard pore-level model, originally intended to treat laboratory-scale experiments; it assumes negligible mixing of the two fluids. The other code (NFFLOW) was written to treat reservoir-scale engineering problems; it explicitly treats the flow through the fractures and allows for significant mixing of the fluids at the interface. Both codes treat the fractures as parallel plates of different effective apertures. Results are presented for the composition profiles from both codes. Independent of the degree of fluid mixing, the profiles from both models have a functional form identical to that for fractal viscous fingering (i.e., diffusion-limited aggregation, DLA). The two codes, which solve the equations for different models, gave similar results; together they suggest that the injection of a low-viscosity fluid into large-scale fracture networks may be much more significantly affected by fractal fingering than previously illustrated.
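A hedged, minimal lattice DLA sketch showing the kind of fractal growth the fingering profiles are compared against; this is generic diffusion-limited aggregation, not the NETfLow or NFFLOW fracture-network models, and the lattice size, launch radius, and walker count are arbitrary:

    # Minimal diffusion-limited aggregation (DLA): random walkers launched on a
    # circle around a seed stick when they touch the cluster, producing a fractal
    # aggregate. Illustrative only; sizes are kept small so it runs quickly.
    import math, random

    SIZE, N_WALKERS = 61, 400
    center = SIZE // 2
    grid = [[False] * SIZE for _ in range(SIZE)]
    grid[center][center] = True                        # seed particle

    def neighbors_occupied(r, c):
        return any(grid[r + dr][c + dc]
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    stuck = 1
    for _ in range(N_WALKERS):
        theta = random.uniform(0.0, 2.0 * math.pi)     # launch on a circle around the seed
        r = center + int(round(20 * math.sin(theta)))
        c = center + int(round(20 * math.cos(theta)))
        while True:
            if neighbors_occupied(r, c):
                grid[r][c] = True
                stuck += 1
                break
            dr, dc = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            r, c = r + dr, c + dc
            if math.hypot(r - center, c - center) > 28:
                break                                   # walker escaped; launch a new one

    print("particles stuck to the cluster:", stuck)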
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
LRFD software for design and actual ultimate capacity of confined rectangular columns.
DOT National Transportation Integrated Search
2013-04-01
The analysis of concrete columns using unconfined concrete models is a well-established practice. On the other hand, prediction of the actual ultimate capacity of confined concrete columns requires specialized nonlinear analysis. Modern codes and...
Community-based research in action: tales from the Ktunaxa community learning centres project.
Stacy, Elizabeth; Wisener, Katherine; Liman, Yolanda; Beznosova, Olga; Lauscher, Helen Novak; Ho, Kendall; Jarvis-Selinger, Sandra
2014-01-01
Rural communities, particularly Aboriginal communities, often have limited access to health information, a situation that can have significant negative consequences. To address the lack of culturally and geographically relevant health information, a community-university partnership was formed to develop, implement, and evaluate Aboriginal Community Learning Centres (CLCs). The objective of this paper is to evaluate the community-based research process used in the development of the CLCs. It focuses on the process of building relationships among partners and the CLC's value and sustainability. Semistructured interviews were conducted with key stakeholders, including principal investigators, community research leads, and supervisors. The interview transcripts were analyzed using an open-coding process to identify themes. Key challenges included enacting shared project governance, negotiating different working styles, and hiring practices based on commitment to project objectives rather than skill set. Technological access provided by the CLCs increased capacity for learning and collective community initiatives, as well as building community leads' skills, knowledge, and self-efficacy. An important lesson was to meet all partners "where they are" in building trusting relationships and adapting research methods to fit the project's context and strengths. Successful results were dependent upon persistence and patience in working through differences, and breaking the project into achievable goals, which collectively contributed to trust and capacity building. The process of building these partnerships resulted in increased capacity of communities to facilitate learning and change initiatives, and the capacity of the university to engage in successful research partnerships with Aboriginal communities in the future.
Tabak, Rachel G; Duggan, Katie; Smith, Carson; Aisaka, Kristelle; Moreland-Russell, Sarah; Brownson, Ross C
2016-01-01
Sustainability has been defined as the existence of structures and processes that allow a program to leverage resources to effectively implement and maintain evidence-based public health and is important in local health departments (LHDs) to retain the benefits of effective programs. Explore the applicability of the Program Sustainability Framework in high- and low-capacity LHDs as defined by national performance standards. Case study interviews from June to July 2013. Standard qualitative methodology was used to code transcripts; codes were developed inductively and deductively. Six geographically diverse LHD's (selected from 3 of high and 3 of low capacity) : 35 LHD practitioners. Thematic reports explored the 8 domains (Organizational Capacity, Program Adaptation, Program Evaluation, Communications, Strategic Planning, Funding Stability, Environmental Support, and Partnerships) of the Program Sustainability Framework. High-capacity LHDs described having environmental support, while low-capacity LHDs reported this was lacking. Both high- and low-capacity LHDs described limited funding; however, high-capacity LHDs reported greater funding flexibility. Partnerships were important to high- and low-capacity LHDs, and both described building partnerships to sustain programming. Regarding organizational capacity, high-capacity LHDs reported better access to and support for adequate staff and staff training when compared with low-capacity LHDs. While high-capacity LHDs described integration of program evaluation into implementation and sustainability, low-capacity LHDs reported limited capacity for measurement specifically and evaluation generally. When high-capacity LHDs described program adoption, they discussed an opportunity to adapt and evaluate. Low-capacity LHDs struggled with programs requiring adaptation. High-capacity LHDs described higher quality communication than low-capacity LHDs. High- and low-capacity LHDs described strategic planning, but high-capacity LHDs reported efforts to integrate evidence-based public health. Investments in leadership support for improving organizational capacity, improvements in communication from the top of the organization, integrating program evaluation into implementation, and greater funding flexibility may enhance sustainability of evidence-based public health in LHDs.
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
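A rough back-of-the-envelope illustration of that bandwidth comparison is sketched below. The RS(255, 223) outer code and rate-1/2 inner convolutional code are assumed example parameters, not necessarily those used in the report.

```python
# Bandwidth-expansion comparison for two concatenated schemes, assuming an
# RS(255, 223) outer code (a common choice; the report's exact parameters are
# not given here) and, for the baseline, a rate-1/2 inner convolutional code.

def bandwidth_expansion(outer_rate, inner_rate=1.0):
    """Bandwidth expansion factor relative to an uncoded system.

    With TCM the inner redundancy is absorbed by enlarging the signal
    constellation, so only the outer code expands bandwidth (inner_rate = 1).
    """
    return 1.0 / (outer_rate * inner_rate)

rs_rate = 223 / 255                 # outer Reed-Solomon code rate (assumed example)

tcm_scheme = bandwidth_expansion(rs_rate)                    # TCM inner code
conv_scheme = bandwidth_expansion(rs_rate, inner_rate=0.5)   # rate-1/2 convolutional inner code

print(f"RS + TCM expansion:           {100 * (tcm_scheme - 1):.0f}%")   # ~14%
print(f"RS + convolutional expansion: {100 * (conv_scheme - 1):.0f}%")  # ~129%
```

With these assumed parameters the two figures fall inside the 10-50% and 70-150% ranges quoted in the abstract.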
The Rhythm Aftereffect: Support for Time Sensitive Neurons with Broad Overlapping Tuning Curves
ERIC Educational Resources Information Center
Becker, Mark W.; Rasmussen, Ian P.
2007-01-01
Ivry [Ivry, R. B. (1996). The representation of temporal information in perception and motor control. Current Opinion in Neurobiology, 6, 851-857.] proposed that explicit coding of brief time intervals is accomplished by neurons that are tuned to a preferred temporal interval and have broad overlapping tuning curves. This proposal is analogous to…
Whistle-Blowing as a Form of Advocacy: Guidelines for the Practitioner and Organization
ERIC Educational Resources Information Center
Greene, Annette D.; Latting, Jean Kantambu
2004-01-01
Advocacy has been an inherent component of social work since the mid-1800s. The NASW Code of Ethics explicitly promotes advocacy as an ethical stance against inhumane conditions. Whistle-blowing, on the other hand, occurs mostly in the business and public administration disciplines and is relatively unknown in the social work profession. Using…
2006-09-01
compression, including real-time cinematography of failure under dynamic compression, was evaluated. The results (figure 10) clearly show that the failure... art of simulations of dynamic failure and damage mechanisms. An explicit dynamic parallel code has been developed to track damage mechanisms in the
A co-designed equalization, modulation, and coding scheme
NASA Technical Reports Server (NTRS)
Peile, Robert E.
1992-01-01
The commercial impact and technical success of Trellis Coded Modulation seems to illustrate that, if Shannon's capacity is going to be neared, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise and has tried to combine the coding/modulation with adaptive equalization. The motive is to gain similar advances on less perfect or idealized channels.
Purser, Harry; Jarrold, Christopher
2010-04-01
A long-standing body of research supports the existence of separable short- and long-term memory systems, relying on phonological and semantic codes, respectively. The aim of the current study was to measure the contribution of long-term knowledge to short-term memory performance by looking for evidence of phonologically and semantically coded storage within a short-term recognition task, among developmental samples. Each experimental trial presented 4-item lists. In Experiment 1 typically developing children aged 5 to 6 years old showed evidence of phonologically coded storage across all 4 serial positions, but evidence of semantically coded storage at Serial Positions 1 and 2. In a further experiment, a group of individuals with Down syndrome was investigated as a test case that might be expected to use semantic coding to support short-term storage, but these participants showed no evidence of semantically coded storage and evidenced phonologically coded storage only at Serial Position 4, suggesting that individuals with Down syndrome have a verbal short-term memory capacity of 1 item. Our results suggest that previous evidence of semantic effects on "short-term memory performance" does not reflect semantic coding in short-term memory itself, and provide an experimental method for researchers wishing to take a relatively pure measure of verbal short-term memory capacity, in cases where rehearsal is unlikely.
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm can achieve 2.0 dB gains over the sum-product algorithm at a BER of 4×10⁻⁷. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor than the sum-product algorithm once Eb/N0 exceeds 1.8 dB.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR codes encode many kinds of information because of their advantages: large storage capacity, high reliability, fast omnidirectional readability, small printing size and high-efficiency representation of Chinese characters, etc. In order to obtain a cleaner binarized image from a complex background, and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR codes (Quick Response Codes), and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method for text images. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
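As a rough illustration of the kind of adaptive binarization involved, the sketch below applies the standard Sauvola threshold T = m · (1 + k · (s/R − 1)), where m and s are the local mean and standard deviation over a sliding window. This is the textbook formula, not the authors' specific modification, and the window size and k value are assumed defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
    """Binarize a grayscale image with Sauvola's adaptive threshold
    T = m * (1 + k * (s / R - 1)), computed per pixel from the local mean m
    and local standard deviation s over a square window."""
    gray = gray.astype(np.float64)
    mean = uniform_filter(gray, window)
    mean_sq = uniform_filter(gray * gray, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (gray > threshold).astype(np.uint8) * 255   # dark modules map to 0
```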
NASA Technical Reports Server (NTRS)
Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.
1988-01-01
A comparative study was made using 4 different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp, the flow through a hypersonic inlet, and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar recently developed implicit upwind differencing technology, while the fourth uses a well established explicit method. The computed results were compared with experimental data where available.
High Performance Fortran for Aerospace Applications
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.
Monte Carlo simulation of ion-neutral charge exchange collisions and grid erosion in an ion thruster
NASA Technical Reports Server (NTRS)
Peng, Xiaohang; Ruyten, Wilhelmus M.; Keefer, Dennis
1991-01-01
A combined particle-in-cell (PIC)/Monte Carlo simulation model has been developed in which the PIC method is used to simulate the charge exchange collisions. It is noted that a number of features were reproduced correctly by this code, but that its assumption of two-dimensional axisymmetry for a single set of grid apertures precluded the reproduction of the most characteristic feature of actual test data; namely, the concentrated grid erosion at the geometric center of the hexagonal aperture array. The first results of a three-dimensional code, which takes into account the hexagonal symmetry of the grid, are presented. It is shown that, with this code, the experimentally observed erosion patterns are reproduced correctly, demonstrating explicitly the concentration of sputtering between apertures.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
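The hard-decision bit-flipping rule mentioned above is simple enough to sketch. The toy Python version below flips one bit per iteration (a simplified variant of Gallager's algorithm) and uses a small Hamming-code parity-check matrix purely for illustration, not one of the finite-geometry codes themselves.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    """One-bit-per-iteration variant of Gallager's hard-decision bit-flipping
    decoder: repeatedly flip the bit whose parity checks are unsatisfied in
    the largest proportion until all checks pass."""
    x = r.copy() % 2
    degree = H.sum(axis=0)                    # number of checks touching each bit
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                    # all parity checks satisfied
        fails = syndrome.dot(H)               # unsatisfied checks per bit
        x[np.argmax(fails / degree)] ^= 1     # flip the "worst" bit
    return x, False

# Toy example with a (7, 4) Hamming parity-check matrix (the finite-geometry
# codes of the paper are far longer; this only illustrates the decoding rule).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])
received = codeword ^ np.array([0, 0, 0, 0, 0, 0, 1])   # flip one bit
print(bit_flip_decode(H, received))                       # recovers the codeword
```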
Enhancing Academic Achievement through Direct Instruction of Social Skills.
ERIC Educational Resources Information Center
Bendt, Lori; Nunan, Jan
This paper examines the impact of the explicit teaching of social skills to enhance academic achievement. The targeted population comprised kindergarten and second grade students in a middle-class community located in central Illinois. The problem of inappropriate behaviors and difficulties interacting with peers and how this may affect academic…
The Futility of Attempting to Codify Academic Achievement Standards
ERIC Educational Resources Information Center
Sadler, D. Royce
2014-01-01
Internationally, attempts at developing explicit descriptions of academic achievement standards have been steadily intensifying. The aim has been to capture the essence of the standards in words, symbols or diagrams (collectively referred to as codifications) so that standards can be: set and maintained at appropriate levels; made broadly…
Huang, Yumi H; Wood, Stacey; Berger, Dale E; Hanoch, Yaniv
2015-09-01
Older adults experience declines in deliberative decisional capacities, while their affective or experiential abilities tend to remain intact (Peters & Bruine de Bruin, 2012). The current study used this framework to investigate age differences in description-based and experience-based decision-making tasks. Description-based tasks emphasize deliberative processing by allowing decision makers to analyze explicit descriptions of choice-reward information. Experience-based tasks emphasize affective or experiential processing because they lack the explicit choice-reward information, forcing decision makers to rely on feelings and information derived from past experiences. This study used the Columbia Card Task (CCT) as a description-based task where probability information is provided and the Iowa Gambling Task (IGT) as an experience-based task, where it is not. As predicted, compared to younger adults (N = 65), older adults (N = 65) performed more poorly on the CCT but performed similarly on the IGT. Deliberative capacities (i.e., executive control and numeracy abilities) explained the relationship between age and performance on the CCT, suggesting that age-related differences in description-based decision-making tasks are related to declines in deliberative capacities. However, deliberative capacities were not associated with performance on the IGT for either older or younger adults. Nevertheless, on the IGT, older adults reported more use of affect-based strategies versus deliberative strategies, whereas younger adults reported similar use of these strategies. This finding offers partial support for the idea that decision-making tasks that rely on deliberate processing are more likely to demonstrate age effects than those that are more experiential. (c) 2015 APA, all rights reserved).
Inostroza, Luis; Palme, Massimo; de la Barrera, Francisco
2016-01-01
Climate change will worsen the high levels of urban vulnerability in Latin American cities due to specific environmental stressors. Some impacts of climate change, such as high temperatures in urban environments, have not yet been addressed through adaptation strategies, which are based on poorly supported data. These impacts remain outside the scope of urban planning. New spatially explicit approaches that identify highly vulnerable urban areas and include specific adaptation requirements are needed in current urban planning practices to cope with heat hazards. In this paper, a heat vulnerability index is proposed for Santiago, Chile. The index was created using a GIS-based spatial information system and was constructed from spatially explicit indexes for exposure, sensitivity and adaptive capacity levels derived from remote sensing data and socio-economic information assessed via principal component analysis (PCA). The objective of this study is to determine the levels of heat vulnerability at local scales by providing insights into these indexes at the intra city scale. The results reveal a spatial pattern of heat vulnerability with strong variations among individual spatial indexes. While exposure and adaptive capacities depict a clear spatial pattern, sensitivity follows a complex spatial distribution. These conditions change when examining PCA results, showing that sensitivity is more robust than exposure and adaptive capacity. These indexes can be used both for urban planning purposes and for proposing specific policies and measures that can help minimize heat hazards in highly dynamic urban areas. The proposed methodology can be applied to other Latin American cities to support policy making.
Carels, R A; Wott, C B; Young, K M; Gumble, A; Koball, A; Oehlhof, M W
2010-08-01
Weight bias among weight loss treatment-seeking adults has been understudied. This investigation examined the 1) levels of implicit, explicit, and internalized weight bias among overweight/obese treatment-seeking adults, 2) association between weight bias and psychosocial maladjustment (binge eating, body image, depression), and 3) association between participation in weight loss treatment and changes in weight bias. Fifty-four overweight and obese individuals (BMI > or = 27) recruited for a weight loss intervention completed measures of depression, body image, binge eating, and implicit, explicit, and internalized weight bias. Participants evidenced significant implicit, explicit, and internalized weight bias. Greater weight bias was associated with greater depression, poorer body image, and increased binge eating. Despite significant reductions in negative internalized and explicit weight bias following treatment, weight bias remained strong. Weight bias among treatment-seeking adults is associated with greater psychological maladjustment and may interfere with their ability to achieve optimal health and well-being. 2010 Elsevier Ltd. All rights reserved.
Reusing Design Knowledge Based on Design Cases and Knowledge Map
ERIC Educational Resources Information Center
Yang, Cheng; Liu, Zheng; Wang, Haobai; Shen, Jiaoqi
2013-01-01
Design knowledge was reused for innovative design work to support designers with product design knowledge and help designers who lack rich experiences to improve their design capacity and efficiency. First, based on the ontological model of product design knowledge constructed by taxonomy, implicit and explicit knowledge was extracted from some…
Emotion-Cognition Interactions in Schizophrenia: Implicit and Explicit Effects of Facial Expression
ERIC Educational Resources Information Center
Linden, Stefanie C.; Jackson, Margaret C.; Subramanian, Leena; Wolf, Claudia; Green, Paul; Healy, David; Linden, David E. J.
2010-01-01
Working memory (WM) and emotion classification are amongst the cognitive domains where specific deficits have been reported for patients with schizophrenia. In healthy individuals, the capacity of visual working memory is enhanced when the material to be retained is emotionally salient, particularly for angry faces. We investigated whether…
Spatially-explicit estimation of Wright's neighborhood size in continuous populations
Andrew J. Shirk; Samuel A. Cushman
2014-01-01
Effective population size (Ne) is an important parameter in conservation genetics because it quantifies a population's capacity to resist loss of genetic diversity due to inbreeding and drift. The classical approach to estimate Ne from genetic data involves grouping sampled individuals into discretely defined subpopulations assumed to be panmictic. Importantly,...
Barriers to Learners' Successful Completion of VET Flexible Delivery Programs.
ERIC Educational Resources Information Center
Grace, Lauri
In the early 1990s, Australian policymakers began explicitly promoting increased use of flexible delivery in vocational education and training (VET). Some researchers argued that many students lack the learning skills required to deal with the unique demands of flexible delivery. Concerns were also raised about the VET sector's capacity to help…
ERIC Educational Resources Information Center
Hartwell, Laura M.; Jacques, Marie-Paule
2012-01-01
Both reading and writing abstracts require specific language skills and conceptual capacities, which may challenge advanced learners. This paper draws explicitly upon the "Emergence" and "Scientext" research projects which focused on the lexis of scientific texts in French and English. The teaching objective of the project…
Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.
Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong
2015-01-01
Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. Nowadays, it has become a hot topic for researchers to improve the embedding capacity and eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been well dealt with by different methods, the embedding capacities of these methods are reduced more or less. Motivated by these concerns, we propose a novel scheme, in which we use differential coding, Huffman coding, and data conversion to compress the secret image before embedding it to further improve the embedding capacity, and the pixel mapping matrix embedding method with a newly designed matrix is used to embed secret image data into the cover image to avoid the underflow and overflow situations. Experiment results show that our scheme can improve the embedding capacity further and eliminate the underflow and overflow situations at the same time.
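As a minimal sketch of just the differential-coding step (assumed here to act on pixel rows; the scheme's Huffman coding, data conversion, and matrix embedding are not shown), differencing concentrates pixel values around zero, which entropy coders compress well, and it is exactly invertible:

```python
import numpy as np

def differential_encode(pixels):
    """Replace each pixel by its difference from the previous one; natural
    images then cluster around small values, which a Huffman coder can
    compress efficiently."""
    pixels = np.asarray(pixels, dtype=np.int16)
    diffs = np.empty_like(pixels)
    diffs[0] = pixels[0]
    diffs[1:] = pixels[1:] - pixels[:-1]
    return diffs

def differential_decode(diffs):
    return np.cumsum(diffs, dtype=np.int16)

row = np.array([120, 121, 121, 124, 130, 131], dtype=np.int16)
assert np.array_equal(differential_decode(differential_encode(row)), row)
```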
NASA Astrophysics Data System (ADS)
Athy, Jeremy; Friedrich, Jeff; Delany, Eileen
2008-05-01
Egon Brunswik (1903-1955) first made an interesting distinction between perception and explicit reasoning, arguing that perception included quick estimates of an object's size, nearly always resulting in good approximations in uncertain environments, whereas explicit reasoning, while better at achieving exact estimates, could often fail by wide margins. An experiment conducted by Brunswik to investigate these ideas was never published and the only available information is a figure of the results presented in a posthumous book in 1956. We replicated and extended his study to gain insight into the procedures Brunswik used in obtaining his results. Explicit reasoning resulted in fewer errors, yet more extreme ones than perception. Brunswik's graphical analysis of the results led to different conclusions, however, than did a modern statistically-based analysis.
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second-order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and an angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required for the upwind PNS code are approximately equal to those of an explicit MacCormack PNS code and existing implicit PNS solvers.
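The upwind idea itself is easiest to see on the scalar advection equation. The sketch below is a first-order upwind update for u_t + a u_x = 0 and only illustrates the flux-difference-splitting principle; it is not the Roe solver or the Vigneron splitting of the actual PNS algorithm, and the grid and time step are arbitrary example values.

```python
import numpy as np

def upwind_advect(u, a, dx, dt, steps):
    """First-order upwind update for u_t + a u_x = 0.

    The interface flux difference a*(u_i - u_{i-1}) for a > 0, or
    a*(u_{i+1} - u_i) for a < 0, is the scalar analogue of the
    flux-difference splitting used in upwind Navier-Stokes solvers.
    """
    u = u.copy()
    c = a * dt / dx                      # Courant number, keep |c| <= 1
    for _ in range(steps):
        if a >= 0:
            u[1:] -= c * (u[1:] - u[:-1])
        else:
            u[:-1] -= c * (u[1:] - u[:-1])
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.where(x < 0.3, 1.0, 0.0)         # step profile advected to the right
u1 = upwind_advect(u0, a=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
```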
DNA Barcode Goes Two-Dimensions: DNA QR Code Web Server
Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, “DNA barcode” actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interests, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications. PMID:22574113
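A minimal sketch (not the authors' web server) of turning a DNA barcode sequence into a QR code image is shown below; it assumes the third-party `qrcode` package together with Pillow is available, and the sequence itself is a made-up placeholder rather than a real voucher record.

```python
# pip install qrcode pillow
import qrcode

barcode_fragment = "ATGTCACCACAAACAGAGACTAAAGCAAGTGTTGGATTCAAAGCTGGTGTT"  # placeholder sequence

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # highest redundancy level
    box_size=4,
    border=4,
)
qr.add_data(barcode_fragment)
qr.make(fit=True)                     # pick the smallest QR version that fits
qr.make_image().save("barcode_qr.png")
```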
A versus F: the effects of implicit letter priming on cognitive performance.
Ciani, Keith D; Sheldon, Kennon M
2010-03-01
It has been proposed that motivational responses outside people's conscious awareness can be primed to affect academic performance. The current research focused on the relationship between primed evaluative letters (A and F), explicit and implicit achievement motivation, and cognitive performance. Given the evaluative connotation associated with letter grades, we wanted to know if exposure to the letter A before a task could improve performance, and exposure to the letter F could impair performance. If such effects are found, we suspected that they may be rooted in implicit approach versus avoidance motivation, and occur without participants' awareness. The current research was conducted at a large research university in the USA. Twenty-three undergraduates participated in Expt 1, 32 graduate students in Expt 2, and 76 undergraduates in Expt 3. Expts 1 and 2 were conducted in classroom settings, and Expt 3 in a laboratory. In Expt 1, participants were randomly assigned to either the A or F condition. The letter manipulation came in the form of an ostensible Test Bank ID code on the cover of an analogy test, which participants were prompted to view and write on each page of their test. Expt 2 followed a similar procedure but included the neutral letter J as a third condition to serve as a control. In Expt 3, participants' letter condition was presented in the form of an ostensible Subject ID code prior to an anagram test. Expts 1-3 demonstrated that exposure to the letter A enhances performance relative to the exposure to the letter F, whereas exposure to the letter F prior to an achievement task can impair performance. This effect was demonstrated using two different types of samples (undergraduate and graduate students), in two different experimental settings (classroom and laboratory), using two different types of achievement tasks (analogy and anagram), and using two different types of letter presentation (Test Bank ID and Subject ID). Results from the funnelled debriefing, self-report goals, and word-stem completion support our position that the effect of letter on academic performance takes place outside the conscious awareness of participants. Our findings suggest that students are vulnerable to evaluative letters presented before a task, and support years of research highlighting the significant role that nonconscious processes play in achievement settings.
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates R = 0.82 and 0.875 with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as large as d_min = 65, giving them powerful error correcting capabilities and low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications and status.
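The simple encoder structure comes from the cyclic property: systematic encoding is polynomial division by the generator polynomial, which a linear feedback shift register performs bit by bit. The sketch below carries out that division in software for a toy (7, 4) cyclic code with g(x) = x³ + x + 1, purely as an illustration; the EG codes' generator polynomials are far longer.

```python
def cyclic_systematic_encode(message_bits, generator, n):
    """Systematic encoding of an (n, k) cyclic code: append the remainder of
    x^(n-k) * m(x) divided by g(x). This polynomial division is exactly what
    an LFSR encoder computes one bit at a time."""
    k = len(message_bits)
    r = n - k
    # dividend = message shifted left by n - k positions (highest degree first)
    register = list(message_bits) + [0] * r
    for i in range(k):
        if register[i]:                        # leading bit set: subtract g(x)
            for j, g in enumerate(generator):
                register[i + j] ^= g
    parity = register[k:]                      # remainder of the division
    return list(message_bits) + parity

# toy (7, 4) cyclic code with g(x) = x^3 + x + 1 (coefficients high to low)
g = [1, 0, 1, 1]
print(cyclic_systematic_encode([1, 0, 0, 1], g, n=7))   # [1, 0, 0, 1, 1, 1, 0]
```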
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
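For readers unfamiliar with Butcher-tableau-driven time marching, the sketch below shows a generic explicit Runge-Kutta step in that style. It only illustrates the time-integration idea on a scalar model problem; it is not the PISO-coupled OpenFOAM solvers described above, and the classical fourth-order tableau is an arbitrary example.

```python
import numpy as np

def explicit_rk_step(f, t, y, dt, A, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau (A, b, c).
    In a flow solver, y would hold the discretized fields and f the spatial
    residual evaluated on them."""
    stages = []
    for i in range(len(b)):
        yi = y + dt * sum(A[i][j] * stages[j] for j in range(i))
        stages.append(f(t + c[i] * dt, yi))
    return y + dt * sum(bi * ki for bi, ki in zip(b, stages))

# classical fourth-order scheme as an example tableau
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0, 0.5, 0.5, 1]

f = lambda t, y: -y                      # dy/dt = -y, exact solution exp(-t)
y = np.array([1.0])
for n in range(10):
    y = explicit_rk_step(f, 0.1 * n, y, 0.1, A, b, c)
print(y)                                  # ≈ exp(-1)
```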
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
Allocentrically implied target locations are updated in an eye-centred reference frame.
Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P
2012-04-18
When reaching to remembered target locations following an intervening eye movement a systematic pattern of error is found indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls
NASA Technical Reports Server (NTRS)
Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk
1993-01-01
Nongray gas radiation in a plane parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between two reflecting walls. A closure approximation is applied so that only a finite number of reflections have to be explicitly included. The closure solutions express the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, where no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation applied.
Non-coding, mRNA-like RNAs database Y2K.
Erdmann, V A; Szymanski, M; Hochberg, A; Groot, N; Barciszewski, J
2000-01-01
In the last few years much data has accumulated on various non-translatable RNA transcripts that are synthesised in different cells. They lack protein coding capacity and it seems that they work mainly or exclusively at the RNA level. All known non-coding RNA transcripts are collected in the database: http://www.man.poznan.pl/5SData/ncRNA/index.html
NASA Astrophysics Data System (ADS)
Lee, Eun Seok
2000-10-01
An improved aerodynamics performance of a turbine cascade shape can be achieved by an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving the low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate the unsteady flow-fields. For the unsteady code validation, the Stokes's 2nd problem and the Poiseuille flow were chosen and compared with the computed results and analytic solutions. To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as an optimizer and the penalty method was introduced. Each individual's objective function was computed simultaneously by using a 32 processor distributed memory computer. One optimization took about four days.
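As a rough sketch of how a genetic algorithm with a penalty method turns a constrained optimization into an unconstrained search, the toy example below minimizes a trivial stand-in objective under one constraint; the objective, constraint, and all parameters are assumptions for illustration only and bear no relation to the Navier-Stokes-based total pressure loss or lift objectives of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return np.sum(x ** 2)                            # stand-in for a flow-solver loss

def penalty(x):
    return 100.0 * max(0.0, 1.0 - np.sum(x)) ** 2    # penalty method: enforce sum(x) >= 1

def fitness(x):
    return objective(x) + penalty(x)                 # constrained -> unconstrained

def tournament(pop, scores):
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if scores[i] < scores[j] else pop[j]

def genetic_minimize(dim=4, pop_size=40, generations=300,
                     crossover_rate=0.9, mutation_scale=0.1):
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        children = [pop[np.argmin(scores)].copy()]   # elitism: keep the current best
        for _ in range(pop_size - 1):
            a, b = tournament(pop, scores), tournament(pop, scores)
            w = rng.random() if rng.random() < crossover_rate else 0.0
            child = w * a + (1.0 - w) * b            # blend crossover
            child += rng.normal(scale=mutation_scale, size=dim)   # mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

print(genetic_minimize())   # tends toward x_i ≈ 0.25 with sum(x) ≈ 1
```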
A simple GPU-accelerated two-dimensional MUSCL-Hancock solver for ideal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Bard, Christopher M.; Dorelli, John C.
2014-02-01
We describe our experience using NVIDIA's CUDA (Compute Unified Device Architecture) C programming environment to implement a two-dimensional second-order MUSCL-Hancock ideal magnetohydrodynamics (MHD) solver on a GTX 480 Graphics Processing Unit (GPU). Taking a simple approach in which the MHD variables are stored exclusively in the global memory of the GTX 480 and accessed in a cache-friendly manner (without further optimizing memory access by, for example, staging data in the GPU's faster shared memory), we achieved a maximum speed-up of ≈126 for a 1024² grid relative to the sequential C code running on a single Intel Nehalem (2.8 GHz) core. This speedup is consistent with simple estimates based on the known floating point performance, memory throughput and parallel processing capacity of the GTX 480.
Genomic analysis of organismal complexity in the multicellular green alga Volvox carteri
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochnik, Simon E.; Umen, James; Nedelcu, Aurora
2010-07-01
Analysis of the Volvox carteri genome reveals that this green alga's increased organismal complexity and multicellularity are associated with modifications in protein families shared with its unicellular ancestor, and not with large-scale innovations in protein coding capacity. The multicellular green alga Volvox carteri and its morphologically diverse close relatives (the volvocine algae) are uniquely suited for investigating the evolution of multicellularity and development. We sequenced the 138 Mb genome of V. carteri and compared its ~14,500 predicted proteins to those of its unicellular relative, Chlamydomonas reinhardtii. Despite fundamental differences in organismal complexity and life history, the two species have similar protein-coding potentials, and few species-specific protein-coding gene predictions. Interestingly, volvocine algal-specific proteins are enriched in Volvox, including those associated with an expanded and highly compartmentalized extracellular matrix. Our analysis shows that increases in organismal complexity can be associated with modifications of lineage-specific proteins rather than large-scale invention of protein-coding capacity.
Bhugra, Dinesh; Pathare, Soumitra; Nardodkar, Renuka; Gosavi, Chetna; Ng, Roger; Torales, Julio; Ventriglio, Antonio
2016-08-01
Realization of right to marry by a person is an exercise of personal liberty, even if concepts of marriage and expectations from such commitment vary across cultures and societies. Once married, if an individual develops mental illness the legal system often starts to discriminate against the individual. There is no doubt that every individual's right to marry or remain married is regulated by their country's family codes, civil codes, marriage laws, or divorce laws. Historically mental health condition of a spouse or intending spouse has been of interest to lawmakers in a number of ways from facilitating divorce to helping the individual with mental illness. There is no doubt that there are deeply ingrained stereotypes that persons with mental health problems lack capacity to consent and, therefore, cannot enter into a marital contract of their own free will. These assumptions lead to discrimination both in practice and in law. Furthermore, the probability of mental illness being genetically transmitted and passed on to offspring adds yet another dimension of discrimination. Thus, the system may also raise questions about the ability of persons with mental health problems to care, nurture, and support a family and children. Internationally, rights to marry, the right to remain married, and dissolution of marriage have been enshrined in several human rights instruments. Domestic laws were studied in 193 countries to explore whether laws affected the rights of people with mental illness with respect to marriage; it was found that 37% of countries explicitly prohibit marriage by persons with mental health problems. In 11% (21 countries) the presence of mental health problems can render a marriage void or can be considered grounds for nullity of marriage. Thus, in many countries basic human rights related to marriage are being flouted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.
Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain, and k⊥^(-1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and a M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
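The coloring step can be illustrated with a simple greedy algorithm on the point-connectivity graph: points of one color share no neighbors and can therefore be swept in parallel (one kernel launch per color). This is a generic sketch of the idea with a hand-made adjacency list, not necessarily the paper's exact rainbow coloring procedure.

```python
def greedy_coloring(neighbors):
    """Assign each point the smallest color not already used by a colored
    neighbor; points of equal color have no mutual data dependence and can
    be updated concurrently in an LU-SGS-style sweep."""
    color = {}
    for point in sorted(neighbors, key=lambda p: -len(neighbors[p])):
        taken = {color[q] for q in neighbors[point] if q in color}
        c = 0
        while c in taken:
            c += 1
        color[point] = c
    return color

# tiny point-cloud connectivity (adjacency lists)
neighbors = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3],
}
colors = greedy_coloring(neighbors)
groups = {}
for p, c in colors.items():
    groups.setdefault(c, []).append(p)
print(groups)    # points grouped by color, e.g. {0: [1, 4], 1: [2], 2: [3, 0]}
```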
Probabilistic seismic loss estimation via endurance time method
NASA Astrophysics Data System (ADS)
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.
NASA Astrophysics Data System (ADS)
Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi
Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as that of DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance the throughput and fairness among the users, respectively. However, they are not throughput optimal, and fairness and throughput decrease if user queue lengths differ due to differing channel quality. Therefore, we propose two different scheduling algorithms: a throughput optimal scheduling algorithm (ZFBF-TO) and a reduced complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, the scheduling algorithms have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
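The power-allocation step can be illustrated with classical water filling. The sketch below solves the standard (unmodified) problem by bisection on the water level; the channel gains and power budget are arbitrary example values, not those of the proposed schedulers.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Classical water-filling: allocate p_i = max(0, mu - 1/g_i) so that
    sum(p_i) = total_power, with the water level mu found by bisection."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

gains = np.array([2.0, 1.0, 0.25])        # effective channel gains of selected users
p = water_filling(gains, total_power=1.0)
rates = np.log2(1.0 + gains * p)          # resulting per-user rates
print(p, p.sum(), rates)
```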
NASA Astrophysics Data System (ADS)
Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.
2015-08-01
The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10-15. Our study shows that JIDD with a pilot rate ⩽ 5 % compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.
Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery
2015-01-01
In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-ω turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with data were achieved.
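A wall function of this kind ultimately amounts to inverting a law of the wall for the friction velocity. The sketch below uses the generic log law with Newton iteration (constants κ = 0.41 and B = 5.0 assumed), not the specific Knopp formulation implemented in FUN3D.

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.0, iters=50):
    """Solve the log law u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B for the
    friction velocity u_tau with Newton's method; a wall-function boundary
    condition then turns u_tau into a wall momentum flux tau_w = rho * u_tau**2.
    Only meaningful when the first grid point sits in the log layer (y+ >~ 30)."""
    u_tau = max(1e-6, 0.05 * u)                  # crude initial guess
    for _ in range(iters):
        y_plus = y * u_tau / nu
        f = u_tau * (math.log(y_plus) / kappa + B) - u
        dfdu = math.log(y_plus) / kappa + B + 1.0 / kappa
        u_tau = max(1e-10, u_tau - f / dfdu)
    return u_tau

u_tau = friction_velocity(u=10.0, y=1e-3, nu=1.5e-5)
print(u_tau, u_tau * 1e-3 / 1.5e-5)              # friction velocity and resulting y+
```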
Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT
NASA Technical Reports Server (NTRS)
Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.
2015-01-01
This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, drawn from both arc-jet testing and flight experiments. When using exactly the same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO but not in FIAT.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes by maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first.
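The abstract notes that the code structure is based on protographs and circulants; the paper's own base matrices are not reproduced here, so the following is a minimal sketch of the standard quasi-cyclic lifting step that turns a small protograph base matrix into a full parity-check matrix. The example base matrix and lift size are illustrative assumptions.

```python
import numpy as np

def lift_protograph(base, Z):
    """Expand a protograph base matrix into a quasi-cyclic parity-check matrix.
    base[i, j] = -1 means an all-zero Z x Z block; a value s >= 0 means the
    Z x Z identity matrix cyclically shifted by s columns."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i, j]
            if s >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, s, axis=1)
    return H

# Illustrative 2 x 4 base matrix (rate ~1/2) lifted by Z = 8.
base = np.array([[0, 1, -1, 2],
                 [3, -1, 0, 1]])
H = lift_protograph(base, Z=8)   # 16 x 32 binary parity-check matrix
```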
Ndwiga, Charity; Abuya, Timothy; Mutemwa, Richard; Kimani, James Kelly; Colombini, Manuela; Mayhew, Susannah; Baird, Averie; Muia, Ruth Wayua; Kivunaga, Jackline; Warren, Charlotte E
2014-01-01
Background The Integra Initiative designed, tested, and adapted protocols for peer mentorship in order to improve service providers’ skills, knowledge, and capacity to provide quality integrated HIV and sexual and reproductive health (SRH) services. This paper describes providers’ experiences in mentoring as a method of capacity building. Service providers who were skilled in the provision of FP or PNC services were selected to undergo a mentorship training program and to subsequently build the capacity of their peers in SRH-HIV integration. Methods A qualitative assessment was conducted to assess provider experiences and perceptions about peer mentoring. In-depth interviews were conducted with twelve mentors and twenty-three mentees who were trained in SRH and HIV integration. Interviews were recorded, transcribed, and imported to NVivo 9 for analysis. Thematic analysis methods were used to develop a coding framework from the research questions and other emerging themes. Results Mentorship was perceived as a feasible and acceptable method of training among mentors and mentees. Both mentors and mentees agreed that the success of peer mentoring largely depended on a cordial relationship and consensus to work together to achieve a specific set of skills. Mentees reported improved knowledge, skills, self-confidence, and teamwork in delivering integrated SRH and HIV services as benefits associated with mentoring. They also associated mentoring with an increase in the range of services available and the number of clients seeking those services. Successful mentorship was conditional upon facility management support, sufficient supplies and commodities, a positive work environment, and mentor selection. Conclusion Mentoring was perceived by both mentors and mentees as a sustainable method for capacity building, which increased providers’ ability to offer a wide range of, and improved access to, integrated SRH and HIV services. PMID:24581143
Health impact assessment needs in south-east Asian countries.
Caussy, Deoraj; Kumar, Priti; Than Sein, U.
2003-01-01
A situation analysis was undertaken to assess impediments to health impact assessment (HIA) in the South-East Asia Region of WHO (SEARO). The countries of the region were assessed on the policy framework and procedures for HIA, the existing infrastructure required to support HIA, the capacity for undertaking HIA, and the potential for intersectoral collaboration. The findings show that environmental impact assessment (EIA) is being used implicitly as a substitute for HIA, which in virtually all countries of the Region is not explicitly or routinely conducted. Therefore, policy, infrastructure, capacity, and intersectoral collaboration need strengthening for the routine implementation of HIA. PMID:12894329
Border-ownership-dependent tilt aftereffect in incomplete figures
NASA Astrophysics Data System (ADS)
Sugihara, Tadashi; Tsuji, Yoshihisa; Sakai, Ko
2007-01-01
A recent physiological finding of neural coding for border ownership (BO) that defines the direction of a figure with respect to the border has provided a possible basis for figure-ground segregation. To explore the underlying neural mechanisms of BO, we investigated stimulus configurations that activate BO circuitry through psychophysical investigation of the BO-dependent tilt aftereffect (BO-TAE). Specifically, we examined robustness of the border ownership signal by determining whether the BO-TAE is observed when gestalt factors are broken. The results showed significant BO-TAEs even when a global shape was not explicitly given due to the ambiguity of the contour, suggesting a contour-independent mechanism for BO coding.
Optimal quantum error correcting codes from absolutely maximally entangled states
NASA Astrophysics Data System (ADS)
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension …
NASA Astrophysics Data System (ADS)
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
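The generalized Catalan numbers studied in this work are not given in closed form here; for orientation only, the classical Catalan numbers being generalized satisfy the following well-known identities.

```latex
% Classical Catalan numbers (the object the paper generalizes), n >= 0:
C_n = \frac{1}{n+1}\binom{2n}{n},
\qquad C_0 = 1,\quad C_{n+1} = \sum_{k=0}^{n} C_k\, C_{n-k}.
```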
Psychosocial factors and theory in physical activity studies in minorities.
Mama, Scherezade K; McNeill, Lorna H; McCurdy, Sheryl A; Evans, Alexandra E; Diamond, Pamela M; Adamus-Leach, Heather J; Lee, Rebecca E
2015-01-01
To summarize the effectiveness of interventions targeting psychosocial factors to increase physical activity (PA) among ethnic minority adults and to explore theory use in PA interventions, studies (N = 11) were identified through a systematic review targeting African American/Hispanic adults, specific psychosocial factors, and PA. Data were extracted using a standard code sheet and the Theory Coding Scheme. Social support was the psychosocial factor most commonly reported as being associated with increased PA, followed by motivational readiness and self-efficacy. Only 7 studies explicitly reported using a theoretical framework. Future efforts should explore theory use in PA interventions and how integration of theoretical constructs, including psychosocial factors, increases PA.
1988-05-01
[Garbled search snippet; only fragments are recoverable.] Figure 2. Original limited-capacity channel model (From Broadbent, 1958). Figure 3. Experimental … Synthesis by analysis: analysis-synthesis methods electronically model the human voice.
The effect of articulatory suppression on implicit and explicit false memory in the DRM paradigm.
Van Damme, Ilse; Menten, Jan; d'Ydewalle, Gery
2010-11-01
Several studies have shown that reliable implicit false memory can be obtained in the DRM paradigm. There has been considerable debate, however, about whether or not conscious activation of critical lures during study is a necessary condition for this. Recent findings have revealed that articulatory suppression prevents subsequent false priming in an anagram task (Lovden & Johansson, 2003). The present experiment sought to replicate and extend these findings to an implicit word stem completion task, and to additionally investigate the effect of articulatory suppression on explicit false memory. Results showed an inhibitory effect of articulatory suppression on veridical memory, as well as on implicit false memory, whereas the level of explicit false memory was heightened. This suggests that articulatory suppression did not merely eliminate conscious lure activation, but had a more general capacity-delimiting effect. The drop in veridical memory can be attributed to diminished encoding of item-specific information. Superficial encoding also limited the spreading of semantic activation during study, which inhibited later false priming. In addition, the lack of item-specific and phenomenological details caused impaired source monitoring at test, resulting in heightened explicit false memory.
DOT National Transportation Integrated Search
2016-08-09
The AASHTO code for Load and Resistance Factor Design (LRFD) of shallow bridge foundations and walls has been implemented into a set of spreadsheet algorithms to facilitate the calculation of bearing capacity and footing settlements on na...
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, S. M.
1991-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, explicit forms for the corresponding material tangent stiffness tensors are developed, and these are valid for the entire deformation range; i.e., with both distinct as well as repeated principal-stretch values. Throughout the analysis the various implications of the underlying property of separability of the strain-energy functions are exploited, thus leading to compact final forms of the tensor expressions. In particular, this facilitated the treatment of complex cases of uncoupled volumetric/deviatoric formulations for incompressible materials. The forms derived are also amenable for use with symbolic-manipulation packages for systematic code generation.
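The Ogden-type strain-energy functions referred to here are separable in the principal stretches; a representative and widely used form, shown for orientation rather than as the paper's exact model, is:

```latex
% Ogden strain energy in the principal stretches \lambda_1, \lambda_2, \lambda_3;
% separability in the stretches is the property exploited in the text.
W(\lambda_1,\lambda_2,\lambda_3)
  = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
    \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3 \right)
```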
NASA Astrophysics Data System (ADS)
You, Minli; Lin, Min; Wang, Shurui; Wang, Xuemin; Zhang, Ge; Hong, Yuan; Dong, Yuqing; Jin, Guorui; Xu, Feng
2016-05-01
Medicine counterfeiting is a serious issue worldwide, involving potentially devastating health repercussions. Advanced anti-counterfeit technology for drugs has therefore aroused intensive interest. However, existing anti-counterfeit technologies are associated with drawbacks such as the high cost, complex fabrication process, sophisticated operation and incapability in authenticating drug ingredients. In this contribution, we developed a smart phone recognition based upconversion fluorescent three-dimensional (3D) quick response (QR) code for tracking and anti-counterfeiting of drugs. We firstly formulated three colored inks incorporating upconversion nanoparticles with RGB (i.e., red, green and blue) emission colors. Using a modified inkjet printer, we printed a series of colors by precisely regulating the overlap of these three inks. Meanwhile, we developed a multilayer printing and splitting technology, which significantly increases the information storage capacity per unit area. As an example, we directly printed the upconversion fluorescent 3D QR code on the surface of drug capsules. The 3D QR code consisted of three different color layers with each layer encoded by information of different aspects of the drug. A smart phone APP was designed to decode the multicolor 3D QR code, providing the authenticity and related information of drugs. The developed technology possesses merits in terms of low cost, ease of operation, high throughput and high information capacity, thus holds great potential for drug anti-counterfeiting. Electronic supplementary information (ESI) available: Calculating details of UCNP content per 3D QR code and decoding process of the 3D QR code. See DOI: 10.1039/c6nr01353h
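The multilayer encoding described above is not specified in implementation detail in the abstract; the following is a minimal sketch, assuming the Python `qrcode` and `Pillow` packages, of how three independent QR layers can be packed into the R, G, and B channels of one image. The payload strings, image size, and the use of `get_matrix()` to obtain the module matrix are illustrative assumptions, not the authors' fabrication pipeline.

```python
import numpy as np
import qrcode
from PIL import Image

def qr_layer(data, size=400):
    """Render one QR code as a grayscale (0/255) layer of a fixed size."""
    qr = qrcode.QRCode(border=2)
    qr.add_data(data)
    qr.make(fit=True)
    modules = np.array(qr.get_matrix(), dtype=bool)    # True = dark module
    layer = np.where(modules, 0, 255).astype(np.uint8) # dark -> 0, light -> 255
    return Image.fromarray(layer, mode="L").resize((size, size), Image.NEAREST)

# Three layers, each carrying a different aspect of the drug's information
# (illustrative payloads only).
layers = [qr_layer("batch: A1234"),
          qr_layer("expiry: 2026-05"),
          qr_layer("ingredient hash: 9f2c...")]

# Pack the three layers into the R, G, and B channels of a single image.
combined = Image.merge("RGB", layers)
combined.save("3d_qr_demo.png")

# To read back, split the channels and decode each layer separately, e.g.:
# r, g, b = combined.split()
```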
ERIC Educational Resources Information Center
Campbell, Stacey; Torr, Jane; Cologon, Kathy
2012-01-01
Commercial phonics programmes (e.g. Jolly Phonics and Letterland) are becoming widely used in the early years of school. These programmes claim to use a systematic explicit approach, considered as the preferred method of phonics instruction for teaching alphabetic code-breaking skills in Australia and the UK in the first years of school…
An Exact Integration Scheme for Radiative Cooling in Hydrodynamical Simulations
NASA Astrophysics Data System (ADS)
Townsend, R. H. D.
2009-04-01
A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semidiscrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
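Townsend's full scheme handles arbitrary piecewise power-law cooling curves; as a minimal illustration of the underlying idea (integrating the semidiscrete cooling law exactly over a step instead of sub-cycling an explicit update), the sketch below treats a single power-law cooling term. The constants, the floor temperature, and the chosen alpha are illustrative assumptions, not the scheme from the paper.

```python
import numpy as np

def cool_exact(T, C, alpha, dt, T_floor=1.0e4):
    """Exact update for dT/dt = -C * T**alpha over one step dt (alpha != 1).
    Analytic integration gives T(t) = (T0**(1-alpha) - (1-alpha)*C*t)**(1/(1-alpha))."""
    arg = T**(1.0 - alpha) - (1.0 - alpha) * C * dt
    if alpha < 1.0 and arg <= 0.0:
        return T_floor                      # gas reaches the floor within the step
    return max(arg**(1.0 / (1.0 - alpha)), T_floor)

def cool_euler(T, C, alpha, dt, T_floor=1.0e4):
    """Explicit (forward Euler) update, accurate only for dt << t_cool."""
    return max(T - C * T**alpha * dt, T_floor)

# With a step equal to half the initial cooling time, the exact update remains
# accurate while Euler noticeably overcools (illustrative numbers).
T0, C, alpha = 1.0e6, 1.0e-4, 0.5
dt = 0.5 * T0**(1.0 - alpha) / C
print(cool_exact(T0, C, alpha, dt), cool_euler(T0, C, alpha, dt))
```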
ERIC Educational Resources Information Center
Cromley, Jennifer G.; Wills, Theodore W.
2016-01-01
Van den Broek's landscape model explicitly posits sequences of moves during reading in real time. Two other models that implicitly describe sequences of processes during reading are tested in the present research. Coded think-aloud data from 24 undergraduate students reading scientific text were analysed with lag-sequential techniques to compare…
ERIC Educational Resources Information Center
PERRY, REGINALD
Some 4000 years ago the Babylonian Code of Hammurabi made explicit provisions that artisans teach their crafts to youth. The crafts themselves have been a family tradition in more recent times. The indenture and the master-apprenticeship relationship were adopted by craftsmen who came from Europe. Such famous Americans as Paul Revere and Benjamin…
ERIC Educational Resources Information Center
Boyd, Sally; Huss, Leena; Ottesjö, Cajsa
2017-01-01
This paper presents results from an ethnographic study of language policy as it is enacted in everyday interaction in two language profile preschools in Sweden with explicit monolingual language policies: English and Finnish, respectively. However, in both preschools, children are free to choose language or code alternate. The study shows how…
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H3, where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm^-1) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
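The explicitly correlated Gaussians with shifted centers mentioned here have the generic form below, which is a standard form in this literature and is shown only for orientation; the paper's exact parametrization may differ.

```latex
% One basis function for n electrons with coordinates stacked in r in R^{3n};
% A_k is an n x n symmetric positive-definite matrix (square integrability)
% and s_k in R^{3n} is the vector of shifted Gaussian centers.
\phi_k(\mathbf{r}) = \exp\!\left[ -(\mathbf{r}-\mathbf{s}_k)^{\mathsf T}
  \left( A_k \otimes I_3 \right) (\mathbf{r}-\mathbf{s}_k) \right]
```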
Explicit reference governor for linear systems
NASA Astrophysics Data System (ADS)
Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo
2018-06-01
The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored to the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performance comparable to that of optimisation-based reference governors.
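As an illustration of the Lyapunov-based variant described above, the sketch below implements a generic explicit reference governor for a prestabilized linear system with a single linear constraint: the applied reference moves toward the desired reference no faster than the margin of a constraint-admissible Lyapunov level set allows. The plant matrices, gains, and discretization are illustrative assumptions, not the paper's examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Prestabilized second-order plant driven by an applied reference v:
#   x_dot = A x + B v,   constraint c^T x <= d  (here: position <= 1).
A = np.array([[0.0, 1.0], [-1.0, -1.4]])
B = np.array([[0.0], [1.0]])
c = np.array([1.0, 0.0]); d = 1.0

Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)        # solves A^T P + P A = -Q

def threshold(v):
    """Largest level of V(x) = (x - x_v)^T P (x - x_v) whose sublevel set
    stays inside c^T x <= d, for the equilibrium x_v induced by v."""
    x_v = np.linalg.solve(-A, (B * v).ravel())
    margin = d - c @ x_v
    return (margin ** 2) / (c @ np.linalg.solve(P, c)) if margin > 0 else 0.0

def erg_step(x, v, r, dt, kappa=1.0, eta=1e-6):
    """Advance the applied reference v toward the desired reference r,
    scaled by the invariant-set margin Delta (the dynamic safety margin)."""
    x_v = np.linalg.solve(-A, (B * v).ravel())
    delta = max(threshold(v) - (x - x_v) @ P @ (x - x_v), 0.0)
    rho = (r - v) / max(abs(r - v), eta)       # normalized attraction field
    return v + dt * kappa * delta * rho

# Simple forward-Euler simulation (illustrative only).
x, v, r, dt = np.zeros(2), 0.0, 2.0, 0.01
for _ in range(2000):
    v = erg_step(x, v, r, dt)
    x = x + dt * (A @ x + (B * v).ravel())     # v settles near the constraint limit
```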
Reputation Management in Children on the Autism Spectrum.
Cage, Eilidh; Bird, Geoffrey; Pellicano, Elizabeth
2016-12-01
Being able to manage reputation is an important social skill, but it is unclear whether autistic children can manage reputation. This study investigated whether 33 autistic children matched to 33 typical children could implicitly or explicitly manage reputation. Further, we examined whether cognitive processes-theory of mind, social motivation, inhibitory control and reciprocity-contribute to reputation management. Results showed that neither group implicitly managed reputation, and there was no group difference in explicit reputation management. Results suggested different mechanisms contribute to reputation management in these groups-social motivation in typical children and reciprocity in autistic children. Explicit reputation management is achievable for autistic children, and there are individual differences in its relationship to underlying cognitive processes.
The global heliosphere: A parametric study
NASA Technical Reports Server (NTRS)
McNutt, R. L., Jr.; Lyon, J.; Goodrich, C. C.
1995-01-01
As the Pioneer 10 and 11 and Voyager 1 and 2 spacecraft continue their penetration into the outer heliosphere, more attention has been focused on the nature of the solar wind interaction with the Very Local Interstellar Medium (VLISM). Since the initial pioneering concepts of Davis in 1955 and Parker in the early 1960s, both in situ and remote measurements have led to various constraints that do not fit well into a coherent picture. To provide a context for these various observable constraints, we have adapted an explicitly time-dependent, explicitly three-dimensional magnetohydrodynamic (MHD) code to simulate the dependence of the heliospheric configuration and interaction with the VLISM on the properties of the external medium. The code also allows us to study temporal variations brought about by both short- and long-term changes in the solar wind and/or VLISM properties. We will discuss some of the initial results from this new effort and implications for the distances inferred to the termination shock and heliopause boundary. In particular, we will consider the effect of the Very Local Interstellar Magnetic Field (VLIMF) on the configuration and compare it with inferences from observations of outer heliosphere cosmic rays and the Very Low Frequency (VLF) outer heliospheric radio emissions.
NASA Astrophysics Data System (ADS)
Buzan, J. R.; Huber, M.
2015-12-01
The summer of 2015 saw major heat waves on four continents, and heat stress left ~4000 people dead in India and Pakistan. Heat stress is caused by a combination of meteorological factors: temperature, humidity, and radiation. The International Organization for Standardization (ISO) uses Wet Bulb Globe Temperature (WBGT)—an empirical metric that is calibrated with temperature, humidity, and radiation—for determining labor capacity during heat stress. Unfortunately, most literature studying global heat stress focuses on extreme temperature events, and only a limited number of studies use the combination of temperature and humidity. Recent global assessments use WBGT, yet omit the radiation component without recalibrating the metric. Here we explicitly calculate future WBGT within a land surface model, including radiative fluxes as produced by a modeled globe thermometer. We use the Community Land Model version 4.5 (CLM4.5), which is a component model of the Community Earth System Model (CESM) and is maintained by the National Center for Atmospheric Research (NCAR). To drive our CLM4.5 simulations, we use greenhouse gas concentrations from Representative Concentration Pathway 8.5 (business as usual) and atmospheric output from the CMIP5 archive. Because humans work in a variety of environments, we place the modeled globe thermometer in a variety of environments: we modify the CLM4.5 code to calculate solar and thermal radiation fluxes below and above canopy vegetation and over bare ground. To calculate wet-bulb temperature, we implemented the HumanIndexMod in CLM4.5. The temperature, wet-bulb temperature, and radiation fields are calculated at every model time step and are output four times daily. We use these fields to calculate WBGT and labor capacity for two time slices: 2026-2045 and 2081-2100.
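For reference, the ISO 7243 weightings that combine the three meteorological components into WBGT are sketched below; the mapping from WBGT to labor capacity varies across assessments and is not reproduced here. Function names and the example values are illustrative.

```python
def wbgt_outdoor(t_wet, t_globe, t_air):
    """ISO 7243 outdoor Wet Bulb Globe Temperature (degrees C).
    t_wet: natural wet-bulb temperature (humidity), t_globe: black-globe
    temperature (radiation load), t_air: dry-bulb air temperature."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_air

def wbgt_indoor(t_wet, t_globe):
    """Indoor / no-solar-load variant (no dry-bulb term)."""
    return 0.7 * t_wet + 0.3 * t_globe

# Example: a humid, sunny afternoon (illustrative values in degrees C).
print(wbgt_outdoor(t_wet=28.0, t_globe=45.0, t_air=34.0))   # 32.0
```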
Early numerical foundations of young children's mathematical development.
Chu, Felicia W; vanMarle, Kristy; Geary, David C
2015-04-01
This study focused on the relative contributions of the acuity of the approximate number system (ANS) and knowledge of quantitative symbols to young children's early mathematical learning. At the beginning of preschool, 191 children (mean age = 46 months) were administered tasks that assessed ANS acuity and explicit knowledge of the cardinal values represented by number words, and their mathematics achievement was assessed at the end of the school year. Children's executive functions, intelligence, and preliteracy skills and their parents' educational levels were also assessed and served as covariates. Both the ANS and cardinality tasks were significant predictors of end-of-year mathematics achievement with and without control of the covariates. As simultaneous predictors and with control of the covariates, cardinality remained significantly related to mathematics achievement, but ANS acuity did not. Mediation analyses revealed that the relation between ANS acuity and mathematics achievement was fully mediated by cardinality, suggesting that the ANS may facilitate children's explicit understanding of cardinal value and in this way may indirectly influence early mathematical learning.
Breast cancer screening services: trade-offs in quality, capacity, outreach, and centralization.
Güneş, Evrim D; Chick, Stephen E; Akşin, O Zeynep
2004-11-01
This work combines and extends previous work on breast cancer screening models by explicitly incorporating, for the first time, aspects of the dynamics of health care states, program outreach, and the screening volume-quality relationship in a service system model to examine the effect of public health policy and service capacity decisions on public health outcomes. We consider the impact of increasing standards for minimum reading volume to improve quality, expanding outreach with or without decentralization of service facilities, and the potential of queueing due to stochastic effects and limited capacity. The results indicate a strong relation between screening quality and the cost of screening and treatment, and emphasize the importance of accounting for service dynamics when assessing the performance of health care interventions. For breast cancer screening, increasing outreach without improving quality and maintaining capacity results in less benefit than predicted by standard models.
A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1994-01-01
We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
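The transpose strategy described above (reorient the data so that each solve direction is contiguous, solve many independent line systems at once, then transpose again) can be sketched in a few lines of array code. The sketch below is a serial NumPy illustration of the data movement and tridiagonal line solves, not the CM-5 CMF implementation; the matrix coefficients and grid size are illustrative assumptions.

```python
import numpy as np

def thomas_lines(a, b, c, d):
    """Solve many independent tridiagonal systems at once (Thomas algorithm).
    a, b, c: sub-, main-, and super-diagonals of length n; d: right-hand sides,
    shape (n, k) with one column per grid line."""
    n = b.size
    cp = np.empty(n)
    dp = np.empty_like(d)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty_like(d)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_along(u, axis, a, b, c):
    """'Transpose' so the requested axis is line-contiguous, solve all lines,
    then transpose back -- the same pattern as the x/y/z sweeps in the text."""
    u = np.moveaxis(u, axis, 0)
    shape = u.shape
    x = thomas_lines(a, b, c, u.reshape(shape[0], -1)).reshape(shape)
    return np.moveaxis(x, 0, axis)

# Illustrative sweep in each direction of a small 3D field.
n = 16
rhs = np.random.rand(n, n, n)
a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0)  # diagonally dominant
for ax in (0, 1, 2):
    rhs = solve_along(rhs, ax, a, b, c)
```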
The application of LDPC code in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication; it can overcome the frequency-selective fading of the wireless channel, increase system capacity, and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is a kind of error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.
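As a minimal illustration of how simple LDPC-style decoding can be, the sketch below implements Gallager's hard-decision bit-flipping decoder for an arbitrary binary parity-check matrix H. The tiny example matrix is only there to exercise the decoder and is not one of the codes discussed in the paper.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoding.
    H: (m, n) binary parity-check matrix; y: received hard bits (length n)."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2                  # which checks are unsatisfied
        if not syndrome.any():
            return x, True                       # valid codeword found
        # For every bit, count how many unsatisfied checks it participates in.
        unsatisfied = H.T.dot(syndrome)
        # Flip the bits involved in the largest number of failed checks.
        x = np.where(unsatisfied == unsatisfied.max(), x ^ 1, x)
    return x, False

# Toy (7,4) Hamming-style parity-check matrix, used only as a demo.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)                # the all-zero word is always valid
received = codeword.copy(); received[2] ^= 1     # inject a single bit error
decoded, ok = bit_flip_decode(H, received)       # recovers the all-zero codeword
```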
Multiple access capacity trade-offs for a Ka-band personal access satellite system
NASA Technical Reports Server (NTRS)
Dessouky, Khaled; Motamedi, Masoud
1990-01-01
System capability is critical to the economic viability of a personal satellite communication system. Ka band has significant potential to support a high capacity multiple access system because of the availability of bandwidth. System design tradeoffs are performed and multiple access schemes are compared with the design goal of achieving the highest capacity and efficiency. Conclusions regarding the efficiency of the different schemes and the achievable capacities are given.
Stereotype Threat and Women's Performance in Physics
NASA Astrophysics Data System (ADS)
Marchand, Gwen C.; Taasoobshirazi, Gita
2013-12-01
Stereotype threat (ST), which involves confirming a negative stereotype about one's group, is a factor thought to contribute to the gender gap in science achievement and participation. This study involved a quasi-experiment in which 312 US high school physics students were randomly assigned, via their classroom cluster, to one of three ST conditions. The conditions included an explicit ST condition, an implicit ST condition, and a nullified condition. Results indicated that males in all three conditions performed similarly on a set of physics problems. Females in the nullified condition outperformed females in the explicit ST condition and females in the implicit and explicit conditions performed similarly. Males performed better than females in the implicit and explicit ST conditions, but male and female performance on the physics problems was not significantly different in the nullified condition. The implications of these findings for physics instruction and future research on gender differences in physics and ST in science are discussed.
Using Explicit and Systematic Instruction to Support Working Memory
ERIC Educational Resources Information Center
Smith, Jean Louise M.; Sáez, Leilani; Doabler, Christian T.
2016-01-01
Students are frequently expected to complete multistep tasks within a range of academic or classroom routines and to do so independently. Students' ability to complete these tasks successfully may vary as a consequence of both their working-memory capacity and the conditions under which they are expected to learn. Crucial features in the design or…
ERIC Educational Resources Information Center
Mongeon, David; Blanchet, Pierre; Messier, Julie
2013-01-01
The capacity to learn new visuomotor associations is fundamental to adaptive motor behavior. Evidence suggests visuomotor learning deficits in Parkinson's disease (PD). However, the exact nature of these deficits and the ability of dopamine medication to improve them are under-explored. Previous studies suggested that learning driven by large and…
Todd A. Schroeder; Robbie Hember; Nicholas C. Coops; Shunlin Liang
2009-01-01
The magnitude and distribution of incoming shortwave solar radiation (SW) has significant influence on the productive capacity of forest vegetation. Models that estimate forest productivity require accurate and spatially explicit radiation surfaces that resolve both long- and short-term temporal climatic patterns and that account for topographic variability of the land...
Informal Science Education Policy: Issues and Opportunities. A CAISE Inquiry Group Report
ERIC Educational Resources Information Center
Eisenkraft, Arthur; Flatow, Ira; Friedman, Alan J.; Kirsch, Jeffrey W.; Macdonald, Maritza; Marshall, Eric; McCallie, Ellen; Nesbit, Trevor; Prosino, Rebecca Nesbitt; Petit, Charles; Schubel, Jerry R.; Traill, Saskia; Wharton, Dan; Williams, Steven H.; Witte, Joe
2010-01-01
The goal of the CAISE "Policy Study Inquiry Group" (PSIG) was to inventory and comment on policies (current or potential, organizational or governmental, explicit or implicit) which affect the capacity of informal science education to have an impact. This group represented a cross-section of organizations and entities that touch upon or play a…
From Students to Consumers: Reflections on the Marketisation of Portuguese Higher Education
ERIC Educational Resources Information Center
Cardoso, Sonia; Carvalho, Teresa; Santiago, Rui
2011-01-01
A progressive attempt to replace traditional public administration values and concepts by others that are closer to private management can be observed in the replacement of the service user concept by that of consumer or client. This redefinition's more implicit or explicit intent is to increase consumers'/clients' status, their capacity to choose…
The Person in the Profession: Renewing Teacher Vitality through Professional Development
ERIC Educational Resources Information Center
Intrator, Sam M.; Kunzman, Robert
2006-01-01
A teacher's vocational vitality, or capacity to be vital, present, and deeply connected to his or her students, is not a fixed, indelible condition, but a state that ebbs and flows with the context and challenges of the teaching life. In light of this, an emerging form of professional development programming explicitly devoted to nourishing the…