ERIC Educational Resources Information Center
Jones, Gary; Gobet, Fernand; Pine, Julian M.
2008-01-01
Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…
ERIC Educational Resources Information Center
Simmering, Vanessa R.; Patterson, Rebecca
2012-01-01
Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…
High Speed Computing, LANs, and WAMs
NASA Technical Reports Server (NTRS)
Bergman, Larry A.; Monacos, Steve
1994-01-01
Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing as illustrated by the CASA gigabit testbed, and then explores future all-optic network architectures that offer increased capacity, more optimized level of service for a given application, high fault tolerance, and dynamic reconfigurability.
Modeling of urban solid waste management system: The case of Dhaka city
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sufian, M.A.; Bala, B.K.
2007-07-01
This paper presents a system dynamics computer model to predict solid waste generation, collection capacity and electricity generation from solid waste and to assess the needs for waste management of the urban city of Dhaka, Bangladesh. Simulated results show that solid waste generation, collection capacity and electricity generation potential from solid waste increase with time. Population, uncleared waste, untreated waste, composite index and public concern are projected to increase with time for Dhaka city. Simulated results also show that increasing the budget for collection capacity alone does not improve environmental quality; rather an increased budget is required for both collection and treatment of solid wastes of Dhaka city. Finally, this model can be used as a computer laboratory for urban solid waste management (USWM) policy analysis.
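To make the stock-and-flow logic of such a system dynamics model concrete, the sketch below integrates a waste-generation and collection-capacity loop with a simple Euler scheme. All parameter values and variable names are illustrative assumptions, not the calibrated Dhaka model.

# Illustrative parameters only (not the published Dhaka values)
dt = 1.0                        # time step: one year
population = 12.0e6             # initial population
growth_rate = 0.03              # fractional population growth per year
waste_per_capita = 0.5 * 365    # kg of solid waste per person per year
collection_capacity = 1.5e9     # kg/year the city can collect initially
capacity_growth = 0.04          # fractional growth of the collection budget/capacity
uncleared = 0.0                 # stock of uncollected waste (kg)

for year in range(2007, 2026):
    generated = population * waste_per_capita
    collected = min(generated + uncleared, collection_capacity)
    uncleared += generated - collected                          # shortfall accumulates
    population += population * growth_rate * dt                 # exponential growth
    collection_capacity += collection_capacity * capacity_growth * dt
    print(f"{year}: generated {generated:.2e} kg, uncleared stock {uncleared:.2e} kg")

Because generation grows faster than collection capacity in this toy run, the uncleared-waste stock keeps rising, which mirrors the qualitative conclusion that a larger collection budget alone does not improve environmental quality.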
Pseudo-orthogonalization of memory patterns for associative memory.
Oku, Makito; Makino, Takaki; Aihara, Kazuyuki
2013-11-01
A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of the pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
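For bipolar (±1) patterns, the XNOR masking step reduces to element-wise multiplication with a random mask, which makes the idea easy to illustrate. The sketch below is a toy demonstration of the decorrelation effect only; the sizes and the correlation measure are illustrative and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_patterns, length = 20, 64

# Correlated memory patterns over {-1, +1} (biased toward +1 on purpose)
patterns = np.sign(rng.normal(loc=0.6, size=(n_patterns, length))).astype(int)
patterns[patterns == 0] = 1

# Random masks; for bipolar patterns, XNOR masking is element-wise multiplication
masks = rng.choice([-1, 1], size=(n_patterns, length))
masked = patterns * masks                  # decorrelated part
stored = np.hstack([masked, masks])        # concatenate masked pattern and mask

def mean_abs_overlap(x):
    # Mean absolute pairwise overlap (off-diagonal of the normalized Gram matrix)
    c = (x @ x.T) / x.shape[1]
    off = c[~np.eye(len(c), dtype=bool)]
    return np.abs(off).mean()

print("overlap before masking:", mean_abs_overlap(patterns))
print("overlap after masking :", mean_abs_overlap(masked))

The masked patterns show a much smaller mean pairwise overlap, at the cost of doubling the stored pattern length (masked pattern plus mask), which is the trade-off the blockwise variant is designed to soften.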
Parallel Calculations in LS-DYNA
NASA Astrophysics Data System (ADS)
Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey
2017-11-01
Nowadays, structural mechanics exhibits a trend towards numeric solutions being found for increasingly extensive and detailed tasks, which requires that capacities of computing systems be enhanced. Such enhancement can be achieved by different means. E.g., in case a computing system is represented by a workstation, its components can be replaced and/or extended (CPU, memory etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e. replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, the tools originally designed to render high-performance graphics can be applied for solving problems not immediately related to graphics (CUDA, OpenCL, Shaders etc.). However, not all software suites utilize video cards’ capacities. Another way to increase capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensive growth due to which a quite powerful system can be obtained by combining not particularly powerful nodes. Moreover, separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. To establish a range of dependencies a mere 2-node cluster has proven sufficient.
From photons to big-data applications: terminating terabits
2016-01-01
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573
From photons to big-data applications: terminating terabits.
Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A
2016-03-06
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.
Computer Anxiety: Relationship to Math Anxiety and Holland Types.
ERIC Educational Resources Information Center
Bellando, Jayne; Winer, Jane L.
Although the number of computers in the school system is increasing, many schools are not using computers to their capacity. One reason for this may be computer anxiety on the part of the teacher. A review of the computer anxiety literature reveals little information on the subject, and findings from previous studies suggest that basic controlled…
Big data computing: Building a vision for ARS information management
USDA-ARS?s Scientific Manuscript database
Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...
The impact of individual factors on healthcare staff's computer use in psychiatric hospitals.
Koivunen, Marita; Välimäki, Maritta; Koskinen, Anita; Staggers, Nancy; Katajisto, Jouko
2009-04-01
The study examines whether individual factors of healthcare staff are associated with computer use in psychiatric hospitals. In addition, factors inhibiting staff's optimal use of computers were explored. Computer applications have developed the content of clinical practice and changed patterns of professional working. Healthcare staff need new capacities to work in clinical practice, including basic computer skills. Computer use amongst healthcare staff has been widely studied in general, but cogent information is still lacking in psychiatric care. Staff's computer use was assessed using a structured questionnaire (The Staggers Nursing Computer Experience Questionnaire). The study population was healthcare staff working in two psychiatric hospitals in Finland (n = 470, response rate = 59%). The data were analysed with descriptive statistics and MANOVA with main effects and two-way interaction effects of six individual factors. Nurses who had more experience of computer use or of the implementation processes of computer systems were more motivated to use computers than those who had less experience of these issues. Younger male and administrative personnel had also participated more often than women in the implementation processes of computer systems. The most significant factor inhibiting the use of computers was lack of interest in them. In psychiatric hospitals, more direct attention should focus on staff's capacities to use computers and on increasing their understanding of the benefits in clinical care, especially for women and ageing staff working in psychiatric hospitals. To avoid exclusion amongst healthcare personnel in the information society, and to ensure that they have the capacity to guide patients on how to use computers or to evaluate the quality of health information on the web, staff's capacities and motivation to use computers in mental health and psychiatric nursing should be ensured.
ERIC Educational Resources Information Center
Ginsberg, Ralph B.
Most of the now commonplace computer-assisted instruction (CAI) uses computers to increase the capacity to perform logical, numerical, and symbolic computations. However, computers are an interactive and potentially intelligent medium. The implications of artificial intelligence (AI) for learning are more radical than those for traditional CAI. AI…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... technology, to include computer telecommunications or other electronic means, that the lead agency is... assess the capacity and resources of the public to utilize and maintain an electronic- or computer...
Cloud computing for comparative genomics
2010-01-01
Background Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786
Cloud computing for comparative genomics.
Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J
2010-05-18
Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.
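The reciprocal-smallest-distance criterion itself is compact enough to sketch. The code below is a schematic illustration only: evolutionary_distance is a hypothetical placeholder, whereas the actual RSD algorithm uses BLAST hits and maximum-likelihood distance estimates, and the cloud version farms these comparisons out over many EC2 nodes.

def evolutionary_distance(seq_a, seq_b):
    # Placeholder metric: fraction of mismatched positions over the shorter sequence.
    n = min(len(seq_a), len(seq_b))
    return sum(a != b for a, b in zip(seq_a[:n], seq_b[:n])) / n

def smallest_distance_hit(query, subject_genome):
    # Gene in subject_genome with the smallest distance to the query sequence.
    return min(subject_genome, key=lambda gene: evolutionary_distance(query, subject_genome[gene]))

def reciprocal_smallest_distance(genome_a, genome_b):
    # Keep a pair (a, b) only if each is the other's smallest-distance hit.
    orthologs = []
    for name_a, seq_a in genome_a.items():
        best_b = smallest_distance_hit(seq_a, genome_b)
        best_a = smallest_distance_hit(genome_b[best_b], genome_a)
        if best_a == name_a:
            orthologs.append((name_a, best_b))
    return orthologs

genome_a = {"geneA1": "MKVLITGA", "geneA2": "MSTNPKPQ"}
genome_b = {"geneB1": "MKVLLTGA", "geneB2": "MSTNPRPQ"}
print(reciprocal_smallest_distance(genome_a, genome_b))

Each genome-versus-genome comparison is independent of the others, which is what makes the workload embarrassingly parallel and a natural fit for Elastic MapReduce-style job farming.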
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
Static Memory Deduplication for Performance Optimization in Cloud Computing
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-01-01
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible. PMID:28448434
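The page-sharing idea underlying memory deduplication can be illustrated with content hashing: identical pages map to a single stored copy. This is a generic sketch under simplifying assumptions (byte-string pages, SHA-256 comparison), not the SMD implementation or its offline code-segment scanner.

import hashlib

PAGE_SIZE = 4096

def deduplicate(pages):
    # Map identical pages to a single stored copy, content-addressed by hash.
    store = {}          # digest -> canonical page content
    page_table = []     # per-page reference into the store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in store:
            store[digest] = page
        page_table.append(digest)
    return store, page_table

# Three VMs whose code segments share a common page
code_page = b"\x90" * PAGE_SIZE          # e.g. identical shared-library code
data_page_1 = b"\x01" * PAGE_SIZE
data_page_2 = b"\x02" * PAGE_SIZE
pages = [code_page, data_page_1, code_page, data_page_2, code_page]

store, table = deduplicate(pages)
print(f"{len(pages)} logical pages stored as {len(store)} physical pages")

Restricting the scan to code segments, as SMD does, concentrates the search where identical pages are most likely, which is how the offline approach keeps the comparison cost off the response-time path.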
Nicenboim, Bruno; Logačev, Pavel; Gattei, Carolina; Vasishth, Shravan
2016-01-01
We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.
The Advance of Computing from the Ground to the Cloud
ERIC Educational Resources Information Center
Breeding, Marshall
2009-01-01
A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…
Handheld Computers: A Boon for Principals
ERIC Educational Resources Information Center
Brazell, Wayne
2005-01-01
As I reflect on my many years as an elementary school principal, I realize how much more effective I would have been if I had owned a wireless handheld computer. This relatively new technology can provide considerable assistance to today's principals and recent advancements have increased its functions and capacity. Handheld computers are…
Flight test validation of a design procedure for digital autopilots
NASA Technical Reports Server (NTRS)
Bryant, W. H.
1983-01-01
Commercially available general aviation autopilots are currently in transition from an analogue circuit system to a computer implemented digital flight control system. Well known advantages of the digital autopilot include enhanced modes, self-test capacity, fault detection, and greater computational capacity. A digital autopilot's computational capacity can be used to full advantage by increasing the sophistication of the digital autopilot's chief function, stability and control. NASA's Langley Research Center has been pursuing the development of direct digital design tools for aircraft stabilization systems for several years. This effort has most recently been directed towards the development and realization of multi-mode digital autopilots for GA aircraft, conducted under a SPIFR-related program called the General Aviation Terminal Operations Research (GATOR) Program. This presentation focuses on the implementation and testing of a candidate multi-mode autopilot designed using these newly developed tools.
Interhemispheric interaction expands attentional capacity in an auditory selective attention task.
Scalf, Paige E; Banich, Marie T; Erickson, Andrew B
2009-04-01
Previous work from our laboratory indicates that interhemispheric interaction (IHI) functionally increases the attentional capacity available to support performance on visual tasks (Banich in The asymmetrical brain, pp 261-302, 2003). Because manipulations of both computational complexity and selection demand alter the benefits of IHI to task performance, we argue that IHI may be a general strategy for meeting increases in attentional demand. Other researchers, however, have suggested that the apparent benefits of IHI to attentional capacity are an epiphenomenon of the organization of the visual system (Fecteau and Enns in Neuropsychologia 43:1412-1428, 2005; Marsolek et al. in Neuropsychologia 40:1983-1999, 2002). In the current experiment, we investigate whether IHI increases attentional capacity outside the visual system by manipulating the selection demands of an auditory temporal pattern-matching task. We find that IHI expands attentional capacity in the auditory system. This suggests that the benefits of requiring IHI derive from a functional increase in attentional capacity rather than the organization of a specific sensory modality.
Development of a cryogenic mixed fluid J-T cooling computer code, 'JTMIX'
NASA Technical Reports Server (NTRS)
Jones, Jack A.
1991-01-01
An initial study was performed for analyzing and predicting the temperatures and cooling capacities when mixtures of fluids are used in Joule-Thomson coolers and in heat pipes. A computer code, JTMIX, was developed for mixed gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer code, DDMIX, it has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal boiling point depressions to as low as 60 K when neon is added.
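The central step in such an analysis is the isenthalpic (Joule-Thomson) expansion. The sketch below shows that step for pure nitrogen using the open-source CoolProp property library; it is a stand-in illustration only, not the JTMIX or NIST DDMIX codes, and it ignores the mixture effects that are the point of the paper.

from CoolProp.CoolProp import PropsSI

fluid = "Nitrogen"
T_in, P_in = 150.0, 10e6       # high-pressure supply state: 150 K, 10 MPa
P_out = 0.1e6                  # expansion down to 0.1 MPa

h_in = PropsSI("H", "T", T_in, "P", P_in, fluid)     # upstream enthalpy, J/kg
T_out = PropsSI("T", "H", h_in, "P", P_out, fluid)   # isenthalpic: same h downstream

# With ideal recuperation, the specific refrigeration is the isothermal enthalpy
# difference between the low- and high-pressure streams at the warm end.
h_low = PropsSI("H", "T", T_in, "P", P_out, fluid)
print(f"J-T outlet temperature: {T_out:.1f} K")
print(f"Ideal specific refrigeration: {(h_low - h_in) / 1e3:.1f} kJ/kg")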
Building Software Development Capacity to Advance the State of Educational Technology
ERIC Educational Resources Information Center
Luterbach, Kenneth J.
2013-01-01
Educational technologists may advance the state of the field by increasing capacity to develop software tools and instructional applications. Presently, few academic programs in educational technology require even a single computer programming course. Further, the educational technologists who develop software generally work independently or in…
The potential energy landscape contribution to the dynamic heat capacity
NASA Astrophysics Data System (ADS)
Brown, Jonathan R.; McCoy, John D.
2011-05-01
The dynamic heat capacity of a simple polymeric, model glassformer was computed using molecular dynamics simulations by sinusoidally driving the temperature and recording the resultant energy. The underlying potential energy landscape of the system was probed by taking a time series of particle positions and quenching them. The resulting dynamic heat capacity demonstrates that the long time relaxation is the direct result of dynamics resulting from the potential energy landscape. Moreover, the equilibrium (low frequency) portion of the potential energy landscape contribution to the heat capacity is found to increase rapidly at low temperatures and at high packing fractions. This increase in the heat capacity is explained by a statistical mechanical model based on the distribution of minima in the potential energy landscape.
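A common way to turn such driven-temperature data into a dynamic heat capacity is to fit the energy response at the drive frequency and read off the in-phase and out-of-phase amplitudes. The sketch below does this on synthetic data with a single Debye-like relaxation; all numbers are illustrative, and the quenching/landscape decomposition of the paper is not reproduced.

import numpy as np

# Synthetic "simulation output": a driven temperature and a lagged energy response
omega = 2 * np.pi / 100.0          # drive frequency (1/time units)
t = np.arange(0, 2000.0, 0.5)
T0, dT = 1.0, 0.05
c_inf, dc, tau = 1.5, 1.0, 40.0    # illustrative heat-capacity parameters
phase = np.arctan(omega * tau)
amp = dc / np.sqrt(1 + (omega * tau) ** 2)
energy = 2.0 + c_inf * dT * np.sin(omega * t) + amp * dT * np.sin(omega * t - phase)

# Least-squares fit of E(t) to A*sin(wt) + B*cos(wt) + C
X = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
(A, B, _), *_ = np.linalg.lstsq(X, energy, rcond=None)

c_real = A / dT        # in-phase (storage) part of the dynamic heat capacity
c_imag = -B / dT       # out-of-phase (loss) part
print(f"c'(w) = {c_real:.3f}, c''(w) = {c_imag:.3f}")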
Volunteered Cloud Computing for Disaster Management
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster management relies increasingly on interpreting earth observations and running numerical models; which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.
Tri-Laboratory Linux Capacity Cluster 2007 SOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, M
2007-03-22
The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC Program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux Cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.
The economics of time shared computing: Congestion, user costs and capacity
NASA Technical Reports Server (NTRS)
Agnew, C. E.
1982-01-01
Time shared systems permit the fixed costs of computing resources to be spread over large numbers of users. However, bottleneck results in the theory of closed queueing networks can be used to show that this economy of scale will be offset by the increased congestion that results as more users are added to the system. If one considers the total costs, including the congestion cost, there is an optimal number of users for a system which equals the saturation value usually used to define system capacity.
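The bottleneck argument can be stated with the standard asymptotic bounds of closed queueing network analysis (generic notation, not necessarily the paper's): for N interactive users with think time Z, total service demand D summed over the system's devices, and bottleneck demand D_max,

X(N) \le \min\!\left( \frac{N}{D + Z},\; \frac{1}{D_{\max}} \right), \qquad N^{*} = \frac{D + Z}{D_{\max}} .

Throughput grows almost linearly with N until the saturation point N*, after which each additional user mainly adds queueing delay, so the congestion cost eventually offsets the economy of scale obtained by spreading fixed costs over more users.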
"Life" and Education Policy: Intervention, Augmentation and Computation
ERIC Educational Resources Information Center
Gulson, Kalervo N.; Webb, P. Taylor
2018-01-01
In this paper, we are interested in the notion of multiple ways of thinking, knowing and transforming life, namely an increasing capacity to intervene in "life" as a "molecular biopolitics," and the changing ways in which "life" can be understood computationally. We identify and speculate on the ways different ideas…
Secure steganography designed for mobile platforms
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath; Sifuentes, Ronnie R.
2006-05-01
Adaptive steganography, an intelligent approach to message hiding, integrated with matrix encoding and pn-sequences serves as a promising resolution to recent security assurance concerns. Incorporating the above data hiding concepts with established cryptographic protocols in wireless communication would greatly increase the security and privacy of transmitting sensitive information. We present an algorithm which will address the following problems: 1) low embedding capacity in mobile devices due to fixed image dimensions and memory constraints, 2) compatibility between mobile and land based desktop computers, and 3) detection of stego images by widely available steganalysis software [1-3]. Consistent with the smaller available memory, processor capabilities, and limited resolution associated with mobile devices, we propose a more magnified approach to steganography by focusing adaptive efforts at the pixel level. This deeper method, in comparison to the block processing techniques commonly found in existing adaptive methods, allows an increase in capacity while still offering a desired level of security. Based on computer simulations using high resolution, natural imagery and mobile device captured images, comparisons show that the proposed method securely allows an increased amount of embedding capacity but still avoids detection by varying steganalysis techniques.
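As a point of reference for the capacity/security trade-off being described, a generic least-significant-bit embedding at the pixel level can be sketched as follows. This is a simplified illustration only; it omits the adaptive pixel selection, matrix encoding and pn-sequence components of the proposed method.

import numpy as np

def embed_lsb(cover, bits):
    # Embed a 0/1 bit array into the least significant bits of a uint8 image.
    flat = cover.flatten()                 # flatten() returns a copy
    if len(bits) > flat.size:
        raise ValueError("message longer than embedding capacity")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
message_bits = rng.integers(0, 2, size=256, dtype=np.uint8)

stego = embed_lsb(cover, message_bits)
recovered = extract_lsb(stego, len(message_bits))
print("capacity (bits):", cover.size)          # 1 bit per pixel for plain LSB
print("message recovered:", np.array_equal(recovered, message_bits))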
Future Approach to tier-0 extension
NASA Astrophysics Data System (ADS)
Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.
2017-10-01
The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.
GPSS computer simulation of aircraft passenger emergency evacuations.
DOT National Transportation Integrated Search
1978-06-01
The costs of civil air transport emergency evacuation demonstrations using human subjects have risen as seating capacities of these aircraft have increased. Repeated tests further increase the costs and also the risks of injuries to participants. A m...
NASA Astrophysics Data System (ADS)
Hanumagowda, B. N.; Gonchigara, Thippeswamy; Santhosh Kumar, J.; MShiva Kumar, H.
2018-04-01
Exponential slider bearings with a porous facing are analysed in this article. The modified Reynolds equation is derived for the exponential porous slider bearing with MHD and couple stress fluid. Computed values of the steady film pressure, steady load capacity, dynamic stiffness and damping coefficient are presented in graphical form. The steady film pressure, steady load capacity, dynamic stiffness and damping coefficient decrease with increasing values of the permeability parameter and increase with increasing values of the couple stress parameter and Hartmann number.
An analysis on the magnetic fluid seal capacity
NASA Astrophysics Data System (ADS)
Meng, Zhao; Jibin, Zou; Jianhui, Hu
2006-08-01
The capacity of the magnetic fluid seal depends on the magnetic field and the saturation magnetization of the magnetic fluid. There are many factors that influence the magnetic field and the seal capacity of the magnetic fluid seal, such as the sealing gap, the shaft eccentricity, the shaft diameter, and the centrifugal force. In this paper, these factors are analyzed by numerical computations. When the material and structure are the same, the magnetic fluid seal capacity decreases as the sealing gap increases. When the shaft diameter is large, gravity should be considered. The centrifugal force also influences the magnetic fluid seal capacity.
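The dependence on field strength and saturation magnetization noted here is commonly summarized by the textbook single-stage estimate (a generic relation, not the paper's numerical model):

\Delta p \;\approx\; \mu_0 M_s \left( H_{\max} - H_{\min} \right),

where M_s is the saturation magnetization of the fluid and H_max, H_min are the field extremes across the fluid ring under a pole tooth; an N-stage seal holds roughly N times this pressure. Widening the sealing gap lowers H_max and therefore the capacity, consistent with the trend reported above.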
Algorithmic complexity of quantum capacity
NASA Astrophysics Data System (ADS)
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Capacity of noncoherent MFSK channels
NASA Technical Reports Server (NTRS)
Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.
1974-01-01
Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.
The Mark III Hypercube-Ensemble Computers
NASA Technical Reports Server (NTRS)
Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe
1988-01-01
Mark III Hypercube concept applied in development of series of increasingly powerful computers. Processor of each node of Mark III Hypercube ensemble is specialized computer containing three subprocessors and shared main memory. Solves problem quickly by simultaneously processing part of problem at each such node and passing combined results to host computer. Disciplines benefitting from speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.
A Computer Program for Training Eccentric Reading in Persons with Central Scotoma
ERIC Educational Resources Information Center
Kasten, Erich; Haschke, Peggy; Meinhold, Ulrike; Oertel-Verweyen, Petra
2010-01-01
This article explores the effectiveness of a computer program--Xcentric viewing--for training eccentric reading in persons with central scotoma. The authors conducted a small study to investigate whether this program increases the reading capacities of individuals with age-related macular degeneration (AMD). Instead of a control group, they…
The Relation between Acquisition of a Theory of Mind and the Capacity to Hold in Mind.
ERIC Educational Resources Information Center
Gordon, Anne C. L.; Olson, David R.
1998-01-01
Tested hypothesized relationship between development of a theory of mind and increasing computational resources in 3- to 5-year olds. Found that the correlations between performance on theory of mind tasks and dual processing tasks were as high as r=.64, suggesting that changes in working memory capacity allow the expression of, and arguably the…
Arranging computer architectures to create higher-performance controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1988-01-01
Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.
Opportunities for nonvolatile memory systems in extreme-scale high-performance computing
Vetter, Jeffrey S.; Mittal, Sparsh
2015-01-12
For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.
How to Use Removable Mass Storage Memory Devices
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Mass storage refers to the variety of ways to keep large amounts of information that are used on a computer. Over the years, the removable storage devices have grown smaller, increased in capacity, and transferred the information to the computer faster. The 8" floppy disk of the 1960s stored 100 kilobytes, or about 60 typewritten, double-spaced…
Capacity of a direct detection optical communication channel
NASA Technical Reports Server (NTRS)
Tan, H. H.
1980-01-01
The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.
Daigger, Glen T; Siczka, John S; Smith, Thomas F; Frank, David A; McCorquodale, J A
2017-08-01
The need to increase the peak wet weather secondary treatment capacity of the City of Akron, Ohio, Water Reclamation Facility (WRF) provided the opportunity to test an integrated methodology for maximizing the peak wet weather secondary treatment capacity of activated sludge systems. An initial investigation, consisting of process modeling of the secondary treatment system and computational fluid dynamics (CFD) analysis of the existing relatively shallow secondary clarifiers (3.3 and 3.7 m sidewater depth in 30.5 m diameter units), indicated that a significant increase in capacity from 416 000 to 684 000 m3/d or more was possible by adding step feed capabilities to the existing bioreactors and upgrading the existing secondary clarifiers. One of the six treatment units at the WRF was modified, and an extensive 2-year testing program was conducted to determine the total peak wet weather secondary treatment capacity achievable. The results demonstrated that a peak wet weather secondary treatment capacity approaching 974 000 m3/d is possible as long as secondary clarifier solids and hydraulic loadings can be separately controlled using the step feed capability provided. Excellent sludge settling characteristics are routinely experienced at the City of Akron WRF, raising concerns that the identified peak wet weather secondary treatment capacity could not be maintained should sludge settling characteristics deteriorate for some reason. Computational fluid dynamics analysis indicated that the impact of the deterioration of sludge settling characteristics could be mitigated and the identified peak wet weather secondary treatment capacity maintained by further use of the step feed capability provided to further reduce secondary clarifier solids loading rates at the identified high surface overflow rates. The results also demonstrated that effluent limits not only for total suspended solids (TSS) and five-day carbonaceous biochemical oxygen demand (cBOD5) could be maintained, but also for ammonia-nitrogen and total phosphorus (TP). Although hydraulic limitations in other parts of the WRF prevent this full capacity from being realized, the City is proceeding to implement the modifications identified using this integrated methodology.
Computer-assisted learning in critical care: from ENIAC to HAL.
Tegtmeyer, K; Ibsen, L; Goldstein, B
2001-08-01
Computers are commonly used to serve many functions in today's modern intensive care unit. One of the most intriguing and perhaps most challenging applications of computers has been to attempt to improve medical education. With the introduction of the first computer, medical educators began looking for ways to incorporate their use into the modern curriculum. Prior limitations of cost and complexity of computers have consistently decreased since their introduction, making it increasingly feasible to incorporate computers into medical education. Simultaneously, the capabilities and capacities of computers have increased. Combining the computer with other modern digital technology has allowed the development of more intricate and realistic educational tools. The purpose of this article is to briefly describe the history and use of computers in medical education with special reference to critical care medicine. In addition, we will examine the role of computers in teaching and learning and discuss the types of interaction between the computer user and the computer.
Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing
NASA Astrophysics Data System (ADS)
Woodcock, R.; Wyborn, L.
2012-04-01
Currently the top 10 supercomputers in the world are petascale and already exascale computers are being planned. Cloud computing facilities are becoming mainstream either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge which has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to be able to scale as the compute systems grow. Data access will have to improve and the earth science community needs to move from the file discovery, display and then locally download paradigm to self-describing data cubes and data arrays that are available as online resources from either major data repositories or in the cloud. In the new transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial data resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying resolutions, with integration and validation across data type boundaries. Increased capacity of storage and compute will mean that uncertainty and reliability of individual observations will consistently be taken into account and propagated throughout the processing chain. If these data access difficulties can be overcome, the increased compute capacity will also mean that larger scale, more complex models can be run at higher resolution, and instead of single-pass modelling runs, ensembles of models will be able to be run to test multiple hypotheses simultaneously. Petascale computing and high performance data offer more than "bigger, faster": it is an opportunity for a transformative change in the way in which geoscience research is routinely conducted.
About an Extreme Achievable Current in Plasma Focus Installation of Mather Type
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikulin, V. Ya.; Polukhin, S. N.; Vikhrev, V. V.
A computer simulation and analytical analysis of the discharge process in a Plasma Focus has shown that there is an upper limit to the current which can be achieved in a Plasma Focus installation of Mather type by only increasing the capacity of the condenser bank. The maximum current achieved for various plasma focus installations of the 1 MJ level is discussed. For example, for the PF-1000 (IFPiLM) and the 1 MJ Frascati PF, the maximum current is near 2 MA. Thus, the commonly used method of increasing the energy of the PF installation by increasing the capacity has no merit. Alternative options to increase the current are discussed.
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth from over 240 GB/sec today to 1 TB/sec and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.
NASA Technical Reports Server (NTRS)
Roberts, Christopher L.; Smith, Sonya T.; Vicroy, Dan D.
2000-01-01
Several of our major airports are operating at or near their capacity limit, increasing congestion and delays for travelers. As a result, the National Aeronautics and Space Administration (NASA) has been working in conjunction with the Federal Aviation Administration (FAA), airline operators, and the airline industry to increase airport capacity and safety. As more and more airplanes are placed into the terminal area, the probability of encountering wake turbulence is increased. The NASA Langley Research Center conducted a series of flight tests from 1995 through 1997 to develop a wake encounter and wake-measurement data set with the accompanying atmospheric state information. The purpose of this research is to use the data from those flights to compute the wake-induced forces and moments exerted on the aircraft. The calculated forces and moments will then be compiled into a database that can be used by wake vortex researchers to compare with experimental and computational results.
1982-08-01
Binocular rivalry, reflectance rivalry, Fechner's paradox, decrease of threshold contrast with increased number of cycles in a grating pattern, hysteresis, ... adaptation level tuning, Weber law modulation, shift of sensitivity with background luminance, and the finite capacity of visual short term memory are discussed in terms of a small set of ...
NASA Astrophysics Data System (ADS)
Navas, Javier; Sánchez-Coronilla, Antonio; Martín, Elisa I.; Gómez-Villarejo, Roberto; Teruel, Miriam; Gallardo, Juan Jesús; Aguilar, Teresa; Alcántara, Rodrigo; Fernández-Lorenzo, Concha; Martín-Calleja, Joaquín
2017-04-01
In this work, nanofluids were prepared using commercial Cu nanoparticles and a commercial high-temperature heat transfer fluid (a eutectic mixture of diphenyl oxide and biphenyl) as the base fluid, which is used in concentrating solar power (CSP) plants. Different properties such as density, viscosity, heat capacity and thermal conductivity were characterized. The nanofluids showed enhanced heat transfer efficiency. In detail, the incorporation of Cu nanoparticles led to an increase of the heat capacity of up to 14%. Also, thermal conductivity was increased by up to 13%. Finally, the performance of the nanofluids prepared increased by up to 11% according to the Dittus-Boelter correlation. On the other hand, equilibrium molecular dynamics simulation was used to model the experimental nanofluid system studied. Thermodynamic properties such as heat capacity and thermal conductivity were calculated and the results were compared with experimental data. The analysis of the radial distribution functions (RDFs) and the inspection of the spatial distribution functions (SDFs) indicate the important role that the metal-oxygen interaction plays in the system. Dynamic properties such as the diffusion coefficients of the base fluid and nanofluid were computed according to the Einstein relation by computing the mean square displacement (MSD). Supplementary online material is available in electronic form at http://www.epjap.org
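Two of the quantities mentioned here can be written down directly: the self-diffusion coefficient from the Einstein relation D = MSD/(6t), and the convective performance from the Dittus-Boelter correlation Nu = 0.023 Re^0.8 Pr^0.4. The sketch below uses placeholder trajectory data and illustrative property values, not the measured properties of this nanofluid.

import numpy as np

# --- Diffusion coefficient from the Einstein relation, D = MSD / (6 t) ---
rng = np.random.default_rng(2)
dt = 1e-12                                    # 1 ps timestep (illustrative)
steps, n_atoms = 5000, 200
# Placeholder random-walk trajectory standing in for MD output (metres)
displacements = rng.normal(scale=2e-11, size=(steps, n_atoms, 3))
positions = np.cumsum(displacements, axis=0)
msd = np.mean(np.sum((positions - positions[0]) ** 2, axis=2), axis=1)
t = np.arange(steps) * dt
slope = np.polyfit(t[steps // 2:], msd[steps // 2:], 1)[0]   # fit the linear regime
D = slope / 6.0
print(f"estimated D = {D:.3e} m^2/s")

# --- Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^0.4 (heating) ---
rho, mu, cp, k = 1060.0, 3.0e-3, 1600.0, 0.12   # illustrative fluid properties (SI)
v, d = 2.0, 0.02                                # flow speed (m/s), tube diameter (m)
Re = rho * v * d / mu
Pr = cp * mu / k
Nu = 0.023 * Re ** 0.8 * Pr ** 0.4
h = Nu * k / d
print(f"Re = {Re:.0f}, Pr = {Pr:.1f}, h = {h:.0f} W/m^2K")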
Morán-Sánchez, Inés; Luna, Aurelio; Pérez-Cárceles, Maria D
2016-11-30
Informed consent is a key element of ethical clinical research. Those with mental disorders may be at risk for impaired consent capacity. Problems with procedures may also contribute to patients' difficulties in understanding consent forms. The present investigation explores whether a brief, technologically based information presentation of the informed consent process may enhance psychiatric patients' understanding and satisfaction. In this longitudinal, within-participants comparison study, patients who initially were judged to lack capacity to make research decisions (n=41) and a control group (n=47) were followed up. Decisional capacity, willingness to participate, and cognitive and clinical scores were assessed at baseline and after receiving the computer-assisted enhanced consent. With sufficient cueing, patients with impaired research-related decision-making capacity at baseline were able to display enough understanding of the consent form. Patient satisfaction and willingness to participate also increased at follow-up. Implications of these results for clinical practice and medical research involving people with mental disorders are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Capacity Adequacy and Revenue Sufficiency in Electricity Markets With Wind Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Todd; Botterud, Audun
2015-05-01
We present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, as well as periodic unit commitment and dispatch. The model is applied to analyze the impact of increasing wind power capacity on the optimal generation mix and the profitability of thermal generators. In a case study, we find that increasing wind penetration reduces energy prices while the prices for operating reserves increase. Moreover, scarcity pricing for operating reserves through reserve shortfall penalties significantly impacts the prices and profitability of thermal generators. Without scarcity pricing, no thermal units are profitable; however, scarcity pricing can ensure profitability for peaking units at high wind penetration levels. Capacity payments can also ensure profitability, but the payments required for baseload units to break even increase with the amount of wind power. The results indicate that baseload units are most likely to experience revenue sufficiency problems when wind penetration increases, and new baseload units are only developed when natural gas prices are high and wind penetration is low.
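As an illustration of the general kind of model described here, the sketch below sets up a toy capacity-expansion-plus-dispatch MIP in PuLP; the candidate generators, load profile, and cost figures are invented for the example and are not taken from the study.

```python
import pulp

# Hypothetical candidates: (investment cost $/MW, variable cost $/MWh, unit size MW).
gens = {"baseload": (900_000, 20, 400), "peaker": (300_000, 80, 100)}
hours = range(4)                            # tiny representative load profile
load = {0: 500, 1: 750, 2: 900, 3: 650}     # MW

prob = pulp.LpProblem("toy_expansion", pulp.LpMinimize)
build = {g: pulp.LpVariable(f"build_{g}", lowBound=0, cat="Integer") for g in gens}
disp = {(g, t): pulp.LpVariable(f"disp_{g}_{t}", lowBound=0) for g in gens for t in hours}

# Objective: investment cost of built units plus dispatch cost over the sample hours.
prob += pulp.lpSum(gens[g][0] * gens[g][2] * build[g] for g in gens) + \
        pulp.lpSum(gens[g][1] * disp[g, t] for g in gens for t in hours)

for t in hours:
    # Meet load in every hour.
    prob += pulp.lpSum(disp[g, t] for g in gens) >= load[t]
    for g in gens:
        # Dispatch limited by built capacity.
        prob += disp[g, t] <= gens[g][2] * build[g]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({g: int(build[g].value()) for g in gens})
```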
The impact of technological change on census taking.
Brackstone, G J
1984-01-01
The increasing costs of traditional census collection methods have forced census administrators to look at the possibility of using administrative record systems to obtain population data. This article looks at the technological developments which have taken place in the last decade, and how they may affect data collection for the 1990 census. Because it is important to allow sufficient development and testing time for potential automated methods and technologies, it is not too soon to look at the trends resulting from technological advances and their implications for census data collection. These trends are: 1) the declining ratio of computing costs to manpower costs; 2) the increasing ratio of power and capacity of computers to their physical size; 3) declining data storage costs; 4) the increasing public acceptance of computers; 5) the increasing workforce familiarity with computers; and 6) the growing interactive computing capacity. Traditional use of computers in government data gathering operations was primarily in the processing stage. Now the application of these trends to census material may influence all aspects of the process, from questionnaire design and production to data analysis. Examples include the production of high-quality maps for geographic frameworks, optical readers for data entry, the ability to provide users with a final database as well as printed output, and quicker dissemination of data results. Although these options exist, just like the use of administrative records for statistical purposes, they must be carefully analysed in the context of the purposes for which they were created. Administrative records also have limitations: definition, coverage, and quality problems could bias statistical data derived from them. Perhaps they should be used as potential complementary sources of data, and not as replacements for census data. Influencing the evolution of these administrative records will help increase their chances of being used for future census information.
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
EU Strategies of Integrating ICT into Initial Teacher Training
ERIC Educational Resources Information Center
Garapko, Vitaliya
2013-01-01
Education and learning are strongly linked with society and its evolution and knowledge. In the field of formal education, ICTs are increasingly deployed as tools to extend the learner's capacity to perceive, understand and communicate, as seen in the increase in online learning programs and the use of the computer as a learning support tool in…
A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU
NASA Astrophysics Data System (ADS)
Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha
2018-03-01
Since the Graphics Processing Unit (GPU) has strong floating-point compute capability and high memory bandwidth for data parallelism, it has been widely used in common computing areas such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of programming, brings great opportunities to CFD. There are three different modes for parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite. To make full use of both the GPUs and CPUs, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, which demonstrates that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow, but it can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.
Microprocessor-Based Systems Control for the Rigidized Inflatable Get-Away-Special Experiment
2004-03-01
communications and faster data throughput increase, satellites are becoming larger. Larger satellite antennas help to provide the needed gain to ... increase communications in space. Compounding the performance and size trade-offs are the payload weight and size limits imposed by the launch vehicles ... increased communications capacity, and reduce launch costs. This thesis develops and implements the computer control system and power system to
NASA Astrophysics Data System (ADS)
Klamerus-Iwan, Anna; Błońska, Ewa
2018-04-01
The canopy storage capacity (S) is a major component of the surface water balance. We analysed the relationship between the tree canopy water storage capacity and leaf wettability under changing simulated rainfall temperature. We estimated the effect of the rain temperature change on the canopy storage capacity and the contact angle of leaf and needle surfaces based on two scenarios. Six dominant forest trees were analysed: English oak (Quercus robur L.), common beech (Fagus sylvatica L.), small-leaved lime (Tilia cordata Mill.), silver fir (Abies alba), Scots pine (Pinus sylvestris L.), and Norway spruce (Picea abies L.). Twigs of these species were collected from Krynica Zdrój, that is, the Experimental Forestry unit of the University of Agriculture in Cracow (southern Poland). Experimental analyses (simulations of precipitation) were performed in a laboratory under controlled conditions. The canopy storage capacity and leaf wettability classification were determined at 12 water temperatures, and a practical calculator to compute changes of S and contact angles of droplets was developed. Among all species, an increase of the rainfall temperature by 0.7 °C decreases the contact angle between leaf and needle surfaces by 2.41° and increases the canopy storage capacity by 0.74 g g-1; an increase of the rain temperature by 2.7 °C decreases the contact angle by 9.29° and increases the canopy storage capacity by 2.85 g g-1. A decreased contact angle between a water droplet and the leaf surface indicates increased wettability. Thus, our results show that an increased temperature increases leaf wettability in all examined species. The comparison of different species implies that the water temperature has the strongest effect on spruce and the weakest effect on oak. These data indicate that the rainfall temperature influences the canopy storage capacity.
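A minimal sketch of the kind of calculator described above, assuming a simple linear interpolation between the two reported all-species scenarios (+0.7 °C and +2.7 °C); the actual calculator is species-specific and is not reproduced here.

```python
def canopy_response(delta_t_c):
    """Estimate the change in canopy storage capacity S (g/g) and in droplet
    contact angle (degrees) for a rain temperature increase of delta_t_c (C).

    Linear fit through the two all-species averages reported in the abstract:
    +0.7 C -> (+0.74 g/g, -2.41 deg) and +2.7 C -> (+2.85 g/g, -9.29 deg).
    """
    slope_s = (2.85 - 0.74) / (2.7 - 0.7)            # g/g per deg C
    slope_angle = (-9.29 - (-2.41)) / (2.7 - 0.7)    # deg per deg C
    delta_s = 0.74 + slope_s * (delta_t_c - 0.7)
    delta_angle = -2.41 + slope_angle * (delta_t_c - 0.7)
    return delta_s, delta_angle

print(canopy_response(1.5))  # e.g. a hypothetical +1.5 C warmer rainfall scenario
```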
Simulation model for port shunting yards
NASA Astrophysics Data System (ADS)
Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.
2016-08-01
Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes and shunting yards. However, the specificity of port shunting yards raises several problems, such as: limited access, since these are terminus stations of the rail network; the input/output of large transit cargo flows relative to the infrequent departure/arrival of ships; and limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that answer these problems. The paper proposes a simulation model developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. The principal aspects of shunting yards and adequate measures to increase their transit capacity are investigated. The operating capacity of the shunting yard sub-system is assessed taking into consideration the required operating standards, and measures of performance (e.g. waiting time for freight wagons, number of railway lines in the station, storage area, etc.) of the railway station are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.
Badenhorst, Anna; Mansoori, Parisa; Chan, Kit Yee
2016-06-01
The past two decades have seen a large increase in investment in global public health research. There is a need for increased coordination and accountability, particularly in understanding where funding is being allocated and who has capacity to perform research. In this paper, we aim to assess global, regional, national and sub-national capacity for public health research and how it is changing over time in different parts of the world. To allow comparisons of regions, countries and universities/research institutes over time, we relied on Web of Science(TM) database and used Hirsch (h) index based on 5-year-periods (h5). We defined articles relevant to public health research with 98% specificity using the combination of search terms relevant to public health, epidemiology or meta-analysis. Based on those selected papers, we computed h5 for each country of the world and their main universities/research institutes for these 5-year time periods: 1996-2000, 2001-2005 and 2006-2010. We computed h5 with a 3-year-window after each time period, to allow citations from more recent years to accumulate. Among the papers contributing to h5-core, we explored a topic/disease under investigation, "instrument" of health research used (eg, descriptive, discovery, development or delivery research); and universities/research institutes contributing to h5-core. Globally, the majority of public health research has been conducted in North America and Europe, but other regions (particularly Eastern Mediterranean and South-East Asia) are showing greater improvement rate and are rapidly gaining capacity. Moreover, several African nations performed particularly well when their research output is adjusted by their gross domestic product (GDP). In the regions gaining capacity, universities are contributing more substantially to the h-core publications than other research institutions. In all regions of the world, the topics of articles in h-core are shifting from communicable to non-communicable diseases (NCDs). There is also a trend of reduction in "discovery" research and increase in "delivery" research. Funding agencies and research policy makers should recognise nations where public health research capacity is increasing. These countries are worthy of increased investment in order to further increase the production of high quality local research and continue to develop their research capacity. Similarly, universities that contribute substantially to national research capacity should be recognised and supported. Biomedical journals should also take notice to ensure equity in peer-review process and provide researchers from all countries an equal opportunity to publish high-quality research and reduce financial barriers to accessing these journals.
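As a concrete illustration of the h5 metric used here, a minimal sketch computing an h-index from a list of citation counts; for h5, the list would be restricted to papers published in one 5-year window, with citations counted through the 3-year follow-up window.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example with hypothetical citation counts for one country's papers in a 5-year window.
print(h_index([48, 33, 30, 12, 9, 7, 7, 3, 1, 0]))  # -> 7
```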
Badenhorst, Anna; Mansoori, Parisa; Chan, Kit Yee
2016-01-01
Background The past two decades have seen a large increase in investment in global public health research. There is a need for increased coordination and accountability, particularly in understanding where funding is being allocated and who has capacity to perform research. In this paper, we aim to assess global, regional, national and sub–national capacity for public health research and how it is changing over time in different parts of the world. Methods To allow comparisons of regions, countries and universities/research institutes over time, we relied on Web of ScienceTM database and used Hirsch (h) index based on 5–year–periods (h5). We defined articles relevant to public health research with 98% specificity using the combination of search terms relevant to public health, epidemiology or meta–analysis. Based on those selected papers, we computed h5 for each country of the world and their main universities/research institutes for these 5–year time periods: 1996–2000, 2001–2005 and 2006–2010. We computed h5 with a 3–year–window after each time period, to allow citations from more recent years to accumulate. Among the papers contributing to h5–core, we explored a topic/disease under investigation, “instrument” of health research used (eg, descriptive, discovery, development or delivery research); and universities/research institutes contributing to h5–core. Results Globally, the majority of public health research has been conducted in North America and Europe, but other regions (particularly Eastern Mediterranean and South–East Asia) are showing greater improvement rate and are rapidly gaining capacity. Moreover, several African nations performed particularly well when their research output is adjusted by their gross domestic product (GDP). In the regions gaining capacity, universities are contributing more substantially to the h–core publications than other research institutions. In all regions of the world, the topics of articles in h–core are shifting from communicable to non–communicable diseases (NCDs). There is also a trend of reduction in “discovery” research and increase in “delivery” research. Conclusion Funding agencies and research policy makers should recognise nations where public health research capacity is increasing. These countries are worthy of increased investment in order to further increase the production of high quality local research and continue to develop their research capacity. Similarly, universities that contribute substantially to national research capacity should be recognised and supported. Biomedical journals should also take notice to ensure equity in peer–review process and provide researchers from all countries an equal opportunity to publish high–quality research and reduce financial barriers to accessing these journals. PMID:27350875
Biophysical constraints on the computational capacity of biochemical signaling networks
NASA Astrophysics Data System (ADS)
Wang, Ching-Hao; Mehta, Pankaj
Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
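Cover's function-counting theorem, referenced above, states that a perceptron with N inputs can realize C(P, N) = 2 * sum_{k=0}^{N-1} binom(P-1, k) of the 2^P possible dichotomies of P points in general position. A small sketch of that count and the separable fraction (the specific P and N values below are only for illustration):

```python
from math import comb

def cover_count(p, n):
    """Number of linearly separable dichotomies of p points in general
    position in n dimensions (Cover's function-counting theorem)."""
    return 2 * sum(comb(p - 1, k) for k in range(n))

def separable_fraction(p, n):
    return cover_count(p, n) / 2**p

# Up to p = 2n all (or half of all) dichotomies are separable; beyond that
# the fraction drops sharply, which is the capacity bound alluded to above.
for p in (5, 10, 20, 40):
    print(p, separable_fraction(p, n=10))
```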
Schalk, Gerwin
2009-01-01
The theoretical groundwork of the 1930’s and 1940’s and the technical advance of computers in the following decades provided the basis for dramatic increases in human efficiency. While computers continue to evolve, and we can still expect increasing benefits from their use, the interface between humans and computers has begun to present a serious impediment to full realization of the potential payoff. This article is about the theoretical and practical possibility that direct communication between the brain and the computer can be used to overcome this impediment by improving or augmenting conventional forms of human communication. It is about the opportunity that the limitations of our body’s input and output capacities can be overcome using direct interaction with the brain, and it discusses the assumptions, possible limitations, and implications of a technology that I anticipate will be a major source of pervasive changes in the coming decades. PMID:18310804
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which in the past has pioneered the field of flow simulation. Over time, CFD has progressed along with computing power: numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are routinely computed now, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources changed to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.
Homeostatic plasticity for single node delay-coupled reservoir computing.
Toutounji, Hazem; Schumacher, Johannes; Pipa, Gordon
2015-06-01
Supplementing a differential equation with delays results in an infinite-dimensional dynamical system. This property provides the basis for a reservoir computing architecture, where the recurrent neural network is replaced by a single nonlinear node, delay-coupled to itself. Instead of the spatial topology of a network, subunits in the delay-coupled reservoir are multiplexed in time along one delay span of the system. The computational power of the reservoir is contingent on this temporal multiplexing. Here, we learn optimal temporal multiplexing by means of a biologically inspired homeostatic plasticity mechanism. Plasticity acts locally and changes the distances between the subunits along the delay, depending on how responsive these subunits are to the input. After analytically deriving the learning mechanism, we illustrate its role in improving the reservoir's computational power. To this end, we investigate, first, the increase of the reservoir's memory capacity. Second, we predict a NARMA-10 time series, showing that plasticity reduces the normalized root-mean-square error by more than 20%. Third, we discuss plasticity's influence on the reservoir's input-information capacity, the coupling strength between subunits, and the distribution of the readout coefficients.
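For reference, the NARMA-10 benchmark mentioned above is generated by the standard recurrence y(t+1) = 0.3 y(t) + 0.05 y(t) Σ_{i=0..9} y(t-i) + 1.5 u(t-9) u(t) + 0.1, with inputs u drawn uniformly from [0, 0.5]. A minimal sketch of the series generator and of the normalized root-mean-square error used to score a readout (the delay-coupled reservoir itself is not reproduced here):

```python
import numpy as np

def narma10(length, seed=0):
    """Generate a NARMA-10 input/target pair (standard benchmark recurrence)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=length)
    y = np.zeros(length)
    for t in range(9, length - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

def nrmse(target, prediction):
    """Normalized root-mean-square error, the measure behind the ~20% figure."""
    return np.sqrt(np.mean((target - prediction) ** 2) / np.var(target))

u, y = narma10(2000)
print(nrmse(y[10:], np.roll(y, 1)[10:]))  # naive one-step-behind baseline
```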
The Next Generation of Personal Computers.
ERIC Educational Resources Information Center
Crecine, John P.
1986-01-01
Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase total capacity. In 2014, the development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology, with genome sequencing analysis, and astrophysics, with cosmic ray analysis and antimatter and dark matter searches.
Optoelectronic Terminal-Attractor-Based Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Barhen, Jacob; Farhat, Nabil H.
1994-01-01
Report presents theoretical and experimental study of optically and electronically addressable optical implementation of artificial neural network that performs associative recall. Shows by computer simulation that terminal-attractor-based associative memory can have perfect convergence in associative retrieval and increased storage capacity. Spurious states reduced by exploiting terminal attractors.
Binder, Kyle W; Murfee, Walter L; Song, Ji; Laughlin, M Harold; Price, Richard J
2007-01-01
Exercise training is known to enhance skeletal muscle blood flow capacity, with high-intensity interval sprint training (IST) primarily affecting muscles with a high proportion of fast twitch glycolytic fibers. The objective of this study was to determine the relative contributions of new arteriole formation and lumenal arteriolar remodeling to enhanced flow capacity and the impact of these adaptations on local microvascular hemodynamics deep within the muscle. The authors studied arteriolar adaptation in the white/mixed-fiber portion of gastrocnemius muscles of IST (6 bouts of running/day; 2.5 min/bout; 60 m/min speed; 15% grade; 4.5 min rest between bouts; 5 training days/wk; 10 wks total) and sedentary (SED) control rats using whole-muscle Microfil casts. Dimensional and topological data were then used to construct a series of computational hemodynamic network models that incorporated physiological red blood cell distributions and hematocrit and diameter dependent apparent viscosities. In comparison to SED controls, IST elicited a significant increase in arterioles/order in the 3A through 6A generations. Predicted IST and SED flows through the 2A generation agreed closely with in vivo measurements made in a previous study, illustrating the accuracy of the model. IST shifted the bulk of the pressure drop across the network from the 3As to the 4As and 5As, and flow capacity increased from 0.7 mL/min in SED to 1.5 mL/min in IST when a driving pressure of 80 mmHg was applied. The primary adaptation to IST is an increase in arterioles in the 3A through 6A generations, which, in turn, creates an approximate doubling of flow capacity and a deeper penetration of high pressure into the arteriolar network.
Simmering, Vanessa R
2016-09-01
Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real-time stability. The monograph concludes with implications for understanding memory, behavior, and development in a broader range of cognitive development. © 2016 The Society for Research in Child Development, Inc.
NASA Technical Reports Server (NTRS)
DeGaudenzi, Riccardo; Giannetti, Filippo
1995-01-01
The downlink of a satellite-mobile personal communication system employing power-controlled Direct Sequence Code Division Multiple Access (DS-CDMA) and exploiting satellite-diversity is analyzed and its performance compared with a more traditional communication system utilizing single satellite reception. The analytical model developed has been thoroughly validated by means of extensive Monte Carlo computer simulations. It is shown how the capacity gain provided by diversity reception shrinks considerably in the presence of increasing traffic or in the case of light shadowing conditions. Moreover, the quantitative results tend to indicate that to combat system capacity reduction due to intra-system interference, no more than two satellites shall be active over the same region. To achieve higher system capacity, differently from terrestrial cellular systems, Multi-User Detection (MUD) techniques are likely to be required in the mobile user terminal, thus considerably increasing its complexity.
NASA Astrophysics Data System (ADS)
Jiang, Zhong-Yuan; Ma, Jian-Feng
Existing routing strategies such as the global dynamic routing [X. Ling, M. B. Hu, R. Jiang and Q. S. Wu, Phys. Rev. E 81, 016113 (2010)] can achieve very high traffic capacity at the cost of extremely long packet traveling delay. In many real complex networks, especially for real-time applications such as instant communication software, extremely long packet traveling times are unacceptable. In this work, we propose to assign a finite Time-to-Live (TTL) parameter to each packet. To guarantee that every packet arrives at its destination within its TTL, we assume that a packet is retransmitted by its source once its TTL expires. We employ source routing mechanisms in the traffic model to avoid the routing flaps induced by the global dynamic routing. We conduct extensive simulations to verify the proposed mechanisms. With small TTL, the effects of packet retransmission on network traffic capacity are obvious, and a phase transition from the flow-free state to the congested state occurs. To reduce the computation frequency of the routing table, we employ a computing cycle Tc within which the routing table is recomputed once. The simulation results show that the traffic capacity decreases with increasing Tc. Our work provides a good insight into the effects of packet retransmission with finite packet lifetime on traffic capacity in scale-free networks.
Lithium ion rechargeable systems studies
NASA Astrophysics Data System (ADS)
Levy, Samuel C.; Lasasse, Robert R.; Cygan, Randall T.; Voigt, James A.
Lithium ion systems, although relatively new, have attracted much interest worldwide. Their high energy density, long cycle life and relative safety, compared with metallic lithium rechargeable systems, make them prime candidates for powering portable electronic equipment. Although lithium ion cells are presently used in a few consumer devices, e.g., portable phones, camcorders, and laptop computers, there is room for considerable improvement in their performance. Specific areas that need to be addressed include: (1) carbon anode: increase reversible capacity and minimize passivation; (2) cathode: extend cycle life, improve rate capability, and increase capacity. There are several programs ongoing at Sandia National Laboratories which are investigating means of achieving the stated objectives in these specific areas. This paper will review these programs.
Jaarsma, Tiny; Klompstra, Leonie; Ben Gal, Tuvia; Boyne, Josiane; Vellone, Ercole; Bäck, Maria; Dickstein, Kenneth; Fridlund, Bengt; Hoes, Arno; Piepoli, Massimo F; Chialà, Oronzo; Mårtensson, Jan; Strömberg, Anna
2015-07-01
Exercise is known to be beneficial for patients with heart failure (HF), and these patients should therefore be routinely advised to exercise and to be or to become physically active. Despite the beneficial effects of exercise such as improved functional capacity and favourable clinical outcomes, the level of daily physical activity in most patients with HF is low. Exergaming may be a promising new approach to increase the physical activity of patients with HF at home. The aim of this study is to determine the effectiveness of the structured introduction and access to a Wii game computer in patients with HF to improve exercise capacity and level of daily physical activity, to decrease healthcare resource use, and to improve self-care and health-related quality of life. A multicentre randomized controlled study with two treatment groups will include 600 patients with HF. In each centre, patients will be randomized to either motivational support only (control) or structured access to a Wii game computer (Wii). Patients in the control group will receive advice on physical activity and will be contacted by four telephone calls. Patients in the Wii group also will receive advice on physical activity along with a Wii game computer, with instructions and training. The primary endpoint will be exercise capacity at 3 months as measured by the 6 min walk test. Secondary endpoints include exercise capacity at 6 and 12 months, level of daily physical activity, muscle function, health-related quality of life, and hospitalization or death during the 12 months follow-up. The HF-Wii study is a randomized study that will evaluate the effect of exergaming in patients with HF. The findings can be useful to healthcare professionals and improve our understanding of the potential role of exergaming in the treatment and management of patients with HF. NCT01785121. © 2015 The Authors. European Journal of Heart Failure © 2015 European Society of Cardiology.
Ecological forecasts: An emerging imperative
James S. Clark; Steven R. Carpenter; Mary Barber; Scott Collins; Andy Dobson; Jonathan A. Foley; David M. Lodge; Mercedes Pascual; Roger Pielke; William Pizer; Cathy Pringle; Walter V. Reid; Kenneth A. Rose; Osvaldo Sala; William H. Schlesinger; Diana H. Wall; David Wear
2001-01-01
Planning and decision-making can be improved by access to reliable forecasts of ecosystem state, ecosystem services, and natural capital. Availability of new data sets, together with progress in computation and statistics, will increase our ability to forecast ecosystem change. An agenda that would lead toward a capacity to produce, evaluate, and communicate forecasts...
DOT National Transportation Integrated Search
1978-09-01
The requirements for a navigation guidance system which will effect an increase in the ship processing capacity of the Saint Lawrence Seaway (Lake Ontario to Montreal, Quebec) are developed. The requirements include a specification of system position...
ERIC Educational Resources Information Center
Pu, Minran
2009-01-01
The purpose of the study was to investigate the relationship between college EFL students' autonomous learning capacity and motivation in using web-based Computer-Assisted Language Learning (CALL) in China. This study included three questionnaires: the student background questionnaire, the questionnaire on student autonomous learning capacity, and…
Capacity planning for maternal-fetal medicine using discrete event simulation.
Ferraro, Nicole M; Reamer, Courtney B; Reynolds, Thomas A; Howell, Lori J; Moldenhauer, Julie S; Day, Theodore Eugene
2015-07-01
Maternal-fetal medicine is a rapidly growing field requiring collaboration from many subspecialties. We provide an evidence-based estimate of capacity needs for our clinic, as well as demonstrate how simulation can aid in capacity planning in similar environments. A Discrete Event Simulation of the Center for Fetal Diagnosis and Treatment and Special Delivery Unit at The Children's Hospital of Philadelphia was designed and validated. This model was then used to determine the time until demand overwhelms inpatient bed availability under increasing capacity. No significant deviation was found between historical inpatient censuses and simulated censuses for the validation phase (p = 0.889). Prospectively increasing capacity was found to delay time to balk (the inability of the center to provide bed space for a patient in need of admission). With current capacity, the model predicts mean time to balk of 276 days. Adding three beds delays mean time to first balk to 762 days; an additional six beds to 1,335 days. Providing sufficient access is a patient safety issue, and good planning is crucial for targeting infrastructure investments appropriately. Computer-simulated analysis can provide an evidence base for both medical and administrative decision making in a complex clinical environment. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
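A minimal sketch of the kind of discrete event simulation described here: beds are a finite resource, an arrival that finds no free bed balks, and the simulation reports the time of the first balk. The arrival rate, length of stay, and bed counts below are invented placeholders, not the Center's actual parameters.

```python
import heapq
import random

def days_until_first_balk(beds, arrival_rate_per_day, mean_stay_days,
                          horizon_days=2000, seed=1):
    """Simulate admissions to a unit with `beds` beds; return the day of the
    first balk (an arrival with no free bed), or None if none occurs."""
    rng = random.Random(seed)
    discharges = []          # min-heap of scheduled discharge times
    t = 0.0
    while t < horizon_days:
        t += rng.expovariate(arrival_rate_per_day)      # next arrival
        while discharges and discharges[0] <= t:        # free up beds
            heapq.heappop(discharges)
        if len(discharges) >= beds:
            return t                                    # balk: unit is full
        heapq.heappush(discharges, t + rng.expovariate(1.0 / mean_stay_days))
    return None

for extra in (0, 3, 6):   # mirror the "add three / six beds" scenarios
    print(extra, days_until_first_balk(beds=10 + extra,
                                       arrival_rate_per_day=2.0,
                                       mean_stay_days=4.0))
```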
The Practical Obstacles of Data Transfer: Why researchers still love scp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully-slow single stream transfer methods such as scp to avoid the complexity of using multiple stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.
Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System.
Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi
2015-01-01
The transmission performance for a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) systems in a computer simulation and field experiment is described. In computer simulation, a MU-MIMO transmission system can be realized by using the block diagonalization (BD) algorithm, and each user can receive signals without any signal interference from other users. The bit error rate (BER) performance and channel capacity in accordance with modulation schemes and the number of streams were simulated in a spatially correlated multipath fading environment. Furthermore, we propose a method for evaluating the transmission performance for this downlink mobile WiMAX system in this environment by using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system developed in Azumino City in Japan. In comparison with the simulated and experimented results, the measured maximum throughput performance in the downlink had almost the same performance as the simulated throughput. It was confirmed that the experimental mobile WiMAX system for MU-MIMO transmission successfully increased the total channel capacity of the system.
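The block diagonalization (BD) idea referenced here chooses each user's precoder from the null space of the other users' stacked channels, so that each user sees no inter-user interference. A small numpy sketch under assumed dimensions (4 transmit antennas, two 2-antenna users, i.i.d. Rayleigh channels), not the experimental WiMAX configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, users, nr = 4, 2, 2
H = [rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
     for _ in range(users)]   # i.i.d. Rayleigh channel per user

def bd_precoder(H, k):
    """Precoder for user k: an orthonormal basis of the null space of the
    other users' stacked channels (classic block diagonalization)."""
    H_others = np.vstack([H[j] for j in range(len(H)) if j != k])
    _, s, vh = np.linalg.svd(H_others)
    rank = int(np.sum(s > 1e-10))
    return vh[rank:].conj().T          # nt x (nt - rank)

for k in range(users):
    W = bd_precoder(H, k)
    leak = max(np.abs(H[j] @ W).max() for j in range(users) if j != k)
    print(f"user {k}: residual inter-user interference {leak:.2e}")  # ~0 by construction
```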
Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System
Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi
2015-01-01
The transmission performance for a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) systems in a computer simulation and field experiment is described. In computer simulation, a MU-MIMO transmission system can be realized by using the block diagonalization (BD) algorithm, and each user can receive signals without any signal interference from other users. The bit error rate (BER) performance and channel capacity in accordance with modulation schemes and the number of streams were simulated in a spatially correlated multipath fading environment. Furthermore, we propose a method for evaluating the transmission performance for this downlink mobile WiMAX system in this environment by using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system developed in Azumino City in Japan. In comparison with the simulated and experimented results, the measured maximum throughput performance in the downlink had almost the same performance as the simulated throughput. It was confirmed that the experimental mobile WiMAX system for MU-MIMO transmission successfully increased the total channel capacity of the system. PMID:26421311
Multi-Hop Link Capacity of Multi-Route Multi-Hop MRC Diversity for a Virtual Cellular Network
NASA Astrophysics Data System (ADS)
Daou, Imane; Kudoh, Eisuke; Adachi, Fumiyuki
In a virtual cellular network (VCN), proposed for high-speed mobile communications, the signal transmitted from a mobile terminal is received by some wireless ports distributed in each virtual cell and relayed to the central port that acts as a gateway to the core network. In this paper, we apply multi-route MHMRC diversity in order to decrease the transmit power and increase the multi-hop link capacity. The transmit power, the interference power and the link capacity are evaluated for a DS-CDMA multi-hop VCN by computer simulation. The multi-route MHMRC diversity can be applied not only to DS-CDMA but also to other access schemes (i.e., MC-CDMA, OFDM, etc.).
ERIC Educational Resources Information Center
Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey
2017-01-01
A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…
Does Poor Handwriting Conceal Literacy Potential in Primary School Children?
ERIC Educational Resources Information Center
McCarney, Debra; Peters, Lynne; Jackson, Sarah; Thomas, Marie; Kirby, Amanda
2013-01-01
Handwriting is a complex skill that, despite increasing use of computers, still plays a vital role in education. It is assumed that children will master letter formation at a relatively early stage in their school life, with handwriting fluency developing steadily until automaticity is attained. The capacity theory of writing suggests that as…
The DYNES Instrument: A Description and Overview
NASA Astrophysics Data System (ADS)
Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi
2012-12-01
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and leads to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high capacity network flows, as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predicable delivery patterns. This paper presents the DYNES instrument, an NSF funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on University Campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
Unbounded number of channel uses may be required to detect quantum capacity.
Cubitt, Toby; Elkouss, David; Matthews, William; Ozols, Maris; Pérez-García, David; Strelchuk, Sergii
2015-03-31
Transmitting data reliably over noisy communication channels is one of the most important applications of information theory, and is well understood for channels modelled by classical physics. However, when quantum effects are involved, we do not know how to compute channel capacities. This is because the formula for the quantum capacity involves maximizing the coherent information over an unbounded number of channel uses. In fact, entanglement across channel uses can even increase the coherent information from zero to non-zero. Here we study the number of channel uses necessary to detect positive coherent information. In all previous known examples, two channel uses already sufficed. It might be that only a finite number of channel uses is always sufficient. We show that this is not the case: for any number of uses, there are channels for which the coherent information is zero, but which nonetheless have capacity.
Expanded all-optical programmable logic array based on multi-input/output canonical logic units.
Lei, Lei; Dong, Jianji; Zou, Bingrong; Wu, Zhao; Dong, Wenchan; Zhang, Xinliang
2014-04-21
We present an expanded all-optical programmable logic array (O-PLA) using multi-input and multi-output canonical logic units (CLUs) generation. Based on four-wave mixing (FWM) in highly nonlinear fiber (HNLF), two-input and three-input CLUs are simultaneously achieved in five different channels with an operation speed of 40 Gb/s. Clear temporal waveforms and wide open eye diagrams are successfully observed. The effectiveness of the scheme is validated by extinction ratio and optical signal-to-noise ratio measurements. The computing capacity, defined as the total amount of logic functions achieved by the O-PLA, is discussed in detail. For a three-input O-PLA, the computing capacity of the expanded CLUs-PLA is more than two times as large as that of the standard CLUs-PLA, and this multiple will increase to more than three and a half as the idlers are individually independent.
Tsivion, Ehud; Mason, Jarad A.; Gonzalez, Miguel. I.; ...
2016-03-29
In order to store natural gas (NG) inexpensively at adequate densities for use as a fuel in the transportation sector, new porous materials are being developed. Our work uses computational methods to explore strategies for improving the usable methane storage capacity of adsorbents, including metal-organic frameworks (MOFs), that feature open-metal sites incorporated into their structure by postsynthetic modification. The adsorption of CH4 on several open-metal sites is studied by calculating geometries and adsorption energies and analyzing the relevant interaction factors. Approximate site-specific adsorption isotherms are obtained, and the open-metal site contribution to the overall CH4 usable capacity is evaluated. It is found that sufficient ionic character is required, as exemplified by the strong CH4 affinities of 2,2'-bipyridine-CaCl2 and Mg,Ca-catecholate. In addition, it is found that the capacity of a single metal site depends not only on its affinity but also on its geometry, where trigonal or "bent" low-coordinate exposed sites can accommodate three or four methane molecules, as exemplified by Ca-decorated nitrilotriacetic acid. The effect of residual solvent molecules at the open-metal site is also explored, with some positive conclusions. Not only can residual solvent stabilize the open-metal site; surprisingly, solvent molecules do not necessarily reduce CH4 affinity, but can contribute to increased usable capacity by modifying adsorption interactions.
Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith
2009-01-01
This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.
More efficient optimization of long-term water supply portfolios
NASA Astrophysics Data System (ADS)
Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.
2009-03-01
The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
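The control variate idea used here replaces a raw Monte Carlo estimate of an expected cost Y with Y_cv = Y + c (E[X] - X), where X is a correlated stochastic input with known mean (e.g., simulated inflow or demand) and c = Cov(Y, X)/Var(X). A minimal, self-contained sketch with a synthetic cost model; the coupled hydrologic-economic simulator itself is obviously not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
demand = rng.normal(100.0, 15.0, n)              # stochastic input with known mean 100
noise = rng.normal(0.0, 5.0, n)
cost = 2.0 * demand + 0.01 * demand**2 + noise   # stand-in for simulated portfolio cost

# Plain Monte Carlo estimate of expected cost.
plain = cost.mean()

# Control variate correction using the known mean of the demand input.
c = np.cov(cost, demand)[0, 1] / np.var(demand, ddof=1)
cv = cost + c * (100.0 - demand)
print(f"plain estimate:            {plain:.2f} (sample variance {cost.var():.1f})")
print(f"control variate estimate:  {cv.mean():.2f} (sample variance {cv.var():.1f})")
```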
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, D. H.
1985-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
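A minimal Monte Carlo sketch of the contention effect analyzed here: a stream of strided vector accesses is issued one per CPU tick to interleaved banks, and an access stalls while its target bank is still reserved from a previous reference. The bank count, reservation time, and strides are illustrative parameters, not those of any particular machine.

```python
def contention_slowdown(n_banks, reservation_ticks, stride, n_accesses=100_000):
    """Average ticks per element for a strided vector access stream, where each
    bank stays busy for `reservation_ticks` after being referenced."""
    busy_until = [0] * n_banks     # tick at which each bank becomes free again
    t = 0
    for i in range(n_accesses):
        bank = (i * stride) % n_banks
        t = max(t + 1, busy_until[bank])   # issue one access per tick, stall if bank busy
        busy_until[bank] = t + reservation_ticks
    return t / n_accesses

# Stride 1 spreads accesses over all banks; larger power-of-two strides reuse
# few banks and expose the long reservation time as stalls.
for stride in (1, 2, 8):
    print(stride, contention_slowdown(n_banks=16, reservation_ticks=8, stride=stride))
```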
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, David H.
1987-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
Electricity market design for generator revenue sufficiency with increased variable generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Todd; Botterud, Audun
Here, we present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, and hourly unit commitment and dispatch in a power system. The impact of increasing wind power capacity on the optimal generation mix and generator profitability is analyzed for a test case that approximates the electricity market in Texas (ERCOT). We analyze three market policies that may support resource adequacy: Operating Reserve Demand Curves (ORDC), Fixed Reserve Scarcity Prices (FRSP) and fixed capacity payments (CP). Optimal expansion plans are comparable between the ORDC and FRSP implementations, while capacity payments may result in additional new capacity. The FRSP policy leads to frequent reserve scarcity events and corresponding price spikes, while the ORDC implementation results in more continuous energy prices. Average energy prices decrease with increasing wind penetration under all policies, as do revenues for baseload and wind generators. Intermediate and peak load plants benefit from higher reserve prices and are less exposed to reduced energy prices. All else equal, an ORDC approach may be preferred to FRSP as it results in similar expansion and revenues with less extreme energy prices. A fixed CP leads to additional new flexible NGCT units, but lower profits for other technologies.
Electricity market design for generator revenue sufficiency with increased variable generation
Levin, Todd; Botterud, Audun
2015-10-01
Here, we present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, and hourly unit commitment and dispatch in a power system. The impact of increasing wind power capacity on the optimal generation mix and generator profitability is analyzed for a test case that approximates the electricity market in Texas (ERCOT). We analyze three market policies that may support resource adequacy: Operating Reserve Demand Curves (ORDC), Fixed Reserve Scarcity Prices (FRSP) and fixed capacity payments (CP). Optimal expansion plans are comparable between the ORDC and FRSP implementations, while capacity payments may result in additional new capacity. The FRSP policy leads to frequent reserve scarcity events and corresponding price spikes, while the ORDC implementation results in more continuous energy prices. Average energy prices decrease with increasing wind penetration under all policies, as do revenues for baseload and wind generators. Intermediate and peak load plants benefit from higher reserve prices and are less exposed to reduced energy prices. All else equal, an ORDC approach may be preferred to FRSP as it results in similar expansion and revenues with less extreme energy prices. A fixed CP leads to additional new flexible NGCT units, but lower profits for other technologies.
Self-Organized Service Negotiation for Collaborative Decision Making
Zhang, Bo; Zheng, Ziming
2014-01-01
This paper proposes a self-organized service negotiation method for CDM in an intelligent and automatic manner. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228
Self-organized service negotiation for collaborative decision making.
Zhang, Bo; Huang, Zhenhua; Zheng, Ziming
2014-01-01
This paper proposes a self-organized service negotiation method for CDM in an intelligent and automatic manner. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM.
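A hedged sketch of the three-part trust score in the spirit of the abstract above: a subjective belief from direct interactions, an objective reputation from public ratings, and a recommended trust from neighbours. The weights, update rules, and function names are assumptions, not the paper's formulas.

# Illustrative combination of subjective belief, objective reputation, and recommended trust.
def subjective_belief(successes, failures):
    # Beta-reputation style expectation of success probability from direct interactions
    return (successes + 1) / (successes + failures + 2)

def objective_reputation(ratings):                 # public ratings in [0, 1]
    return sum(ratings) / len(ratings) if ratings else 0.5

def recommended_trust(recommendations):            # (recommender_trust, reported_score) pairs
    total_w = sum(w for w, _ in recommendations)
    return (sum(w * s for w, s in recommendations) / total_w) if total_w else 0.5

def trust(successes, failures, ratings, recommendations, w=(0.5, 0.3, 0.2)):
    parts = (subjective_belief(successes, failures),
             objective_reputation(ratings),
             recommended_trust(recommendations))
    return sum(wi * p for wi, p in zip(w, parts))   # weighted aggregate in [0, 1]

print(trust(8, 2, [0.9, 0.7, 0.8], [(0.9, 0.85), (0.4, 0.6)]))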
Time Division Multiplexing of Semiconductor Qubits
NASA Astrophysics Data System (ADS)
Jarratt, Marie Claire; Hornibrook, John; Croot, Xanthe; Watson, John; Gardner, Geoff; Fallahi, Saeed; Manfra, Michael; Reilly, David
Readout chains, comprising resonators, amplifiers, and demodulators, are likely to be precious resources in quantum computing architectures. The potential to share readout resources is contingent on realising efficient time-division multiplexing (TDM) schemes that are compatible with quantum computing. Here, we demonstrate TDM using a GaAs quantum dot device with multiple charge sensors. Our device incorporates chip-level switches that do not load the impedance matching network. When used in conjunction with frequency multiplexing, each frequency tone addresses multiple time-multiplexed qubits, vastly increasing the capacity of a single readout line.
Positive lithiation potential on functionalized Graphene sheets
NASA Astrophysics Data System (ADS)
Chouhan, Rajiv Kumar; Raghani, Pushpa
2015-03-01
Designing lithium batteries with high capacities is a major challenge in the field of energy storage. As an alternative to the conventional graphitic anode with a capacity of ~372 mAh g-1, we look at the adsorption of lithium on 2D graphene oxide (GO) sheets. We have included van der Waals interactions in our calculations and compared with the literature, showing their importance for Li binding on graphene sheets. In contrast to the negative lithiation potential of pristine graphene sheets, we were able to obtain a positive lithiation potential by introducing functional groups such as epoxy (-O-) and hydroxyl (-OH) on graphene. The non-stoichiometric nature of GO also offers more scope to increase the lithiation potential than defect-induced 2D graphene sheets. Dramatic charge redistribution within the sheet due to the presence of highly electronegative oxygen plays an important role in increasing the capacity. Financial support from Research Corporation's Cottrell College Science award and National Science Foundation's CAREER award (DMR-1255584). Computational facilities provided by HPC center of Idaho National Laboratory.
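For readers unfamiliar with how a lithiation potential is extracted from first-principles results, the sketch below applies the standard average-voltage relation to DFT total energies; the energy values are placeholders, not the paper's calculations.

# Average lithiation potential from total energies (eV): V = -[E(host + n Li) - E(host)
# - n*E(Li bulk)] / (n e). A positive V means Li binds to the host more strongly than
# to bulk Li metal, i.e., lithiation is favourable. Numbers below are illustrative only.
def lithiation_potential(e_lithiated, e_host, e_li_bulk_per_atom, n_li):
    return -(e_lithiated - e_host - n_li * e_li_bulk_per_atom) / n_li

print(lithiation_potential(-312.40, -308.10, -1.90, 2))   # +0.25 V -> favourable (hypothetical GO host)
print(lithiation_potential(-250.10, -246.70, -1.90, 2))   # -0.20 V -> unfavourable (hypothetical pristine host)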
Building the Capacity of HBCU's for Establishing Effective Globe Partnerships
NASA Technical Reports Server (NTRS)
Bagayoko, Diola; Ford, Robert L.
2002-01-01
The special GLOBE train-the-trainer (TTT) workshop entitled "Building the Capacity of HBCUs For Establishing Effective GLOBE Partnerships" was held for the purpose of expanding GLOBE training capacity on the campuses of Historically Black Colleges and Universities (HBCUs) and community colleges (CCs). The workshop was held March 17-22, 2002 in Washington, D.C. at Howard University. It was designed to establish research and instructional collaboration between and among U.S. universities (HBCUs and CCs) and African countries. Representatives from 13 HBCUs and two community colleges took part as trainees, as did representatives from eight African countries who were financially supported by other sources. A total of 38 trainees increased their knowledge of GLOBE protocols through five days of rigorous classroom instruction, field experiences, cultural events, and computer lab sessions.
ERIC Educational Resources Information Center
Halac, Hicran Hanim; Cabuk, Alper
2013-01-01
Depending on the evolving technological possibilities, distance and online education applications have gradually gained more significance in the education system. Regarding the issues, such as advancements in the server services, disc capacity, cloud computing opportunities resulting from the increase in the number of the broadband internet users,…
ERIC Educational Resources Information Center
Beavis, Catherine; Muspratt, Sandy; Thompson, Roberta
2015-01-01
There is considerable enthusiasm in many quarters for the incorporation of digital games into the classroom, and the capacity of games to engage and challenge players, present complex representations and experiences, foster collaborative learning, and promote deep learning. But while there is increasing research documenting the progress and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, S; Rotman, D; Schwegler, E
The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflect the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
Modeling a Wireless Network for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Yaprak, Ece; Lamouri, Saad
2000-01-01
This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computers and remote instruments with the high performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.
BINDER, KYLE W.; MURFEE, WALTER L.; SONG, JI; LAUGHLIN, M. HAROLD; PRICE, RICHARD J.
2009-01-01
Objectives Exercise training is known to enhance skeletal muscle blood flow capacity, with high-intensity interval sprint training (IST) primarily affecting muscles with a high proportion of fast twitch glycolytic fibers. The objective of this study was to determine the relative contributions of new arteriole formation and lumenal arteriolar remodeling to enhanced flow capacity and the impact of these adaptations on local microvascular hemodynamics deep within the muscle. Methods The authors studied arteriolar adaptation in the white/mixed-fiber portion of gastrocnemius muscles of IST (6 bouts of running/day; 2.5 min/bout; 60 m/min speed; 15% grade; 4.5 min rest between bouts; 5 training days/wk; 10 wks total) and sedentary (SED) control rats using whole-muscle Microfil casts. Dimensional and topological data were then used to construct a series of computational hemodynamic network models that incorporated physiological red blood cell distributions and hematocrit and diameter dependent apparent viscosities. Results In comparison to SED controls, IST elicited a significant increase in arterioles/order in the 3A through 6A generations. Predicted IST and SED flows through the 2A generation agreed closely with in vivo measurements made in a previous study, illustrating the accuracy of the model. IST shifted the bulk of the pressure drop across the network from the 3As to the 4As and 5As, and flow capacity increased from 0.7 mL/min in SED to 1.5 mL/min in IST when a driving pressure of 80 mmHg was applied. Conclusions The primary adaptation to IST is an increase in arterioles in the 3A through 6A generations, which, in turn, creates an approximate doubling of flow capacity and a deeper penetration of high pressure into the arteriolar network. PMID:17454671
Evaluation of on-board hydrogen storage methods for high-speed aircraft
NASA Technical Reports Server (NTRS)
Akyurtlu, Ates; Akyurtlu, Jale F.
1991-01-01
Hydrogen is the fuel of choice for hypersonic vehicles. Its main disadvantage is its low liquid and solid density. This increases the vehicle volume and hence the drag losses during atmospheric flight. In addition, the dry mass of the vehicle is larger due to larger vehicle structure and fuel tankage. It is therefore very desirable to find a fuel system with smaller fuel storage requirements that does not substantially degrade vehicle performance. Candidate fuel systems were first screened thermodynamically with respect to their energy content and cooling capacities. To evaluate vehicle performance with different fuel systems, a simple computer model was developed to compute vehicle parameters such as vehicle volume, dry mass, effective specific impulse, and payload capacity. The results indicate that if the payload capacity (or the gross lift-off mass) is the most important criterion, only slush hydrogen and liquid hydrogen - liquid methane gel show better performance than the liquid hydrogen vehicle. If all the advantages of a smaller vehicle are considered and a more accurate mass analysis can be performed, other systems using endothermic fuels such as cyclohexane and some boranes may prove to be worthy of further consideration.
Masonry Columns Confined by Steel Fiber Composite Wraps
Borri, Antonio; Castori, Giulio; Corradi, Marco
2011-01-01
The application of steel fiber reinforced polymer (SRP) as a means of increasing the capacity of masonry columns is investigated in this study. The behavior of 23 solid-brick specimens that are externally wrapped by SRP sheets in low volumetric ratios is presented. The specimens are subjected to axial monotonic load until failure occurs. Two widely used types of masonry columns with differing cross-sections (square and octagonal) were tested in compression. It is concluded that SRP-confined masonry behaves very much like fiber reinforced polymer (FRP)-confined masonry. Confinement increases both the load-carrying capacity and the deformability of masonry almost linearly with average confining stress. A comparative analysis between experimental and theoretical values computed in compliance with the Italian Council of Research (CNR) was also developed. PMID:28879991
Masonry Columns Confined by Steel Fiber Composite Wraps.
Borri, Antonio; Castori, Giulio; Corradi, Marco
2011-01-21
The application of steel fiber reinforced polymer (SRP) as a means of increasing the capacity of masonry columns is investigated in this study. The behavior of 23 solid-brick specimens that are externally wrapped by SRP sheets in low volumetric ratios is presented. The specimens are subjected to axial monotonic load until failure occurs. Two widely used types of masonry columns with differing cross-sections (square and octagonal) were tested in compression. It is concluded that SRP-confined masonry behaves very much like fiber reinforced polymer (FRP)-confined masonry. Confinement increases both the load-carrying capacity and the deformability of masonry almost linearly with average confining stress. A comparative analysis between experimental and theoretical values computed in compliance with the Italian Council of Research (CNR) was also developed.
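The sketch below illustrates a generic linear confinement model of the type used for FRP/SRP-confined masonry, where confined strength grows linearly with the effective lateral confining stress. The coefficients, fiber properties, and section data are illustrative assumptions, not the CNR provisions or the paper's test values.

# Generic linear confinement model sketch (all coefficients and dimensions assumed).
def lateral_confining_stress(n_layers, t_fiber_mm, f_fiber_mpa, side_mm):
    # equivalent-circular approximation for a square column wrapped with SRP
    d_eq = side_mm * 2**0.5                                  # diagonal as equivalent diameter
    return 2 * n_layers * t_fiber_mm * f_fiber_mpa / d_eq    # MPa

def confined_capacity_kn(side_mm, f_m0_mpa, f_l_mpa, k1=2.0, k_eff=0.5):
    f_mc = f_m0_mpa + k1 * k_eff * f_l_mpa                   # strength gain linear in f_l
    return f_mc * side_mm**2 / 1000.0                        # axial capacity in kN

f_l = lateral_confining_stress(n_layers=1, t_fiber_mm=0.23, f_fiber_mpa=3000, side_mm=250)
print(f"f_l = {f_l:.2f} MPa,",
      f"unconfined {confined_capacity_kn(250, 8.0, 0.0):.0f} kN ->",
      f"confined {confined_capacity_kn(250, 8.0, f_l):.0f} kN")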
Capacity utilization study for aviation security cargo inspection queuing system
NASA Astrophysics Data System (ADS)
Allgood, Glenn O.; Olama, Mohammed M.; Lake, Joe E.; Brumback, Daryl
2010-04-01
In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
Capacity Utilization Study for Aviation Security Cargo Inspection Queuing System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E
In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
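A drastically simplified single-stage sketch of the kind of capacity metrics reported above: pallets arrive, are inspected by a pool of machines, and throughput, utilization, and residual capacity are computed. The arrival and service parameters are assumptions, not the airport measurements used to validate the actual multi-stage model.

# Single-stage FIFO multi-server inspection queue with simple capacity metrics.
import random

def simulate(n_machines=4, arrival_rate=10.0, service_mean=0.3, horizon=8.0):
    random.seed(1)
    t, served, busy_time = 0.0, 0, 0.0
    free_at = [0.0] * n_machines                  # when each inspection machine frees up
    while True:
        t += random.expovariate(arrival_rate)     # next pallet arrival (hours)
        if t > horizon:
            break
        start = max(t, min(free_at))              # wait for the earliest free machine
        service = random.expovariate(1.0 / service_mean)
        i = free_at.index(min(free_at))
        free_at[i] = start + service
        busy_time += service
        served += 1
    capacity = n_machines * horizon / service_mean      # pallets the pool could inspect at most
    utilization = busy_time / (n_machines * horizon)
    return served / horizon, utilization, capacity - served

throughput, util, residual = simulate()
print(f"throughput {throughput:.1f}/h, utilization {util:.0%}, residual {residual:.0f} pallets")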
A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vineyard, Craig Michael; Verzi, Stephen Joseph
As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.
Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Samuel; Baker, Gavin Matthew; Gamell, Marc
2015-10-01
Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor developed sufficiently for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion and UINTAH) to examine the feasibility of these models for inserting new programming model elements into an existing code base.
2005 White Paper on Institutional Capability Computing Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carnes, B; McCoy, M; Seager, M
This paper documents the need for a significant increase in the computing infrastructure provided to scientists working in the unclassified domains at Lawrence Livermore National Laboratory (LLNL). This need could be viewed as the next step in a broad strategy outlined in the January 2002 White Paper (UCRL-ID-147449) that bears essentially the same name as this document. Therein we wrote: 'This proposed increase could be viewed as a step in a broader strategy linking hardware evolution to applications development that would take LLNL unclassified computational science to a position of distinction if not preeminence by 2006.' This position of distinction has certainly been achieved. This paper provides a strategy for sustaining this success but will diverge from its 2002 predecessor in that it will: (1) Amplify the scientific and external success LLNL has enjoyed because of the investments made in 2002 (MCR, 11 TF) and 2004 (Thunder, 23 TF). (2) Describe in detail the nature of additional investments that are important to meet both the institutional objectives of advanced capability for breakthrough science and the scientists' clearly stated request for adequate capacity and more rapid access to moderate-sized resources. (3) Put these requirements in the context of an overall strategy for simulation science and external collaboration. While our strategy for Multiprogrammatic and Institutional Computing (M&IC) has worked well, three challenges must be addressed to assure and enhance our position. The first is that while we now have over 50 important classified and unclassified simulation codes available for use by our computational scientists, we find ourselves coping with high demand for access and long queue wait times. This point was driven home in the 2005 Institutional Computing Executive Group (ICEG) 'Report Card' to the Deputy Director for Science and Technology (DDST) Office and Computation Directorate management. The second challenge is related to the balance that should be maintained in the simulation environment. With the advent of Thunder, the institution directed a change in course from past practice. Instead of making Thunder available to the large body of scientists, as was MCR, and effectively using it as a capacity system, the intent was to make it available to perhaps ten projects so that these teams could run very aggressive problems for breakthrough science. This usage model established Thunder as a capability system. The challenge this strategy raises is that the majority of scientists have not seen an improvement in capacity computing resources since MCR, thus creating significant tension in the system. The question then is: 'How do we address the institution's desire to maintain the potential for breakthrough science and also meet the legitimate requests from the ICEG to achieve balance?' Both the capability and the capacity environments must be addressed through this one procurement. The third challenge is to reach out more aggressively to the national science community to encourage access to LLNL resources as part of a strategy for sharpening our science through collaboration. Related to this, LLNL has been unable in the past to provide access for sensitive foreign nationals (SFNs) to the Livermore Computing (LC) unclassified 'yellow' network.
Identifying some mechanism for data sharing between LLNL computational scientists and SFNs would be a first practical step in fostering cooperative, collaborative relationships with an important and growing sector of the American science community.
Long live the Data Scientist, but can he/she persist?
NASA Astrophysics Data System (ADS)
Wyborn, L. A.
2011-12-01
In recent years the fourth paradigm of data intensive science has slowly taken hold, as the increased capacity of instruments and an increasing number of instruments (in particular sensor networks) have changed how fundamental research is undertaken. Most modern scientific research involves digital capture of data directly from instruments, processing it by computer, storing the results on computers, and publishing only a small fraction of the data in hard copy publications. At the same time, the rapid increase in capacity of supercomputers, particularly at petascale, means that far larger data sets can be analysed and to greater resolution than previously possible. The new cloud computing paradigm, which allows distributed data, software and compute resources to be linked by seamless workflows, is creating new opportunities for an increasingly large number of researchers to process high volumes of data. However, to take full advantage of these compute resources, data sets for analysis have to be aggregated from multiple sources to create high performance data sets. These new technology developments require that scientists become more skilled in data management and/or have a higher degree of computer literacy. In almost every science discipline there is now an X-informatics branch and a computational X branch (e.g., Geoinformatics and Computational Geoscience); both require a new breed of researcher with skills in the science fundamentals and knowledge of some ICT aspects (computer programming, data base design and development, data curation, software engineering). People who can operate in both science and ICT are increasingly known as 'data scientists'. Data scientists are a critical element of many large scale earth and space science informatics projects, particularly those that are tackling current grand challenges at an international level on issues such as climate change, hazard prediction and sustainable development of our natural resources. These projects by their very nature require the integration of multiple digital data sets from multiple sources. Often the preparation of the data for computational analysis can take months and requires painstaking attention to detail to ensure that anomalies identified are real and are not just artefacts of the data preparation and/or the computational analysis. Although data scientists are increasingly vital to successful data intensive earth and space science projects, unless they are recognised for their capabilities in both the science and the computational domains they are likely to migrate to either a science role or an ICT role as their careers advance. Most reward and recognition systems do not recognise those with skills in both; hence, getting trained data scientists to persist beyond one or two projects can be a challenge. Those data scientists who persist in the profession are characteristically committed and enthusiastic people who have the support of their organisations to take on this role. They also tend to be people who share developments and are critical to the success of the open source software movement. However, the fact remains that survival of the data scientist as a species is threatened unless something is done to recognise their invaluable contributions to the new fourth paradigm of science.
NASA Astrophysics Data System (ADS)
Zhuge, Qunbi; Chen, Xi
2018-02-01
Global IP traffic is predicted to increase nearly threefold over the next 5 years, driven by emerging high-bandwidth-demanding applications, such as cloud computing, 5G wireless, high-definition video streaming, and virtual reality. This results in a continuously increasing demand on the capacity of backbone optical networks. During the past decade, advanced digital signal processing (DSP), modulation formats, and forward error correction (FEC) were commercially realized to exploit the capacity potential of long-haul fiber channels, and have increased per channel data rate from 10 Gb/s to 400 Gb/s. DSP has played a crucial role in coherent transceivers to accommodate channel impairments including chromatic dispersion (CD), polarization mode dispersion (PMD), laser phase noise, fiber nonlinearities, clock jitter, and so forth. The advance of DSP has also enabled innovations in modulation formats to increase spectral efficiency, improve linear/nonlinear noise tolerance, and realize flexible bandwidth. Moving forward to next generation 1 Tb/s systems on conventional single mode fiber (SMF) platform, more innovations in DSP techniques are needed to further reduce cost per bit, increase network efficiency, and close the gap to the Shannon limit. To further increase capacity per fiber, spatial-division multiplexing (SDM) systems can be used. DSP techniques such as advanced channel equalization methods and distortion compensation can help SDM systems to achieve higher system capacity. In the area of short-reach transmission, the rapid increase of data center network traffic has driven the development of optical technologies for both intra- and inter-data center interconnects (DCI). In particular, DSP has been exploited in intensity-modulation direct detection (IM/DD) systems to realize 400 Gb/s pluggable optical transceivers. In addition, multi-dimensional direct detection modulation schemes are being investigated to increase the data rate per wavelength targeting 1 Tb/s interface.
Design and implementation of a UNIX based distributed computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, J.S.; Michael, M.W.
1994-12-31
We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
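A minimal sketch of the matchmaking idea behind such a master scheduler: each host has a speed rating, allowed working hours, and a backlog, and a job goes to the eligible host with the earliest estimated completion. The host data and selection policy are assumptions, not the Atlas Wireline implementation.

# Heterogeneous batch scheduling sketch: pick the eligible host that finishes soonest.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    speed: float                 # relative CPU speed rating (1.0 = baseline)
    allowed: range               # hours of day when batch work is permitted
    queue_hours: float = field(default=0.0)   # work already assigned, in baseline CPU-hours

def pick_host(hosts, job_baseline_hours, hour_of_day):
    eligible = [h for h in hosts if hour_of_day in h.allowed]
    # estimated completion = backlog plus this job, scaled by the host's speed
    return min(eligible, key=lambda h: (h.queue_hours + job_baseline_hours) / h.speed)

hosts = [Host("rs6000-1", 1.0, range(0, 24)),
         Host("sparc-2",  0.7, range(19, 24)),     # desktop usable only after hours
         Host("alpha-1",  2.5, range(0, 24))]

for job in (4.0, 4.0, 1.0):                        # baseline CPU-hours per job
    h = pick_host(hosts, job, hour_of_day=21)
    h.queue_hours += job
    print(f"{job:>4} h job -> {h.name}")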
ERIC Educational Resources Information Center
Myszak, Jessica Peters
2010-01-01
The ability to understand theory of mind and understand the emotions of others has significant consequences for the social competency of individuals. As early as the preschool years, theory of mind ability has been associated with the capacity of children to engage in and sustain pretend play with peers. Individuals on the autism spectrum…
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Charge induced enhancement of adsorption for hydrogen storage materials
NASA Astrophysics Data System (ADS)
Sun, Xiang
2009-12-01
The rising concerns about environmental pollution and global warming have stimulated research interest in hydrogen energy as an alternative energy source. To apply hydrogen to transportation, several issues have to be solved, of which hydrogen storage is the most critical. Many materials and devices have been developed; however, none is yet able to meet the DOE storage target. The primary issue for hydrogen physisorption is the weak interaction between hydrogen and the surface of solid materials, resulting in negligible adsorption at room temperature. To solve this issue, there is a need to increase the interaction between the hydrogen molecules and the adsorbent surface. In this study, intrinsic electric dipoles are investigated as a means to enhance the adsorption energy. Computer simulations of single ionic compounds forming clusters with hydrogen molecules showed that the electrical charge of a substance plays an important role in generating an attractive interaction with hydrogen molecules. In order to further examine the effects of electrostatic interaction on hydrogen adsorption, activated carbon with a large surface area was impregnated with various ionic salts including LiCl, NaCl, KCl, KBr, and NiCl2, and their performance for hydrogen storage was evaluated using a volumetric method. Corresponding computer simulations were carried out using the DFT (density functional theory) method combined with point charge arrays. Both experimental and computational results show that the adsorption capacity of hydrogen and its interaction with the solid materials increase with the electrical dipole moment. Besides the intrinsic dipole, an externally applied electric field could be another means to enhance hydrogen adsorption. Hydrogen adsorption under an applied electric field was examined using porous nickel foil as electrodes. Electrical signals showed that adsorption capacity increased with increasing gas pressure and external electric voltage. Direct measurement of the amount of hydrogen adsorption was also carried out with porous nickel oxides and magnesium oxides, using the piezoelectric material PMN-PT, which generates charge under pressure, as the charge supplier. The adsorption enhancement from the PMN-PT-generated charges is evident at hydrogen pressures between 0 and 60 bar, where the hydrogen uptake is increased by about 35% for nickel oxide and 25% for magnesium oxide. Computer simulation reveals that under the external electric field, the electron cloud of hydrogen molecules is pulled over to the adsorbent site and can overlap with the adsorbent electrons, which in turn enhances the adsorption energy. Experiments were also carried out to examine the effects of hydrogen spillover with charge-induced enhancement. The results show that the overall storage capacity of nickel oxide increased remarkably, by a factor of 4.
Reconciliation of the cloud computing model with US federal electronic health record regulations
2011-01-01
Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204
Reconciliation of the cloud computing model with US federal electronic health record regulations.
Schweitzer, Eugene J
2012-01-01
Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.
Program For Joule-Thomson Analysis Of Mixed Cryogens
NASA Technical Reports Server (NTRS)
Jones, Jack A.; Lund, Alan
1994-01-01
JTMIX computer program predicts ideal and realistic properties of mixed gases at temperatures between 65 and 80 K. Performs Joule-Thomson analysis of any gaseous mixture of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with DDMIX computer program of National Institute of Standards and Technology (NIST), JTMIX accurately predicts order-of-magnitude increases in Joule-Thomson cooling capacities occurring when various hydrocarbons are added to nitrogen. Also predicts boiling temperature of nitrogen depressed from normal value to as low as 60 K upon addition of neon. Written in Turbo C.
Qin, Zhongyuan; Zhang, Xinshuai; Feng, Kerong; Zhang, Qunfang; Huang, Jie
2014-01-01
With the rapid development and widespread adoption of wireless sensor networks (WSNs), security has become an increasingly prominent problem. How to establish a session key in node communication is a challenging task for WSNs. Considering the limitations in WSNs, such as low computing capacity, small memory, power supply limitations and price, we propose an efficient identity-based key management (IBKM) scheme, which exploits the Bloom filter to authenticate the communication sensor node with storage efficiency. The security analysis shows that IBKM can prevent several attacks effectively with acceptable computation and communication overhead. PMID:25264955
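A small Bloom filter sketch of the kind of storage-efficient membership test that such a scheme can rely on for authenticating node identities; the filter size, hash construction, and node identifiers are illustrative, not the IBKM parameters.

# Bloom filter: compact set membership with false positives but no false negatives.
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.n_hashes):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
for node_id in (b"node-001", b"node-002", b"node-017"):
    bf.add(node_id)
print(b"node-002" in bf, b"node-999" in bf)   # True, (almost certainly) False

The storage cost is fixed by the bit-array size regardless of how many identities are inserted, which is what makes this attractive for memory-constrained sensor nodes.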
NASA Technical Reports Server (NTRS)
Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.
2016-01-01
Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors assess the potential impact of the emerging High Performance Computing (HPC) environments forecast for the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD work flow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD work flows.
Stochastic Feedforward Control Technique
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1990-01-01
Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in capacities of airports, safe and accurate flight in adverse weather conditions including wind shear, avoidance of wake vortices, and reduced consumption of fuel. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron
1992-01-01
Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.
Estimating aquifer transmissivity from specific capacity using MATLAB.
McLin, Stephen G
2005-01-01
Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
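A sketch of the fixed-point iteration that underlies estimates of this kind, using the Cooper-Jacob relation T = (Q / 4*pi*s) * ln(2.25*T*t / (r^2*S)); the partial-penetration and well-efficiency corrections of the actual program are omitted, and the input values are illustrative.

# Iteratively solve the Cooper-Jacob relation for transmissivity from specific capacity.
import math

def transmissivity_from_specific_capacity(q, s, t, r, storativity, tol=1e-8):
    t_est = q / (4 * math.pi * s)                 # starting guess (ignores the log term)
    for _ in range(200):
        t_new = (q / (4 * math.pi * s)) * math.log(2.25 * t_est * t / (r**2 * storativity))
        if abs(t_new - t_est) < tol:
            return t_new
        t_est = t_new
    return t_est

# Q = 500 m^3/d, drawdown 5 m, 1 day of pumping, 0.1 m well radius, S = 1e-4
print(f"T ~ {transmissivity_from_specific_capacity(500, 5.0, 1.0, 0.1, 1e-4):.1f} m^2/d")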
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurata, Masaki; Devanathan, Ramaswami
2015-10-13
Free energy and heat capacity of actinide elements and compounds are important properties for the evaluation of the safety and reliable performance of nuclear fuel. They are essential inputs for models that describe complex phenomena that govern the behaviour of actinide compounds during nuclear fuel fabrication and irradiation. This chapter introduces various experimental methods to measure free energy and heat capacity to serve as inputs for models and to validate computer simulations. This is followed by a discussion of computer simulation of these properties, and recent simulations of thermophysical properties of nuclear fuel are briefly reviewed.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
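A Monte Carlo sketch of the "gap to Gaussian capacity" idea for a one-dimensional constellation over AWGN: it estimates the constellation-constrained mutual information (equiprobable points, soft demapping) and compares it with 0.5*log2(1+SNR). Note this is the joint coded-modulation capacity rather than the parallel (bit-interleaved) decoding capacity discussed above, and the uniform 8-PAM points are illustrative, not the optimized constellations.

# Constellation-constrained mutual information vs. Gaussian capacity over real AWGN.
import math, random

def mutual_information(points, snr_db, n_samples=20000):
    p_avg = sum(x * x for x in points) / len(points)
    sigma2 = p_avg / 10 ** (snr_db / 10)          # noise variance for the given SNR
    total = 0.0
    for _ in range(n_samples):
        xi = random.choice(points)
        n = random.gauss(0.0, math.sqrt(sigma2))
        s = sum(math.exp(-((xi + n - xj) ** 2 - n * n) / (2 * sigma2)) for xj in points)
        total += math.log2(s)
    return math.log2(len(points)) - total / n_samples

pam8 = [-7, -5, -3, -1, 1, 3, 5, 7]
for snr_db in (5, 10, 15, 20):
    shannon = 0.5 * math.log2(1 + 10 ** (snr_db / 10))
    print(f"{snr_db:2d} dB: 8-PAM {mutual_information(pam8, snr_db):.2f} b/sym, "
          f"Gaussian {shannon:.2f} b/sym")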
A review of recent wake vortex research for increasing airport capacity
NASA Astrophysics Data System (ADS)
Hallock, James N.; Holzäpfel, Frank
2018-04-01
This paper is a brief review of recent wake vortex research as it affects the operational problem of spacing aircraft to increase airport capacity and throughput. The paper addresses the questions of what we know about wake vortices and what we do not yet know about them. The introduction of Heavy jets in the late 1960s stimulated the study of wake vortices for safety reasons, and the use of pulsed lidars and the maturity of computational fluid dynamics in the last three decades have led to extensive data collection and analyses which are now resulting in the development and implementation of systems to safely decrease separations in the terminal environment. Although much has been learned about wake vortices and their behavior, there is still more to be learned about the phenomena of aircraft wake vortices.
Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES
NASA Technical Reports Server (NTRS)
Hoerger, J.
1984-01-01
Users of ADABAS, a relational-like data base management system with its data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration and possible solutions are discussed.
Modeling Coevolution between Language and Memory Capacity during Language Origin
Gong, Tao; Shuai, Lan
2015-01-01
Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there has been little evolutionary exploration of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution, which stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally-constituted factors in natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language. PMID:26544876
Modeling Coevolution between Language and Memory Capacity during Language Origin.
Gong, Tao; Shuai, Lan
2015-01-01
Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there has been little evolutionary exploration of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution, which stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally-constituted factors in natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language.
Benjamin Wang; Robert E. Manning; Steven R. Lawson; William A. Valliere
2001-01-01
Recent research and management experience has led to several frameworks for defining and managing carrying capacity of national parks and related areas. These frameworks rely on monitoring indicator variables to ensure that standards of quality are maintained. The objective of this study was to develop a computer simulation model to estimate the relationships between...
Modelling Mass Casualty Decontamination Systems Informed by Field Exercise Data
Egan, Joseph R.; Amlôt, Richard
2012-01-01
In the event of a large-scale chemical release in the UK decontamination of ambulant casualties would be undertaken by the Fire and Rescue Service (FRS). The aim of this study was to track the movement of volunteer casualties at two mass decontamination field exercises using passive Radio Frequency Identification tags and detection mats that were placed at pre-defined locations. The exercise data were then used to inform a computer model of the FRS component of the mass decontamination process. Having removed all clothing and having showered, the re-dressing (termed re-robing) of casualties was found to be a bottleneck in the mass decontamination process during both exercises. Computer simulations showed that increasing the capacity of each lane of the re-robe section to accommodate 10 rather than five casualties would be optimal in general, but that a capacity of 15 might be required to accommodate vulnerable individuals. If the duration of the shower was decreased from three minutes to one minute then a per lane re-robe capacity of 20 might be necessary to maximise the throughput of casualties. In conclusion, one practical enhancement to the FRS response may be to provide at least one additional re-robe section per mass decontamination unit. PMID:23202768
King, Nina-Marie; Lovric, Vedran; Parr, William C H; Walsh, W R; Moradi, Pouria
2017-05-01
Breast augmentation surgery poses many challenges, and meeting the patient's expectations is one of the most important. Previous reports equate 100 cc to a one-cup-size increase; however, no studies have confirmed this between commercially available bras. The aim of this study was to identify the volume increase between cup sizes across different brands and the relationship with implant selection. Five bra cup sizes from three different companies were analyzed for their volume capacity. Three methods were used to calculate the volume of the bras: (1) linear measurements; (2) volume measurement by means of water displacement; and (3) volume calculation after three-dimensional reconstruction of serial radiographic data (computed tomography). The clinical arm consisted of 79 patients who underwent breast augmentation surgery from February 1, 2014, to June 30, 2016. Answers from a short questionnaire in combination with the implant volume were analyzed. Across all three brands, the interval volume increase varied between sizes, but not all were above 100 cc. There was some variation in the volume capacity of the same cup size among the different brands. The average incremental increase in bra cup size across all three brands in the laboratory arm was 135 cc. The mean volume increase per cup size was 138.23 cc in the clinical arm. This article confirms that there is no standardization within the bra manufacturing industry. On the basis of this study, patients should be advised that 130 to 150 cc equates to a one-cup-size increase. Bras with narrower band widths need 130 cc and wider band widths require 150 cc to increase one cup size.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu
2015-04-01
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
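To make the capacity-state computation concrete, here is a minimal sketch, assuming a toy multi-state network in which each branch independently takes one of a few capacity states with known probabilities; it enumerates branch-state combinations and sums the probability that the flow deliverable over k = 2 disjoint minimal paths meets a demand. The network, state distributions, path definitions, and demand value are all illustrative assumptions, and the paper's packet-unreliability and time thresholds are omitted.

    from itertools import product

    # Toy multi-state computer network (illustrative values): branches b1-b2 form
    # minimal path P1 and branches b3-b4 form the disjoint minimal path P2.
    branch_states = {
        "b1": {0: 0.05, 2: 0.20, 4: 0.75},
        "b2": {0: 0.05, 2: 0.15, 4: 0.80},
        "b3": {0: 0.10, 1: 0.30, 3: 0.60},
        "b4": {0: 0.10, 1: 0.20, 3: 0.70},
    }
    paths = [["b1", "b2"], ["b3", "b4"]]   # k = 2 disjoint minimal paths
    demand = 5                             # amount of data that must get through

    def system_reliability(branch_states, paths, demand):
        """Probability that the capacity over the disjoint minimal paths meets the demand."""
        names = list(branch_states)
        reliability = 0.0
        for combo in product(*(branch_states[n].items() for n in names)):
            state = {n: cap for n, (cap, _) in zip(names, combo)}
            prob = 1.0
            for _, p in combo:
                prob *= p
            # The capacity of a minimal path is limited by its weakest branch.
            flow = sum(min(state[b] for b in path) for path in paths)
            if flow >= demand:
                reliability += prob
        return reliability

    print(round(system_reliability(branch_states, paths, demand), 4))

Full enumeration is only feasible for small examples; the published algorithm is designed to avoid exactly this combinatorial blow-up.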
Use of Lean response to improve pandemic influenza surge in public health laboratories.
Isaac-Renton, Judith L; Chang, Yin; Prystajecky, Natalie; Petric, Martin; Mak, Annie; Abbott, Brendan; Paris, Benjamin; Decker, K C; Pittenger, Lauren; Guercio, Steven; Stott, Jeff; Miller, Joseph D
2012-01-01
A novel influenza A (H1N1) virus detected in April 2009 rapidly spread around the world. North American provincial and state laboratories have well-defined roles and responsibilities, including providing accurate, timely test results for patients and information for regional public health and other decision makers. We used the multidisciplinary response and rapid implementation of process changes based on Lean methods at the provincial public health laboratory in British Columbia, Canada, to improve laboratory surge capacity in the 2009 influenza pandemic. Observed and computer-simulated evaluation results from the rapid process changes showed that the use of Lean tools successfully expanded surge capacity, which enabled a response to the 10-fold increase in testing demands.
School Capacity. Educational Facility Series; A Guide to Planning.
ERIC Educational Resources Information Center
New Jersey State Dept. of Education, Trenton. Bureau of School Planning Services.
Information, instructions and worksheets are provided for use in computing the functional capacity of an elementary, middle or secondary school building. The functional capacity is the number of pupils that can adequately be housed in a school building without overcrowding. (FS)
UNDERSTANDING, DERIVING, AND COMPUTING BUFFER CAPACITY
Derivation and systematic calculation of buffer capacity is a topic that seems often to be neglected in chemistry courses and given minimal treatment in most texts. However, buffer capacity is very important in the chemistry of natural waters and potable water. It affects corro...
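For reference, the standard closed-form expression for the buffer capacity of a monoprotic conjugate acid-base pair in water is sketched below in LaTeX notation, where C_T is the total analytical concentration of the pair and K_a its acid dissociation constant. This is textbook aquatic-chemistry material, not content quoted from the record above.

    \beta = \frac{dC_b}{d\,\mathrm{pH}}
          = 2.303\left([\mathrm{H^+}] + [\mathrm{OH^-}]
            + \frac{C_T K_a [\mathrm{H^+}]}{\left(K_a + [\mathrm{H^+}]\right)^{2}}\right)

The first two terms dominate at very low and very high pH, while the third term peaks at pH = pK_a, which is why buffers are most effective near the pK_a of the conjugate pair.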
MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A
Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, and throughput. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.
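As a hedged illustration of the kind of metrics named above (system capacity, residual capacity, throughput), the sketch below evaluates a simple M/M/c inspection queue. The arrival rate, per-lane service rate, and number of lanes are invented values, and the published study's actual queuing model may well differ from this one.

    from math import factorial

    def erlang_c(c, a):
        """Probability that an arriving item waits in an M/M/c queue with offered load a = lam/mu."""
        num = (a ** c / factorial(c)) * (c / (c - a))
        den = sum(a ** k / factorial(k) for k in range(c)) + num
        return num / den

    lam, mu, c = 40.0, 6.0, 8           # cargo units/hour, per-lane service rate, lanes (assumed)
    capacity = c * mu                   # maximum sustainable inspection rate
    residual = capacity - lam           # spare capacity available for surge requests
    throughput = min(lam, capacity)     # rate actually served in steady state
    a = lam / mu
    wait_prob = erlang_c(c, a)
    mean_wait = wait_prob / (c * mu - lam)   # mean time in queue (hours)

    print(f"capacity={capacity}, residual={residual}, throughput={throughput}, "
          f"mean wait={mean_wait:.3f} h")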
Impaired associative learning in schizophrenia: behavioral and computational studies
Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.
2008-01-01
Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
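A minimal sketch of the negatively accelerated learning dynamics described above, with explicit learning-rate and capacity parameters. The exponential-approach form and the parameter values are generic illustrations, not the authors' actual model or fitted estimates.

    def learning_curve(trials, rate, capacity):
        """Proportion of paired associates retrieved per trial: negatively accelerated
        growth toward an asymptote set by capacity, with slope set by the learning rate."""
        performance, p = [], 0.0
        for _ in range(trials):
            p += rate * (capacity - p)   # incremental gain shrinks as capacity is approached
            performance.append(p)
        return performance

    control = learning_curve(trials=8, rate=0.45, capacity=1.0)   # healthy-control-like parameters (assumed)
    patient = learning_curve(trials=8, rate=0.25, capacity=0.8)   # reduced rate and capacity (assumed)
    print([round(x, 2) for x in control])
    print([round(x, 2) for x in patient])

Lowering either parameter flattens and caps the curve, which is the qualitative pattern the abstract attributes to the patient group.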
Discovering and understanding oncogenic gene fusions through data intensive computational approaches
Latysheva, Natasha S.; Babu, M. Madan
2016-01-01
Abstract Although gene fusions have been recognized as important drivers of cancer for decades, our understanding of the prevalence and function of gene fusions has been revolutionized by the rise of next-generation sequencing, advances in bioinformatics theory and an increasing capacity for large-scale computational biology. The computational work on gene fusions has been vastly diverse, and the present state of the literature is fragmented. It will be fruitful to merge three camps of gene fusion bioinformatics that appear to rarely cross over: (i) data-intensive computational work characterizing the molecular biology of gene fusions; (ii) development research on fusion detection tools, candidate fusion prioritization algorithms and dedicated fusion databases and (iii) clinical research that seeks to either therapeutically target fusion transcripts and proteins or leverages advances in detection tools to perform large-scale surveys of gene fusion landscapes in specific cancer types. In this review, we unify these different—yet highly complementary and symbiotic—approaches with the view that increased synergy will catalyze advancements in gene fusion identification, characterization and significance evaluation. PMID:27105842
Siekmeier, Peter J
2015-10-01
A good deal of recent research has centered on the identification of biomarkers and endophenotypic measures of psychiatric illnesses using in vivo and in vitro studies. This is understandable, as these measures (as opposed to complex clinical phenotypes) may be more closely related to neurobiological and genetic vulnerabilities. However, the instantiation of such biomarkers in computational models (in silico studies) has received less attention. This approach could become increasingly important, given the wealth of detailed information produced by recent basic neuroscience research and the increasing availability of high-capacity computing platforms. The purpose of this review is to survey the current state of the art of research in this area. We discuss computational approaches to schizophrenia, bipolar disorder, Alzheimer's disease, fragile X syndrome and autism, and argue that this approach represents a promising and underappreciated research modality. In conclusion, we outline specific avenues for future research; potential uses of in silico models to conduct "virtual experiments", to generate novel hypotheses, and to aid in neuropsychiatric drug development are also discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bernard, F E; Thom, D J
1981-04-01
The nature and magnitude of population pressure in Machakos and Kitui Districts of Kenya were investigated. Specific study objectives were: 1) to examine the roots and evolution of the problem; 2) to compute carrying capacity for a sample of locations and ecozones in the 2 districts; and 3) to consider agricultural and demographic implications of the findings. Carrying capacity is defined as the number of people and the level of their activities which a region is able to sustain in perpetuity at an acceptable quality of life and without land deterioration. A methodology for calculating human carrying capacity utilizing crude soil, ecological, crop yield, and land use data for 41 locations in Machakos and Kitui Districts is demonstrated. Analysis and a comparison of population carrying capacities within the study area reveals that Machakos has reached a critical level of population pressure. To the north in Mbere and eastward in Kitui there are areas that are not currently experiencing population pressure, but it is likely that as the pressure in western Machakos becomes more acute, movement into these adjacent lands of relatively sparse settlement will increase. Signs of environmental stress resulting from overpopulation are evident throughout Machakos. The methodology used for estimating population pressure provided reasonably accurate carrying capacity estimates, but the methodology could be refined. Vigorous efforts to rejuvenate the land through soil and water conservation have been undertaken, but these have been insufficient and must be increased. Such efforts will fail unless the basic problem of population pressure in the marginal lands is resolved.
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean size become less accurate when set size increases and the item sizes become more heterogeneous. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
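A small simulation sketch of the limited-capacity sampling account invoked above: the mean size of a display is estimated from a fixed-size subsample, so estimation error grows with item heterogeneity and, once the display exceeds the sample size, with set size. The display parameters and the four-item sample capacity are arbitrary choices for illustration, not values from the study.

    import random
    import statistics

    def mean_size_error(set_size, heterogeneity, sample_size=4, trials=5000):
        """Average absolute error of a mean-size judgment based on a fixed-capacity subsample."""
        err = 0.0
        for _ in range(trials):
            sizes = [random.gauss(50.0, heterogeneity) for _ in range(set_size)]
            sample = random.sample(sizes, min(sample_size, set_size))
            err += abs(statistics.mean(sample) - statistics.mean(sizes))
        return err / trials

    for n in (4, 8, 16):
        for sd in (2.0, 10.0):
            print(f"set size {n:2d}, item SD {sd:4.1f}: mean error {mean_size_error(n, sd):.2f}")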
A thermodynamic study of Abeta(16-21) dissociation from a fibril using computer simulations
NASA Astrophysics Data System (ADS)
Dias, Cristiano; Mahmoudinobar, Farbod; Su, Zhaoqian
Here, I will discuss recent all-atom molecular dynamics simulations with explicit water in which we studied the thermodynamic properties of Abeta(16-21) dissociation from an amyloid fibril. Changes in thermodynamic quantities, e.g., entropy, enthalpy, and volume, are computed from the temperature dependence of the free energy obtained with the umbrella sampling method. We find similarities and differences between the thermodynamics of peptide dissociation and protein unfolding. As in protein unfolding, Abeta(16-21) dissociation is characterized by an unfavorable change in enthalpy, a favorable change in the entropic energy, and an increase in the heat capacity. A main difference is that peptide dissociation is characterized by a weak enthalpy-entropy compensation. We characterize dock and lock states of the peptide based on the solvent-accessible surface area. The Lennard-Jones energy of the system is observed to increase continuously in the lock and dock states as the peptide dissociates. The electrostatic energy increases in the lock state and decreases in the dock state as the peptide dissociates. These results will be discussed, as well as their implications for fibril growth.
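The decomposition referred to above follows from standard relations between a temperature-dependent dissociation free energy and the derived quantities; written generically in LaTeX notation (this states the textbook relations, not the specific fitting procedure used in the simulations):

    \Delta S(T) = -\frac{\partial \Delta G(T)}{\partial T}, \qquad
    \Delta H(T) = \Delta G(T) + T\,\Delta S(T), \qquad
    \Delta C_p = \frac{\partial \Delta H(T)}{\partial T} = T\,\frac{\partial \Delta S(T)}{\partial T}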
Meinhardt, Udo; Nelson, Anne E; Hansen, Jennifer L; Birzniece, Vita; Clifford, David; Leung, Kin-Chuen; Graham, Kenneth; Ho, Ken K Y
2010-05-04
Growth hormone is widely abused by athletes, frequently with androgenic steroids. Its effects on performance are unclear. To determine the effect of growth hormone alone or with testosterone on body composition and measures of performance. Randomized, placebo-controlled, blinded study of 8 weeks of treatment followed by a 6-week washout period. Randomization was computer-generated with concealed allocation. (Australian-New Zealand Clinical Trials Registry registration number: ACTRN012605000508673) Clinical research facility in Sydney, Australia. 96 recreationally trained athletes (63 men and 33 women) with a mean age of 27.9 years (SD, 5.7). Men were randomly assigned to receive placebo, growth hormone (2 mg/d subcutaneously), testosterone (250 mg/wk intramuscularly), or combined treatments. Women were randomly assigned to receive either placebo or growth hormone (2 mg/d). Body composition variables (fat mass, lean body mass, extracellular water mass, and body cell mass) and physical performance variables (endurance [maximum oxygen consumption], strength [dead lift], power [jump height], and sprint capacity [Wingate value]). Body cell mass was correlated with all measures of performance at baseline. Growth hormone significantly reduced fat mass, increased lean body mass through an increase in extracellular water, and increased body cell mass in men when coadministered with testosterone. Growth hormone significantly increased sprint capacity, by 0.71 kJ (95% CI, 0.1 to 1.3 kJ; relative increase, 3.9% [CI, 0.0% to 7.7%]) in men and women combined and by 1.7 kJ (CI, 0.5 to 3.0 kJ; relative increase, 8.3% [CI, 3.0% to 13.6%]) when coadministered with testosterone to men; other performance measures did not significantly change. The increase in sprint capacity was not maintained 6 weeks after discontinuation of the drug. Growth hormone dosage may have been lower than that used covertly by competitive athletes. The athletic significance of the observed improvements in sprint capacity is unclear, and the study was too small to draw conclusions about safety. Growth hormone supplementation influenced body composition and increased sprint capacity when administered alone and in combination with testosterone. The World Anti-Doping Agency.
NASA Astrophysics Data System (ADS)
Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.
2009-07-01
The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary for how to manage today's data complexity and size as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing thereby providing users access to resources they need [2]. Portal based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next tier cross-instrument-cross facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.
Transferring data from a digital oscilloscope to an IBM mainframe using an Apple II+
NASA Technical Reports Server (NTRS)
Miller, D. L.; Frenklach, M. Y.; Laughlin, P. J.; Clary, D. W.
1984-01-01
A set of PASCAL programs permitting the use of a laboratory microcomputer to facilitate and control the transfer of data from a digital oscilloscope (used with photomultipliers in experiments on soot formation in hydrocarbon combustion) to a mainframe computer and the subsequent mainframe processing of these data is presented. Advantages of this approach include the possibility of on-line computations, transmission flexibility, automatic transfer and selection, increased capacity and analysis options (such as smoothing, averaging, Fourier transformation, and high-quality plotting), and more rapid availability of results. The hardware and software are briefly characterized, the programs are discussed, and printouts of the listings are provided.
McCormick, Cornelia; Protzner, Andrea B.; Barnett, Alexander J.; Cohn, Melanie; Valiante, Taufik A.; McAndrews, Mary Pat
2014-01-01
Computational models predict that focal damage to the Default Mode Network (DMN) causes widespread decreases and increases of functional DMN connectivity. How such alterations impact functioning in a specific cognitive domain such as episodic memory remains relatively unexplored. Here, we show in patients with unilateral medial temporal lobe epilepsy (mTLE) that focal structural damage leads indeed to specific patterns of DMN functional connectivity alterations, specifically decreased connectivity between both medial temporal lobes (MTLs) and the posterior part of the DMN and increased intrahemispheric anterior–posterior connectivity. Importantly, these patterns were associated with better and worse episodic memory capacity, respectively. These distinct patterns, shown here for the first time, suggest that a close dialogue between both MTLs and the posterior components of the DMN is required to fully express the extensive repertoire of episodic memory abilities. PMID:25068108
Garai, Sisir Kumar
2012-04-10
To meet the demand of very fast and agile optical networks, the optical processors in a network system should have a very fast execution rate, large information handling, and large information storage capacities. Multivalued logic operations and multistate optical flip-flops are the basic building blocks for such fast running optical computing and data processing systems. In the past two decades, many methods of implementing all-optical flip-flops have been proposed. Most of these suffer from speed limitations because of the low switching response of active devices. The frequency encoding technique has been used because of its many advantages. It can preserve its identity throughout data communication irrespective of loss of light energy due to reflection, refraction, attenuation, etc. The action of polarization-rotation-based very fast switching of semiconductor optical amplifiers increases processing speed. At the same time, tristate optical flip-flops increase information handling capacity.
Bicen, A Ozan; Lehtomaki, Janne J; Akyildiz, Ian F
2018-03-01
Molecular communication (MC) over a microfluidic channel with flow is investigated based on Shannon's channel capacity theorem and Fick's laws of diffusion. Specifically, the sum capacity for MC between a single transmitter and multiple receivers (broadcast MC) is studied. The transmitter communicates with each receiver by using different types of signaling molecules over the microfluidic channel. The transmitted molecules propagate through the microfluidic channel until reaching the corresponding receiver. Although the use of different types of molecules provides orthogonal signaling, the sum broadcast capacity may not scale with the number of receivers due to the physics of the propagation (the interplay between convection and diffusion, which depends on distance). In this paper, the performance of broadcast MC on a microfluidic chip is characterized by studying the physical geometry of the microfluidic channel and leveraging information theory. The convergence of the sum capacity for the microfluidic broadcast channel is analytically investigated based on the physical system parameters as the number of molecular receivers increases. The analysis presented here can be useful for predicting the achievable information rate in microfluidic interconnects for biochemical computation and microfluidic multi-sample assays.
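The interplay between convection and diffusion mentioned above is commonly summarized by the one-dimensional convection-diffusion equation and the Peclet number. The expressions below, in LaTeX notation, are generic textbook forms rather than the study's specific channel model; here c is the molecular concentration, u the flow velocity, D the diffusion coefficient, and L the transmitter-receiver distance.

    \frac{\partial c}{\partial t} + u\,\frac{\partial c}{\partial x} = D\,\frac{\partial^{2} c}{\partial x^{2}},
    \qquad \mathrm{Pe} = \frac{uL}{D}

Convection dominates transport for Pe >> 1 and diffusion for Pe << 1, which is why the achievable rate at each receiver depends on its distance from the transmitter.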
Marta, Carlos C; Marinho, Daniel A; Barbosa, Tiago M; Carneiro, André L; Izquierdo, Mikel; Marques, Mário C
2013-12-01
The purpose of this study was to analyze the influence of body fat and somatotype on explosive strength and aerobic capacity trainability in the prepubertal growth spurt, marked by rapid changes in body size, shape, and composition, all of which are sexually dimorphic. One hundred twenty-five healthy children (58 boys, 67 girls), aged 10-11 years (10.8 ± 0.4 years), who were self-assessed in Tanner stages 1-2, were randomly assigned into 2 experimental groups to train twice a week for 8 weeks: strength training group (19 boys, 22 girls), endurance training group (21 boys, 24 girls), and a control group (18 boys, 21 girls). Evaluation of body fat was carried out using the method described by Slaughter. Somatotype was computed according to the Heath-Carter method. Increased endomorphy reduced the likelihood of vertical jump height improvement (odds ratio [OR], 0.10; 95% confidence interval [CI], 0.01-0.85), increased mesomorphy (OR, 6.15; 95% CI, 1.52-24.88) and ectomorphy (OR, 6.52; 95% CI, 1.71-24.91) increased the likelihood of sprint performance, and increased ectomorphy (OR, 3.84; 95% CI, 1.20-12.27) increased the likelihood of aerobic fitness gains. Sex did not affect the training-induced changes in strength or aerobic fitness. These data suggest that somatotype has an effect on explosive strength and aerobic capacity trainability, which should not be disregarded. The effect of adiposity on explosive strength, musculoskeletal magnitude on running speed, and relative linearity on running speed and aerobic capacity seem to be crucial factors related to training-induced gains in prepubescent boys and girls.
Motoyoshi, Mitsuru; Uchida, Yasuki; Inaba, Mizuki; Ejima, Ken-Ichiro; Honda, Kazuya; Shimizu, Noriyoshi
2016-07-01
Placement torque and damping capacity may increase when the orthodontic anchor screws make contact with an adjacent root. If this is the case, root contact can be inferred from the placement torque and damping capacity. The purpose of this study was to verify the detectability of root proximity of the screws by placement torque and damping capacity. For this purpose, we investigated the relationship among placement torque, damping capacity, and screw-root proximity. The placement torque, damping capacity, and root proximity of 202 screws (diameter, 1.6 mm; length, 8.0 mm) were evaluated in 110 patients (31 male, 79 female; mean age, 21.3 ± 6.9 years). Placement torque was measured using a digital torque tester, damping capacity was measured with a Periotest device (Medizintechnik Gulden, Modautal, Germany), and root contact was judged using cone-beam computed tomography images. The rate of root contact was 18.3%. Placement torque and damping capacity were 7.8 N·cm and 3.8, respectively. The placement torque of screws with root contact was greater than that of screws with no root contact (P <0.05; effect size, 0.44; power, <0.8). Damping capacity of screws with root contact was significantly greater than that of screws with no root contact (P <0.01; effect size, >0.5; power, >0.95). It was suggested that the damping capacity is related to root contact. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Bathymetry and capacity of Chambers Lake, Chester County, Pennsylvania
Gyves, Matthew C.
2015-10-26
This report describes the methods used to create a bathymetric map of Chambers Lake for the computation of reservoir storage capacity as of September 2014. The product is a bathymetric map and a table showing the storage capacity of the reservoir at 2-foot increments from minimum usable elevation up to full capacity at the crest of the auxiliary spillway.
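A minimal sketch of how a stage-storage (elevation-capacity) table can be derived from gridded bathymetry, assuming a uniform cell area and bed elevations in feet. The grid, cell size, and spillway elevation are invented placeholders; only the 2-foot tabulation increment mirrors the report.

    # Bed elevations (ft) on a uniform grid; cell area in square feet (both assumed).
    bed = [
        [612.0, 610.5, 611.2],
        [609.8, 608.4, 610.1],
        [611.5, 609.0, 612.3],
    ]
    cell_area_ft2 = 100.0 * 100.0
    ACRE_FT = 43560.0

    def storage(elev_ft):
        """Reservoir volume (acre-feet) impounded below a given water-surface elevation."""
        vol_ft3 = sum(max(0.0, elev_ft - z) * cell_area_ft2 for row in bed for z in row)
        return vol_ft3 / ACRE_FT

    full_pool = 622.0   # assumed auxiliary-spillway crest elevation
    for e in range(610, int(full_pool) + 1, 2):   # 2-foot increments
        print(f"elevation {e} ft: {storage(e):8.2f} acre-ft")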
NASA Center for Computational Sciences: History and Resources
NASA Technical Reports Server (NTRS)
2000-01-01
The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinct merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared with the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method also has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
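A generic sketch of the automatic step-adjustment idea applied to a simple propagation equation: each step is accepted or halved based on the difference between one full step and two half steps (step doubling), and grown when the local error is comfortably small. This is a standard adaptive-stepping pattern, not the paper's specific higher-order APA scheme; the toy gain function and tolerances are assumptions.

    def step(P, z, h, dPdz):
        """One explicit midpoint (2nd-order) step of dP/dz = f(z, P)."""
        k1 = dPdz(z, P)
        k2 = dPdz(z + 0.5 * h, P + 0.5 * h * k1)
        return P + h * k2

    def propagate(P0, z_end, dPdz, h=0.1, tol=1e-6):
        """Integrate from z = 0 to z_end with automatic step adjustment by step doubling."""
        z, P = 0.0, P0
        while z < z_end:
            h = min(h, z_end - z)
            full = step(P, z, h, dPdz)
            half = step(step(P, z, 0.5 * h, dPdz), z + 0.5 * h, 0.5 * h, dPdz)
            err = abs(half - full)
            if err > tol:
                h *= 0.5                 # reject: local error too large, halve the step
                continue
            z, P = z + h, half           # accept the more accurate two-half-step result
            if err < 0.1 * tol:
                h *= 2.0                 # grow the step when the error is very small
        return P

    # Toy gain equation dP/dz = g(P) * P with a saturating gain profile (assumed, for illustration).
    gain = lambda z, P: (0.5 / (1.0 + 0.2 * P)) * P
    print(propagate(P0=1.0e-3, z_end=10.0, dPdz=gain, h=0.5))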
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.
1991-01-01
Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
Mobile computing in critical care.
Lapinsky, Stephen E
2007-03-01
Handheld computing devices are increasingly used by health care workers, and offer a mobile platform for point-of-care information access. Improved technology, with larger memory capacity, higher screen resolution, faster processors, and wireless connectivity has broadened the potential roles for these devices in critical care. In addition to the personal information management functions, handheld computers have been used to access reference information, management guidelines and pharmacopoeias as well as to track the educational experience of trainees. They can act as an interface with a clinical information system, providing rapid access to patient information. Despite their popularity, these devices have limitations related to their small size, and acceptance by physicians has not been uniform. In the critical care environment, the risk of transmitting microorganisms by such a portable device should always be considered.
Linear optical quantum computing in a single spatial mode.
Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A
2013-10-11
We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.
Computers and Data Processing. Subject Bibliography.
ERIC Educational Resources Information Center
United States Government Printing Office, Washington, DC.
This annotated bibliography of U.S. Government publications contains over 90 entries on topics including telecommunications standards, U.S. competitiveness in high technology industries, computer-related crimes, capacity management of information technology systems, the application of computer technology in the Soviet Union, computers and…
Secure data sharing in public cloud
NASA Astrophysics Data System (ADS)
Venkataramana, Kanaparti; Naveen Kumar, R.; Tatekalva, Sandhya; Padmavathamma, M.
2012-04-01
Secure multi-party protocols have been proposed for entities (organizations or individuals) that do not fully trust each other but need to share sensitive information. Many types of entities need to collect, analyze, and disseminate data rapidly and accurately without exposing sensitive information to unauthorized or untrusted parties. Solutions based on secure multi-party computation (SMC) guarantee privacy and correctness, but at an extra communication cost (often too high to be practical) and computation cost. This high overhead motivates us to extend SMC to the cloud environment, which provides large computation and communication capacity and allows SMC to be used between multiple clouds (private, public, or hybrid). A cloud may encompass many high-capacity servers that act as hosts participating in the computation (IaaS and PaaS) of the final result, controlled by a Cloud Trusted Authority (CTA) for secret sharing within the cloud. The communication between two clouds is controlled by a High Level Trusted Authority (HLTA), one of the hosts in a cloud that provides MgaaS (Management as a Service). Because of the high security risk in clouds, the HLTA generates and distributes public and private keys using the Carmichael-R-Prime-RSA algorithm for the exchange of private data in SMC between itself and the clouds. Within a cloud, the CTA creates a group key for secure communication between the hosts, based on keys sent by the HLTA, for the exchange of intermediate values and shares used to compute the final result. Since this scheme is extended to clouds (due to their high availability and scalability to increase computation power), it is possible to implement SMC practically for privacy-preserving data mining at low cost for the clients.
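To make the SMC idea concrete, here is a minimal additive secret-sharing sketch for a privacy-preserving sum, a standard SMC building block. It is an illustrative assumption, not an implementation of the Carmichael-R-Prime-RSA key exchange or the CTA/HLTA trust hierarchy described above; the modulus and party count are arbitrary.

    import random

    MOD = 2 ** 61 - 1   # shares are taken modulo a large prime (assumed parameter)

    def share(secret, n_parties):
        """Split a secret into n additive shares that sum to the secret modulo MOD."""
        shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % MOD)
        return shares

    def secure_sum(secrets, n_parties=3):
        """Each input owner shares its value; parties add their shares locally, and only
        the per-party subtotals are revealed, from which the total is reconstructed."""
        per_party = [0] * n_parties
        for s in secrets:
            for i, sh in enumerate(share(s, n_parties)):
                per_party[i] = (per_party[i] + sh) % MOD
        return sum(per_party) % MOD

    private_inputs = [120, 340, 75]        # values the owners do not want to expose
    print(secure_sum(private_inputs))      # 535, computed without revealing any single input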
Enhancing Care of Aged and Dying Prisoners: Is e-Learning a Feasible Approach?
Loeb, Susan J; Penrod, Janice; Myers, Valerie H; Baney, Brenda L; Strickfaden, Sophia M; Kitt-Lewis, Erin; Wion, Rachel K
Prisons and jails are facing sharply increased demands in caring for aged and dying inmates. Our Toolkit for Enhancing End-of-life Care in Prisons effectively addressed end-of-life (EOL) care; however, geriatric content was limited, and the product was not formatted for broad dissemination. Prior research adapted best practices in EOL care and aging, but delivery methods lacked emerging technology-focused learning and interactivity. Our purposes were to uncover current training approaches and preferences and to ascertain the technological capacity of correctional settings to deliver computer-based and other e-learning training. An environmental scan was conducted with 11 participants from U.S. prisons and jails to ensure proper fit, in terms of content and technology capacity, between an envisioned computer-based training product and correctional settings. Environmental scan findings focused on content of training, desirable qualities of training, prominence of "homegrown" products, and feasibility of commercial e-learning. This study identified qualities of training programs to adopt and pitfalls to avoid, and it revealed technology-related issues to be mindful of when designing computer-based training for correctional settings. Participants spontaneously expressed an interest in geriatrics and EOL training delivered through this modality, as long as the training allowed for tailoring of materials.
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses are all increasing the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) reduces overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
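A small sketch of the runtime-estimation idea described above: each unit of work gets an estimated cost (data transfer plus processing), and a greedy simulation of the execution environment assigns tasks to the least-loaded worker to predict the overall makespan. The cost model and numbers are invented for illustration; the published Tricky system is certainly more elaborate than this.

    import heapq

    def estimate_runtime(task_mb, mb_per_sec=50.0, sec_per_mb_compute=0.8):
        """Estimated seconds for one unit of work: data transfer plus per-megabyte processing."""
        return task_mb / mb_per_sec + task_mb * sec_per_mb_compute

    def simulate(tasks_mb, n_workers):
        """Greedy list scheduling: always hand the next (largest) task to the least-loaded worker."""
        workers = [0.0] * n_workers
        heapq.heapify(workers)
        for t in sorted(tasks_mb, reverse=True):
            load = heapq.heappop(workers)
            heapq.heappush(workers, load + estimate_runtime(t))
        return max(workers)   # predicted makespan in seconds

    tiles = [256, 512, 128, 1024, 640, 384, 896, 768]   # dataset tiles in MB (assumed)
    for n in (1, 4, 8):
        print(f"{n} workers: predicted runtime {simulate(tiles, n):.1f} s")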
NASA Astrophysics Data System (ADS)
Wyborn, Lesley; Evans, Ben; Foster, Clinton; Pugh, Timothy; Uhlherr, Alfred
2015-04-01
Digital geoscience data and information are integral to informing decisions on the social, economic and environmental management of natural resources. Traditionally, such decisions were focused on regional or national viewpoints only, but it is increasingly being recognised that global perspectives are required to meet new challenges such as predicting impacts of climate change; sustainably exploiting scarce water, mineral and energy resources; and protecting our communities through better prediction of the behaviour of natural hazards. In recent years, technical advances in scientific instruments have resulted in a surge in data volumes, with data now being collected at unprecedented rates and at ever increasing resolutions. The size of many earth science data sets now exceed the computational capacity of many government and academic organisations to locally store and dynamically access the data sets; to internally process and analyse them to high resolutions; and then to deliver them online to clients, partners and stakeholders. Fortunately, at the same time, computational capacities have commensurately increased (both cloud and HPC): these can now provide the capability to effectively access the ever-growing data assets within realistic time frames. However, to achieve this, data and computing need to be co-located: bandwidth limits the capacity to move the large data sets; the data transfers are too slow; and latencies to access them are too high. These scenarios are driving the move towards more centralised High Performance (HP) Infrastructures. The rapidly increasing scale of data, the growing complexity of software and hardware environments, combined with the energy costs of running such infrastructures is creating a compelling economic argument for just having one or two major national (or continental) HP facilities that can be federated internationally to enable earth and environmental issues to be tackled at global scales. But at the same time, if properly constructed, these infrastructures can also service very small-scale research projects. The National Computational Infrastructure (NCI) at the Australian National University (ANU) has built such an HP infrastructure as part of the Australian Government's National Collaborative Research Infrastructure Strategy. NCI operates as a formal partnership between the ANU and the three major Australian National Government Scientific Agencies: the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Bureau of Meteorology and Geoscience Australia. The government partners agreed to explore the new opportunities offered within the partnership with NCI, rather than each running their own separate agenda independently. The data from these national agencies, as well as from collaborating overseas organisations (e.g., NASA, NOAA, USGS, CMIP, etc.) are either replicated to, or produced at, NCI. By co-locating and harmonising these vast data collections within the integrated HP computing environments at NCI, new opportunities have arisen for Data-intensive Interdisciplinary Science at scales and resolutions not hitherto possible. The new NCI infrastructure has also enabled the blending of research by the university sector with the more operational business of government science agencies, with the fundamental shift being that researchers from both sectors work and collaborate within a federated data and computational environment that contains both national and international data collections.
Code of Federal Regulations, 2010 CFR
2010-07-01
... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES... executive, administrative, professional, outside sales or computer employee capacity who are not actually performing the duties of an executive, administrative, professional, outside sales or computer employee. ...
Study on the flow nonuniformity in a high capacity Stirling pulse tube cryocooler
NASA Astrophysics Data System (ADS)
You, X.; Zhi, X.; Duan, C.; Jiang, X.; Qiu, L.; Li, J.
2017-12-01
High-capacity Stirling-type pulse tube cryocoolers (SPTCs) have promising applications in high-temperature superconductive motors and gas liquefaction. However, as cooling capacity increases, their performance deviates from well-accepted one-dimensional model simulations, such as Sage and Regen, mainly due to strong field nonuniformity. In this study, several flow straighteners placed at both ends of the pulse tube are investigated to improve the flow distribution. A two-dimensional model of the pulse tube based on the computational fluid dynamics (CFD) method has been built to study the flow distribution in the pulse tube with different flow straighteners, including copper screens, copper slots, a taper transition and a taper stainless slot. An SPTC set-up that delivers more than one hundred watts of cooling power at 80 K has been built and tested, and the flow straighteners mentioned above have been applied and evaluated. The results show that with the best flow straightener the cooling performance of the SPTC can be significantly improved. Both CFD simulation and experiment show that the straighteners affect the flow distribution and the performance of the high-capacity SPTC.
Vera, Javier
2018-01-01
What is the influence of short-term memory enhancement on the emergence of grammatical agreement systems in multi-agent language games? Agreement systems presuppose that at least two words share some features with each other, such as gender, number, or case. Previous work within the multi-agent language-game framework has recently proposed models stressing the hypothesis that the emergence of a grammatical agreement system arises from the minimization of semantic ambiguity. On the other hand, neurobiological evidence argues for the hypothesis that language evolution has been mainly related to an increase in short-term memory capacity, which has allowed the online manipulation of the words and meanings that participate in grammatical agreement systems. Here, the main aim is to propose a multi-agent language game for the emergence of a grammatical agreement system, under measurable long-range relations depending on the short-term memory capacity. Computer simulations, based on a parameter that measures the amount of short-term memory capacity, suggest that agreement marker systems arise in a population of agents equipped with at least a critical short-term memory capacity.
Menegaux, Aurore; Meng, Chun; Neitzel, Julia; Bäuml, Josef G; Müller, Hermann J; Bartmann, Peter; Wolke, Dieter; Wohlschläger, Afra M; Finke, Kathrin; Sorg, Christian
2017-04-15
Preterm birth is associated with an increased risk for lasting changes in both the cortico-thalamic system and attention; however, the link between cortico-thalamic and attention changes is as yet little understood. In preterm newborns, cortico-cortical and cortico-thalamic structural connectivity are distinctively altered, with increased local clustering for cortico-cortical and decreased integrity for cortico-thalamic connectivity. In preterm-born adults, among the various attention functions, visual short-term memory (vSTM) capacity is selectively impaired. We hypothesized distinct associations between vSTM capacity and the structural integrity of cortico-thalamic and cortico-cortical connections, respectively, in preterm-born adults. A whole-report paradigm of briefly presented letter arrays based on the computationally formalized Theory of Visual Attention (TVA) was used to quantify the vSTM capacity parameter in 26 preterm- and 21 full-term-born adults. Fractional anisotropy (FA) of posterior thalamic radiations and the splenium of the corpus callosum obtained by diffusion tensor imaging were analyzed by tract-based spatial statistics and used as proxies for cortico-thalamic and cortico-cortical structural connectivity. The relationship between vSTM capacity and cortico-thalamic and cortico-cortical connectivity, respectively, was significantly modified by prematurity. In full-term-born adults, the higher the FA in the right posterior thalamic radiation, the higher the vSTM capacity; in preterm-born adults this FA-vSTM relationship was inverted. In the splenium, higher FA was correlated with higher vSTM capacity in preterm-born adults, whereas no significant relationship was evident in full-term-born adults. These results indicate distinct associations between cortico-thalamic and cortico-cortical integrity and vSTM capacity in preterm- and full-term-born adults. Data suggest compensatory cortico-cortical fiber re-organization for attention deficits after preterm delivery. Copyright © 2017 Elsevier Inc. All rights reserved.
Impact of Middle vs. Inferior Total Turbinectomy on Nasal Aerodynamics
Dayal, Anupriya; Rhee, John S.; Garcia, Guilherme J. M.
2016-01-01
Objectives This computational study aims to: (1) Use virtual surgery to theoretically investigate the maximum possible change in nasal aerodynamics after turbinate surgery; (2) Quantify the relative contributions of the middle and inferior turbinates to nasal resistance and air conditioning; (3) Quantify to what extent total turbinectomy impairs the nasal air conditioning capacity. Study Design Virtual surgery and computational fluid dynamics (CFD). Setting Academic tertiary medical center. Subjects and Methods Ten patients with inferior turbinate hypertrophy were studied. Three-dimensional models of their nasal anatomies were built based on pre-surgery computed tomography scans. Virtual surgery was applied to create models representing either total inferior turbinectomy (TIT) or total middle turbinectomy (TMT). Airflow, heat transfer, and humidity transport were simulated at a 15 L/min steady-state inhalation rate. The surface area stimulated by mucosal cooling was defined as the area where heat fluxes exceed 50 W/m2. Results In both virtual total turbinectomy models, nasal resistance decreased and airflow increased. However, the surface area where heat fluxes exceed 50 W/m2 either decreased (TIT) or did not change significantly (TMT), suggesting that total turbinectomy may reduce the stimulation of cold receptors by inspired air. Nasal heating and humidification efficiencies decreased significantly after both TIT and TMT. All changes were greater in the TIT models than in the TMT models. Conclusion TIT yields greater increases in nasal airflow, but also impairs the nasal air conditioning capacity to a greater extent than TMT. Radical resection of the turbinates may decrease the surface area stimulated by mucosal cooling. PMID:27165673
Impact of Middle versus Inferior Total Turbinectomy on Nasal Aerodynamics.
Dayal, Anupriya; Rhee, John S; Garcia, Guilherme J M
2016-09-01
This computational study aims to (1) use virtual surgery to theoretically investigate the maximum possible change in nasal aerodynamics after turbinate surgery, (2) quantify the relative contributions of the middle and inferior turbinates to nasal resistance and air conditioning, and (3) quantify to what extent total turbinectomy impairs the nasal air-conditioning capacity. Virtual surgery and computational fluid dynamics. Academic tertiary medical center. Ten patients with inferior turbinate hypertrophy were studied. Three-dimensional models of their nasal anatomies were built according to presurgery computed tomography scans. Virtual surgery was applied to create models representing either total inferior turbinectomy (TIT) or total middle turbinectomy (TMT). Airflow, heat transfer, and humidity transport were simulated at a steady-state inhalation rate of 15 L/min. The surface area stimulated by mucosal cooling was defined as the area where heat fluxes exceed 50 W/m(2). In both virtual total turbinectomy models, nasal resistance decreased and airflow increased. However, the surface area where heat fluxes exceed 50 W/m(2) either decreased (TIT) or did not change significantly (TMT), suggesting that total turbinectomy may reduce the stimulation of cold receptors by inspired air. Nasal heating and humidification efficiencies decreased significantly after both TIT and TMT. All changes were greater in the TIT models than in the TMT models. TIT yields greater increases in nasal airflow but also impairs the nasal air-conditioning capacity to a greater extent than TMT. Radical resection of the turbinates may decrease the surface area stimulated by mucosal cooling. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.
The Energy Research program may be on the verge of abdicating an important role it has traditionally played in the development and use of state-of-the-art computer systems. The lack of easy access to Class VI systems coupled to the easy availability of local, user-friendly systems is conspiring to drive many investigators away from forefront research in computational science and in the use of state-of-the-art computers for more discipline-oriented problem solving. The survey conducted under the auspices of this contract clearly demonstrates a significant suppressed demand for actual Class VI hours totaling the full capacity of one such system. The current usage is about a factor of 15 below this level. There is also a need for about 50% more capacity in the current mini/midi availability. Meeting the needs of the ER community for this level of computing power and capacity is most probably best achieved through the establishment of a central Class VI capability at some site linked through a nationwide network to the various ER laboratories and universities and interfaced with the local user-friendly systems at those remote sites.
Reaeration capacity of the Rock River between Lake Koshkonong, Wisconsin and Rockton, Illinois
Grant, R. Stephen
1978-01-01
The reaeration capacity of the Rock River from Lake Koshkonong, Wisconsin, to Rockton, Illinois, was determined using the energy-dissipation model. The model was calibrated using data from radioactive-tracer measurements in the study reach. Reaeration coefficients (K2) were computed for the annual minimum 7-day mean discharge that occurs on the average of once in 10 years (Q7,10). A time-of-travel model was developed using river discharge, slope, and velocity data from three dye studies. The model was used to estimate traveltime for the Q7,10 for use in the energy-dissipation model. During one radiotracer study, 17 mile per hour winds apparently increased the reaeration coefficient about 40 times. (Woodard-USGS)
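For context, the energy-dissipation model referenced above is usually written in the form below (LaTeX notation), where K_2 is the reaeration coefficient, Delta h is the water-surface elevation drop over the reach, t_f is the travel time through it, and c is an escape coefficient, calibrated in this study from the radioactive-tracer measurements. This is the generic form of the model, stated as an assumption rather than quoted from the report itself.

    K_2 = c\,\frac{\Delta h}{t_f}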
Use of Lean Response to Improve Pandemic Influenza Surge in Public Health Laboratories
Chang, Yin; Prystajecky, Natalie; Petric, Martin; Mak, Annie; Abbott, Brendan; Paris, Benjamin; Decker, K.C.; Pittenger, Lauren; Guercio, Steven; Stott, Jeff; Miller, Joseph D.
2012-01-01
A novel influenza A (H1N1) virus detected in April 2009 rapidly spread around the world. North American provincial and state laboratories have well-defined roles and responsibilities, including providing accurate, timely test results for patients and information for regional public health and other decision makers. We used the multidisciplinary response and rapid implementation of process changes based on Lean methods at the provincial public health laboratory in British Columbia, Canada, to improve laboratory surge capacity in the 2009 influenza pandemic. Observed and computer-simulated evaluation results from the rapid process changes showed that the use of Lean tools successfully expanded surge capacity, which enabled a response to the 10-fold increase in testing demands. PMID:22257385
NASA Astrophysics Data System (ADS)
Mori, Kazuo; Naito, Katsuhiro; Kobayashi, Hideo
This paper proposes an asymmetric traffic accommodation scheme using a multihop transmission technique for CDMA/FDD cellular communication systems. The proposed scheme applies multihop transmission to downlink packet transmissions that would require large transmission power as single-hop transmissions, in order to increase the downlink capacity. In these multihop transmissions, the vacant uplink band is used for transmissions from relay stations to destination mobile stations, which leads to further capacity enhancement in the downlink. The relay route selection method and power control method for the multihop transmissions are also investigated in the proposed scheme. The proposed scheme is evaluated by computer simulation, and the results show that it can achieve better system performance.
Networked Microcomputers--The Next Generation in College Computing.
ERIC Educational Resources Information Center
Harris, Albert L.
The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…
Report to the Institutional Computing Executive Group (ICEG) August 14, 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carnes, B
We have delayed this report from its normal distribution schedule for two reasons. First, due to the coverage provided in the White Paper on Institutional Capability Computing Requirements distributed in August 2005, we felt a separate 2005 ICEG report would not be value added. Second, we wished to provide some specific information about the Peloton procurement and we have just now reached a point in the process where we can make some definitive statements. The Peloton procurement will result in an almost complete replacement of current M&IC systems. We have plans to retire MCR, iLX, and GPS. We will replace them with new parallel and serial capacity systems based on the same node architecture in the new Peloton capability system named ATLAS. We are currently adding the first users to the Green Data Oasis, a large file system on the open network that will provide the institution with external collaboration data sharing. Only Thunder will remain from the current M&IC system list and it will be converted from Capability to Capacity. We are confident that we are entering a challenging yet rewarding new phase for the M&IC program. Institutional computing has been an essential component of our S&T investment strategy and has helped us achieve recognition in many scientific and technical forums. Through consistent institutional investments, M&IC has grown into a powerful unclassified computing resource that is being used across the Lab to push the limits of computing and its application to simulation science. With the addition of Peloton, the Laboratory will significantly increase the broad-based computing resources available to meet the ever-increasing demand for the large scale simulations indispensable to advancing all scientific disciplines. All Lab research efforts are bolstered through the long term development of mission driven scalable applications and platforms. The new systems will soon be fully utilized and will position Livermore to extend the outstanding science and technology breakthroughs the M&IC program has enabled to date.
DiNapoli, Jean Marie; Garcia-Dia, Mary Joy; Garcia-Ona, Leila; O'Flaherty, Deirdre; Siller, Jennifer
2014-02-01
The Healthy People 2020 (2012) report has identified that isolation, lack of social services, and a shortage of culturally competent providers serve as barriers to the health of lesbian, gay, bisexual, and transgender (LGBT) individuals who have HIV/AIDS. Self-transcendence theory proposes that individuals who face increased vulnerability or mortality may acquire an increased capacity for self-transcendence and its positive influence on mental health and well-being. The use of technology-enabled social and community support and group interventions through computer mediated self-help (CMSH) with LGBT individuals may help meet mental health needs of this group, and support healthy lifestyle practices. This article presents an overview of steps taken to propose a theory-based CMSH intervention for testing in research and eventual application in practice. © 2013.
Optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1979-01-01
High-capacity optical memories with relatively high data-transfer rates and multiport simultaneous-access capability may serve as the basis for new computer architectures. Several computer structures that might profitably use such memories are: (a) a simultaneous record-access system, (b) a simultaneously shared-memory computer system, and (c) a parallel digital processing structure.
US power plant sites at risk of future sea-level rise
NASA Astrophysics Data System (ADS)
Bierkandt, R.; Auffhammer, M.; Levermann, A.
2015-12-01
Unmitigated greenhouse gas emissions may increase global mean sea level by about 1 meter during this century. Such an elevation of mean sea level increases the risk of flooding in coastal areas. We compute the power capacity that is currently out of reach of a 100-year coastal flood but will be exposed to such a flood by the end of the century for different US states, if no adaptation measures are taken. The additional exposed capacity varies strongly among states. For Delaware it is 80% of the mean generated power load; for New York this number is 63% and for Florida 43%. The capacity that needs additional protection compared to today increases by more than 250% for Texas, 90% for Florida, and 70% for New York. Current development in power plant building points towards reduced future exposure to sea-level rise: proposed and planned power plants are less exposed than those currently operating. However, power plants that have been retired or canceled were less exposed than those operating at present. If sea-level rise is properly accounted for in future planning, adaptation to sea-level rise may be costly but is possible.
1991-09-01
System (CAPMS) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of ... front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by ... run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical ...
Sera, Toshihiro; Fujioka, Hideki; Yokota, Hideo; Makinouchi, Akitake; Himeno, Ryutaro; Schroter, Robert C; Tanishita, Kazuo
2004-05-01
Airway compliance is a key factor in understanding lung mechanics and is used as a clinical diagnostic index. Understanding such mechanics in small airways physiologically and clinically is critical. We have determined the "morphometric change" and "localized compliance" of small airways under "near"-physiological conditions; namely, the airways were embedded in parenchyma without dehydration and fixation. Previously, we developed a two-step method to visualize small airways in detail by staining the lung tissue with a radiopaque solution and then visualizing the tissue with a cone-beam microfocal X-ray computed tomography system (Sera et al. J Biomech 36: 1587-1594, 2003). In this study, we used this technique to analyze changes in diameter and length of the same small airways (approximately 150 μm ID) and then evaluated the localized compliance as a function of airway generation (Z). For smaller (<300 μm diameter) airways, diameter was 36% larger at end-tidal inspiration and 89% larger at total lung capacity, and length was 18% larger at end-tidal inspiration and 43% larger at total lung capacity than at functional residual capacity. Diameter, especially in smaller airways, did not vary linearly with V^(1/3) (where V is volume). With increasing lung pressure, diameter changed dramatically at a particular pressure, whereas length changed approximately linearly during inflation and deflation. The percentage of airway volume for smaller airways did not vary linearly with that of lung volume. Smaller airways were generally more compliant than larger airways with increasing Z and exhibited hysteresis in their diameter behavior. Airways at higher Z deformed at a lower pressure than those at lower Z. These results indicated that smaller airways did not behave homogeneously.
The OSG open facility: A sharing ecosystem
Jayatilaka, B.; Levshina, T.; Rynge, M.; ...
2015-12-23
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and has increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites; this is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy load to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
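As a rough illustration of the kind of computation involved, the sketch below derives steady-state probabilities for a plain M/M/c/K birth-death queue in Python. This is a deliberate simplification: the paper's model adds activation thresholds, hysteresis, and non-instantaneous server setup, none of which are reproduced here, and the arrival rate, service rate, server count, and capacity are illustrative placeholders.

```python
# Minimal sketch: steady-state probabilities of a plain M/M/c/K birth-death
# queue, as a simplified stand-in for the paper's threshold/hysteresis model.
# lam, mu, c, K below are illustrative parameters, not values from the paper.

def mmck_steady_state(lam: float, mu: float, c: int, K: int) -> list[float]:
    """Detailed balance: p[n+1] = p[n] * lam / (min(n+1, c) * mu), then normalize."""
    p = [1.0]
    for n in range(K):
        p.append(p[-1] * lam / (min(n + 1, c) * mu))
    total = sum(p)
    return [x / total for x in p]

probs = mmck_steady_state(lam=8.0, mu=1.0, c=10, K=50)
mean_jobs = sum(n * pn for n, pn in enumerate(probs))
print(f"P(empty) = {probs[0]:.4f}, mean jobs in system = {mean_jobs:.2f}")
```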
Survey on Security Issues in Cloud Computing and Associated Mitigation Techniques
NASA Astrophysics Data System (ADS)
Bhadauria, Rohit; Sanyal, Sugata
2012-06-01
Cloud Computing holds the potential to eliminate the requirement for setting up high-cost computing infrastructure for the IT-based solutions and services that industry uses. It promises to provide a flexible IT architecture, accessible through the internet for lightweight portable devices. This would allow a multi-fold increase in the capacity and capabilities of existing and new software. In a cloud computing environment, the data reside entirely on a set of networked resources, enabling them to be accessed through virtual machines. Since these data centers may lie in any corner of the world, beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and addressed. Also, one can never deny the possibility of a server breakdown, which has been witnessed rather often in recent times. There are various issues that need to be dealt with regarding security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening cloud computing adoption and diffusion and affecting the various stakeholders linked to it.
Ultra-Scale Computing for Emergency Evacuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng
2010-01-01
Emergency evacuations are carried out in anticipation of a disaster such as hurricane landfall or flooding, and in response to a disaster that strikes without a warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulations demand computational capacity beyond the desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.
NASA Technical Reports Server (NTRS)
1972-01-01
The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.
High Performance, Dependable Multiprocessor
NASA Technical Reports Server (NTRS)
Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric;
2006-01-01
With the ever increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1] and is now working on the development of a TRL5 prototype. For the present effort, Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.
Improvements to the fastex flutter analysis computer code
NASA Technical Reports Server (NTRS)
Taylor, Ronald F.
1987-01-01
Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, reduce run times by modifying the modal interpolation procedure, and add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VAX operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, damping, and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described, and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.
Graphics Processors in HEP Low-Level Trigger Systems
NASA Astrophysics Data System (ADS)
Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Cotta Ramusino, Angelo; Cretaro, Paolo; Di Lorenzo, Stefano; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero
2016-11-01
Usage of Graphics Processing Units (GPUs) for general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have typically employed GPUs for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, the latencies of all components need to be analysed, networking being the most critical. To keep latency under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, we assess how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade, where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup.
Redistricting Is Less Torturous When a Computer Does the Nitty-Gritty for You.
ERIC Educational Resources Information Center
Rust, Albert O.; Judd, Frank F.
1984-01-01
Describes "optimization" computer programing to aid in school redistricting. Using diverse demographic data, the computer plots district boundaries to minimize children's walking distance and maximize safety, improve racial balance, and keep enrollment within school capacity. (TE)
Design of a modular digital computer system
NASA Technical Reports Server (NTRS)
1973-01-01
A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.
Behaviour of Frictional Joints in Steel Arch Yielding Supports
NASA Astrophysics Data System (ADS)
Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel
2014-10-01
The loading capacity and ability of steel arch supports to accept deformations from the surrounding rock mass is influenced significantly by the function of the connections and in particular, the tightening of the bolts. This contribution deals with computer modelling of the yielding bolt connections for different torques to determine the load-bearing capacity of the connections. Another parameter that affects the loading capacity significantly is the value of the friction coefficient of the contacts between the elements of the joints. The authors investigated both the behaviour and conditions of the individual parts for three values of tightening moment and the relation between the value of screw tightening and load-bearing capacity of the connections for different friction coefficients. ANSYS software and the finite element method were used for the computer modelling. The solution is nonlinear because of the bi-linear material properties of steel and the large deformations. The geometry of the computer model was created from designs of all four parts of the structure. The calculation also defines the weakest part of the joint's structure based on stress analysis. The load was divided into two loading steps: the pre-tensioning of connecting bolts and the deformation loading corresponding to 50-mm slip of one support. The full Newton-Raphson method was chosen for the solution. The calculations were carried out on a computer at the Supercomputing Centre VSB-Technical University of Ostrava.
Combustor Computations for CO2-Neutral Aviation
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Brankovic, Andreja; Ryder, Robert C.; Huber, Marcia
2011-01-01
Knowing the pure-component C_p^0 or mixture C_p^0 as computed by a flexible code such as NIST-STRAPP or McBride-Gordon, one can, within reasonable accuracy, determine the thermophysical properties necessary to predict the combustion characteristics when there are no tabulated or computed data for those fluid mixtures, or only limited results at lower temperatures. (Note: C_p^0 is the molar heat capacity at constant pressure.) The method can be used in the evaluation of synthetic and biological fuels and blends, using the NIST code to compute the C_p^0 of the mixture. In this work, the values of the heat capacity were set at zero pressure, which provided the basis for integration to determine the required combustor properties from the injector to the combustor exit plane. The McBride-Gordon code was used to determine the heat capacity at zero pressure over a wide range of temperatures (room temperature to 6,000 K). The selected fluids were Jet-A, 224TMP (octane), and C12. It was found that the heat capacity loci were form-similar. It was then determined that the results (near 400 to 3,000 K) could be represented to within acceptable engineering accuracy by the simplified equation C_p^0 = A/T + B, where A and B are fluid-dependent constants and T is temperature (K).
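Since the abstract states the simplified fit explicitly, a minimal sketch of evaluating C_p^0 = A/T + B follows; the constants A and B are placeholders, not the fluid-dependent values determined in the report.

```python
# Minimal sketch of the simplified fit quoted above: Cp0 = A/T + B.
# A and B are placeholder constants (illustrative only); the report's
# fluid-dependent values are not reproduced here.

def cp0(T: float, A: float, B: float) -> float:
    """Zero-pressure molar heat capacity from the two-constant fit."""
    return A / T + B

A_fuel, B_fuel = -5.0e4, 400.0          # illustrative constants only
for T in (400.0, 1500.0, 3000.0):       # K, within the quoted validity range
    print(f"T = {T:6.0f} K  ->  Cp0 = {cp0(T, A_fuel, B_fuel):7.1f} (arbitrary units)")
```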
The OSG Open Facility: an on-ramp for opportunistic scientific computing
NASA Astrophysics Data System (ADS)
Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.
2017-10-01
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
A Method for Measuring Collection Expansion Rates and Shelf Space Capacities.
ERIC Educational Resources Information Center
Sapp, Gregg; Suttle, George
1994-01-01
Describes an effort to quantify annual collection expansion and shelf space capacities with a computer spreadsheet program. Methods used to quantify the space taken at the beginning of the project; to estimate annual rate of collection growth; and to plot stack space and usage, volume equivalents and usage, and growth capacity are covered.…
A case for Redundant Arrays of Inexpensive Disks (RAID)
NASA Technical Reports Server (NTRS)
Patterson, David A.; Gibson, Garth; Katz, Randy H.
1988-01-01
Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
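To make the capacity side of that trade-off concrete, here is a small hedged sketch of usable-capacity fractions for a few conventional RAID layouts; the mapping of levels to redundancy schemes follows common usage and is not a reproduction of the paper's own cost/performance model.

```python
# Minimal sketch: usable-capacity fraction of an N-disk array for a few
# conventional RAID layouts (simplified; not the paper's cost model).

def usable_fraction(level: int, n_disks: int) -> float:
    if level == 0:                      # striping, no redundancy
        return 1.0
    if level == 1:                      # mirroring
        return 0.5
    if level == 5 and n_disks >= 3:     # one disk's worth of distributed parity
        return (n_disks - 1) / n_disks
    raise ValueError("level/disk-count combination not covered by this sketch")

for level in (0, 1, 5):
    print(f"RAID {level}, 10 disks: {usable_fraction(level, 10):.0%} usable")
```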
NASA Astrophysics Data System (ADS)
Croissant, Thomas; Lague, Dimitri; Davy, Philippe
2016-04-01
Climate fluctuations at geological timescales control the capacity of rivers to transport sediment, with consequences for geochemical cycles, sedimentary basin dynamics, and sedimentation/tectonics interactions. While the impact of differential friction generated by riparian vegetation has been studied for individual flood events, its impact on the long-term sediment transport capacity of rivers, modulated by the frequency of floods, remains unknown. Here, we investigate this effect on a simplified river-floodplain configuration obeying observed hydraulic scaling laws. We numerically integrate the full frequency-magnitude distribution of discharge events and its impact on the transport capacity of bedload and suspended material for various levels of vegetation-linked differential friction. We demonstrate that riparian vegetation, by acting as a virtual confinement of the flow, (i) significantly increases the instantaneous transport capacity of the river independently of the transport mode and (ii) increases long-term bedload transport rates as a function of discharge variability. Our results expose the dominance of flood frequency over riparian vegetation in setting the long-term sediment transport capacity. Therefore, flood frequency has to be considered when evaluating long-term bedload transport capacity, while floodplain vegetation is important only in high discharge variability regimes. By comparing the transport capacity of unconfined alluvial rivers and confined bedrock gorges, we demonstrate that the latter always presents the highest long-term transport capacity at equivalent width and slope. The loss of confinement at the transition between bedrock and alluvial reaches must be compensated by a widening or a steepening of the alluvial channel to avoid infinite storage. Because steepening is never observed in natural systems, we compute the alluvial widening factor, which varies between 3 and 11 times the width of the bedrock channel depending on riparian vegetation and discharge variability. This result is well supported by measurements made in natural river systems in different locations worldwide (Taiwan, the Himalayas, and New Zealand). Although bank cohesion is often invoked as a property that sets alluvial river width, we propose unconfinement as another important control factor.
The Tractable Cognition Thesis
ERIC Educational Resources Information Center
van Rooij, Iris
2008-01-01
The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities and managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacity varies with the priorities assigned to cost and to associated risks, such as environmental risk, health risk, or risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and flourishes as an unorganized sector, mainly in backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model that can address the multiple objectives of cost, environmental risk, socially perceived risk, and health risk while selecting the optimum configuration of existing and proposed facilities (locations and capacities).
NASA Technical Reports Server (NTRS)
Mortensen, L. O.
1982-01-01
The Mark IV ground communication facility (GCF), as implemented to support the network consolidation program, is reviewed. Changes to the GCF are concentrated in the area of increased capacity. Common carrier circuits are the medium for data transfer. Message multiplexing in the Mark IV era differs from that of the Mark III era in that all multiplexing is done in a GCF computer under GCF software control, similar to the multiplexing currently done in the high-speed data subsystem.
Federal Aviation Administration Aviation System Capital Investment Plan 1993
1993-12-01
Facilitates full use of terminal airspace capacity ... increases safety and efficiency ... Airport Surface Traffic ... optimizes sequencing and ... installation of tower control computer complexes (TCCCs) in selected airport traffic control towers ... AAS software for terminal and en route ATC ... TCCCs ... project provides economical radar service at airports with air traffic densities high enough to ... (ASR-4/5/6, and install 40 ASR-9s at ASR-4/5/6 sites) ...
Multimode and single-mode fibers for data center and high-performance computing applications
NASA Astrophysics Data System (ADS)
Bickham, Scott R.
2016-03-01
Data center (DC) and high performance computing (HPC) applications have traditionally used a combination of copper, multimode fiber, and single-mode fiber interconnects with relative percentages that depend on factors such as the line rate, reach, and connectivity costs. The balance between these transmission media has increasingly shifted towards optical fiber due to the reach constraints of copper at data rates of 10 Gb/s and higher. The percentage of single-mode fiber deployed in the DC has also grown slightly since 2014, coinciding with the emergence of mega DCs with extended distance needs beyond 100 m. This trend will likely continue in the next few years as DCs expand their capacity from 100G to 400G, increase the physical size of their facilities, and begin to utilize silicon-photonics transceiver technology. However, there is still a need for low-cost and high-density connectivity, and this is sustaining the deployment of multimode fiber for links <= 100 m. In this paper, we discuss options for single-mode and multimode fibers in DCs and HPCs and introduce a reduced-diameter multimode fiber concept which provides intra- and inter-rack connectivity as well as compatibility with silicon-photonic transceivers operating at 1310 nm. We also discuss the trade-offs between single-mode fiber attributes such as bend-insensitivity, attenuation, and mode field diameter and their roles in capacity and connectivity in data centers.
Starck, J M; Weimer, I; Aupperle, H; Müller, K; Marschang, R E; Kiefer, I; Pees, M
2015-11-01
A qualitative and quantitative morphological study of the pulmonary exchange capacity of healthy and diseased Burmese pythons (Python molurus) was carried out in order to test the hypothesis that the high morphological excess capacity for oxygen exchange in the lungs of these snakes is one of the reasons why pathological processes extend throughout the lung parenchyma and impair major parts of the lungs before clinical signs of respiratory disease become apparent. Twenty-four Burmese pythons (12 healthy and 12 diseased) were included in the study. A stereology-based approach was used to quantify the lung parenchyma using computed tomography. Light microscopy was used to quantify tissue compartments and the respiratory exchange surface, and transmission electron microscopy was used to measure the thickness of the diffusion barrier. The morphological diffusion capacity for oxygen of the lungs and the anatomical diffusion factor were calculated. The calculated anatomical diffusion capacity was compared with published values for oxygen consumption of healthy snakes, and the degree to which the exchange capacity can be obstructed before normal physiological function is impaired was estimated. Heterogeneous pulmonary infections result in graded morphological transformations of pulmonary parenchyma involving lymphocyte migration into the connective tissue and thickening of the septal connective tissue, increasing thickness of the diffusion barrier and increasing transformation of the pulmonary epithelium into a columnar pseudostratified or stratified epithelium. The transformed epithelium developed by hyperplasia of ciliated cells arising from the tip of the faveolar septa and by hyperplasia of type II pneumocytes. These results support the idea that the lungs have a remarkable overcapacity for oxygen consumption and that the development of pulmonary disease continuously reduces the capacity for oxygen consumption. However, due to the overcapacity of the lungs, this reduction does not result in clinical signs and disease can progress unrecognized for an extended period. Copyright © 2015 Elsevier Ltd. All rights reserved.
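For readers unfamiliar with how such a morphological diffusion capacity is typically estimated, the sketch below applies the standard morphometric relation (permeation coefficient × exchange surface area / harmonic mean barrier thickness). This is a generic textbook relation with placeholder inputs, not the authors' exact calculation or their measured values.

```python
# Generic morphometric estimate (placeholder inputs, not the study's data):
# diffusion capacity ~ Krogh permeation coefficient * exchange surface area
# / harmonic mean barrier thickness.

def morphometric_diffusion_capacity(surface_area_cm2: float,
                                    barrier_thickness_um: float,
                                    krogh_k: float) -> float:
    """Return an O2 diffusion capacity estimate (units follow krogh_k)."""
    barrier_cm = barrier_thickness_um * 1e-4   # micrometres -> centimetres
    return krogh_k * surface_area_cm2 / barrier_cm

d_lo2 = morphometric_diffusion_capacity(surface_area_cm2=5.0e4,     # placeholder
                                        barrier_thickness_um=0.8,   # placeholder
                                        krogh_k=3.3e-8)             # placeholder coefficient
print(f"Estimated diffusion capacity = {d_lo2:.3g} (units set by krogh_k)")
```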
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means the proposed model reduces computational complexity.
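The sketch below shows one common way to handle such a weighted biobjective 0-1 program: a weighted-sum scalarization solved with an off-the-shelf MILP library (PuLP). The flights, slots, delay costs, risk scores, and capacities are toy values invented for illustration; this is not the paper's SATNFO formulation.

```python
# Toy weighted-sum scalarization of a biobjective 0-1 assignment problem
# (delay cost vs. operational risk). Illustrative data; not the SATNFO model.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

flights, slots = ["F1", "F2", "F3"], ["S1", "S2"]
delay = {("F1", "S1"): 0, ("F1", "S2"): 10, ("F2", "S1"): 5, ("F2", "S2"): 0,
         ("F3", "S1"): 8, ("F3", "S2"): 3}
risk = {("F1", "S1"): 0.2, ("F1", "S2"): 0.1, ("F2", "S1"): 0.4, ("F2", "S2"): 0.1,
        ("F3", "S1"): 0.3, ("F3", "S2"): 0.5}
capacity = {"S1": 2, "S2": 1}          # toy slot capacities
w = 0.7                                # weight balancing the two objectives

x = {(f, s): LpVariable(f"x_{f}_{s}", cat="Binary") for f in flights for s in slots}
prob = LpProblem("toy_biobjective_atfm", LpMinimize)
prob += lpSum(w * delay[k] * x[k] + (1 - w) * 100 * risk[k] * x[k] for k in x)
for f in flights:                      # each flight is assigned exactly one slot
    prob += lpSum(x[(f, s)] for s in slots) == 1
for s in slots:                        # respect slot capacities
    prob += lpSum(x[(f, s)] for f in flights) <= capacity[s]
prob.solve()
print({k: int(value(v)) for k, v in x.items()})
```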
Polyphony: A Workflow Orchestration Framework for Cloud Computing
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom
2010-01-01
Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, and spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
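Polyphony's own API is not shown in the abstract, so the sketch below illustrates only the general pattern it describes: farming independent image-processing tasks out to a worker pool and resubmitting tasks whose workers fail. The function names and parameters are hypothetical; only Python's standard library is used.

```python
# Generic task-farming pattern with retries (illustrative; not Polyphony's API).
from concurrent.futures import ProcessPoolExecutor, as_completed

def process_image(image_id: str) -> str:
    # Placeholder for a real per-image computation.
    return f"{image_id}: processed"

def run_with_retries(image_ids, max_attempts=3, workers=8):
    attempts = {img: 0 for img in image_ids}
    results, remaining = {}, list(image_ids)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while remaining:
            futures = {pool.submit(process_image, img): img for img in remaining}
            remaining = []
            for fut in as_completed(futures):
                img = futures[fut]
                try:
                    results[img] = fut.result()
                except Exception:
                    attempts[img] += 1
                    if attempts[img] < max_attempts:
                        remaining.append(img)   # resubmit the failed task
    return results

if __name__ == "__main__":
    print(run_with_retries([f"img_{i:03d}" for i in range(10)]))
```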
Ethics in published brain-computer interface research
NASA Astrophysics Data System (ADS)
Specker Sullivan, L.; Illes, J.
2018-02-01
Objective. Sophisticated signal processing has opened the doors to more research with human subjects than ever before. The increase in the use of human subjects in research comes with a need for increased human subjects protections. Approach. We quantified the presence or absence of ethics language in published reports of brain-computer interface (BCI) studies that involved human subjects and qualitatively characterized ethics statements. Main results. Reports of BCI studies with human subjects that are published in neural engineering and engineering journals are anchored in the rationale of technological improvement. Ethics language is markedly absent, omitted from 31% of studies published in neural engineering journals and 59% of studies in biomedical engineering journals. Significance. As the integration of technological tools with the capacities of the mind deepens, explicit attention to ethical issues will ensure that broad human benefit is embraced and not eclipsed by technological exclusiveness.
The architecture of tomorrow's massively parallel computer
NASA Technical Reports Server (NTRS)
Batcher, Ken
1987-01-01
Goodyear Aerospace delivered the Massively Parallel Processor (MPP) to NASA/Goddard in May 1983, over three years ago. Ever since then, Goodyear has tried to look in a forward direction. There is always some debate as to which way is forward when it comes to supercomputer architecture. Improvements to the MPP's massively parallel architecture are discussed in the areas of data I/O, memory capacity, connectivity, and indirect (or local) addressing. In I/O, transfer rates up to 640 megabytes per second can be achieved. There are devices that can supply the data and accept it at this rate. The memory capacity can be increased up to 128 megabytes in the ARU and over a gigabyte in the staging memory. For connectivity, there are several different kinds of multistage networks that should be considered.
Gold Nanoparticles for Neural Prosthetics Devices
Zhang, Huanan; Shih, Jimmy; Zhu, Jian; Kotov, Nicholas A.
2012-01-01
Treatments of neurological diseases and the realization of brain-computer interfaces require ultrasmall electrodes which are “invisible” to resident immune cells. Functional electrodes smaller than 50μm are impossible to produce with traditional materials due to high interfacial impedance at the characteristic frequency of neural activity and insufficient charge storage capacity. The problem can be resolved by using gold nanoparticle nanocomposites. Careful comparison indicates that layer-by-layer assembled films from Au NPs provide more than threefold improvement in interfacial impedance and one order of magnitude increase in charge storage capacity. Prototypes of microelectrodes could be made using traditional photolithography. Integration of unique nanocomposite materials with microfabrication techniques opens the door for practical realization of the ultrasmall implantable electrodes. Further improvement of electrical properties is expected when using special shapes of gold nanoparticles. PMID:22734673
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
Implementing direct, spatially isolated problems on transputer networks
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
Parametric studies were performed on transputer networks of up to 40 processors to determine how to implement and maximize the performance of solutions to problems where no processor-to-processor data transfer is required (spatially isolated problems). Two types of problems were investigated: a computationally intensive problem whose solution required the transmission of 160 bytes of data through the parallel network, and a communication-intensive example that required the transmission of 3 Mbytes of data through the network. These data consist of solutions being sent back to the host processor, not intermediate results for another processor to work on. Studies were performed on both integer and floating-point transputers. The latter features an on-chip floating-point math unit and offers approximately an order of magnitude performance increase over the integer transputer on real-valued computations. The results indicate that a minimum amount of work is required on each node per communication to achieve high network speedups (efficiencies). The floating-point processor requires approximately an order of magnitude more work per communication than the integer processor because of the floating-point unit's increased computing capacity.
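A back-of-the-envelope model helps explain why a minimum amount of work per communication is needed: if each task's result must return to the host over a serialized link, the link time grows with the node count while the compute time does not. The timings below are hypothetical and are not the paper's measurements.

```python
# Back-of-the-envelope sketch (hypothetical timings, not measured data):
# efficiency of farming independent tasks to n processors when each task costs
# t_work seconds of compute and t_comm seconds of serialized result transfer.

def efficiency(n_procs: int, t_work: float, t_comm: float) -> float:
    serial_time = n_procs * t_work                 # all tasks on one processor
    parallel_time = t_work + n_procs * t_comm      # concurrent work, serialized results
    speedup = serial_time / parallel_time
    return speedup / n_procs

for ratio in (1, 10, 100):                         # work-to-communication ratio
    eff = efficiency(40, t_work=ratio * 1e-3, t_comm=1e-3)
    print(f"work/comm = {ratio:3d}: efficiency on 40 nodes = {eff:.2f}")
```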
Interference assembly and fretting wear analysis of hollow shaft.
Han, Chuanjun; Zhang, Jie
2014-01-01
Fretting damage often appears in interference-fit assemblies. A finite element model of a hollow shaft and shaft sleeve was established, and the equivalent stress and contact stress were computed after interference assembly. The assembly of hollow shaft and shaft sleeve was then subjected to a whirling bending load, and the contact status (sticking, sliding, and opening) and the distribution of stress along one typical contact line were computed under different loads, interferences, hollow degrees, friction coefficients, and wear quantities. A judgment formula for the contact state was established by introducing a correction coefficient k. The computational results showed that an "edge effect" appears on the contact surface after the interference fit. The size of the slip zone is unchanged as the bending load increases. The greater the interference value, the larger the wear range. The hollow degree does not influence the size of the stick zone but controls the position of the slip-open junction point. Tangential contact stress increases with the friction coefficient, which has little effect on normal contact stress. The relationship between opening size and wear quantity is approximately linear.
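The paper's corrected judgment formula is not reproduced in the abstract, so the sketch below shows only the conventional Coulomb-type classification of contact status with a correction coefficient k applied to the friction limit; the stresses, friction coefficient, and k value are hypothetical.

```python
# Illustrative Coulomb-type contact-status check with a correction factor k.
# Not the paper's formula; all values are hypothetical.

def contact_state(p_normal: float, tau_tangential: float,
                  mu: float, k: float = 1.0) -> str:
    """Classify a contact point as open, sticking, or sliding."""
    if p_normal <= 0.0:
        return "open"                               # surfaces separated
    if abs(tau_tangential) < k * mu * p_normal:
        return "sticking"                           # below the corrected friction limit
    return "sliding"

# Hypothetical nodal values along a contact line (normal pressure, shear stress).
for p, tau in [(0.0, 5.0), (120.0, 20.0), (80.0, 70.0)]:
    print(p, tau, "->", contact_state(p, tau, mu=0.2, k=1.1))
```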
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, along with a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 VAX 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
Security and Cloud Outsourcing Framework for Economic Dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm of solving these issues consists of in-house high performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.
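The paper's actual transformation is not detailed in the abstract; the sketch below only illustrates the general idea of masking an equality-form LP with random matrices before handing it to an external solver and then unmasking the returned solution. The matrices, data, and the specific masking scheme (a random row mixer plus a positive diagonal variable scaling) are illustrative and make no security claims.

```python
# Illustrative LP masking (not the paper's scheme): disguise
#   min c^T x  s.t.  A x = b, x >= 0
# before sending it to an external solver, then unmask the solution.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0, 1.0], [2.0, 0.5, 1.0]])
b = np.array([10.0, 8.0])
c = np.array([3.0, 2.0, 4.0])

Q = rng.uniform(0.5, 1.5, size=(2, 2)) + np.eye(2)   # random invertible row mixer
M = np.diag(rng.uniform(0.5, 2.0, size=3))           # positive diagonal scaling (x = M y)

A_masked, b_masked, c_masked = Q @ A @ M, Q @ b, M @ c   # disguised problem data

# The "cloud" solves the masked problem; here scipy stands in for it.
res = linprog(c_masked, A_eq=A_masked, b_eq=b_masked, bounds=(0, None), method="highs")
x_recovered = M @ res.x                                  # unmask the solution
print("masked solution:", res.x, "-> recovered x:", x_recovered)
```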
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Mcewan, S. D.; Spry, A. J.
1985-01-01
Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.
A review of GPU-based medical image reconstruction.
Després, Philippe; Jia, Xun
2017-10-01
Tomographic image reconstruction is a computationally demanding task, even more so when advanced models are used to describe a more complete and accurate picture of the image formation process. Such advanced modeling and reconstruction algorithms can lead to better images, often with less dose, but at the price of long calculation times that are hardly compatible with clinical workflows. Fortunately, reconstruction tasks can often be executed advantageously on Graphics Processing Units (GPUs), which are exploited as massively parallel computational engines. This review paper focuses on recent developments made in GPU-based medical image reconstruction, from a CT, PET, SPECT, MRI and US perspective. Strategies and approaches to get the most out of GPUs in image reconstruction are presented as well as innovative applications arising from an increased computing capacity. The future of GPU-based image reconstruction is also envisioned, based on current trends in high-performance computing. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Functional capacities of lungs and thorax in beagles after prolonged residence at 3,100 m.
Johnson, R L; Cassidy, S S; Grover, R F; Schutte, J E; Epstein, R H
1985-12-01
Functional capacities of the lungs and thorax in beagles taken to high altitude as adults for 33 mo or in beagles raised from puppies at high altitude were compared with functional capacities in corresponding sets of beagles kept simultaneously at sea level. Comparisons were made after reacclimatization to sea level. Lung volumes, airway pressures, esophageal pressures, CO diffusing capacities (DLCO), pulmonary blood flow, and lung tissue volume (Vt) were measured by a rebreathing technique at inspired volumes ranging from 15 to 90 ml/kg. In beagles raised from puppies we measured anatomical distribution of intrathoracic air and tissue using X-ray computed tomography at transpulmonary pressures of 20 cm H2O. Lung and thoracic distensibility, DLCO, and Vt were not different between beagles that had been kept at high altitude for 33 mo as adults and control subjects kept simultaneously at sea level. Lung distensibility, DLCO, and Vt were significantly greater in beagles raised at high altitude than control subjects raised simultaneously at sea level. Thoracic distensibility was not increased in beagles raised at high altitude; the larger lung volume was accommodated by a lower diaphragm, not a larger rib cage.
Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco
2015-02-01
Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult, and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative for measuring surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length of stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included the number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application-phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.
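To show how such a comparison against derived benchmarks might look in practice, here is a small sketch using made-up numbers: a benchmark and simple control limits (mean ± 2 SD) are computed from derivation-phase runs, and an application-phase group's median LOS is checked against them. The values and the specific limit rule are illustrative, not the study's data or exact method.

```python
# Illustrative benchmark comparison (made-up numbers, simplified limit rule).
from statistics import mean, stdev, median

# Median minutes from arrival to disposition decision in each derivation run.
derivation_runs = [182, 199, 175, 210, 190, 205, 188, 196, 178, 202]
benchmark = mean(derivation_runs)
limit = 2 * stdev(derivation_runs)          # simple +/- 2 SD control limits

# Per-patient LOS (minutes) for one application-phase group.
application_group = [260, 240, 231, 255, 248, 262, 239]
group_median = median(application_group)

status = "within" if abs(group_median - benchmark) <= limit else "outside"
print(f"benchmark = {benchmark:.0f} min, limits = +/-{limit:.0f} min, "
      f"group median = {group_median:.0f} min -> {status} control limits")
```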
Timpka, T
2001-08-01
In an analysis departing from the global health situation, the foundation for a change of paradigm in health informatics based on socially embedded information infrastructures and technologies is identified and discussed. It is shown how increasing computing and data transmission capacity can be employed for proactive health computing. As a foundation for ubiquitous health promotion and prevention of disease and injury, proactive health systems use data from multiple sources to supply individuals and communities with evidence-based information on means to improve their state of health and avoid health risks. The systems are characterised by: (1) being profusely connected to the world around them, using perceptual interfaces, sensors and actuators; (2) responding to external stimuli at faster than human speeds; (3) networked feedback loops; and (4) humans remaining in control, while being left outside the primary computing loop. The extended scientific mission of this new partnership between computer science, electrical engineering and social medicine is suggested to be the investigation of how the dissemination of information and communication technology on democratic grounds can be made even more important for global health than sanitation and urban planning became a century ago.
Weighted Watson-Crick automata
NASA Astrophysics Data System (ADS)
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-01
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences whose strands are related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants, and show that the weighted variants increase the generative power of Watson-Crick automata.
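As a loose illustration of the weighting idea only, and not of the paper's formal construction, the sketch below checks Watson-Crick complementarity of a double strand with two heads advancing in lock-step while multiplying per-symbol weights into a run weight, accepting above a cut-point. The per-base weights and the cut-point are invented.

```python
"""Minimal sketch, not the paper's formal definition: a toy 'weighted
Watson-Crick' style check.  Two heads scan the upper and lower strands of a
double-stranded sequence; each step verifies Watson-Crick complementarity and
multiplies a per-symbol weight into the run weight (a probabilistic reading).
The weights and the acceptance cut-point are invented for illustration."""

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

# Hypothetical per-base transition weights (e.g., confidence of a correct read)
WEIGHTS = {"A": 0.99, "T": 0.99, "C": 0.97, "G": 0.97}

def weighted_wk_accept(upper, lower, cut_point=0.8):
    """Accept iff strands are complementary and the product of weights >= cut_point."""
    if len(upper) != len(lower):
        return False, 0.0
    run_weight = 1.0
    for u, l in zip(upper, lower):          # both heads advance in lock-step here
        if COMPLEMENT.get(u) != l:          # violates the complementarity relation
            return False, 0.0
        run_weight *= WEIGHTS[u]            # accumulate the transition weight
    return run_weight >= cut_point, run_weight

print(weighted_wk_accept("ATCG", "TAGC"))   # complementary -> (True, ~0.92)
print(weighted_wk_accept("ATCG", "TAGG"))   # mismatch      -> (False, 0.0)
```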
ERIC Educational Resources Information Center
Menekse, Muhsin
2015-01-01
While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…
Nature apps: Waiting for the revolution.
Jepson, Paul; Ladle, Richard J
2015-12-01
Apps are small task-orientated programs with the potential to integrate the computational and sensing capacities of smartphones with the power of cloud computing, social networking, and crowdsourcing. They have the potential to transform how humans interact with nature, cause a step change in the quantity and resolution of biodiversity data, democratize access to environmental knowledge, and reinvigorate ways of enjoying nature. To assess the extent to which this potential is being exploited in relation to nature, we conducted an automated search of the Google Play Store using 96 nature-related terms. This returned data on ~36,304 apps, of which ~6,301 were nature-themed. We found that few of these exploit the full range of capabilities inherent in the technology and/or have successfully captured the public imagination. Such breakthroughs will only be achieved by increasing the frequency and quality of collaboration between environmental scientists, information engineers, computer scientists, and interested publics.
Code of Federal Regulations, 2014 CFR
2014-01-01
... terrestrial technology having the capacity to provide transmission facilities that enable subscribers of the...) Computer Access Points and wireless access, that is used for the purposes of providing free access to and..., and after normal working hours and on Saturdays or Sunday. Computer Access Point means a new computer...
Secure Wireless Networking at Simon Fraser University.
ERIC Educational Resources Information Center
Johnson, Worth
2003-01-01
Describes the wireless local area network (WLAN) at Simon Fraser University, British Columbia, Canada. Originally conceived to address computing capacity and reduce university computer space demands, the WLAN has provided a seamless computing environment for students and solved a number of other campus problems as well. (SLD)
20 CFR 628.325 - Incentive grants, capacity building, and technical assistance.
Code of Federal Regulations, 2011 CFR
2011-04-01
... for the development of Statewide communications and training mechanisms involving computer-based communication technologies that directly facilitate interaction with the National Capacity Building and... section 205(a) of the Act, in developing electronic communications, training mechanisms and/or...
20 CFR 628.325 - Incentive grants, capacity building, and technical assistance.
Code of Federal Regulations, 2012 CFR
2012-04-01
... for the development of Statewide communications and training mechanisms involving computer-based communication technologies that directly facilitate interaction with the National Capacity Building and... section 205(a) of the Act, in developing electronic communications, training mechanisms and/or...
National Software Capacity: Near-Term Study
1990-05-01
Table-of-contents excerpt: ... Productivity Gains; 2. Labor Markets and Human Resource Impacts on Capacity; 2.1. Career Ladders (2.1.1. Industry; 2.1.2. Civil Service; 2.1.3. ...); Inflows to and Outflows from Computer-Related Jobs; 2.3.2. Inflows to and Outflows from the DoD Industrial Contractors; 3. Major Impacts of Other Factors on Capacity: A Systems View; 3.1. Organizational Impacts on Capacity (3.1.1. Requirements Specification and Changes; 3.1.2. The Contracting ...)
NASA Astrophysics Data System (ADS)
2009-09-01
IBM scientist wins magnetism prizes Stuart Parkin, an applied physicist at IBM's Almaden Research Center, has won the European Geophysical Society's Néel Medal and the Magnetism Award from the International Union of Pure and Applied Physics (IUPAP) for his fundamental contributions to nanodevices used in information storage. Parkin's research on giant magnetoresistance in the late 1980s led IBM to develop computer hard drives that packed 1000 times more data onto a disk; his recent work focuses on increasing the storage capacity of solid-state electronic devices.
[Chapter 2. Internet of Things help to collect Big Data].
Brouard, Benoît
2017-10-27
According to the report "The Internet of Things Market", the number of connected devices will reach 68 billion in 2020. In 2012, the total amount of data was 500 petabytes. So, after the race to increase computing power, the challenge now lies in the capacity to store all these data in the cloud, to open access to them, and to analyze them properly. The use of these data is a major challenge for medical research and public health.
Future trends in computer waste generation in India.
Dwivedy, Maheshwar; Mittal, R K
2010-11-01
The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze their flow at the end of their useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates future projection of computer penetration rate utilizing their first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound case during 2025. Finally, we compare the future obsolete PC generation amount of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
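The bounding idea in this record can be sketched with a three-parameter logistic penetration curve evaluated under a low and a high assumed carrying capacity. Everything below is illustrative: the parameter values are not the paper's estimates, and the obsolete-unit proxy (installed base one average lifespan earlier) is much cruder than the paper's lifespan-distribution model.

```python
"""Minimal sketch of a three-parameter logistic bounding analysis:
P(t) = K / (1 + a*exp(-b*t)) for assumed lower/upper carrying capacities K.
All numbers are placeholders, not the paper's calibrated values."""
import math

def logistic(t, K, a, b):
    """Installed base (million units) t years after an assumed reference year."""
    return K / (1.0 + a * math.exp(-b * t))

LIFESPAN = 5            # assumed average first lifespan (years)
A, B = 50.0, 0.35       # assumed shape parameters of the logistic curve

for K in (150.0, 600.0):                     # assumed lower / upper carrying capacity
    t = 15                                   # say, 15 years after the reference year
    installed = logistic(t, K, A, B)
    obsolete = logistic(t - LIFESPAN, K, A, B)   # units old enough to be retired
    print(f"K={K:5.0f}M: installed ~ {installed:6.1f}M, "
          f"potentially obsolete ~ {obsolete:6.1f}M units")
```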
Study on shear strengthening of RC continuous T-beams using different layers of CFRP strips
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alferjani, M. B. S.; Samad, A. A. Abdul; Mohamad, Noridah
2015-05-15
Carbon fiber reinforced polymer (CFRP) laminates are externally bonded to reinforced concrete (RC) members to provide additional strength in flexure, shear, etc. This paper presents the results of an experimental investigation on enhancing the shear capacity of RC continuous T-beams using different layers of CFRP wrapping schemes. A total of three concrete beams were tested, and various sheet configurations and layouts were studied to determine their effects on the ultimate shear strength and shear capacity of the beams. One beam was kept as a control beam, while the other beams were strengthened with externally bonded CFRP strips with three-side bonding and one or two layers of CFRP strips. From the test results, it was found that all schemes were effective in enhancing the shear strength of the RC beams, and that the strengthening effect increased with the number of sheet layers, which provided the most effective strengthening for the RC continuous T-beam. The beam strengthened using this scheme showed a 23.21% increase in shear capacity compared to the control beam. Two prediction models available in the literature were used to compute the contribution of the CFRP strips and were compared with the experimental results.
Collet, Lila; Ruelland, Denis; Borrell-Estupina, Valérie; Dezetter, Alain; Servat, Eric
2013-09-01
Assessing water supply capacity is crucial to meet stakeholders' needs, notably in the Mediterranean region. This region has been identified as a climate change hot spot, and as a region where water demand is continuously increasing due to population growth and the expansion of irrigated areas. The Hérault River catchment (2500 km², France) is a typical example, and a negative trend in discharge has been observed since the 1960s. In this context, local stakeholders first need to understand the processes controlling the evolution of water resources and demands in the past in order to later evaluate future water supply capacity and anticipate the tensions users could be confronted with in the future. A modelling framework is proposed at a 10-day time step to assess whether water resources have been able to meet water demands over the last 50 years. Water supply was evaluated using hydrological modelling and a dam management model. Water demand dynamics were estimated for the domestic and agricultural sectors. A water supply capacity index is computed to assess the extent and the frequency to which water demand has been satisfied at the sub-basin scale. Simulated runoff dynamics were in good agreement with observations over the calibration and validation periods. Domestic water demand has increased considerably since the 1980s and is characterized by a seasonal peak in summer. Agricultural demand has increased in the downstream sub-basins and decreased upstream where irrigated areas have decreased. As a result, although most water demands were satisfied between 1961 and 1980, irrigation requirements in summer have sometimes not been satisfied since the 1980s. This work is the first step toward evaluating possible future changes in water allocation capacity in the catchment, using future climate change, dam management and water use scenarios. Copyright © 2013 Elsevier B.V. All rights reserved.
Enhancing Lay Counselor Capacity to Improve Patient Outcomes with Multimedia Technology.
Robbins, Reuben N; Mellins, Claude A; Leu, Cheng-Shiun; Rowe, Jessica; Warne, Patricia; Abrams, Elaine J; Witte, Susan; Stein, Dan J; Remien, Robert H
2015-06-01
Multimedia technologies offer powerful tools to increase the capacity of health workers to deliver standardized, effective, and engaging antiretroviral medication adherence counseling. Masivukeni is an innovative multimedia-based, computer-driven, lay counselor-delivered intervention designed to help people living with HIV in resource-limited settings achieve optimal adherence. This pilot study examined medication adherence and key psychosocial outcomes among 55 non-adherent South African HIV+ patients, on antiretroviral therapy (ART) for at least 6 months, who were randomized to receive either Masivukeni or standard of care (SOC) counseling for ART non-adherence. At baseline, there were no significant differences between the SOC and Masivukeni groups on any outcome variables. At post-intervention (approximately 5-6 weeks after baseline), clinic-based pill count adherence data available for 20 participants (10 per intervention arm) showed a 10% improvement for Masivukeni participants and a decrease of 8% for SOC participants. Masivukeni participants reported significantly more positive attitudes towards disclosure and medication social support, less social rejection, and better clinic-patient relationships than did SOC participants. Masivukeni shows promise to promote optimal adherence and provides preliminary evidence that multimedia, computer-based technology can help lay counselors offer better adherence counseling than standard approaches.
Enhancing Lay Counselor Capacity to Improve Patient Outcomes with Multimedia Technology
Robbins, Reuben N.; Mellins, Claude A.; Leu, Cheng-Shiun; Rowe, Jessica; Warne, Patricia; Abrams, Elaine J.; Witte, Susan; Stein, Dan J.; Remien, Robert H.
2015-01-01
Multimedia technologies offer powerful tools to increase capacity of health workers to deliver standardized, effective, and engaging antiretroviral medication adherence counseling. Masivukeni is an innovative multimedia-based, computer-driven, lay counselor-delivered intervention designed to help people living with HIV in resource-limited settings achieve optimal adherence. This pilot study examined medication adherence and key psychosocial outcomes among 55 non-adherent South African HIV+ patients, on ART for at least 6 months, who were randomized to receive either Masivukeni or standard of care (SOC) counseling for ART non-adherence. At baseline, there were no significant differences between the SOC and Masivukeni groups on any outcome variables. At post-intervention (approximately 5–6 weeks after baseline), clinic-based pill count adherence data available for 20 participants (10 per intervention arm) showed a 10% improvement for Masivukeni participants and a decrease of 8% for SOC participants. Masivukeni participants reported significantly more positive attitudes towards disclosure and medication social support, less social rejection, and better clinic-patient relationships than did SOC participants. Masivukeni shows promise to promote optimal adherence and provides preliminary evidence that multimedia, computer-based technology can help lay counselors offer better adherence counseling than standard approaches. PMID:25566763
ERIC Educational Resources Information Center
Smith, Garon C.; Hossain, Md Mainul
2016-01-01
BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…
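The quantity plotted on such surfaces can be illustrated with the standard closed-form buffer capacity of a monoprotic weak-acid system. The sketch below uses acetic-acid-like values and is not taken from the BufCap TOPOS program; Ka and the total concentration are assumptions.

```python
"""Minimal sketch of buffer capacity versus pH for a monoprotic weak acid,
using the standard expression beta = 2.303*([H+] + [OH-] + C*Ka*[H+]/(Ka+[H+])^2).
Ka and the total concentration C are illustrative (acetic-acid-like) values."""

KW = 1.0e-14      # water autoprotolysis constant at 25 C
KA = 1.8e-5       # assumed acid dissociation constant
C_TOTAL = 0.10    # assumed total buffer concentration (mol/L)

def buffer_capacity(pH):
    h = 10.0 ** (-pH)
    oh = KW / h
    return 2.303 * (h + oh + C_TOTAL * KA * h / (KA + h) ** 2)

for pH in (3.0, 4.0, 4.74, 6.0, 8.0):
    print(f"pH {pH:4.2f}: buffer capacity ~ {buffer_capacity(pH):.4f} mol/(L*pH)")
```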
THE INTERNAL ORGANIZATION OF COMPUTER MODELS OF COGNITIVE BEHAVIOR.
ERIC Educational Resources Information Center
BAKER, FRANK B.
IF COMPUTER PROGRAMS ARE TO SERVE AS USEFUL MODELS OF COGNITIVE BEHAVIOR, THEIR CREATORS MUST FACE THE NEED TO ESTABLISH AN INTERNAL ORGANIZATION FOR THEIR MODEL WHICH IMPLEMENTS THE HIGHER LEVEL COGNITIVE BEHAVIORS ASSOCIATED WITH THE HUMAN CAPACITY FOR SELF-DIRECTION, AUTOCRITICISM, AND ADAPTATION. PRESENT COMPUTER MODELS OF COGNITIVE BEHAVIOR…
ERIC Educational Resources Information Center
Dikli, Semire
2006-01-01
The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e., word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…
Jian Yang; Hong S. He; Stephen R. Shifley; Frank R. Thompson; Yangjian Zhang
2011-01-01
Although forest landscape models (FLMs) have benefited greatly from ongoing advances of computer technology and software engineering, computing capacity remains a bottleneck in the design and development of FLMs. Computer memory overhead and run time efficiency are primary limiting factors when applying forest landscape models to simulate large landscapes with fine...
Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping
2016-03-01
Excerpts (abbreviation list and text fragments): Berkeley Open Infrastructure for Network Computing; CDF, Cumulative Distribution Function; CPU, Central Processing Unit; CSSG, Crowdsourced Serious Game ... computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network Computing ... extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing ...
Thermodynamic effects of single-qubit operations in silicon-based quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lougovski, Pavel; Peters, Nicholas A.
Silicon-based quantum logic is a promising technology to implement universal quantum computing. It is widely believed that a millikelvin cryogenic environment will be necessary to accommodate silicon-based qubits. This prompts a question of the ultimate scalability of the technology due to finite cooling capacity of refrigeration systems. In this work, we answer this question by studying energy dissipation due to interactions between nuclear spin impurities and qubit control pulses. Furthermore, we demonstrate that this interaction constrains the sustainable number of single-qubit operations per second for a given cooling capacity.
Thermodynamic effects of single-qubit operations in silicon-based quantum computing
Lougovski, Pavel; Peters, Nicholas A.
2018-05-21
Silicon-based quantum logic is a promising technology to implement universal quantum computing. It is widely believed that a millikelvin cryogenic environment will be necessary to accommodate silicon-based qubits. This prompts a question of the ultimate scalability of the technology due to finite cooling capacity of refrigeration systems. In this work, we answer this question by studying energy dissipation due to interactions between nuclear spin impurities and qubit control pulses. Furthermore, we demonstrate that this interaction constrains the sustainable number of single-qubit operations per second for a given cooling capacity.
Computational discovery of metal-organic frameworks with high gas deliverable capacity
NASA Astrophysics Data System (ADS)
Bao, Yi
Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, sensing and drug delivery. Efforts have been made to develop new MOFs with desirable properties both experimentally and computationally for decades. To guide experimental synthesis, we here develop a computational methodology to explore MOFs with high gas deliverable capacity. This de novo design procedure applies known chemical reactions, considers synthesizability and geometric requirements of organic linkers, and efficiently evolves a population of MOFs to optimize a desirable property. We identify 48 MOFs with higher methane deliverable capacity at the 65-5.8 bar condition than the MOF-5 reference in nine networks. In a more comprehensive work, we predict two sets of MOFs with high methane deliverable capacity at a 65-5.8 bar loading-delivery condition or a 35-5.8 bar loading-delivery condition. We also optimize a set of MOFs with high methane-accessible internal surface area to investigate the relationship between deliverable capacities and internal surface area. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. Flexible MOFs may allow for sophisticated heat management strategies and also provide higher gas deliverable capacity than rigid frameworks. We investigate flexible MOFs, such as the MIL-53 families, and Fe(bdp) and Co(bdp) analogs, to understand the structural phase transition of frameworks and the resulting influence on heat of adsorption. Challenges of simulating a system with a flexible host structure and incoming guest molecules are discussed. Preliminary results from isotherm simulation using the hybrid MC/MD simulation scheme on MIL-53(Cr) are presented. Suggestions for proceeding to understand the free energy profile of flexible MOFs are provided.
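The "evolve a population of MOFs to optimize a desirable property" loop described in this record can be sketched generically. Everything below is a stand-in: the toy genome and the surrogate score() are invented, whereas a real workflow would evaluate deliverable capacity with grand-canonical Monte Carlo at the loading and delivery pressures.

```python
"""Generic evolutionary-loop skeleton, not the authors' de novo design code.
score() is a toy surrogate for a deliverable-capacity evaluation."""
import random

random.seed(0)

def random_candidate():
    # toy genome: (void fraction, surface-area proxy), not real MOF descriptors
    return (random.uniform(0.3, 0.9), random.uniform(500, 4000))

def score(candidate):
    void, area = candidate
    return area * void * (1.0 - void)   # toy surrogate peaking at intermediate void fraction

def mutate(candidate):
    void, area = candidate
    return (min(0.95, max(0.1, void + random.gauss(0, 0.05))),
            max(100.0, area + random.gauss(0, 150)))

population = [random_candidate() for _ in range(30)]
for generation in range(50):
    population.sort(key=score, reverse=True)
    parents = population[:10]                          # keep the fittest candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=score)
print(f"best toy candidate: void={best[0]:.2f}, area={best[1]:.0f}, score={score(best):.0f}")
```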
Error suppression via complementary gauge choices in Reed-Muller codes
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Jochym-O'Connor, Tomas
2017-09-01
Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.
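For orientation only, and not as a reproduction of the Reed-Muller analysis above, the standard concatenation scaling can be written down directly: if one level of a distance-3 code maps physical error rate p to logical rate p_L ≈ A·p², the pseudo-threshold is roughly 1/A and L levels give p_L = p_th·(p/p_th)^(2^L). The prefactor A below is an assumed value.

```python
"""Generic concatenation-scaling illustration (assumed prefactor, not the
paper's codes): error suppression below threshold, growth above it."""

A = 1.0e4                 # assumed leading-order prefactor of the level-1 failure rate
p_th = 1.0 / A            # resulting pseudo-threshold estimate

def logical_rate(p, levels):
    return p_th * (p / p_th) ** (2 ** levels)

for p in (5e-5, 2e-4):                      # one rate below threshold, one above
    for levels in (1, 2, 3):
        print(f"p={p:.0e}, {levels} level(s): p_L ~ {logical_rate(p, levels):.2e}")
```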
Effect of Moisture Content on Thermal Properties of Porous Building Materials
NASA Astrophysics Data System (ADS)
Kočí, Václav; Vejmelková, Eva; Čáchová, Monika; Koňáková, Dana; Keppert, Martin; Maděra, Jiří; Černý, Robert
2017-02-01
The thermal conductivity and specific heat capacity of characteristic types of porous building materials are determined in the whole range of moisture content from dry to fully water-saturated state. A transient pulse technique is used in the experiments, in order to avoid the influence of moisture transport on measured data. The investigated specimens include cement composites, ceramics, plasters, and thermal insulation boards. The effect of moisture-induced changes in thermal conductivity and specific heat capacity on the energy performance of selected building envelopes containing the studied materials is then analyzed using computational modeling of coupled heat and moisture transport. The results show an increased moisture content as a substantial negative factor affecting both thermal properties of materials and energy balance of envelopes, which underlines the necessity to use moisture-dependent thermal parameters of building materials in energy-related calculations.
NASA Astrophysics Data System (ADS)
Lou, Wentao; Zhu, Miaoyong
2014-10-01
A coupled computational fluid dynamics-simultaneous reaction model (CFD-SRM) has been proposed to describe the desulfurization behavior in a gas-stirred ladle. For the desulfurization thermodynamics, different models were investigated to determine the sulfide capacity and oxygen activity. For the desulfurization kinetics, the effects of bubbly plume flow, as well as oxygen absorption and oxidation reactions in slag eyes, are considered. Thermodynamic and kinetic modification coefficients are proposed to fit the measured data. Finally, the effects of slag basicity and gas flow rate on the desulfurization efficiency are investigated. The results show that when the interfacial (Al2O3)-(FeO)-(SiO2)-(MnO)-[S]-[O] simultaneous kinetic equilibrium is adopted to determine the oxygen activity, and Young's model with a modification coefficient R_th of 1.5 is adopted to determine the slag sulfide capacity, the predicted sulfur distribution ratio LS agrees well with the measured data. With an increase of the gas blowing time, the predicted desulfurization rate gradually decreases, and when the modification parameter R_k is 0.8, the predicted evolution of the sulfur content in the ladle agrees well with the measured data. If the oxygen absorption and oxidation reactions in slag eyes are not considered in the model, the sulfur removal rate in the ladle is overestimated, and this trend becomes more obvious with an increase of the gas flow rate and a decrease of the slag layer height. As the slag basicity increases, the total desulfurization ratio increases; however, the total desulfurization ratio changes only weakly once the slag basicity exceeds 7. With an increase of the gas flow rate, the desulfurization ratio first increases and then decreases. When the gas flow rate is 200 NL/min, the desulfurization ratio reaches a maximum value in an 80-ton gas-stirred ladle.
NASA Astrophysics Data System (ADS)
Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey
2017-01-01
A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on regular, consistent feedback via peer code review and inclusive pedagogy. Introductory computer science students provided consistently high ratings of the peer mentors' knowledge, approachability, and flexibility, and credited peer mentor meetings for their strengthened self-efficacy and understanding. Peer mentors noted the value of videotaped simulations with reflection, discussions of inclusion, and the cohort's weekly practicum for improving practice. Adaptations of peer mentoring for different types of institutions are discussed. Computer science educators, with hopes of improving the recruitment and retention of underrepresented groups, can benefit from expanding their peer support infrastructure and improving the quality of peer mentor preparation.
Computer Utilization in Middle Tennessee High Schools.
ERIC Educational Resources Information Center
Lucas, Sam
In order to determine the capacity of high schools to profit from the pre-high school computer experiences of its students, a study was conducted to measure computer utilization in selected high schools of Middle Tennessee. Questionnaires distributed to 50 principals in 28 school systems covered the following areas: school enrollment; number and…
ERIC Educational Resources Information Center
Hsu, Ying-Shao; Wu, Hsin-Kai; Hwang, Fu-Kwun
2007-01-01
Sandholtz, Ringstaff, & Dwyer (1996) list five stages in the "evolution" of a teacher's capacity for computer-based instruction--entry, adoption, adaptation, appropriation and invention--which hereafter will be called the teacher's computer-based instructional evolution. In this study of approximately six hundred junior high school…
ERIC Educational Resources Information Center
Mills, Steven C.; Ragan, Tillman J.
This paper examines a research paradigm that is particularly suited to experimentation related to computer-based instruction and integrated learning systems. The main assumption of the model is that one of the most powerful capabilities of computer-based instruction, and specifically of integrated learning systems, is the capacity to adapt…
International Computer and Information Literacy Study: Assessment Framework
ERIC Educational Resources Information Center
Fraillon, Julian; Schulz, Wolfram; Ainley, John
2013-01-01
The purpose of the International Computer and Information Literacy Study 2013 (ICILS 2013) is to investigate, in a range of countries, the ways in which young people are developing "computer and information literacy" (CIL) to support their capacity to participate in the digital age. To achieve this aim, the study will assess student…
Information Processing Capacity of Dynamical Systems
NASA Astrophysics Data System (ADS)
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-07-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
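The capacity measure described in this record can be illustrated with a minimal numerical sketch, assuming the common definition C[y] = 1 - min_w MSE / Var(y): drive a one-dimensional system with a random input and ask how well linear readouts of its state reconstruct delayed copies of the input. The driven map and its parameters below are illustrative, not taken from the paper's simulations.

```python
"""Minimal sketch of an information-processing-capacity calculation for a
one-state-variable input-driven system; map and parameters are assumptions."""
import numpy as np

rng = np.random.default_rng(1)
T = 20000
u = rng.uniform(-1, 1, T)                  # i.i.d. input signal

# A simple input-driven nonlinear map as the 'dynamical system' (one state variable)
x = np.zeros(T)
for t in range(1, T):
    x[t] = np.tanh(0.9 * x[t - 1] + 0.5 * u[t - 1])

washout = 200
X = np.column_stack([np.ones(T - washout), x[washout:]])   # readout: bias + state

def capacity(delay):
    """Fraction of the variance of u(t - delay) reproducible by a linear readout."""
    y = u[washout - delay:T - delay]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ w
    return max(0.0, 1.0 - residual.var() / y.var())

caps = [capacity(d) for d in range(1, 6)]
print("capacities for delays 1..5:", np.round(caps, 3))
# theory bounds the total over a complete target basis by the single state variable
print("summed capacity:", round(sum(caps), 3))
```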
Gering, Kevin L.
2013-06-18
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.
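The idea of combining two sigmoid terms into a capacity-loss model can be sketched directly; the functional form and every parameter value below are illustrative assumptions standing in for the patented multiple sigmoid model, with one term representing loss of active host sites and the other loss of free lithium.

```python
"""Minimal sketch of a two-term (multiple-sigmoid) capacity-loss model.
All magnitudes, rates, and midpoints are invented for illustration."""
import math

def sigmoid_loss(t, magnitude, rate, midpoint):
    """One degradation mechanism: loss that saturates at 'magnitude'."""
    return magnitude / (1.0 + math.exp(-rate * (t - midpoint)))

def capacity_loss(t):
    """Combined capacity loss after t cycles from two assumed mechanisms."""
    host_site_loss = sigmoid_loss(t, magnitude=0.08, rate=0.015, midpoint=300)
    free_li_loss = sigmoid_loss(t, magnitude=0.12, rate=0.006, midpoint=900)
    return host_site_loss + free_li_loss

for cycles in (0, 250, 500, 1000, 2000):
    print(f"after {cycles:4d} cycles: estimated fractional capacity loss "
          f"~ {capacity_loss(cycles):.3f}")
```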
Information Processing Capacity of Dynamical Systems
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-01-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038
Computer assisted analysis of medical x-ray images
NASA Astrophysics Data System (ADS)
Bengtsson, Ewert
1996-01-01
X-rays were originally used to expose film. The early computers did not have enough capacity to handle images with useful resolution. The rapid development of computer technology over the last few decades has, however, led to the introduction of computers into radiology. In this overview paper, the various possible roles of computers in radiology are examined. The state of the art is briefly presented, and some predictions about the future are made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and more computational power. Visual computing, with high-performance graphical interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety-study analyses. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages and tools have appeared in the last several years.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
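A naive illustration of the space-time kernel density estimate and of the bandwidth buffering idea follows; it is not the authors' octree implementation, and the kernels, bandwidths, block boundaries, and synthetic data are all assumptions.

```python
"""Naive space-time kernel density sketch with temporal blocks and +/- HT buffers
so that estimates near block boundaries are unbiased.  Everything is assumed."""
import numpy as np

rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 500),    # x
                       rng.uniform(0, 10, 500),    # y
                       rng.uniform(0, 30, 500)])   # t (days)
HS, HT = 1.5, 3.0                                  # spatial / temporal bandwidths
N_TOTAL = len(pts)

def epanechnikov(u):
    return np.where(np.abs(u) < 1, 0.75 * (1 - u ** 2), 0.0)

def stkde(query, data):
    """Product-kernel space-time density at one (x, y, t) query point."""
    kx = epanechnikov((data[:, 0] - query[0]) / HS)
    ky = epanechnikov((data[:, 1] - query[1]) / HS)
    kt = epanechnikov((data[:, 2] - query[2]) / HT)
    return (kx * ky * kt).sum() / (N_TOTAL * HS * HS * HT)

# Split the temporal domain into blocks; keep a +/- HT buffer of extra points
for t0, t1 in [(0, 10), (10, 20), (20, 30)]:
    buffered = pts[(pts[:, 2] >= t0 - HT) & (pts[:, 2] < t1 + HT)]
    q = np.array([5.0, 5.0, (t0 + t1) / 2])
    print(f"block {t0:2d}-{t1:2d}: {len(buffered):3d} buffered points, "
          f"density at centre ~ {stkde(q, buffered):.5f}")
```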
Comparison of Models for Ball Bearing Dynamic Capacity and Life
NASA Technical Reports Server (NTRS)
Gupta, Pradeep K.; Oswald, Fred B.; Zaretsky, Erwin V.
2015-01-01
Generalized formulations for dynamic capacity and life of ball bearings, based on the models introduced by Lundberg and Palmgren and Zaretsky, have been developed and implemented in the bearing dynamics computer code, ADORE. Unlike the original Lundberg-Palmgren dynamic capacity equation, where the elastic properties are part of the life constant, the generalized formulations permit variation of elastic properties of the interacting materials. The newly updated Lundberg-Palmgren model allows prediction of life as a function of elastic properties. For elastic properties similar to those of AISI 52100 bearing steel, both the original and updated Lundberg-Palmgren models provide identical results. A comparison between the Lundberg-Palmgren and the Zaretsky models shows that at relatively light loads the Zaretsky model predicts a much higher life than the Lundberg-Palmgren model. As the load increases, the Zaretsky model provides a much faster drop off in life. This is because the Zaretsky model is much more sensitive to load than the Lundberg-Palmgren model. The generalized implementation where all model parameters can be varied provides an effective tool for future model validation and enhancement in bearing life prediction capabilities.
A Primer on High-Throughput Computing for Genomic Selection
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
2011-01-01
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
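A toy illustration of the chunk-and-combine pipelining idea follows, with Python multiprocessing standing in for a real HTC scheduler such as HTCondor; the additive marker model, data sizes, and effect values are invented.

```python
"""Toy pipeline sketch: split the marker matrix into chunks, compute each chunk's
contribution to genomic predictions on a separate worker, and combine.
Not the paper's pipelines; data and model are placeholders."""
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
N_ANIMALS, N_MARKERS, N_CHUNKS = 200, 5000, 4
genotypes = rng.integers(0, 3, size=(N_ANIMALS, N_MARKERS)).astype(float)
marker_effects = rng.normal(0, 0.01, N_MARKERS)       # pretend these were estimated

def chunk_gebv(chunk_index):
    """Partial genomic breeding value from one slice of the marker matrix."""
    cols = np.array_split(np.arange(N_MARKERS), N_CHUNKS)[chunk_index]
    return genotypes[:, cols] @ marker_effects[cols]

if __name__ == "__main__":
    with Pool(processes=N_CHUNKS) as pool:
        partial = pool.map(chunk_gebv, range(N_CHUNKS))
    gebv = np.sum(partial, axis=0)                     # combine the pipeline stages
    print("first five predicted breeding values:", np.round(gebv[:5], 3))
```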
Computer laboratory in medical education for medical students.
Hercigonja-Szekeres, Mira; Marinović, Darko; Kern, Josipa
2009-01-01
Five generations of second year students at the Zagreb University School of Medicine were interviewed through an anonymous questionnaire on their use of personal computers, Internet, computer laboratories and computer-assisted education in general. Results show an advance in students' usage of information and communication technology during the period from 1998/99 to 2002/03. However, their positive opinion about computer laboratory depends on installed capacities: the better the computer laboratory technology, the better the students' acceptance and use of it.
On the Emergence of Modern Humans
ERIC Educational Resources Information Center
Amati, Daniele; Shallice, Tim
2007-01-01
The emergence of modern humans with their extraordinary cognitive capacities is ascribed to a novel type of cognitive computational process (sustained non-routine multi-level operations) required for abstract projectuality, held to be the common denominator of the cognitive capacities specific to modern humans. A brain operation (latching) that…
Improving agreement between static method and dynamic formula for driven cast-in-place piles.
DOT National Transportation Integrated Search
2013-06-01
This study focuses on comparing the capacities and lengths of piling necessary as determined with a static method and with a dynamic formula. Pile capacities and their required lengths are determined two ways: 1) using a design and computed method, s...
Rich client data exploration and research prototyping for NOAA
NASA Astrophysics Data System (ADS)
Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah
2009-08-01
Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.
NASA Astrophysics Data System (ADS)
Cox, B. S.; Groh, R. M. J.; Avitabile, D.; Pirrera, A.
2018-07-01
The buckling and post-buckling behaviour of slender structures is increasingly being harnessed for smart functionalities. Equally, the post-buckling regime of many traditional engineering structures is not being used for design and may therefore harbour latent load-bearing capacity for further structural efficiency. Both applications can benefit from a robust means of modifying and controlling the post-buckling behaviour for a specific purpose. To this end, we introduce a structural design paradigm termed modal nudging, which can be used to tailor the post-buckling response of slender engineering structures without any significant increase in mass. Modal nudging uses deformation modes of stable post-buckled equilibria to perturb the undeformed baseline geometry of the structure imperceptibly, thereby favouring the seeded post-buckling response over potential alternatives. The benefits of this technique are enhanced control over the post-buckling behaviour, such as modal differentiation for smart structures that use snap-buckling for shape adaptation, or alternatively, increased load-carrying capacity, increased compliance or a shift from imperfection sensitivity to imperfection insensitivity. Although these concepts are, in theory, of general applicability, we concentrate here on planar frame structures analysed using the nonlinear finite element method and numerical continuation procedures. Using these computational techniques, we show that planar frame structures may exhibit isolated regions of stable equilibria in otherwise unstable post-buckling regimes, or indeed stable equilibria entirely disconnected from the natural structural response. In both cases, the load-carrying capacity of these isolated stable equilibria is greater than the natural structural response of the frames. Using the concept of modal nudging it is possible to "nudge" the frames onto these equilibrium paths of greater load-carrying capacity. Due to the scale invariance of modal nudging, these findings may impact the design of structures from the micro- to the macro-scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations, including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
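For contrast with the models described above, a toy Monte Carlo of the traditional kind (two-state units, no operating considerations) is sketched below; it is not GENESIS or OPCON, and the unit ratings, forced-outage rates, and load cycle are invented.

```python
"""Toy generating-capacity reliability Monte Carlo: two-state units sampled
against a crude daily load cycle to estimate the loss-of-load probability.
Start-up failures, unit commitment, and postponable outages are omitted."""
import math
import random

random.seed(42)

UNITS = [(100, 0.05), (150, 0.08), (200, 0.10), (80, 0.04)]  # (capacity MW, forced outage rate)
PEAK_LOAD = 380.0
SAMPLED_HOURS = 200_000

def hourly_load(hour):
    """Crude daily load cycle between roughly 60% and 100% of peak."""
    return PEAK_LOAD * (0.8 + 0.2 * math.sin(2 * math.pi * (hour % 24) / 24))

loss_hours = 0
for hour in range(SAMPLED_HOURS):
    available = sum(cap for cap, outage_rate in UNITS if random.random() > outage_rate)
    if available < hourly_load(hour):
        loss_hours += 1

print(f"estimated LOLP ~ {loss_hours / SAMPLED_HOURS:.4f}")
```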
Computational power and generative capacity of genetic systems.
Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E
2016-01-01
Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of the sign and the linear nature of the signifier. Besides these semiotic features that are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of the cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as the possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Energy challenges in optical access and aggregation networks.
Kilper, Daniel C; Rastegarfar, Houman
2016-03-06
Scalability is a critical issue for access and aggregation networks as they must support growth in both the size of data capacity demands and the multiplicity of access points. The number of connected devices, the Internet of Things, is growing to the tens of billions. Prevailing communication paradigms are reaching physical limitations that make continued growth problematic. Challenges are emerging in electronic and optical systems, and energy increasingly plays a central role. With the spectral efficiency of optical systems approaching the Shannon limit, increasing parallelism is required to support higher capacities. For electronic systems, as the density and speed increase, the total system energy, thermal density and energy per bit are moving into regimes that become impractical to support, for example requiring single-chip processor powers above the 100 W limit common today. We examine communication network scaling and energy use from the Internet core down to the computer processor core and consider implications for optical networks. Optical switching in data centres is identified as a potential model from which scalable access and aggregation networks for the future Internet, with the application of integrated photonic devices and intelligent hybrid networking, will emerge. © 2016 The Author(s).
Friedel, Eva; Sebold, Miriam; Kuitunen-Paul, Sören; Nebe, Stephan; Veer, Ilya M.; Zimmermann, Ulrich S.; Schlagenhauf, Florian; Smolka, Michael N.; Rapp, Michael; Walter, Henrik; Heinz, Andreas
2017-01-01
Rationale: Advances in neurocomputational modeling suggest that valuation systems for goal-directed (deliberative) on one side, and habitual (automatic) decision-making on the other side may rely on distinct computational strategies for reinforcement learning, namely model-free vs. model-based learning. As a key theoretical difference, the model-based system strongly demands cognitive functions to plan actions prospectively based on an internal cognitive model of the environment, whereas valuation in the model-free system relies on rather simple learning rules from operant conditioning to retrospectively associate actions with their outcomes and is thus cognitively less demanding. Acute stress reactivity is known to impair model-based but not model-free choice behavior, with higher working memory capacity protecting the model-based system from acute stress. However, it is not clear which impact accumulated real life stress has on model-free and model-based decision systems and how this influence interacts with cognitive abilities. Methods: We used a sequential decision-making task distinguishing relative contributions of both learning strategies to choice behavior, the Social Readjustment Rating Scale questionnaire to assess accumulated real life stress, and the Digit Symbol Substitution Test to test cognitive speed in 95 healthy subjects. Results: Individuals reporting high stress exposure who had low cognitive speed showed reduced model-based but increased model-free behavioral control. In contrast, subjects exposed to accumulated real life stress with high cognitive speed displayed increased model-based performance but reduced model-free control. Conclusion: These findings suggest that accumulated real life stress exposure can enhance reliance on cognitive speed for model-based computations, which may ultimately protect the model-based system from the detrimental influences of accumulated real life stress. The combination of accumulated real life stress exposure and slower information processing capacities, however, might favor model-free strategies. Thus, the valence and preference of either system strongly depends on stressful experiences and individual cognitive capacities. PMID:28642696
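The computational distinction at the heart of this record can be sketched in a stripped-down setting, much simpler than the study's sequential two-step task: a model-free learner updates action values directly from rewards, while a model-based learner combines a transition model with learned second-stage values. All probabilities and learning parameters below are illustrative; in this stationary toy both strategies converge to similar values, and the behavioural difference emerges only when the environment changes.

```python
"""Minimal model-free vs. model-based contrast (toy task, assumed parameters)."""
import numpy as np

rng = np.random.default_rng(0)

# First-stage actions 0/1 lead to second-stage states 0/1 with these probabilities
TRANSITIONS = np.array([[0.7, 0.3],
                        [0.3, 0.7]])
STATE_REWARD_PROB = np.array([0.8, 0.2])   # reward probability in each second-stage state

ALPHA = 0.1
q_mf = np.zeros(2)                          # model-free first-stage action values
q_state = np.zeros(2)                       # learned second-stage state values

for trial in range(2000):
    action = rng.integers(2)                               # explore randomly
    state = rng.choice(2, p=TRANSITIONS[action])           # second-stage state
    reward = float(rng.random() < STATE_REWARD_PROB[state])
    q_state[state] += ALPHA * (reward - q_state[state])    # learn state values
    q_mf[action] += ALPHA * (reward - q_mf[action])        # model-free update

q_mb = TRANSITIONS @ q_state                # model-based: plan through the transition model
print("model-free Q:", np.round(q_mf, 2), " model-based Q:", np.round(q_mb, 2))
```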
Friedel, Eva; Sebold, Miriam; Kuitunen-Paul, Sören; Nebe, Stephan; Veer, Ilya M; Zimmermann, Ulrich S; Schlagenhauf, Florian; Smolka, Michael N; Rapp, Michael; Walter, Henrik; Heinz, Andreas
2017-01-01
Rationale: Advances in neurocomputational modeling suggest that valuation systems for goal-directed (deliberative) on one side, and habitual (automatic) decision-making on the other side may rely on distinct computational strategies for reinforcement learning, namely model-free vs. model-based learning. As a key theoretical difference, the model-based system strongly demands cognitive functions to plan actions prospectively based on an internal cognitive model of the environment, whereas valuation in the model-free system relies on rather simple learning rules from operant conditioning to retrospectively associate actions with their outcomes and is thus cognitively less demanding. Acute stress reactivity is known to impair model-based but not model-free choice behavior, with higher working memory capacity protecting the model-based system from acute stress. However, it is not clear which impact accumulated real life stress has on model-free and model-based decision systems and how this influence interacts with cognitive abilities. Methods: We used a sequential decision-making task distinguishing relative contributions of both learning strategies to choice behavior, the Social Readjustment Rating Scale questionnaire to assess accumulated real life stress, and the Digit Symbol Substitution Test to test cognitive speed in 95 healthy subjects. Results: Individuals reporting high stress exposure who had low cognitive speed showed reduced model-based but increased model-free behavioral control. In contrast, subjects exposed to accumulated real life stress with high cognitive speed displayed increased model-based performance but reduced model-free control. Conclusion: These findings suggest that accumulated real life stress exposure can enhance reliance on cognitive speed for model-based computations, which may ultimately protect the model-based system from the detrimental influences of accumulated real life stress. The combination of accumulated real life stress exposure and slower information processing capacities, however, might favor model-free strategies. Thus, the valence and preference of either system strongly depends on stressful experiences and individual cognitive capacities.
State-of-the-art research on electromagnetic information security
NASA Astrophysics Data System (ADS)
Hayashi, Yu-ichi
2016-07-01
As information security is becoming increasingly significant, security at the hardware level is as important as in networks and applications. In recent years, instrumentation has become cheaper and more precise, computation has become faster, and capacities have increased. With these advancements, the threat of advanced attacks that were considerably difficult to carry out previously has increased not only in military and diplomatic fields but also in general-purpose manufactured devices. This paper focuses on the problem of the security limitations concerning electromagnetic waves (electromagnetic information security) that has rendered attack detection particularly difficult at the hardware level. In addition to reviewing the mechanisms of these information leaks and countermeasures, this paper also presents the latest research trends and standards.
Leung, Kevin; Lin, Yu -Xiao; Liu, Zhe; ...
2016-01-01
The formation and continuous growth of a solid electrolyte interphase (SEI) layer are responsible for the irreversible capacity loss of batteries in the initial and subsequent cycles, respectively. In this article, the electron tunneling barriers from Li metal through three insulating SEI components, namely Li2CO3, LiF and Li3PO4, are computed with density functional theory (DFT) approaches. Based on electron tunneling theory, the layer thickness sufficient to block electron tunneling is estimated. It is also found that the band gap decreases under tension while the work function remains the same, and thus the tunneling barrier decreases under tension and increases under compression. A new parameter, η, characterizing the average distances between anions, is proposed to unify the variation of band gap with strain under different loading conditions into a single linear function of η. An analytical model based on the tunneling results is developed to connect the irreversible capacity loss to the Li ions consumed in forming these SEI component layers on the surface of negative electrodes. The agreement between the model predictions and experimental results suggests that only the initial irreversible capacity loss is due to the self-limiting electron tunneling property of the SEI.
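The kind of thickness estimate referred to above can be illustrated with the simplest WKB-style attenuation factor exp(-2κd), κ = sqrt(2·m_e·Φ)/ħ. The barrier heights used below are placeholders, not the DFT values computed in the paper.

```python
"""Minimal tunneling-attenuation sketch; barrier heights are assumed, not the
paper's DFT results for Li2CO3, LiF, or Li3PO4."""
import math

HBAR = 1.054571817e-34     # J*s
M_E = 9.1093837015e-31     # kg
EV = 1.602176634e-19       # J

def tunneling_attenuation(barrier_ev, thickness_nm):
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR        # 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

for barrier in (1.0, 3.0):                 # assumed effective barrier heights (eV)
    for d in (1.0, 2.0, 3.0):              # film thickness in nm
        print(f"barrier {barrier:.1f} eV, d = {d:.0f} nm: "
              f"attenuation ~ {tunneling_attenuation(barrier, d):.1e}")
```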
Data distribution method of workflow in the cloud environment
NASA Astrophysics Data System (ADS)
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing for workflow applications provides high-efficiency computation and large storage capacity, but it also brings challenges to the protection of trade secrets and other private data. Because the handling of privacy data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage to improve the safety of the existing data allocation strategy, in which public-cloud computation depends on the private cloud. In the initial stage, a static allocation method partitions only the non-confidential data so as to improve the placement of the original data; in the operational phase, the data that continue to be generated are used to dynamically adjust the data distribution scheme. The experimental results show that the improved method is effective in reducing the data transmission time.
Integral processing in beyond-Hartree-Fock calculations
NASA Technical Reports Server (NTRS)
Taylor, P. R.
1986-01-01
The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration-self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which -- at least for most current generation machines -- disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
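The contraction at the heart of the scheme described above can be illustrated in a few lines: given a one-particle density matrix D and a batch of two-electron integrals (pq|rs), the Coulomb and exchange matrices follow from simple tensor contractions. The minimal numpy sketch below uses random, symmetrized placeholder integrals; in a true direct approach the integral batches would be generated on the fly rather than stored.

```python
import numpy as np

n = 6  # number of basis functions (toy size)
rng = np.random.default_rng(0)

# Placeholder two-electron integrals (pq|rs); a direct scheme would generate
# these batch by batch. Here we just symmetrize a random tensor for illustration.
eri = rng.random((n, n, n, n))
eri = 0.5 * (eri + eri.transpose(1, 0, 2, 3))   # (pq|rs) = (qp|rs)
eri = 0.5 * (eri + eri.transpose(0, 1, 3, 2))   # (pq|rs) = (pq|sr)
eri = 0.5 * (eri + eri.transpose(2, 3, 0, 1))   # (pq|rs) = (rs|pq)

D = rng.random((n, n))
D = 0.5 * (D + D.T)  # symmetric density matrix (placeholder)

# Coulomb and exchange operators as matrix elements contracted with D:
J = np.einsum("pqrs,rs->pq", eri, D)   # J_pq = sum_rs (pq|rs) D_rs
K = np.einsum("prqs,rs->pq", eri, D)   # K_pq = sum_rs (pr|qs) D_rs

print(J.shape, K.shape)
```

The point of the direct formulation is that only J and K (size n x n) need to be kept, while the n^4 integral tensor never has to reside on disk.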
Multiplexing technique for computer communications via satellite channels
NASA Technical Reports Server (NTRS)
Binder, R.
1975-01-01
Multiplexing scheme combines technique of dynamic allocation with conventional time-division multiplexing. Scheme is designed to expedite short-duration interactive or priority traffic and to delay large data transfers; as result, each node has effective capacity of almost total channel capacity when other nodes have light traffic loads.
Real-Time Communication Systems: Design, Analysis and Implementation
1984-07-31
A two-hop configuration involving a ring of repeaters around a station has been analyzed by Gitman [20]; ... network capacity ... "...control of the packet-switching broadcast channels," J. Ass. Comput. Mach., vol. 24, pp. 375-386, July 1977. [20] I. Gitman, "On the capacity of ...
78 FR 77161 - Grant Program To Build Tribal Energy Development Capacity
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-20
... project equipment such as computers, vehicles, field gear, etc; Legal fees; Contract negotiation fees; and... tribes for projects to build tribal capacity for energy resource development under the Department of the... Information section of this notice to select projects for funding awards. DATES: Submit grant proposals by...
Fiscal Capacity and Educational Finance: Some Further Variations.
ERIC Educational Resources Information Center
Dziuban, Charles; And Others
The school district fiscal capacity data (1962 and 1967) of the National Finance Project were analyzed for psychometric adequacy and robustness of component composition. The procedures involved: (1) the computation of the Kaiser-Meyer-Olkin Measure of Sampling Adequacy, (2) inspection of the off-diagonal elements of the anti-image covariance…
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Yao, Shu-Nung; Chen, Shi-An; King, Jung-Tai; Liang, Chaoyun
2016-01-01
This article describes a structural examination of the interaction among different imaginative capacities and the entrepreneurial intention of electrical and computer engineering students. Two studies were combined to confirm the factor structure of survey items and test the hypothesised interaction model. The results indicated that imaginative…
The potential for gaming techniques in radiology education and practice.
Reiner, Bruce; Siegel, Eliot
2008-02-01
Traditional means of communication, education and training, and research have been dramatically transformed with the advent of computerized medicine, and no other medical specialty has been more greatly affected than radiology. Of the myriad of newer computer applications currently available, computer gaming stands out for its unique potential to enhance end-user performance and job satisfaction. Research in other disciplines has demonstrated computer gaming to offer the potential for enhanced decision making, resource management, visual acuity, memory, and motor skills. Within medical imaging, video gaming provides a novel means to enhance radiologist and technologist performance and visual perception by increasing attentional capacity, visual field of view, and visual-motor coordination. These enhancements take on heightened importance with the increasing size and complexity of three-dimensional imaging datasets. Although these operational gains are important in themselves, psychologic gains intrinsic to video gaming offer the potential to reduce stress and improve job satisfaction by creating a fun and engaging means of spirited competition. By creating customized gaming programs and rewards systems, video game applications can be customized to the skill levels and preferences of individual users, thereby creating a comprehensive means to improve individual and collective job performance.
Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks
Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok
2016-01-01
Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that, so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities. PMID:27347975
Infrastructure Systems for Advanced Computing in E-science applications
NASA Astrophysics Data System (ADS)
Terzo, Olivier
2013-04-01
In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacities. The integration of grid and cloud infrastructure solutions allows us to offer services whose availability adapts by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing the use of those resources in order to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all add to the complexity of applications that require high processing power and storage for a limited time and often exceed the computational resources available to the majority of laboratories or research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms and to consolidate existing applications through a phase of reverse engineering in order to adapt them to deployment on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, the storage of results, and a multi-user environment. For these reasons, it is necessary to rethink the way e-science applications are written so that they are already suited to exploiting the potential of cloud computing services through the IaaS, PaaS and SaaS layers. Another important focus is the creation and use of hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to meet computational and storage needs and to release them when the processes have finished. In the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, resources are always available for additional requests for IT capacity that can be used "on demand" for a limited time without having to purchase additional servers.
Cardiac PET/CT for the Evaluation of Known or Suspected Coronary Artery Disease
Murthy, Venkatesh L.
2011-01-01
Positron emission tomography (PET) is increasingly being applied in the evaluation of myocardial perfusion. Cardiac PET can be performed with an increasing variety of cyclotron- and generator-produced radiotracers. Compared with single photon emission computed tomography, PET offers lower radiation exposure, fewer artifacts, improved spatial resolution, and, most important, improved diagnostic performance. With its capacity to quantify rest–peak stress left ventricular systolic function as well as coronary flow reserve, PET is superior to other methods for the detection of multivessel coronary artery disease and, potentially, for risk stratification. Coronary artery calcium scoring may be included for further risk stratification in patients with normal perfusion imaging findings. Furthermore, PET allows quantification of absolute myocardial perfusion, which also carries substantial prognostic value. Hybrid PET–computed tomography scanners allow functional evaluation of myocardial perfusion combined with anatomic characterization of the epicardial coronary arteries, thereby offering great potential for both diagnosis and management. Additional studies to further validate the prognostic value and cost effectiveness of PET are warranted. © RSNA, 2011 PMID:21918042
Data centers as dispatchable loads to harness stranded power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kibaek; Yang, Fan; Zavala, Victor M.
Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.
Sharma, Nandita; Gedeon, Tom
2012-12-01
Stress is a major and growing concern in our day and age, adversely impacting both individuals and society. Stress research has a wide range of benefits, from improving personal operations and learning and increasing work productivity to benefiting society, making it an interesting and socially beneficial area of research. This survey reviews sensors that have been used to measure stress and investigates techniques for modelling stress. It discusses non-invasive and unobtrusive sensors for measuring computed stress, a term we coin in the paper. Sensors that do not impede everyday activities and that could be used by those who would like to monitor stress levels on a regular basis (e.g. vehicle drivers, patients with illnesses linked to stress) are the focus of the discussion. Computational techniques have the capacity to determine optimal sensor fusion and automate data analysis for stress recognition and classification. Several computational techniques have been developed to model stress based on techniques such as Bayesian networks, artificial neural networks, and support vector machines, which this survey investigates. The survey concludes with a summary and provides possible directions for further computational stress research. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
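As a loose illustration of the sensor-fusion and classification techniques the survey discusses, the sketch below trains a support vector machine on synthetic physiological features; the feature set, data, and labels are invented for illustration and do not come from the survey.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 400

# Synthetic features loosely mimicking non-invasive sensors: heart rate,
# skin conductance, and skin temperature. Labels: 1 = "stressed", 0 = "calm".
y = rng.integers(0, 2, size=n)
X = np.column_stack([
    70 + 15 * y + rng.normal(0, 8, n),     # heart rate (bpm)
    2 + 3 * y + rng.normal(0, 1.5, n),     # skin conductance (microsiemens)
    33 - 1.0 * y + rng.normal(0, 0.7, n),  # skin temperature (deg C)
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

In practice the same pipeline shape applies whichever classifier (Bayesian network, neural network, SVM) is chosen; only the model object changes.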
Computational Embryology and Predictive Toxicology of Cleft Palate
The capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge brings hazard identification steps closer across the vast landscape of untested environmental chemicals. In this context, we chose cleft palate as a model ...
ARTVAL user guide : user guide for the ARTerial eVALuation computational engine.
DOT National Transportation Integrated Search
2015-06-01
This document provides guidance on the use of the ARTVAL (Arterial Evaluation) computational engine. The engine implements the Quick Estimation Method for Urban Streets (QEM-US) described in the Highway Capacity Manual (HCM2010) as the core computati...
Guarner, Jeannette; Amukele, Timothy; Mehari, Meheretu; Gemechu, Tufa; Woldeamanuel, Yimtubezinash; Winkler, Anne M; Asrat, Daniel; Wilson, Michael L; del Rio, Carlos
2015-03-01
To describe a 4-day laboratory medicine course for clinicians given at Addis Ababa University, Ethiopia, designed to improve the use of laboratory-based diagnoses. Each day was dedicated to one of the following topics: hematology, blood bank/transfusion medicine and coagulation, chemistry, and microbiology. The course included lectures, case-based learning, laboratory tours, and interactive computer case-based homework. The same 12-question knowledge quiz was given before and after the course. Twenty-eight participants took the quiz before and 21 after completing the course. The average score was 5.28 (range, 2-10) for the initial quiz and 8.09 (range, 4-11) for the second quiz (P = .0001). Two of 12 and 8 of 12 questions were answered correctly by more than 60% of trainees on the initial and second quiz, respectively. Knowledge and awareness of the role of the laboratory increased after participation in the course. Understanding of laboratory medicine principles by clinicians will likely improve use of laboratory services and build capacity in Africa. Copyright© by the American Society for Clinical Pathology.
NASA Astrophysics Data System (ADS)
Hoi, Bui Dinh; Yarmohammadi, Mohsen; Mirabbaszadeh, Kavoos
2017-04-01
Dirac theory and the Green's function technique are used to compute the spin-dependent band structures and corresponding electronic heat capacity (EHC) of monolayer (ML) and AB-stacked bilayer (BL) molybdenum disulfide (MoS2) two-dimensional (2D) crystals. We report the influence of an exchange magnetic field (EMF) induced by magnetic insulator substrates on these quantities for both structures. In the ML, the spin-up (down) subband gaps are shifted with EMF from the conduction (valence) band to the valence (conduction) band at both Dirac points because of the spin-orbit coupling (SOC), which leads to a critical EMF at the K point at which the EHC returns to its initial states for both spins. In the BL case, the EMF results in split states, and a decrease (increase) of the spin-up (down) subband gaps is observed at both the K and K′ valleys, owing to the combined effect of SOC and interlayer coupling. For low and high EMFs, the EHC of BL MoS2 does not change for spin-up subbands but increases for spin-down subbands.
Numerical analysis on the action of centrifuge force in magnetic fluid rotating shaft seals
NASA Astrophysics Data System (ADS)
Zou, Jibin; Li, Xuehui; Lu, Yongping; Hu, Jianhui
2002-11-01
The magnetic fluid seal is suitable for high-speed rotating shaft seal applications. Centrifugal force has an evident influence on magnetic fluid rotating shaft seals, and the seal capacity of the rotating shaft seal can be improved by appropriate design measures. Through hydrodynamic analysis, the motion of the magnetic fluid is worked out. By a numerical method, the magnetic field and the isobars in the magnetic fluid of a seal device are computed. The influence of the centrifugal force on the magnetic fluid seal is then calculated quantitatively.
NASA Astrophysics Data System (ADS)
Dai, Yan
2018-04-01
With the increasing development of urban areas, the application of underground frame structures is becoming more and more extensive, but an unreasonable layout can hinder public transportation. Reinforcing the underground frame structure so that it can bear traffic load is therefore an effective solution. A simulation calculation of the reinforced underground frame structure is carried out in this paper, and the conclusion is that the structure satisfies the vehicle load and the crowd load.
Why is CDMA the solution for mobile satellite communication
NASA Technical Reports Server (NTRS)
Gilhousen, Klein S.; Jacobs, Irwin M.; Padovani, Roberto; Weaver, Lindsay A.
1989-01-01
It is demonstrated that spread spectrum Code Division Multiple Access (CDMA) systems provide an economically superior solution to satellite mobile communications by increasing the maximum system capacity with respect to single-channel-per-carrier Frequency Division Multiple Access (FDMA) systems. Following the comparative analysis of CDMA and FDMA systems, the design of a model developed to test the feasibility of the approach and the performance of a spread spectrum system in a mobile environment is described. Results of extensive computer simulations as well as laboratory and field test results are presented.
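As a rough illustration of why spread spectrum increases capacity, the sketch below applies the familiar back-of-the-envelope CDMA capacity approximation (processing gain divided by the required Eb/N0, credited for voice activity and antenna sectorization, discounted for interference from users outside the cell or beam) and compares it with a single-channel-per-carrier FDMA count. The parameter values and the exact form of the credits are illustrative assumptions, not figures from the paper.

```python
def cdma_user_capacity(bandwidth_hz, bit_rate_bps, ebno_db,
                       voice_activity=0.4, other_cell_factor=0.6, sectors=3):
    """Rough per-carrier CDMA user count: N ~ (W/R)/(Eb/N0) with standard credits."""
    ebno = 10 ** (ebno_db / 10.0)                 # convert dB to a linear ratio
    processing_gain = bandwidth_hz / bit_rate_bps  # W/R
    return (processing_gain / ebno) * (1.0 / voice_activity) \
           * (1.0 / (1.0 + other_cell_factor)) * sectors

def fdma_user_capacity(bandwidth_hz, channel_spacing_hz):
    """Single-channel-per-carrier FDMA: one user per frequency slot."""
    return bandwidth_hz / channel_spacing_hz

W = 1.25e6  # spread bandwidth in Hz (illustrative)
print("CDMA users ~", round(cdma_user_capacity(W, 9600, ebno_db=7.0)))
print("FDMA users ~", round(fdma_user_capacity(W, 30e3)))
```

The capacity advantage comes from the multiplicative credits (voice activity, spatial reuse) that narrowband FDMA cannot exploit, which is the economic argument the paper makes quantitatively.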
1993-07-09
real-time simulation capabilities, highly non-linear control devices, work space path planning, active control of machine flexibilities and reliability ... P.M., "The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement," Journal of Experimental Psychology, Vol. 47, No. ... has driven many research groups in the challenging problem of flexible systems with an increasing interaction with finite element methodologies. Basic
[Information technologies in clinical cytology (a lecture)].
Shabalova, I P; Dzhangirova, T V; Kasoian, K T
2010-07-01
The lecture is devoted to the urgent problem of increasing the quality of cytological diagnosis by diminishing the factor of subjectivism through the introduction of up-to-date computer information technologies into the cytologist's practice. Its main lines are described, from the standardization of cytological specimen preparation to the recording of the cytologist's opinion and the assessment of the specialist's work quality at laboratories that successfully use the capacities of current information systems. The capabilities of information technology to improve the interpretation of the cellular composition of cytological specimens are detailed.
Research on damping properties optimization of variable-stiffness plate
NASA Astrophysics Data System (ADS)
Wen-kai, QI; Xian-tao, YIN; Cheng, SHEN
2016-09-01
This paper investigates the damping optimization design of a variable-stiffness composite laminated plate, in which fibre paths can be continuously curved and fibre angles differ between regions. First, a damping prediction model is developed based on the modal dissipative energy principle and verified by comparison with modal testing results. Then, instead of fibre angles, the element stiffness and damping matrices are taken as design variables on the basis of the novel Discrete Material Optimization (DMO) formulation, thus greatly reducing the computation time. Finally, the modal damping capacity of an arbitrary order is optimized using the Method of Moving Asymptotes (MMA). Meanwhile, a mode tracking technique is employed to investigate the variation of the modal shapes. The convergence of the interpolation function, the first-order specific damping capacity (SDC) optimization results, and the variation of the modal shapes for different penalty factors are discussed. The results show that the damping properties of the variable-stiffness plate can be increased by 50%-70% after optimization.
Nucleic Acid-Based Nanodevices in Biological Imaging.
Chakraborty, Kasturi; Veetil, Aneesh T; Jaffrey, Samie R; Krishnan, Yamuna
2016-06-02
The nanoscale engineering of nucleic acids has led to exciting molecular technologies for high-end biological imaging. The predictable base pairing, high programmability, and superior new chemical and biological methods used to access nucleic acids with diverse lengths and in high purity, coupled with computational tools for their design, have allowed the creation of a stunning diversity of nucleic acid-based nanodevices. Given their biological origin, such synthetic devices have a tremendous capacity to interface with the biological world, and this capacity lies at the heart of several nucleic acid-based technologies that are finding applications in biological systems. We discuss these diverse applications and emphasize the advantage, in terms of physicochemical properties, that the nucleic acid scaffold brings to these contexts. As our ability to engineer this versatile scaffold increases, its applications in structural, cellular, and organismal biology are clearly poised to massively expand.
[Innovative ET cover system and its hydrologic evaluation].
Liu, Chuan-shun; Cai, Jun-xiong; Wang, Jing-zhai; Rong, Yu
2010-07-01
The evapotranspiration (ET) cover system, an alternative landfill cover system, has been used in many remediation projects since 2003. It is an inexpensive, practical, and easily maintained biological system, but it is mainly favorable in arid and semiarid sites because of the limited water-holding capacity of the single loam layer and the limited transpiration of grass. To improve the effectiveness of percolation control, an innovative ET scheme is suggested in this paper: (1) a clay liner is added under the single loam layer to increase the water-holding capacity; (2) combined vegetation consisting of shrub and grass replaces the grass cover. A hydrologic evaluation of the conventional cover, the ET cover, and the innovative ET cover under the same conditions was performed using the computer program HELP, which showed that the performance of the innovative ET cover is clearly superior to that of the ET cover and the conventional cover.
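To make the water-balance idea concrete, the sketch below runs a toy daily bucket model in which percolation occurs only when stored water exceeds the cover's water-holding capacity, so raising the effective capacity (for example, via a clay liner) reduces percolation. This is not the HELP program; all parameters and series are invented for illustration.

```python
def simulate_et_cover(precip_mm, et_mm, capacity_mm=150.0, initial_mm=50.0):
    """Daily bucket model: percolation occurs only when storage exceeds capacity."""
    storage = initial_mm
    total_percolation = 0.0
    for p, et in zip(precip_mm, et_mm):
        storage += p                               # add the day's precipitation
        storage = max(0.0, storage - et)           # remove evapotranspiration losses
        if storage > capacity_mm:                  # excess drains downward
            total_percolation += storage - capacity_mm
            storage = capacity_mm
    return total_percolation

precip = [0, 12, 0, 0, 30, 5, 0, 0, 60, 0]   # mm/day, illustrative
et     = [4, 4, 5, 5, 3, 4, 5, 5, 2, 4]      # mm/day, illustrative

# A larger effective storage capacity (e.g., loam plus clay liner) lowers percolation:
for cap in (100.0, 150.0, 250.0):
    print(f"capacity {cap:.0f} mm -> percolation {simulate_et_cover(precip, et, cap):.1f} mm")
```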
Climate simulations and services on HPC, Cloud and Grid infrastructures
NASA Astrophysics Data System (ADS)
Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio
2017-04-01
Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies, the number, variety and complexity of experiments and resources are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not sufficient to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To address those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework for managing a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and the Government of Cantabria.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-09
... (PPRs) to capture quarterly and annual reports for each project type (Infrastructure, Public Computer... Information Collection; Comment Request; Broadband Technology Opportunities Program (BTOP) Quarterly and..., which included competitive grants to expand public computer center capacity and innovative programs to...
29 CFR 541.0 - Introductory statement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS DEFINING AND... secondary schools), or in the capacity of an outside sales employee, as such terms are defined and delimited... requirements for computer systems analysts, computer programmers, software engineers, and other similarly...
29 CFR 541.0 - Introductory statement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS DEFINING AND... secondary schools), or in the capacity of an outside sales employee, as such terms are defined and delimited... requirements for computer systems analysts, computer programmers, software engineers, and other similarly...
29 CFR 541.0 - Introductory statement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS DEFINING AND... secondary schools), or in the capacity of an outside sales employee, as such terms are defined and delimited... requirements for computer systems analysts, computer programmers, software engineers, and other similarly...
29 CFR 541.0 - Introductory statement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS DEFINING AND... secondary schools), or in the capacity of an outside sales employee, as such terms are defined and delimited... requirements for computer systems analysts, computer programmers, software engineers, and other similarly...
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
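A minimal sketch of the capacity-factor-style approximation mentioned above: the capacity value is approximated by the load-weighted PV capacity factor over the highest-load hours of the year. The load and PV series below are synthetic, and the choice of 100 top hours is an illustrative assumption rather than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 8760
load = 800 + 200 * rng.random(hours)                      # MW, synthetic system load
pv_output = np.clip(rng.normal(0.3, 0.25, hours), 0, 1)   # per-unit PV output, synthetic

def weighted_capacity_factor(pv_pu, load_mw, top_hours=100):
    """Approximate capacity value as the load-weighted PV capacity factor
    over the `top_hours` highest-load hours of the year."""
    idx = np.argsort(load_mw)[-top_hours:]        # indices of the peak-load hours
    weights = load_mw[idx] / load_mw[idx].sum()   # weight the most critical hours most
    return float(np.sum(weights * pv_pu[idx]))

print(f"approximate PV capacity value: {weighted_capacity_factor(pv_output, load):.2%}")
```

Reliability-based methods replace the top-load-hour weighting with loss-of-load-probability weights, which is where the extra data and computation go.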
Adhesive-bonded scarf and stepped-lap joints
NASA Technical Reports Server (NTRS)
Hart-Smith, L. J.
1973-01-01
Continuum mechanics solutions are derived for the static load-carrying capacity of scarf and stepped-lap adhesive-bonded joints. The analyses account for adhesive plasticity, adherend stiffness imbalance, and thermal mismatch. The scarf joint solutions include a simple algebraic formula which serves as a close lower bound, within a small fraction of a percent of the true answer for most practical geometries and materials. Digital computer programs were developed and, for the stepped-lap joints, the critical adherend and adhesive stresses are computed for each step. The scarf joint solutions exhibit grossly different behavior from that of double-lap joints for long overlaps, inasmuch as the potential bond shear strength continues to increase with indefinitely long overlaps on scarf joints. The stepped-lap joint solutions exhibit some characteristics of both the scarf and double-lap joints. The stepped-lap computer program handles arbitrary (different) step lengths and thicknesses, and the solutions obtained have clarified potentially weak design details and their remedies. The program has been used effectively to optimize the joint proportions.
Angular dose anisotropy around gold nanoparticles exposed to X-rays.
Gadoue, Sherif M; Toomeh, Dolla; Zygmanski, Piotr; Sajo, Erno
2017-07-01
Gold nanoparticle (GNP) radiotherapy has recently emerged as a promising modality in cancer treatment. The use of high atomic number nanoparticles can lead to enhanced radiation dose in tumors due to low-energy leakage electrons depositing in the vicinity of the GNP. A single metric, the dose enhancement ratio has been used in the literature, often in substantial disagreement, to quantify the GNP's capacity to increase local energy deposition. This 1D approach neglects known sources of dose anisotropy and assumes that one average value is representative of the dose enhancement. Whether this assumption is correct and within what accuracy limits it could be trusted, have not been studied due to computational difficulties at the nanoscale. Using a next-generation deterministic computational method, we show that significant dose anisotropy exists which may have radiobiological consequences, and can impact the treatment outcome as well as the development of treatment planning computational methods. Copyright © 2017 Elsevier Inc. All rights reserved.
SIMD Optimization of Linear Expressions for Programmable Graphics Hardware
Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang
2009-01-01
The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.
Basanta-Val, Pablo; Sánchez-Fernández, Luis
2018-06-01
The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has led to new types of analytics that process Internet-of-Things data with low-cost engines to speed up data processing using parallel computing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues in different documents. The application infrastructure and processing engine are described from an architectural perspective and in terms of performance, showing evidence of how this type of infrastructure improves the performance of several types of simple analytics as several machines cooperate.
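To illustrate the embarrassingly parallel structure such a gazette-processing pipeline exploits, the sketch below spreads a trivial per-document analytic over worker processes using only the Python standard library; the document identifiers, corpus, and keyword are placeholders, and the real system relies on big data engines rather than this toy process pool.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_document(doc):
    """Toy per-document analytic: count occurrences of a search term."""
    doc_id, text = doc
    return doc_id, text.lower().count("contrato")

if __name__ == "__main__":
    # Placeholder corpus standing in for fetched gazette documents.
    corpus = [(f"BOE-A-2018-{i:04d}", "texto del contrato ... " * i) for i in range(1, 9)]

    # Each document is independent, so the analytics scale out across processes
    # (or, in the real system, across cluster nodes).
    with ProcessPoolExecutor() as pool:
        for doc_id, hits in pool.map(analyze_document, corpus):
            print(doc_id, hits)
```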
The potential of multi-port optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1975-01-01
A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.
Cognitive Support: Extending Human Knowledge and Processing Capacities.
ERIC Educational Resources Information Center
Neerincx, Mark A.; de Greef, H. Paul
1998-01-01
This study of 40 undergraduates examined whether aiding as cognitive support (i.e., offering computer users knowledge they are missing) can supplement lack of knowledge and capacity under tasks with high mental loading, such as dealing with irregularities in process control. Users of a railway traffic control simulator dealt better and faster with…
Improved Method for Determining the Heat Capacity of Metals
ERIC Educational Resources Information Center
Barth, Roger; Moran, Michael J.
2014-01-01
An improved procedure for laboratory determination of the heat capacities of metals is described. The temperature of cold water is continuously recorded with a computer-interfaced temperature probe and the room temperature metal is added. The method is more accurate and faster than previous methods. It allows students to get accurate measurements…
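The underlying calorimetry arithmetic is a one-line energy balance, m_metal * c_metal * (T_metal - T_final) = m_water * c_water * (T_final - T_water); the sketch below solves it for the metal's specific heat. The masses and temperatures are invented for illustration, not data from the article.

```python
C_WATER = 4.184  # specific heat of water, J/(g*K)

def metal_specific_heat(m_metal_g, t_metal_c, m_water_g, t_water_c, t_final_c):
    """Specific heat of the metal from the energy balance of adding
    room-temperature metal to cold water (metal cools, water warms)."""
    q_gained_by_water = m_water_g * C_WATER * (t_final_c - t_water_c)
    return q_gained_by_water / (m_metal_g * (t_metal_c - t_final_c))

# Illustrative numbers only: 80 g of metal at 22.0 C dropped into 100 g of
# water at 5.0 C, with the logged water temperature levelling off at 6.2 C.
c_m = metal_specific_heat(80.0, 22.0, 100.0, 5.0, 6.2)
print(f"c_metal ~ {c_m:.2f} J/(g*K)")
```

With these made-up numbers the result is about 0.40 J/(g*K), in the range typical of common metals such as copper.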
The Promise of Administrative Data in Education Research
ERIC Educational Resources Information Center
Figlio, David; Karbownik, Krzysztof; Salvanes, Kjell
2017-01-01
Thanks to extraordinary and exponential improvements in data storage and computing capacities, it is now possible to collect, manage, and analyze data in magnitudes and in manners that would have been inconceivable just a short time ago. As the world has developed this remarkable capacity to store and analyze data, so have the world's governments…
Comparison of two methods for calculating the P sorption capacity parameter in soils
USDA-ARS?s Scientific Manuscript database
Phosphorus (P) cycling in soils is an important process affecting P movement through the landscape. The P cycling routines in many computer models are based on the relationships developed for the EPIC model. An important parameter required for this model is the P sorption capacity parameter (PSP). I...
The Development of Educational and/or Training Computer Games for Students with Disabilities
ERIC Educational Resources Information Center
Kwon, Jungmin
2012-01-01
Computer and video games have much in common with the strategies used in special education. Free resources for game development are becoming more widely available, so lay computer users, such as teachers and other practitioners, now have the capacity to develop games using a low budget and a little self-teaching. This article provides a guideline…
ERIC Educational Resources Information Center
Cano, Diana Wright
2017-01-01
State Education Agencies (SEAs) face challenges to the implementation of computer-based accountability assessments. The change in the accountability assessments from paper-based to computer-based demands action from the states to enable schools and districts to build their technical capacity, train the staff, provide practice opportunities to the…
A computer program for analysis of fuelwood harvesting costs
George B. Harpole; Giuseppe Rensi
1985-01-01
The fuelwood harvesting computer program (FHP) is written in FORTRAN 60 and designed to select a collection of harvest units and systems from among alternatives to satisfy specified energy requirements at a lowest cost per million Btu's as recovered in a boiler, or thousand pounds of H2O evaporative capacity kiln drying. Computed energy costs are used as a...
Keep Laptop Cool with Simple Custom Riser
ERIC Educational Resources Information Center
Rynone, William
2012-01-01
Although the author's netbook computer--a diminished capacity laptop computer--uses less power than its big brother, when working with it on his lap, his thighs are roasted, even in the winter. When using the unit on a flat surface, such as a table top, the bottom surface of the computer and table top become quite warm--and it is generally…
A MAP fixed-point, packing-unpacking routine for the IBM 7094 computer
Robert S. Helfman
1966-01-01
Two MAP (Macro Assembly Program) computer routines for packing and unpacking fixed point data are described. Use of these routines with Fortran IV Programs provides speedy access to quantities of data which far exceed the normal storage capacity of IBM 7000-series computers. Many problems that could not be attempted because of the slow access-speed of tape...
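The original routines are IBM 7094 MAP assembly and are not reproduced in the record; as a rough modern analogue of the same space-saving idea, the sketch below packs several small fixed-point fields into a single word with shifts and masks and unpacks them again.

```python
def pack(values, bits_per_field):
    """Pack small non-negative integers into one word, lowest field first."""
    word, shift = 0, 0
    for v, nbits in zip(values, bits_per_field):
        if v < 0 or v >= (1 << nbits):
            raise ValueError("value does not fit in its field")
        word |= v << shift
        shift += nbits
    return word

def unpack(word, bits_per_field):
    """Recover the packed fields from a word."""
    out, shift = [], 0
    for nbits in bits_per_field:
        out.append((word >> shift) & ((1 << nbits) - 1))
        shift += nbits
    return out

fields = (12, 12, 12)          # e.g. three 12-bit data items per 36-bit word
w = pack([1000, 250, 4095], fields)
print(w, unpack(w, fields))
```

Packing three items per 36-bit word is exactly the kind of tripling of effective storage that made such routines attractive on the 7094.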
Koltun, G.F.
2013-01-01
This report presents the results of a study to assess potential water availability from the Atwood, Leesville, and Tappan Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for the Atwood Lake to 73 calendar years for the Leesville and Tappan Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October and February. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
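A minimal sketch of the daily accounting implied by this assessment, under the stated assumption that water above the target minimum flow-by may be withdrawn up to the pumping capacity and the remainder passes downstream; the outflow series, units, and targets below are synthetic stand-ins for the historical lake data.

```python
import numpy as np

def withdrawals_and_flowby(outflow_mgd, target_flowby_mgd, pump_capacity_mgd):
    """Daily potential withdrawal = min(pump capacity, outflow above the target);
    flow-by = outflow minus withdrawal."""
    outflow = np.asarray(outflow_mgd, dtype=float)
    surplus = np.maximum(outflow - target_flowby_mgd, 0.0)
    withdrawal = np.minimum(surplus, pump_capacity_mgd)
    return withdrawal, outflow - withdrawal

rng = np.random.default_rng(7)
outflow = rng.gamma(2.0, 3.0, size=365)   # synthetic daily outflow, Mgal/d

for pump in (1.0, 2.0, 3.0):
    w, fb = withdrawals_and_flowby(outflow, target_flowby_mgd=1.5, pump_capacity_mgd=pump)
    p = np.percentile(w, [5, 25, 50, 75, 95])
    print(f"pump {pump:.0f} Mgal/d: median withdrawal {p[2]:.2f}, mean {w.mean():.2f} Mgal/d")
```

Raising the target flow-by trims the lower percentiles of withdrawal, and raising the pumping rate adds less than a proportional amount of water, mirroring the behavior reported above.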
Eccentricity effects in vision and attention.
Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe
2016-11-01
Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
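A minimal sketch of the TVA-style analysis referred to above (Bundesen, 1990): report accuracy as a function of exposure duration is fitted with an exponential approach governed by a processing rate v and a threshold t0, and a lower fitted v at larger eccentricity would indicate reduced attentional capacity. The data are simulated and the fitting routine is a generic least-squares call, not the authors' modeling code.

```python
import numpy as np
from scipy.optimize import curve_fit

def tva_accuracy(t_ms, v_per_ms, t0_ms):
    """P(correct report) = 1 - exp(-v * (t - t0)) for t > t0, else 0."""
    return np.where(t_ms > t0_ms, 1.0 - np.exp(-v_per_ms * (t_ms - t0_ms)), 0.0)

durations = np.array([20, 40, 60, 90, 130, 180], dtype=float)  # exposure durations (ms)

# Simulated accuracies for a central and a peripheral condition (illustrative only).
acc_central = tva_accuracy(durations, 0.045, 15) + np.random.default_rng(3).normal(0, 0.02, 6)
acc_peripheral = tva_accuracy(durations, 0.028, 15) + np.random.default_rng(4).normal(0, 0.02, 6)

for label, acc in [("central", acc_central), ("peripheral", acc_peripheral)]:
    (v, t0), _ = curve_fit(tva_accuracy, durations, np.clip(acc, 0, 1), p0=[0.03, 10.0])
    print(f"{label}: processing rate v ~ {v:.3f} /ms, threshold t0 ~ {t0:.1f} ms")
```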
A computational model of spatial visualization capacity.
Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A
2008-09-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
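A toy rendering of the three limiting mechanisms named above, assuming simple functional forms: a base-level activation that decays with time, a penalty that grows with the number of competing chunks (associative interference), and an extra penalty for spatially proximate path segments. The constants and equations are illustrative, not the model's actual ACT-R parameters.

```python
import math

def activation(ages_s, n_competitors, spatial_neighbors,
               decay=0.5, interference_w=0.3, spatial_w=0.4):
    """Toy activation of one path-segment chunk: base-level learning minus
    associative and spatial interference penalties."""
    base_level = math.log(sum(t ** (-decay) for t in ages_s))  # decay with time since rehearsals
    assoc_penalty = interference_w * math.log(1 + n_competitors)
    spatial_penalty = spatial_w * spatial_neighbors             # nearby segments interfere more
    return base_level - assoc_penalty - spatial_penalty

def retrieval_probability(act, threshold=0.0, s=0.4):
    """Logistic mapping from activation to retrieval probability."""
    return 1.0 / (1.0 + math.exp(-(act - threshold) / s))

# A segment rehearsed 2 s and 9 s ago, with 5 other segments in memory,
# 2 of which are spatially close to it:
a = activation(ages_s=[2.0, 9.0], n_competitors=5, spatial_neighbors=2)
print(f"activation {a:.2f}, retrieval probability {retrieval_probability(a):.2f}")
```

Longer paths raise both penalty terms, which is how a sketch like this reproduces the sharp capacity limit described above.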
Chung, Yongchul G.; Gómez-Gualdrón, Diego A.; Li, Peng; Leperi, Karson T.; Deria, Pravas; Zhang, Hongda; Vermeulen, Nicolaas A.; Stoddart, J. Fraser; You, Fengqi; Hupp, Joseph T.; Farha, Omar K.; Snurr, Randall Q.
2016-01-01
Discovery of new adsorbent materials with a high CO2 working capacity could help reduce CO2 emissions from newly commissioned power plants using precombustion carbon capture. High-throughput computational screening efforts can accelerate the discovery of new adsorbents but sometimes require significant computational resources to explore the large space of possible materials. We report the in silico discovery of high-performing adsorbents for precombustion CO2 capture by applying a genetic algorithm to efficiently search a large database of metal-organic frameworks (MOFs) for top candidates. High-performing MOFs identified from the in silico search were synthesized and activated and show a high CO2 working capacity and a high CO2/H2 selectivity. One of the synthesized MOFs shows a higher CO2 working capacity than any MOF reported in the literature under the operating conditions investigated here. PMID:27757420
Chuderski, Adam; Andrelczyk, Krzysztof
2015-02-01
Several existing computational models of working memory (WM) have predicted a positive relationship (later confirmed empirically) between WM capacity and the individual ratio of theta to gamma oscillatory band lengths. These models assume that each gamma cycle represents one WM object (e.g., a binding of its features), whereas the theta cycle integrates such objects into the maintained list. As WM capacity strongly predicts reasoning, it might be expected that this ratio also predicts performance in reasoning tasks. However, no computational model has yet explained how the differences in the theta-to-gamma ratio found among adult individuals might contribute to their scores on a reasoning test. Here, we propose a novel model of how WM capacity constrains figural analogical reasoning, aimed at explaining inter-individual differences in reasoning scores in terms of the characteristics of oscillatory patterns in the brain. In the model, the gamma cycle encodes the bindings between objects/features and the roles they play in the relations processed. Asynchrony between consecutive gamma cycles results from lateral inhibition between oscillating bindings. Computer simulations showed that achieving the highest WM capacity required reaching the optimal level of inhibition. When too strong, this inhibition eliminated some bindings from WM, whereas, when inhibition was too weak, the bindings became unstable and fell apart or became improperly grouped. The model aptly replicated several empirical effects and the distribution of individual scores, as well as the patterns of correlations found in the 100-person sample attempting the same reasoning task. Most importantly, the model's reasoning performance strongly depended on its theta-to-gamma ratio in the same way as the performance of human participants depended on their WM capacity. The data suggest that proper regulation of oscillations in the theta and gamma bands may be crucial for both high WM capacity and effective complex cognition. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
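The capacity reading of the theta-to-gamma ratio can be stated in one line: the number of items maintained is roughly the number of gamma cycles nested within one theta cycle. The sketch below simply evaluates that ratio for a few illustrative frequency pairs; the frequencies are placeholders, not values from the study.

```python
def wm_capacity_estimate(theta_hz, gamma_hz):
    """Approximate WM capacity as the number of gamma cycles per theta cycle
    (equivalently, theta period divided by gamma period)."""
    return gamma_hz / theta_hz

for theta, gamma in [(7.0, 35.0), (5.5, 40.0), (8.0, 60.0)]:
    cap = wm_capacity_estimate(theta, gamma)
    print(f"theta {theta:.1f} Hz, gamma {gamma:.1f} Hz -> capacity ~ {cap:.1f} items")
```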
Computer Series, 86. Bits and Pieces, 35.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1987-01-01
Describes eight applications of the use of computers in teaching chemistry. Includes discussions of audio frequency measurements of heat capacity ratios, quantum mechanics, ab initio calculations, problem solving using spreadsheets, simplex optimization, faradaic impedance diagrams, and the recording and tabulation of student laboratory data. (TW)
Vascular surgical data registries for small computers.
Kaufman, J L; Rosenberg, N
1984-08-01
Recent designs for computer-based vascular surgical registries and clinical data bases have employed large centralized systems with formal programming and mass storage. Small computers, of the types created for office use or for word processing, now contain sufficient speed and memory storage capacity to allow construction of decentralized office-based registries. Using a standardized dictionary of terms and a method of data organization adapted to word processing, we have created a new vascular surgery data registry, "VASREG." Data files are organized without programming, and a limited number of powerful logical statements in English are used for sorting. The capacity is 25,000 records with current inexpensive memory technology. VASREG is adaptable to computers made by a variety of manufacturers, and interface programs are available for conversion of the word-processor-formatted registry data into forms suitable for analysis by programs written in a standard programming language. This is a low-cost clinical data registry available to any physician. With a standardized dictionary, preparation of regional and national statistical summaries may be facilitated.
A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era
NASA Astrophysics Data System (ADS)
Newman, H.
2011-12-01
Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis, has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures. A case in point is Africa, where a new round of undersea cables promises to transform the continent.
Parameterizing the Variability and Uncertainty of Wind and Solar in CEMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany
We present current and improved methods for estimating the capacity value and curtailment impacts from variable generation (VG) in capacity expansion models (CEMs). The ideal calculation of these variability metrics is through an explicit co-optimized investment-dispatch model using multiple years of VG and load data. Because of data and computational limitations, existing CEMs typically approximate these metrics using a subset of all hours from a single year and/or using statistical methods, which often do not capture the tail-event impacts or the broader set of interactions between VG, storage, and conventional generators. In our proposed new methods, we use hourly generation and load values across all hours of the year to characterize (1) the contribution of VG to system capacity during high load hours, (2) the curtailment level of VG, and (3) the reduction in VG curtailment due to storage and shutdown of select thermal generators. Using CEM model outputs from a preceding model solve period, we apply these methods to exogenously calculate capacity value and curtailment metrics for the subsequent model solve period. Preliminary results suggest that these hourly methods offer improved capacity value and curtailment representations of VG in the CEM over existing approximation methods without additional computational burdens.
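A minimal sketch of the hourly curtailment bookkeeping described above, under simple assumptions: surplus variable generation beyond what load (net of must-run generation) and available storage charging can absorb is curtailed, so adding storage or retiring must-run capacity lowers curtailment. All series, efficiencies, and limits are synthetic.

```python
import numpy as np

def annual_curtailment(vg, load, must_run, storage_power, storage_energy, eff=0.9):
    """Hourly greedy dispatch: charge storage with surplus, discharge on deficit;
    whatever surplus cannot be absorbed is curtailed (all values in MW/MWh)."""
    soc, curtailed = 0.0, 0.0
    for g, d, mr in zip(vg, load, must_run):
        surplus = g + mr - d
        if surplus > 0:
            charge = min(surplus, storage_power, (storage_energy - soc) / eff)
            soc += charge * eff
            curtailed += surplus - charge
        else:
            soc -= min(-surplus, storage_power, soc)
    return curtailed

rng = np.random.default_rng(2)
hours = 8760
load = 700 + 200 * rng.random(hours)                       # MW, synthetic
vg = 600 * np.clip(rng.normal(0.5, 0.3, hours), 0, 1)      # MW of wind/solar, synthetic

for must_run_mw, storage_mw in [(600, 0), (600, 150), (450, 150)]:
    c = annual_curtailment(vg, load, np.full(hours, must_run_mw), storage_mw, 4 * storage_mw)
    print(f"must-run {must_run_mw} MW, storage {storage_mw} MW -> curtailed {c/1e3:.1f} GWh")
```

Running all 8760 hours keeps the tail events that sampled-hour or purely statistical approximations tend to miss.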
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of applications data sets, the requirement for high performance, large capacity, reliable and secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on an ever increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be expended this year and next on server-class or mid-range storage systems.
Ho, Pang-Yen; Chuang, Guo-Syong; Chao, An-Chong; Li, Hsing-Ya
2005-05-01
The capacity of complex biochemical reaction networks (consisting of 11 coupled non-linear ordinary differential equations) to show multiple steady states was investigated. The system involved esterification of ethanol and oleic acid by lipase in an isothermal continuous stirred tank reactor (CSTR). The Deficiency One Algorithm and the Subnetwork Analysis were applied to determine the steady state multiplicity. A set of rate constants and two corresponding steady states were computed. The phenomena of bistability, hysteresis and bifurcation are discussed. Moreover, the capacity of steady state multiplicity is extended to the family of the studied reaction networks.
CGAT: a model for immersive personalized training in computational genomics
Sims, David; Ponting, Chris P.
2016-01-01
How should the next generation of genomics scientists be trained while simultaneously pursuing high quality and diverse research? CGAT, the Computational Genomics Analysis and Training programme, was set up in 2010 by the UK Medical Research Council to complement its investment in next-generation sequencing capacity. CGAT was conceived around the twin goals of training future leaders in genome biology and medicine, and providing much needed capacity to UK science for analysing genome scale data sets. Here we outline the training programme employed by CGAT and describe how it dovetails with collaborative research projects to launch scientists on the road towards independent research careers in genomics. PMID:25981124
Storage peak gas-turbine power unit
NASA Technical Reports Server (NTRS)
Tsinkotski, B.
1980-01-01
A storage gas-turbine power plant using a two-cylinder compressor with intermediate cooling is studied. On the basis of measured characteristics of a 0.25 MW compressor, computer calculations of the parameters of the loading process of a constant capacity storage unit (05.3 million cu m) were carried out. The required compressor power as a function of time with and without final cooling was computed. Parameters of maximum loading and discharging of the storage unit were calculated, and it was found that for the complete loading of a fully unloaded storage unit, a capacity of 1 to 1.5 million cubic meters is required, depending on the final cooling.
Estimation of regional gas and tissue volumes of the lung in supine man using computed tomography.
Denison, D M; Morgan, M D; Millar, A B
1986-08-01
This study was intended to discover how well computed tomography could recover the volume and weight of lung-like foams in a body-like shell, and then how well it could recover the volume and weight of the lungs in supine man. Model thoraces were made with various loaves of bread submerged in water. Computed tomography scans recovered the volume of the model lungs (true volume range 250-12,500 ml) within +0.2 (SD 68) ml and their weights (true range 72-3125 g) within +30 (78) g. Scans also recovered successive injections of 50 ml of water, within +/- 5 ml. Scans in 12 healthy supine men recovered their vital capacities, total lung capacities (TLC), and predicted tissue volumes with comparable accuracy. At total lung capacity the mean tissue volume of single lungs was 431 (64) ml and at residual volume (RV) it was 427 (63) ml. Tissue volume was then used to match inspiratory and expiratory slices and calculate regional ventilation. Throughout the mid 90% of lung the RV/TLC ratio was fairly constant, with a mean of 21% (5%). New methods of presenting such regional data graphically and automatically are also described.
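For context, the sketch below shows the standard densitometric way of splitting segmented lung voxels into gas and tissue volumes from their CT numbers (air at -1000 HU, tissue/water at 0 HU); it is a generic illustration under that assumption, not necessarily the exact procedure used in the study.

```python
import numpy as np

def lung_gas_and_tissue_volumes(hu, voxel_volume_ml):
    """Estimate gas and tissue volumes (ml) from CT numbers of lung voxels.

    Assumes a voxel's CT number scales linearly between pure air (-1000 HU)
    and pure tissue/water (0 HU).
    hu: array of Hounsfield units for voxels segmented as lung.
    """
    hu = np.clip(np.asarray(hu, dtype=float), -1000.0, 0.0)
    gas_fraction = -hu / 1000.0            # 1.0 at -1000 HU, 0.0 at 0 HU
    tissue_fraction = 1.0 - gas_fraction
    gas_volume = gas_fraction.sum() * voxel_volume_ml
    tissue_volume = tissue_fraction.sum() * voxel_volume_ml
    return gas_volume, tissue_volume

# Example: 200,000 lung voxels of 0.03 ml each, mean CT number near -850 HU.
rng = np.random.default_rng(0)
hu = rng.normal(-850, 60, size=200_000)
print(lung_gas_and_tissue_volumes(hu, voxel_volume_ml=0.03))
```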
A Four-Dimensional Computed Tomography Comparison of Healthy vs. Asthmatic Human Lungs
Jahani, Nariman; Choi, Sanghun; Choi, Jiwoong; Haghighi, Babak; Hoffman, Eric A.; Comellas, Alejandro P.; Kline, Joel N.; Lin, Ching-Long
2017-01-01
The purpose of this study was to explore new insights into non-linearity, hysteresis and ventilation heterogeneity of asthmatic human lungs using four-dimensional computed tomography (4D-CT) image data acquired during tidal breathing. Volumetric image data were acquired for five non-severe and one severe asthmatic volunteers. Besides 4D-CT image data, functional residual capacity and total lung capacity image data during breath-hold were acquired for comparison with dynamic scans. Quantitative results were compared with the previously reported analysis of five healthy human lungs. Using an image registration technique, local variables such as regional ventilation and anisotropic deformation index (ADI) were estimated. Regional ventilation characteristics of non-severe asthmatic subjects were similar to those of healthy subjects, but different from the severe asthmatic subject. Lobar airflow fractions were also well correlated between static and dynamic scans (R2 > 0.84). However, local ventilation heterogeneity significantly increased during tidal breathing in both healthy and asthmatic subjects relative to that of breath-hold, perhaps because of airway resistance present only in dynamic breathing. ADI was used to quantify non-linearity and hysteresis of lung motion during tidal breathing. Non-linearity was greater on inhalation than exhalation among all subjects. However, exhalation non-linearity among asthmatic subjects was greater than among healthy subjects, and the difference diminished during inhalation. An increase of non-linearity during exhalation in asthmatic subjects accounted for lower hysteresis relative to that of healthy ones. Thus, assessment of non-linearity differences between healthy and asthmatic lungs during exhalation may provide quantitative metrics for subject identification and outcome assessment of new interventions. PMID:28372795
The Application of Modeling and Simulation to the Behavioral Deficit of Autism
NASA Technical Reports Server (NTRS)
Anton, John J.
2010-01-01
This abstract describes a research effort to apply technological advances in virtual reality simulation and computer-based games to create behavioral modification programs for individuals with Autism Spectrum Disorder (ASD). The research investigates virtual social skills training within a 3D game environment to diminish the impact of ASD social impairments and to increase learning capacity for optimal intellectual capability. Individuals with autism will encounter prototypical social contexts via computer interface and will interact with 3D avatars with predefined roles within a game-like environment. Incremental learning objectives will combine to form a collaborative social environment. A secondary goal of the effort is to begin the research and development of virtual reality exercises aimed at triggering the release of neurotransmitters to promote critical aspects of synaptic maturation at an early age to change the course of the disease.
Improving student retention in computer engineering technology
NASA Astrophysics Data System (ADS)
Pierozinski, Russell Ivan
The purpose of this research project was to improve student retention in the Computer Engineering Technology program at the Northern Alberta Institute of Technology by reducing the number of dropouts and increasing the graduation rate. This action research project utilized a mixed methods approach of a survey and face-to-face interviews. The participants were male and female, with a large majority ranging from 18 to 21 years of age. The research found that participants recognized their skills and capability, but their capacity to remain in the program was dependent on understanding and meeting the demanding pace and rigour of the program. The participants recognized that curriculum delivery along with instructor-student interaction had an impact on student retention. To be successful in the program, students required support in four domains: academic, learning management, career, and social.
Big Computing in Astronomy: Perspectives and Challenges
NASA Astrophysics Data System (ADS)
Pankratius, Victor
2014-06-01
Hardware progress in recent years has led to astronomical instruments gathering large volumes of data. In radio astronomy for instance, the current generation of antenna arrays produces data at Tbits per second, and forthcoming instruments will expand these rates much further. As instruments are increasingly becoming software-based, astronomers will get more exposed to computer science. This talk therefore outlines key challenges that arise at the intersection of computer science and astronomy and presents perspectives on how both communities can collaborate to overcome these challenges. Major problems are emerging because data rates are increasing much faster than storage and transmission capacity, as well as humans being cognitively overwhelmed when attempting to opportunistically scan through Big Data. As a consequence, the generation of scientific insight will become more dependent on automation and algorithmic instrument control. Intelligent data reduction will have to be considered across the entire acquisition pipeline. In this context, the presentation will outline the enabling role of machine learning and parallel computing. Bio: Victor Pankratius is a computer scientist who joined MIT Haystack Observatory following his passion for astronomy. He is currently leading efforts to advance astronomy through cutting-edge computer science and parallel computing. Victor is also involved in projects such as ALMA Phasing to enhance the ALMA Observatory with Very-Long Baseline Interferometry capabilities, the Event Horizon Telescope, as well as in the Radio Array of Portable Interferometric Detectors (RAPID) to create an analysis environment using parallel computing in the cloud. He has an extensive track record of research in parallel multicore systems and software engineering, with contributions to auto-tuning, debugging, and empirical experiments studying programmers. Victor has worked with major industry partners such as Intel, Sun Labs, and Oracle. He holds a distinguished doctorate and a Habilitation degree in Computer Science from the University of Karlsruhe. Contact him at pankrat@mit.edu, victorpankratius.com, or Twitter @vpankratius.
NASA Astrophysics Data System (ADS)
Andreev, A. N.; Kolesnichenko, D. A.
2017-12-01
The possibility of increasing the energy efficiency of the production cycle in a roller bed is briefly reviewed and justified. The sequence diagram of operation of the electrical drive in a roller bed is analyzed, and the possible increase in the energy efficiency is calculated. A method for energy saving is described for the application of a frequency-controlled asynchronous electrical drive of drive rollers in a roller bed with an increased capacitor capacity in the DC link. A refined mathematical model is developed to describe the behavior of the electrical drive during the deceleration of a roller bed. An experimental setup is created, and computer simulation and physical modeling are performed. The basic information flows of the general hierarchical automatic control system of an enterprise are described and determined, taking into account the proposed method of increasing the energy efficiency.
Bulk-Flow Analysis of Hybrid Thrust Bearings for Advanced Cryogenic Turbopumps
NASA Technical Reports Server (NTRS)
SanAndres, Luis
1998-01-01
A bulk-flow analysis and computer program for prediction of the static load performance and dynamic force coefficients of angled injection, orifice-compensated hydrostatic/hydrodynamic thrust bearings have been completed. The product of the research is an efficient computational tool for the design of high-speed thrust bearings for cryogenic fluid turbopumps. The study addresses the needs of a growing technology that requires reliable fluid film bearings to provide the maximum operating life with optimum controllable rotordynamic characteristics at the lowest cost. The motion of a cryogenic fluid on the thin film lands of a thrust bearing is governed by a set of bulk-flow mass and momentum conservation and energy transport equations. Mass flow conservation and a simple model for momentum transport within the hydrostatic bearing recesses are also accounted for. The bulk-flow model includes flow turbulence with fluid inertia advection, Coriolis and centrifugal acceleration effects on the bearing recesses and film lands. The cryogenic fluid properties are obtained from realistic thermophysical equations of state. Turbulent bulk-flow shear parameters are based on Hirs' model with Moody's friction factor equations allowing a simple simulation for machined bearing surface roughness. A perturbation analysis leads to zeroth-order nonlinear equations governing the fluid flow for the thrust bearing operating at a static equilibrium position, and first-order linear equations describing the perturbed fluid flow for small amplitude shaft motions in the axial direction. Numerical solution to the zeroth-order flow field equations renders the bearing flow rate, thrust load, drag torque and power dissipation. Solution to the first-order equations determines the axial stiffness, damping and inertia force coefficients. The computational method uses well established algorithms and generic subprograms available from prior developments. The Fortran90 computer program hydrothrust runs on a Windows 95/NT personal computer. The program, help files and examples are licensed by Texas A&M University Technology License Office. The study of the static and dynamic performance of two hydrostatic/hydrodynamic bearings demonstrates the importance of centrifugal and advection fluid inertia effects for operation at high rotational speeds. The first example considers a conceptual hydrostatic thrust bearing for an advanced liquid hydrogen turbopump operating at 170,000 rpm. The large axial stiffness and damping coefficients of the bearing should provide accurate control and axial positioning of the turbopump and also allow for unshrouded impellers, therefore increasing the overall pump efficiency. The second bearing uses the refrigerant R134a, and its application in oil-free air conditioning compressors is of great technological importance and commercial value. The computed predictions reveal that the LH2 bearing load capacity and flow rate increase with the recess pressure (i.e. increasing orifice diameters). The bearing axial stiffness has a maximum for a recess pressure ratio of approx. 0.55, while the axial damping coefficient decreases as the recess pressure ratio increases. The computer results from three flow models are compared. These models are a) inertialess, b) fluid inertia at recess edges only, and c) full fluid inertia at both recess edges and film lands.
The full inertia model shows the lowest flow rates, axial load capacity and stiffness coefficient but on the other hand renders the largest damping coefficients and inertia coefficients. The most important findings are related to the reduction of the outflow through the inner radius and the appearance of subambient pressures. The performance of the refrigerant hybrid thrust bearing is evaluated at two operating speeds and pressure drops. The computed results are presented in dimensionless form to evidence consistent trends in the bearing performance characteristics. As the applied axial load increases, the bearing film thickness and flow rate decrease while the recess pressure increases. The axial stiffness coefficient shows a maximum for a certain intermediate load while the damping coefficient steadily increases. The computed results evidence the paramount importance of centrifugal fluid inertia at low recess pressures (i.e. low loads), where there is actually an inflow through the bearing inner diameter, accompanied by subambient pressures just downstream of the bearing recess edge. These results are solely due to centrifugal fluid inertia and advection transport effects. Recommendations include the extension of the computer program to handle flexure pivot tilting pad hybrid bearings and the ability to calculate moment coefficients for shaft angular misalignments.
Value and limitations of transpulmonary pressure calculations during intra-abdominal hypertension.
Cortes-Puentes, Gustavo A; Gard, Kenneth E; Adams, Alexander B; Faltesek, Katherine A; Anderson, Christopher P; Dries, David J; Marini, John J
2013-08-01
To clarify the effect of progressively increasing intra-abdominal pressure on esophageal pressure, transpulmonary pressure, and functional residual capacity. Controlled application of increased intra-abdominal pressure at two positive end-expiratory pressure levels (1 and 10 cm H2O) in an anesthetized porcine model of controlled ventilation. Large animal laboratory of a university-affiliated hospital. Eleven deeply anesthetized swine (weight 46.2 ± 6.2 kg). Air-regulated intra-abdominal hypertension (0-25 mm Hg). Esophageal pressure, tidal compliance, bladder pressure, and end-expiratory lung aeration by gas dilution. Functional residual capacity was significantly reduced by increasing intra-abdominal pressure at both positive end-expiratory pressure levels (p ≤ 0.0001) without corresponding changes of end-expiratory esophageal pressure. Above intra-abdominal pressure 5 mm Hg, plateau airway pressure increased linearly by ~ 50% of the applied intra-abdominal pressure value, associated with commensurate changes of esophageal pressure. With tidal volume held constant, negligible changes occurred in transpulmonary pressure due to intra-abdominal pressure. Driving pressures calculated from airway pressures alone (plateau airway pressure minus positive end-expiratory pressure) did not equate to those computed from transpulmonary pressure (tidal changes in transpulmonary pressure). Increasing positive end-expiratory pressure shifted the predominantly negative end-expiratory transpulmonary pressure at positive end-expiratory pressure 1 cm H2O (mean -3.5 ± 0.4 cm H2O) into the positive range at positive end-expiratory pressure 10 cm H2O (mean 0.58 ± 1.2 cm H2O). Despite its insensitivity to changes in functional residual capacity, measuring transpulmonary pressure may be helpful in explaining how different levels of positive end-expiratory pressure influence recruitment and collapse during tidal ventilation in the presence of increased intra-abdominal pressure and in calculating true transpulmonary driving pressure (tidal changes of transpulmonary pressure). Traditional interpretations of respiratory mechanics based on unmodified airway pressure were misleading regarding lung behavior in this setting.
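The distinction between the two driving pressures can be made concrete with a small sketch; the functions and example numbers below are illustrative assumptions, not the study's measurements.

```python
def airway_driving_pressure(plateau_pressure, peep):
    """Conventional driving pressure from airway pressures alone (cm H2O)."""
    return plateau_pressure - peep

def transpulmonary_driving_pressure(plateau_pressure, peep,
                                    esophageal_end_insp, esophageal_end_exp):
    """Tidal change in transpulmonary pressure, using esophageal pressure as a
    surrogate for pleural pressure (cm H2O)."""
    ptp_end_insp = plateau_pressure - esophageal_end_insp
    ptp_end_exp = peep - esophageal_end_exp
    return ptp_end_insp - ptp_end_exp

# Hypothetical numbers: intra-abdominal hypertension raises both plateau and
# esophageal pressure, so the airway-based driving pressure rises while the
# transpulmonary driving pressure barely changes.
print(airway_driving_pressure(30, 10))                  # 20 cm H2O
print(transpulmonary_driving_pressure(30, 10, 18, 8))   # 10 cm H2O
```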
10 CFR 503.6 - Cost calculations for new powerplants and installations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... fuel and the cost of using imported oil is greater than zero. (3) There are two comparative cost...) percent of design capacity times 8760 hours for each year during the life of the powerplant, and compute... it will in fact expire in that year.) (7) If powerplants are being compared, the design capacities or...
10 CFR 503.6 - Cost calculations for new powerplants and installations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... fuel and the cost of using imported oil is greater than zero. (3) There are two comparative cost...) percent of design capacity times 8760 hours for each year during the life of the powerplant, and compute... it will in fact expire in that year.) (7) If powerplants are being compared, the design capacities or...
10 CFR 503.6 - Cost calculations for new powerplants and installations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... fuel and the cost of using imported oil is greater than zero. (3) There are two comparative cost...) percent of design capacity times 8760 hours for each year during the life of the powerplant, and compute... it will in fact expire in that year.) (7) If powerplants are being compared, the design capacities or...
10 CFR 503.6 - Cost calculations for new powerplants and installations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... fuel and the cost of using imported oil is greater than zero. (3) There are two comparative cost...) percent of design capacity times 8760 hours for each year during the life of the powerplant, and compute... it will in fact expire in that year.) (7) If powerplants are being compared, the design capacities or...
76 FR 4458 - Privacy Act of 1974; Report of Modified or Altered System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
..., any component of the Department, or any employee of the Department in his or her official capacity; (b... or her individual capacity where the Department of Justice has agreed to represent such employee, for... computer room is protected by an automatic sprinkler system, numerous automatic sensors (e.g., water, heat...
A Simulation To Determine the Effect of Modifying Local Revenue Capacity.
ERIC Educational Resources Information Center
House, Jess E.; And Others
Because the amount of state-equalization aid received by Ohio school districts is inevitably related to district wealth, the measure of district ability is a concern. This paper presents findings of a study that used computer simulation to examine the effect of proposed modifications to district-revenue capacity on the equity of Ohio…
Scientific Application Requirements for Leadership Computing at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahern, Sean; Alam, Sadaf R; Fahey, Mark R
2007-12-01
The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity.
These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a relatively small increase in performance per core with a dramatic increase in the number of cores. Leadership system software must face and overcome issues that will undoubtedly be exacerbated at the exascale. The operating system (OS) must be as unobtrusive as possible and possess more stability, reliability, and fault tolerance during application execution. As applications will be more likely at the exascale to experience loss of resources during an execution, the OS must mitigate such a loss with a range of responses. New fault tolerance paradigms must be developed and integrated into applications. Just as application input and output must not be an afterthought in hardware design, job management, too, must not be an afterthought in system software design. Efficient scheduling of those resources will be a major obstacle faced by leadership computing centers at the exas...
US computer research networks: Domestic and international telecommunications capacity requirements
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Sood, D.
1990-01-01
The future telecommunications capacity and connectivity requirements of the United States (US) research and development (R&D) community raise two concerns. First, would there be adequate privately-owned communications capacity to meet the ever-increasing requirements of the US R&D community for domestic and international connectivity? Second, is the method of piecemeal implementation of communications facilities by individual researchers cost effective when viewed from an integrated perspective? To address the capacity issue, Contel recently completed a study for NASA identifying the current domestic R&D telecommunications capacity and connectivity requirements, and projecting the same to the years 1991, 1996, 2000, and 2010. The work reported here extends the scope of an earlier study by factoring in the impact of international connectivity requirements on capacity and connectivity forecasts. Most researchers in foreign countries, as is the case with US researchers, rely on regional, national or continent-wide networks to collaborate with each other, and their US counterparts. The US researchers' international connectivity requirements, therefore, stem from the need to link the US domestic research networks to foreign research networks. The number of links and, more importantly, the speeds of links are invariably determined by the characteristics of the networks being linked. The major thrust of this study, therefore, was to identify and characterize the foreign research networks, to quantify the current status of their connectivity to the US networks, and to project growth in the connectivity requirements to years 1991, 1996, 2000, and 2010 so that a composite picture of the US research networks in the same years could be forecasted. The current (1990) US integrated research network, and its connectivity to foreign research networks is shown. As an example of projections, the same for the year 2010 is shown.
Uneke, Chigozie Jesse; Ezeoha, Abel Ebeh; Uro-Chukwu, Henry; Ezeonu, Chinonyelum Thecla; Ogbu, Ogbonnaya; Onwe, Friday; Edoga, Chima
2015-01-01
Information and communication technology (ICT) tools are known to facilitate communication and processing of information and sharing of knowledge by electronic means. In Nigeria, the lack of adequate capacity on the use of ICT by health sector policymakers constitutes a major impediment to the uptake of research evidence into the policymaking process. The objective of this study was to improve the knowledge and capacity of policymakers to access and utilize policy relevant evidence. A modified “before and after” intervention study design was used in which outcomes were measured on the target participants both before and after the intervention was implemented. A 4-point Likert scale according to the degree of adequacy (1 = grossly inadequate, 4 = very adequate) was employed. This study was conducted in Ebonyi State, south-eastern Nigeria and the participants were career health policy makers. A two-day intensive ICT training workshop was organized for policymakers, with 52 participants in attendance. Topics covered included: (i) intersectoral partnership/collaboration; (ii) engaging ICT in evidence-informed policy making; (iii) use of ICT for evidence synthesis; and (iv) capacity development on the use of computers, the internet and other ICT. The pre-workshop mean of knowledge and capacity for use of ICT ranged from 2.19-3.05, while the post-workshop mean ranged from 2.67-3.67 on the 4-point scale. The percentage increase in mean of knowledge and capacity at the end of the workshop ranged from 8.3%-39.1%. Findings of this study suggest that policymakers’ ICT competence relevant to evidence-informed policymaking can be enhanced through a training workshop. PMID:26448807
Models for estimating runway landing capacity with Microwave Landing System (MLS)
NASA Technical Reports Server (NTRS)
Tosic, V.; Horonjeff, R.
1975-01-01
A model is developed which is capable of computing the ultimate landing runway capacity, under ILS and MLS conditions, when aircraft population characteristics and air traffic control separation rules are given. This model can be applied in situations when only a horizontal separation between aircraft approaching a runway is allowed, as well as when both vertical and horizontal separations are possible. It is assumed that the system is free of errors, that is, that aircraft arrive at specified points along the prescribed flight path precisely when the controllers intend for them to arrive at these points. Although in the real world there is no such thing as an error-free system, the assumption is adequate for a qualitative comparison of MLS with ILS. Results suggest that an increase in runway landing capacity, caused by introducing the MLS multiple approach paths, is to be expected only when an aircraft population consists of aircraft with significantly differing approach speeds and particularly in situations when vertical separation can be applied. Vertical separation can only be applied if one of the types of aircraft in the mix has a very steep descent angle.
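A minimal sketch of an error-free ultimate-capacity calculation of this kind is given below; the separation matrix, approach speeds, and common-path length are hypothetical inputs, and the formulation is the standard lead/trail interarrival-time model rather than the paper's exact equations.

```python
import itertools

def ultimate_landing_capacity(classes, separation_nm, common_path_nm=6.0):
    """Error-free ultimate landing capacity (arrivals/hour).

    classes: dict name -> (approach_speed_kt, mix_fraction)
    separation_nm: dict (lead, trail) -> minimum in-trail separation (nm),
    enforced at the threshold when the trailer is faster and at the
    common-path entry gate when the trailer is slower.
    """
    expected_gap_h = 0.0
    for (lead, (v_i, p_i)), (trail, (v_j, p_j)) in itertools.product(
            classes.items(), repeat=2):
        delta = separation_nm[(lead, trail)]
        if v_j >= v_i:          # trailer closes on the leader: control at threshold
            t_ij = delta / v_j
        else:                   # gap opens along the common path: control at the gate
            t_ij = delta / v_j + common_path_nm * (1.0 / v_j - 1.0 / v_i)
        expected_gap_h += p_i * p_j * t_ij
    return 1.0 / expected_gap_h   # arrivals per hour

# Hypothetical two-class mix (speeds in knots, fractions of the population).
classes = {"small": (110.0, 0.4), "large": (140.0, 0.6)}
separation = {("small", "small"): 3, ("small", "large"): 3,
              ("large", "small"): 5, ("large", "large"): 4}
print(round(ultimate_landing_capacity(classes, separation), 1))
```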
Malinowsky, Camilla; Almkvist, Ove; Nygård, Louise; Kottorp, Anders
2012-03-01
The ability to manage everyday technology (ET), such as computers and microwave ovens, is increasingly required in the performance of everyday activities and participation in society. This study aimed to identify aspects that influence the ability to manage ET among older adults with and without cognitive impairment. Older adults with mild Alzheimer's disease and mild cognitive impairment and without known cognitive impairment were assessed as they managed their ET at home. Data were collected using the Management of Everyday Technology Assessment (META). Rasch-based measures of the person's ability to manage ET were analyzed. These measures were used as dependent variables in backward procedure ANOVA analyses. Different predefined aspects that could influence the ability to manage ET were used as independent variables. Three aspects had a significant effect upon the ability to manage ET. These were: (1) variability in intrapersonal capacities (such as "the capacity to pay attention and focus"), (2) environmental characteristics (such as "the impact of the design") and (3) diagnostic group. Variability in intrapersonal capacities seems to be of more importance than the actual level of intrapersonal capacity in relation to the ability to manage ET for this sample. This implies that investigations of ability to manage ET should also include intrapersonal variability. Additionally, adaptations in environmental characteristics could simplify the management of ET to support older adults as technology users.
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies of data storing, computing and analyzing. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed based on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark in conjunction with the scientific computing environment, exploratory spatial data analysis tools, temporal data management and analysis systems make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides spatiotemporal computational models and advanced geospatial visualization tools for other domains that involve spatial properties. We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run time performance in spatiotemporal big data applications.
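As an indication of the style of Spark-based computation described (gridding taxi trajectory points), a minimal PySpark sketch follows; the input path, record schema, and cell size are assumptions, and it uses plain PySpark rather than GISpark's own interfaces.

```python
# Assumes a working Spark installation and a CSV of "taxi_id,timestamp,lon,lat" records.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("taxi-grid-count").getOrCreate()

CELL_DEG = 0.01  # roughly 1 km grid cells at mid latitudes

def to_cell(lon, lat):
    """Map a coordinate to a coarse grid-cell key."""
    return (int(float(lon) / CELL_DEG), int(float(lat) / CELL_DEG))

points = (spark.sparkContext
          .textFile("hdfs:///data/taxi_trajectories.csv")   # hypothetical path
          .map(lambda line: line.split(","))
          .filter(lambda rec: len(rec) == 4))

# Count GPS fixes per grid cell in parallel across the cluster.
counts = (points
          .map(lambda rec: (to_cell(rec[2], rec[3]), 1))
          .reduceByKey(lambda a, b: a + b))

for cell, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(cell, n)

spark.stop()
```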
Munabi, Ian G; Buwembo, William; Bajunirwe, Francis; Kitara, David Lagoro; Joseph, Ruberwa; Peter, Kawungezi; Obua, Celestino; Quinn, John; Mwaka, Erisa S
2015-02-25
Effective utilization of computers and their applications in medical education and research is of paramount importance to students. The objective of this study was to determine the association between owning a computer and use of computers for research data analysis and the other factors influencing health professions students' computer use for data analysis. We conducted a cross-sectional study among undergraduate health professions students at three public universities in Uganda using a self-administered questionnaire. The questionnaire was composed of questions on participant demographics, students' participation in research, computer ownership, and use of computers for data analysis. Descriptive and inferential statistics (uni-variable and multi-level logistic regression analysis) were used to analyse data. The level of significance was set at 0.05. Six hundred (600) of 668 questionnaires were completed and returned (response rate 89.8%). A majority of respondents were male (68.8%) and 75.3% reported owning computers. Overall, 63.7% of respondents reported that they had ever done computer based data analysis. The following factors were significant predictors of having ever done computer based data analysis: ownership of a computer (adj. OR 1.80, p = 0.02), a recently completed course in statistics (adj. OR 1.48, p = 0.04), and participation in research (adj. OR 2.64, p < 0.01). Owning a computer, participation in research and undertaking courses in research methods influence undergraduate students' use of computers for research data analysis. Students are increasingly participating in research, and thus need to have competencies for the successful conduct of research. Medical training institutions should encourage both curricular and extra-curricular efforts to enhance research capacity in line with the modern theories of adult learning.
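The kind of multivariable logistic regression reported can be sketched as follows; the data here are synthetic, generated only to mimic the reported adjusted odds ratios, and statsmodels is an assumed tool choice rather than the authors' software.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600
owns_computer = rng.binomial(1, 0.75, n)
stats_course = rng.binomial(1, 0.5, n)
did_research = rng.binomial(1, 0.4, n)

# Synthetic outcome with log-odds roughly matching the reported adjusted ORs.
log_odds = (-0.6 + np.log(1.8) * owns_computer
            + np.log(1.5) * stats_course
            + np.log(2.6) * did_research)
used_computer_analysis = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Multivariable logistic regression; exponentiated coefficients are adjusted ORs.
X = sm.add_constant(np.column_stack([owns_computer, stats_course, did_research]))
fit = sm.Logit(used_computer_analysis, X).fit(disp=False)
print(np.exp(fit.params[1:]))   # adjusted odds ratios for the three predictors
print(fit.pvalues[1:])
```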
Study on load-bearing characteristics of a new pile group foundation for an offshore wind turbine.
Lang, Ruiqing; Liu, Run; Lian, Jijian; Ding, Hongyan
2014-01-01
Because offshore wind turbines are high-rise structures, they transfer large horizontal loads and moments to their foundations. One of the keys to designing a foundation is determining the sensitivities and laws affecting its load-bearing capacity. In this study, this procedure was carried out for a new high-rise cap pile group foundation adapted to the loading characteristics of offshore wind turbines. The sensitivities of influential factors affecting the bearing properties were determined using an orthogonal test. Through a combination of numerical simulations and model tests, the effects of the inclination angle, length, diameter, and number of side piles on the vertical bearing capacity, horizontal bearing capacity, and bending bearing capacity were determined. The results indicate that an increase in the inclination angle of the side piles will increase the vertical bearing capacity, horizontal bearing capacity, and bending bearing capacity. An increase in the length of the side piles will increase the vertical bearing capacity and bending bearing capacity. When the length of the side piles is close to the central pile, the increase is more apparent. Finally, increasing the number of piles will increase the horizontal bearing capacity; however, the growth rate is small because of the pile group effect.
Theta Coordinated Error-Driven Learning in the Hippocampus
Ketz, Nicholas; Morkonda, Srinimisha G.; O'Reilly, Randall C.
2013-01-01
The learning mechanism in the hippocampus has almost universally been assumed to be Hebbian in nature, where individual neurons in an engram join together with synaptic weight increases to support facilitated recall of memories later. However, it is also widely known that Hebbian learning mechanisms impose significant capacity constraints, and are generally less computationally powerful than learning mechanisms that take advantage of error signals. We show that the differential phase relationships of hippocampal subfields within the overall theta rhythm enable a powerful form of error-driven learning, which results in significantly greater capacity, as shown in computer simulations. In one phase of the theta cycle, the bidirectional connectivity between CA1 and entorhinal cortex can be trained in an error-driven fashion to learn to effectively encode the cortical inputs in a compact and sparse form over CA1. In a subsequent portion of the theta cycle, the system attempts to recall an existing memory, via the pathway from entorhinal cortex to CA3 and CA1. Finally the full theta cycle completes when a strong target encoding representation of the current input is imposed onto the CA1 via direct projections from entorhinal cortex. The difference between this target encoding and the attempted recall of the same representation on CA1 constitutes an error signal that can drive the learning of CA3 to CA1 synapses. This CA3 to CA1 pathway is critical for enabling full reinstatement of recalled hippocampal memories out in cortex. Taken together, these new learning dynamics enable a much more robust, high-capacity model of hippocampal learning than was available previously under the classical Hebbian model. PMID:23762019
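A schematic contrast between a Hebbian update and the error-driven (delta-rule) update enabled by the theta-phase arrangement is sketched below; the network sizes, activation function, and learning rate are illustrative assumptions, not the published model's equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ca3, n_ca1 = 50, 40
lrate = 0.1

def hebbian_update(W, ca3, ca1_target):
    # Strengthen weights whenever pre- and post-synaptic units are co-active.
    return W + lrate * np.outer(ca1_target, ca3)

def error_driven_update(W, ca3, ca1_target):
    # Recall phase: CA3 drives a guess at the CA1 representation.
    ca1_recall = 1.0 / (1.0 + np.exp(-(W @ ca3)))
    # Encoding phase imposes the target; the difference is the error signal.
    error = ca1_target - ca1_recall
    return W + lrate * np.outer(error, ca3)

pattern_ca3 = rng.binomial(1, 0.2, n_ca3).astype(float)
pattern_ca1 = rng.binomial(1, 0.2, n_ca1).astype(float)

W_hebb = hebbian_update(np.zeros((n_ca1, n_ca3)), pattern_ca3, pattern_ca1)
W_err = error_driven_update(np.zeros((n_ca1, n_ca3)), pattern_ca3, pattern_ca1)
print("Hebbian weight change:", np.abs(W_hebb).sum())
print("Error-driven weight change:", np.abs(W_err).sum())
```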
Frequency Reuse, Cell Separation, and Capacity Analysis of VHF Digital Link Mode 3 TDMA
NASA Technical Reports Server (NTRS)
Shamma, Mohammed A.; Nguyen, Thanh C.; Apaza, Rafael D.
2003-01-01
The most recent studies by the Federal Aviation Administration (FAA) and the aviation industry have indicated that it has become increasingly difficult to make new VHF frequency or channel assignments to meet the aviation needs for air-ground communications. FAA has planned for several aggressive improvement measures to the existing systems, but these measures would not meet the projected voice communications needs beyond 2009. FAA found that since 1974 there has been, on the average, a 4 percent annual increase in the number of channel assignments needed to satisfy the air-ground communication traffic (approximately 300 new channel assignments per year). With the planned improvement measures, the channel assignments are expected to reach a maximum number of 16615 channels by about 2010. Hence, the FAA proposed the use of VDL Mode 3 as a new integrated digital voice and data communications system to meet the future air traffic demand. This paper presents analytical results of frequency reuse, cell separation, and capacity estimation for VDL Mode 3 TDMA systems, with which the FAA plans to implement the future VHF air-ground communications system by the year 2010. For TDMA, it is well understood that the frequency reuse factor is a crucial parameter for capacity estimation. Formulation of this frequency reuse factor is shown, taking into account the limitation imposed by the requirement to have a sufficient Signal to Co-Channel Interference Ratio. Several different values for the Signal to Co-Channel Interference Ratio were utilized corresponding to the current analog VHF DSB-AM systems and the future digital VDL Mode 3. The required separation of Co-Channel cells is computed for most of the Frequency Protected Service Volumes (FPSV's) currently in use by the FAA. Additionally, the ideal cell capacity for each FPSV is presented. Also, using actual traffic for the Detroit air space, a FPSV traffic distribution model is used to generate a typical cell for channel capacity prediction. Such prediction is useful for evaluating the improvement of future VDL Mode 3 deployment and capacity planning.
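A hedged sketch of the standard hexagonal-reuse calculation behind such an estimate is shown below; the path-loss exponent, service-volume radius, and C/I targets are assumed illustrative values, not figures from the paper.

```python
import math

def required_cluster_size(ci_db, path_loss_exp=4.0, n_interferers=6):
    """Smallest valid hexagonal cluster size meeting the C/I requirement.

    Uses the standard approximation C/I = (D/R)^n / 6 for six first-tier
    co-channel interferers, with D/R = sqrt(3N).
    """
    ci_lin = 10 ** (ci_db / 10.0)
    d_over_r = (n_interferers * ci_lin) ** (1.0 / path_loss_exp)
    n_min = d_over_r ** 2 / 3.0
    # Valid hexagonal cluster sizes satisfy N = i^2 + i*j + j^2 for integers i, j.
    valid = sorted({i * i + i * j + j * j for i in range(8) for j in range(8)} - {0})
    return next(n for n in valid if n >= n_min), d_over_r

def cochannel_separation(cell_radius_nmi, cluster_size):
    """Co-channel cell separation D = R * sqrt(3N)."""
    return cell_radius_nmi * math.sqrt(3 * cluster_size)

for ci_db in (14.0, 20.0):   # illustrative C/I requirements, not the study's values
    n, d_over_r = required_cluster_size(ci_db)
    print(ci_db, n, round(cochannel_separation(60.0, n), 1))
```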
Story Games: Marrying Silicon, Celluloid, and CD-ROM.
ERIC Educational Resources Information Center
Gussin, Lawrence
1994-01-01
Reports on themes emphasized at the April 1994 Computer Game Developers Conference held in Santa Clara (California), including the exploding CD-ROM marketplace and the potential and challenge of using CD-ROM's multimedia capacity to build cinema-quality stories and characters into computer games. Strategies for introducing more complex plots are…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.
Computer Technology and Its Impact on Recreation and Sport Programs.
ERIC Educational Resources Information Center
Ross, Craig M.
This paper describes several types of computer programs that can be useful to sports and recreation programs. Computerized tournament scheduling software is helpful to recreation and parks staff working with tournaments of 50 teams/individuals or more. Important features include team capacity, league formation, scheduling conflicts, scheduling…
31 CFR 353.11 - Computation of amount.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false Computation of amount. 353.11 Section 353.11 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... separately from purchases in a fiduciary capacity. A pension or retirement fund, or an investment, insurance...
31 CFR 353.11 - Computation of amount.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Computation of amount. 353.11 Section 353.11 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... separately from purchases in a fiduciary capacity. A pension or retirement fund, or an investment, insurance...
31 CFR 353.11 - Computation of amount.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Computation of amount. 353.11 Section 353.11 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... separately from purchases in a fiduciary capacity. A pension or retirement fund, or an investment, insurance...
31 CFR 353.11 - Computation of amount.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false Computation of amount. 353.11 Section 353.11 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... separately from purchases in a fiduciary capacity. A pension or retirement fund, or an investment, insurance...
31 CFR 353.11 - Computation of amount.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false Computation of amount. 353.11 Section 353.11 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... separately from purchases in a fiduciary capacity. A pension or retirement fund, or an investment, insurance...
Printing Cyrillic and Other "Funny Characters" from a Computer.
ERIC Educational Resources Information Center
Gribble, Charles E.
This paper reviews recent developments in the technology relating to microcomputer printing of the Cyrillic alphabet and related forms of Roman alphabet with diacritics used in Slavic and East European languages. The review includes information on the capacities of printers, computers (particularly the display capabilities), and interfaces…
Bioinformatics and Astrophysics Cluster (BinAc)
NASA Astrophysics Data System (ADS)
Krüger, Jens; Lutz, Volker; Bartusch, Felix; Dilling, Werner; Gorska, Anna; Schäfer, Christoph; Walter, Thomas
2017-09-01
BinAC provides central high performance computing capacities for bioinformaticians and astrophysicists from the state of Baden-Württemberg. The bwForCluster BinAC is part of the implementation concept for scientific computing for the universities in Baden-Württemberg. Community specific support is offered through the bwHPC-C5 project.
The Computational Infrastructure for Geodynamics as a Community of Practice
NASA Astrophysics Data System (ADS)
Hwang, L.; Kellogg, L. H.
2016-12-01
Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.
Job Management and Task Bundling
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André
2018-03-01
High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
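The bundling idea can be illustrated with a simple greedy packer; the task list, node counts, and wall-time limits below are invented, and the sketch is not the METAQ or mpi_jm implementation.

```python
def bundle_tasks(tasks, total_nodes, walltime_hours):
    """Greedy first-fit packing of (name, nodes, hours) tasks into node 'lanes'."""
    lanes = [0.0] * total_nodes          # hours already committed on each node
    schedule = []
    for name, nodes, hours in sorted(tasks, key=lambda t: -t[2]):
        # Pick the least-loaded nodes and check the task fits in the wall time.
        candidates = sorted(range(total_nodes), key=lambda i: lanes[i])[:nodes]
        start = max(lanes[i] for i in candidates)
        if start + hours <= walltime_hours:
            for i in candidates:
                lanes[i] = start + hours
            schedule.append((name, candidates, start))
    used = sum(lanes)
    print(f"utilization: {used / (total_nodes * walltime_hours):.0%}")
    return schedule

# Hypothetical workload: many small lattice configurations plus analysis tasks.
tasks = [(f"cfg{i}", 4, 1.5) for i in range(20)] + [("analysis", 2, 0.5)] * 10
for name, nodes, start in bundle_tasks(tasks, total_nodes=16, walltime_hours=6.0):
    print(name, "on nodes", nodes, "at t =", start)
```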
Computational Astrophysics Towards Exascale Computing and Big Data
NASA Astrophysics Data System (ADS)
Astsatryan, H. V.; Knyazyan, A. V.; Mickaelian, A. M.
2016-06-01
Traditionally, Armenia has held a leading position in both the computer science and Information Technology sector and the Astronomy and Astrophysics sector in the South Caucasus region and beyond. For instance, in recent years Information Technology (IT) has become one of the fastest growing industries of the Armenian economy (EIF 2013). The main objective of this article is to highlight the key activities that will spur Armenia to strengthen its computational astrophysics capacity, based on an analysis of current trends in e-Infrastructures worldwide.
Simulation of Laboratory Tests of Steel Arch Support
NASA Astrophysics Data System (ADS)
Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel; Pacześniowski, Krzysztof
2017-03-01
The total load-bearing capacity of steel arch yielding roadway supports is among their most important characteristics. These values can be obtained in two ways: experimental measurements in a specialized laboratory or computer modelling by FEM. Experimental measurements are significantly more expensive and more time-consuming. However, a properly tuned computer model is very valuable, and experiments provide the verification needed for such tuning. In the cooperating workplaces of GIG Katowice, VSB-Technical University of Ostrava and the Institute of Geonics ASCR this verification was successful. The present article discusses the conditions and results of this verification for static problems. The output is a tuned computer model, which may be used for other calculations to obtain the load-bearing capacity of other types of steel arch supports. Changes in other parameters, such as the material properties of the steel, torque settings, friction coefficient values, etc., can be evaluated relatively quickly by changing the properties of the investigated steel arch supports.
System capacity and economic modeling computer tool for satellite mobile communications systems
NASA Technical Reports Server (NTRS)
Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.
1988-01-01
A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
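In the same spirit, a much-simplified coupling of a capacity estimate to a cash-flow projection might look like the following; every parameter value is an illustrative assumption rather than an input of the described model.

```python
def channel_capacity(satellite_bandwidth_mhz, channel_bandwidth_khz, reuse_factor):
    """Voice-channel capacity of the satellite resource."""
    return int(satellite_bandwidth_mhz * 1000 / channel_bandwidth_khz * reuse_factor)

def income_statement(channels, fill_factor, price_per_channel_year,
                     capex, opex_per_year, years=10):
    """Very simple undiscounted cash-flow table for the system."""
    rows = []
    cumulative = -capex
    for year in range(1, years + 1):
        revenue = channels * fill_factor * price_per_channel_year
        cumulative += revenue - opex_per_year
        rows.append((year, revenue, opex_per_year, cumulative))
    return rows

channels = channel_capacity(satellite_bandwidth_mhz=10, channel_bandwidth_khz=5,
                            reuse_factor=4)
for year, revenue, opex, cumulative in income_statement(
        channels, fill_factor=0.6, price_per_channel_year=2_000,
        capex=50_000_000, opex_per_year=4_000_000):
    print(year, f"revenue={revenue:,.0f}", f"cumulative={cumulative:,.0f}")
```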
Remote Earth Sciences data collection using ACTS
NASA Technical Reports Server (NTRS)
Evans, Robert H.
1992-01-01
Given the focus on global change and the attendant scope of such research, we anticipate significant growth of requirements for investigator interaction, processing system capabilities, and availability of data sets. The increased complexity of global processes requires interdisciplinary teams to address them; the investigators will need to interact on a regular basis; however, it is unlikely that a single institution will house sufficient investigators with the required breadth of skills. The complexity of the computations may also require resources beyond those located within a single institution; this lack of sufficient computational resources leads to a distributed system located at geographically dispersed institutions. Finally, the combination of long-term data sets like the Pathfinder datasets and the data to be gathered by new generations of satellites such as SeaWiFS and MODIS-N yields extraordinarily large amounts of data. All of these factors combine to increase demands on the communications facilities available; the demands are generating requirements for highly flexible, high capacity networks. We have been examining the applicability of the Advanced Communications Technology Satellite (ACTS) to address the scientific, computational, and, primarily, communications questions resulting from global change research. As part of this effort, three scenarios for oceanographic use of ACTS have been developed; a full discussion of this is contained in Appendix B.
Gupta, Mansi; Bansal, Vishal; Chhabra, Sunil K
2013-08-01
Chronotropic incompetence (CI; failure to reach the targeted heart rate (HR) on exercise) and a delayed HR recovery (HRR; ≤12 beats decline within the first minute after cessation) reflect autonomic dysfunction (AD) and predict adverse cardiac prognosis. As chronic obstructive pulmonary disease (COPD) is known to be associated with AD, we hypothesized that these patients may manifest these responses on exercise. The prevalence and predictors of these responses in COPD and their association with its severity have not been evaluated. Normoxemic, stable male patients with COPD (n = 39) and 11 healthy controls underwent lung function testing and incremental leg ergometry. HR responses were monitored during exercise and recovery to compute the HRR and CI. Of all the patients, 33 (84.6%) had at least one of the two exercise responses as abnormal, with the majority (23, 58.9%) having both an abnormal HRR and CI. The frequency of abnormal responses increased with increasing Global Initiative for Chronic Obstructive Lung Disease stage and body mass index, airflow obstruction, dyspnoea and exercise capacity index. After adjusting for smoking history and post-bronchodilator forced expiratory volume in 1 second, only a reduced diffusion capacity for carbon monoxide predicted abnormal HRR, though weakly. We concluded that abnormal HRR and CI are common in patients with COPD. These responses are observed with increasing frequency as the severity of disease increases.
NASA Astrophysics Data System (ADS)
Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan
2012-09-01
The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and their falling costs is contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning rates to process terabyte sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate and provide an elastic computing model without the requirement for large centralized high performance computing data centers. This paper demonstrates how a base 10 order of magnitude improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
Designing Facilities for Collaborative Operations
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana
2003-01-01
A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs (for example, see figure) and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of .4, while a small conference room that contains a projection screen has an effective capacity of around 10. Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: At best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication and Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.
A traveling-salesman-based approach to aircraft scheduling in the terminal area
NASA Technical Reports Server (NTRS)
Luenberger, Robert A.
1988-01-01
An efficient algorithm is presented, based on the well-known algorithm for the traveling salesman problem, for scheduling aircraft arrivals into major terminal areas. The algorithm permits, but strictly limits, reassigning an aircraft from its initial position in the landing order. This limitation is needed so that no aircraft or aircraft category is unduly penalized. Results indicate, for the mix of arrivals investigated, a potential increase in capacity in the 3 to 5 percent range. Furthermore, it is shown that the computation time for the algorithm grows only linearly with problem size.
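A minimal way to see the effect of limiting position shifts is to enumerate landing orders that keep every aircraft within a fixed number of positions of its first-come-first-served slot and pick the order with the earliest completion time. The separation matrix and shift limit below are hypothetical, and exhaustive enumeration is a stand-in for the paper's more efficient algorithm.

```python
from itertools import permutations

# Hypothetical required separations (seconds) between a leading and a trailing
# aircraft, indexed by weight class: 0 = Heavy, 1 = Large, 2 = Small.
SEP = [[ 90, 120, 180],
       [ 60,  90, 120],
       [ 60,  60,  90]]

def makespan(order, classes):
    """Time between the first and last landing for a given landing order."""
    t = 0
    for lead, trail in zip(order, order[1:]):
        t += SEP[classes[lead]][classes[trail]]
    return t

def best_schedule(classes, max_shift=2):
    """Brute-force constrained-position-shift scheduling (small instances only)."""
    n = len(classes)
    best = None
    for order in permutations(range(n)):
        # Keep every aircraft within max_shift positions of its FCFS slot.
        if all(abs(pos - ac) <= max_shift for pos, ac in enumerate(order)):
            t = makespan(order, classes)
            if best is None or t < best[0]:
                best = (t, order)
    return best

fcfs_classes = [2, 0, 2, 1, 0, 2, 1]   # arrival mix in FCFS order (hypothetical)
print(best_schedule(fcfs_classes, max_shift=2))
```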
ERIC Educational Resources Information Center
Mobray, Deborah, Ed.
Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…
The LHCb software and computing upgrade for Run 3: opportunities and challenges
NASA Astrophysics Data System (ADS)
Bozzi, C.; Roiser, S.; LHCb Collaboration
2017-10-01
The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.
The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.
Anupama, Jigisha; Francescatto, Margherita; Rahman, Farzana; Fatima, Nazeefa; DeBlasio, Dan; Shanmugam, Avinash Kumar; Satagopam, Venkata; Santos, Alberto; Kolekar, Pandurang; Michaut, Magali; Guney, Emre
2018-01-01
Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luetzow, H.B.v.
1983-08-01
Following an introduction, the paper discusses in section 2 the collection or generation of final geodetic data from conventional surveys, satellite observations, satellite altimetry, the Global Positioning System, and moving base gravity gradiometers. Section 3 covers data utilization and accuracy aspects including gravity programmed inertial positioning and subterraneous mass detection. Section 4 addresses the usefulness and limitation of the collocation method of physical geodesy. Section 5 is concerned with the computation of classical climatological data. In section 6, meteorological data assimilation is considered. Section 7 deals with correlated aspects of initial data generation with emphasis on initial wind field determination, parameterized and classical hydrostatic prediction models, non-hydrostatic prediction, computational networks, and computer capacity. The paper concludes that geodetic and meteorological data are expected to become increasingly more diversified and voluminous both regionally and globally, that its general availability will be more or less restricted for some time to come, that its quality and quantity are subject to change, and that meteorological data generation, accuracy and density have to be considered in conjunction with advanced as well as cost-effective numerical weather prediction models and associated computational efforts.
Verification of a VRF Heat Pump Computer Model in EnergyPlus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigusse, Bereket; Raustad, Richard
2013-06-15
This paper provides verification results of the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides quantitative comparison of full and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual range quadratic performance curves as a function of part-load-ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual range equation fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
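The dual-range biquadratic form used for the capacity and EIR modifiers can be evaluated in a few lines of code. The coefficients and boundary temperature below are hypothetical placeholders, not manufacturer or EnergyPlus defaults; they only illustrate how an indoor/outdoor temperature pair maps to a capacity ratio.

```python
def biquadratic(t_in, t_out, c):
    """EnergyPlus-style biquadratic curve with coefficients c = (a, b, c2, d, e, f)."""
    a, b, c2, d, e, f = c
    return a + b * t_in + c2 * t_in**2 + d * t_out + e * t_out**2 + f * t_in * t_out

def capacity_ratio(t_in_wb, t_out_db, boundary=23.0,
                   low_coeffs=(0.95, 0.010, 0.0002, -0.003, -0.0001, 0.0003),
                   high_coeffs=(1.10, 0.008, 0.0001, -0.010, -0.0002, 0.0004)):
    """Dual-range capacity modifier: one coefficient set below the boundary
    outdoor temperature, another above it (all values hypothetical)."""
    coeffs = low_coeffs if t_out_db <= boundary else high_coeffs
    return biquadratic(t_in_wb, t_out_db, coeffs)

rated_cooling_capacity_w = 28000.0            # hypothetical rated capacity
cap = rated_cooling_capacity_w * capacity_ratio(t_in_wb=19.4, t_out_db=35.0)
print(f"available cooling capacity: {cap:.0f} W")
```

An EIR modifier of the same dual-range form and the part-load quadratic would be applied in the same way to obtain electric power.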
Wilson, Gary L.; Richards, Joseph M.
2006-01-01
Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
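The 95 percent vertical accuracy figures quoted above are typically computed from check-point residuals; one common convention (the National Standard for Spatial Data Accuracy) multiplies the RMSE of the residuals by 1.96, assuming normally distributed errors. The sketch below assumes that convention and uses made-up depth values.

```python
import numpy as np

# Hypothetical surveyed depths vs. independent quality-assurance depths (feet).
surveyed = np.array([12.1, 8.4, 15.3, 20.2, 5.9, 10.8])
qa_check = np.array([11.8, 8.7, 15.0, 19.6, 6.2, 11.2])

residuals = surveyed - qa_check
rmse_z = np.sqrt(np.mean(residuals**2))
accuracy_95 = 1.96 * rmse_z          # vertical accuracy at the 95% confidence level
print(f"RMSE_z = {rmse_z:.2f} ft, vertical accuracy (95%) = {accuracy_95:.2f} ft")
```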
Hammer, Jort; Haftka, Joris J-H; Scherpenisse, Peter; Hermens, Joop L M; de Voogt, Pim W P
2017-02-01
To predict the fate and potential effects of organic contaminants, information about their hydrophobicity is required. However, common parameters to describe the hydrophobicity of organic compounds (e.g., the octanol-water partition constant [KOW]) proved to be inadequate for ionic and nonionic surfactants because of their surface-active properties. As an alternative approach to determine their hydrophobicity, the aim of the present study was therefore to measure the retention of a wide range of surfactants on a C18 stationary phase. Capacity factors in pure water (k'0) increased linearly with increasing number of carbon atoms in the surfactant structure. Fragment contribution values were determined for each structural unit with multilinear regression, and the results were consistent with the expected influence of these fragments on the hydrophobicity of surfactants. Capacity factors of reference compounds and log KOW values from the literature were used to estimate log KOW values for surfactants (log KOW,HPLC). These log KOW,HPLC values were also compared to log KOW values calculated with 4 computational programs: KOWWIN, Marvin calculator, SPARC, and COSMOThermX. In conclusion, capacity factors from a C18 stationary phase are found to better reflect hydrophobicity of surfactants than their KOW values. Environ Toxicol Chem 2017;36:329-336. © 2016 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
Communication Limits Due to Photon-Detector Jitter
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Farr, William H.
2008-01-01
A theoretical and experimental study was conducted of the limit imposed by photon-detector jitter on the capacity of a pulse-position-modulated optical communication system in which the receiver operates in a photon-counting (weak-signal) regime. Photon-detector jitter is a random delay between impingement of a photon and generation of an electrical pulse by the detector. In the study, jitter statistics were computed from jitter measurements made on several photon detectors. The probability density of jitter was mathematically modeled by use of a weighted sum of Gaussian functions. Parameters of the model were adjusted to fit histograms representing the measured-jitter statistics. Likelihoods of assigning detector-output pulses to correct pulse time slots in the presence of jitter were derived and used to compute channel capacities and corresponding losses due to jitter. It was found that the loss, expressed as the ratio between the signal power needed to achieve a specified capacity in the presence of jitter and that needed to obtain the same capacity in the absence of jitter, is well approximated as a quadratic function of the standard deviation of the jitter in units of pulse-time-slot duration.
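The weighted-sum-of-Gaussians jitter model can be reproduced in outline by fitting a two-component Gaussian mixture to measured jitter samples and asking how often the jitter stays inside a pulse time slot. The samples, slot width, and component count below are hypothetical, and the study's capacity calculation is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical detector jitter samples (picoseconds): a narrow core plus a broad tail.
jitter_ps = np.concatenate([rng.normal(0.0, 40.0, 8000),
                            rng.normal(25.0, 150.0, 2000)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(jitter_ps.reshape(-1, 1))
weights = gmm.weights_
means = gmm.means_.ravel()
sigmas = np.sqrt(gmm.covariances_.ravel())

# Probability that a detection falls within its (hypothetical) 500 ps time slot.
half_slot = 250.0
p_in_slot = sum(w * (norm.cdf(half_slot, m, s) - norm.cdf(-half_slot, m, s))
                for w, m, s in zip(weights, means, sigmas))
print(f"mixture weights {weights}, P(correct slot) ~ {p_in_slot:.3f}")
```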
Spatiotemporal analysis of the agricultural drought risk in Heilongjiang Province, China
NASA Astrophysics Data System (ADS)
Pei, Wei; Fu, Qiang; Liu, Dong; Li, Tian-xiao; Cheng, Kun; Cui, Song
2017-06-01
Droughts are natural disasters that pose significant threats to agricultural production as well as living conditions, and a spatial-temporal difference analysis of agricultural drought risk can help determine the spatial distribution and temporal variation of the drought risk within a region. Moreover, this type of analysis can provide a theoretical basis for the identification, prevention, and mitigation of drought disasters. In this study, the overall dispersion and local aggregation of projection points were based on research by Friedman and Tukey (IEEE Trans on Computer 23:881-890, 1974). In this work, high-dimensional samples were clustered by cluster analysis. The clustering results were represented by the clustering matrix, which determined the local density in the projection index. This method avoids the problem of determining a cutoff radius. An improved projection pursuit model is proposed that combines cluster analysis and the projection pursuit model, which offer advantages for classification and assessment, respectively. The improved model was applied to analyze the agricultural drought risk of 13 cities in Heilongjiang Province over 6 years (2004, 2006, 2008, 2010, 2012, and 2014). The risk of an agricultural drought disaster was characterized by 14 indicators and the following four aspects: hazard, exposure, sensitivity, and resistance capacity. The spatial distribution and temporal variation characteristics of the agricultural drought risk in Heilongjiang Province were analyzed. The spatial distribution results indicated that Suihua, Qigihar, Daqing, Harbin, and Jiamusi are located in high-risk areas, Daxing'anling and Yichun are located in low-risk areas, and the differences among the regions were primarily caused by the aspects exposure and resistance capacity. The temporal variation results indicated that the risk of agricultural drought in most areas presented an initially increasing and then decreasing trend. A higher value for the exposure aspect increased the risk of drought, whereas a higher value for the resistance capacity aspect reduced the risk of drought. Over the long term, the exposure level of the region presented limited increases, whereas the resistance capacity presented considerable increases. Therefore, the risk of agricultural drought in Heilongjiang Province will continue to exhibit a decreasing trend.
CGAT: a model for immersive personalized training in computational genomics.
Sims, David; Ponting, Chris P; Heger, Andreas
2016-01-01
How should the next generation of genomics scientists be trained while simultaneously pursuing high quality and diverse research? CGAT, the Computational Genomics Analysis and Training programme, was set up in 2010 by the UK Medical Research Council to complement its investment in next-generation sequencing capacity. CGAT was conceived around the twin goals of training future leaders in genome biology and medicine, and providing much needed capacity to UK science for analysing genome scale data sets. Here we outline the training programme employed by CGAT and describe how it dovetails with collaborative research projects to launch scientists on the road towards independent research careers in genomics. © The Author 2015. Published by Oxford University Press.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
A study of the feasibility of statistical analysis of airport performance simulation
NASA Technical Reports Server (NTRS)
Myers, R. H.
1982-01-01
The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they were designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte-Carlo techniques.
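The power computation described above can be sketched as a Monte Carlo experiment: draw capacity samples for two conditions from a skewed (non-Gaussian) distribution, apply the planned test, and count rejections. The distributions, shift, and sample sizes below are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

def mc_power(n_runs=5000, n_per_condition=30, shift=3.0, alpha=0.05, seed=0):
    """Monte Carlo power of a two-sample t-test for a skewed capacity distribution."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_runs):
        # Hypothetical hourly capacities: gamma-distributed (right-skewed).
        base = rng.gamma(shape=20.0, scale=3.0, size=n_per_condition)
        changed = rng.gamma(shape=20.0, scale=3.0, size=n_per_condition) + shift
        if ttest_ind(base, changed).pvalue < alpha:
            rejections += 1
    return rejections / n_runs

print(f"estimated power to detect a +3 operations/hour change: {mc_power():.2f}")
```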
Global distribution of plant-extractable water capacity of soil
Dunne, K.A.; Willmott, C.J.
1996-01-01
Plant-extractable water capacity of soil is the amount of water that can be extracted from the soil to fulfill evapotranspiration demands. It is often assumed to be spatially invariant in large-scale computations of the soil-water balance. Empirical evidence, however, suggests that this assumption is incorrect. In this paper, we estimate the global distribution of the plant-extractable water capacity of soil. A representative soil profile, characterized by horizon (layer) particle size data and thickness, was created for each soil unit mapped by FAO (Food and Agriculture Organization of the United Nations)/Unesco. Soil organic matter was estimated empirically from climate data. Plant rooting depths and ground coverages were obtained from a vegetation characteristic data set. At each 0.5° × 0.5° grid cell where vegetation is present, unit available water capacity (cm water per cm soil) was estimated from the sand, clay, and organic content of each profile horizon, and integrated over horizon thickness. Summation of the integrated values over the lesser of profile depth and root depth produced an estimate of the plant-extractable water capacity of soil. The global average of the estimated plant-extractable water capacities of soil is 8.6 cm (Greenland, Antarctica and bare soil areas excluded). Estimates are less than 5, 10 and 15 cm over approximately 30, 60, and 89 per cent of the area, respectively. Estimates reflect the combined effects of soil texture, soil organic content, and plant root depth or profile depth. The most influential and uncertain parameter is the depth over which the plant-extractable water capacity of soil is computed, which is usually limited by root depth. Soil texture exerts a lesser, but still substantial, influence. Organic content, except where concentrations are very high, has relatively little effect.
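The profile calculation can be written compactly: estimate a unit available water capacity for each horizon from its texture and organic content, and integrate over the lesser of root depth and profile depth. The pedotransfer coefficients below are hypothetical placeholders, not the regression actually used in the paper.

```python
def unit_awc(sand_pct, clay_pct, organic_pct):
    """Hypothetical pedotransfer rule: cm of plant-available water per cm of soil."""
    awc = 0.25 - 0.0018 * sand_pct - 0.0004 * clay_pct + 0.008 * organic_pct
    return max(0.02, min(awc, 0.25))   # keep within a plausible physical range

def plant_extractable_capacity(horizons, root_depth_cm):
    """Integrate unit AWC over min(root depth, profile depth).

    `horizons` is a list of (thickness_cm, sand_pct, clay_pct, organic_pct),
    ordered from the surface downward.
    """
    remaining = root_depth_cm
    total_cm = 0.0
    for thickness, sand, clay, organic in horizons:
        if remaining <= 0:
            break
        used = min(thickness, remaining)     # only the part above the rooting depth
        total_cm += unit_awc(sand, clay, organic) * used
        remaining -= used
    return total_cm

profile = [(20, 45, 20, 3.0), (40, 50, 25, 1.0), (60, 60, 20, 0.3)]  # hypothetical
print(f"plant-extractable water capacity: {plant_extractable_capacity(profile, 100):.1f} cm")
```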
Making sense by making sentient: effectance motivation increases anthropomorphism.
Waytz, Adam; Morewedge, Carey K; Epley, Nicholas; Monteleone, George; Gao, Jia-Hong; Cacioppo, John T
2010-09-01
People commonly anthropomorphize nonhuman agents, imbuing everything from computers to pets to gods with humanlike capacities and mental experiences. Although widely observed, the determinants of anthropomorphism are poorly understood and rarely investigated. We propose that people anthropomorphize, in part, to satisfy effectance motivation-the basic and chronic motivation to attain mastery of one's environment. Five studies demonstrated that increasing effectance motivation by manipulating the perceived unpredictability of a nonhuman agent or by increasing the incentives for mastery increases anthropomorphism. Neuroimaging data demonstrated that the neural correlates of this process are similar to those engaged when mentalizing other humans. A final study demonstrated that anthropomorphizing a stimulus makes it appear more predictable and understandable, suggesting that anthropomorphism satisfies effectance motivation. Anthropomorphizing nonhuman agents seems to satisfy the basic motivation to make sense of an otherwise uncertain environment. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Capacity planning in a transitional economy: What issues? Which models?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubayi, V.; Leigh, R.W.; Bright, R.N.
1996-03-01
This paper is devoted to an exploration of the important issues facing the Russian power generation system and its evolution in the foreseeable future and the kinds of modeling approaches that capture those issues. These issues include, for example, (1) trade-offs between investments in upgrading and refurbishment of existing thermal (fossil-fired) capacity and safety enhancements in existing nuclear capacity versus investment in new capacity, (2) trade-offs between investment in completing unfinished (under construction) projects based on their original design versus investment in new capacity with improved design, (3) incorporation of demand-side management options (investments in enhancing end-use efficiency, for example) within the planning framework, (4) consideration of the spatial dimensions of system planning including investments in upgrading electric transmission networks or fuel shipment networks and incorporating hydroelectric generation, (5) incorporation of environmental constraints, and (6) assessment of uncertainty and evaluation of downside risk. Models for exploring these issues range from low power shutdown (LPS) models, which are computationally very efficient, though approximate, and can be used to perform extensive sensitivity analyses, to more complex models, which can provide more detailed answers but are computationally cumbersome and can only deal with limited issues. The paper discusses which models can usefully treat a wide range of issues within the priorities facing decision makers in the Russian power sector and integrate the results with investment decisions in the wider economy.
Coding Strategies and Implementations of Compressive Sensing
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Han
This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies may improve sensing capacity by hundreds of times. The human auditory system has the astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
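The reconstruction step common to these compressive measurement schemes can be illustrated with a generic sparse-recovery solver. The sketch below uses the textbook iterative soft-thresholding algorithm (ISTA) on a random sensing matrix; it is a stand-in for the dissertation's multidimensional reconstruction code, and all sizes are hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n)) # random (coded) sensing matrix
y = A @ x_true                              # compressive measurements

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```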
Extending a Flight Management Computer for Simulation and Flight Experiments
NASA Technical Reports Server (NTRS)
Madden, Michael M.; Sugden, Paul C.
2005-01-01
In modern transport aircraft, the flight management computer (FMC) has evolved from a flight planning aid to an important hub for pilot information and origin-to-destination optimization of flight performance. Current trends indicate increasing roles of the FMC in aviation safety, aviation security, increasing airport capacity, and improving environmental impact from aircraft. Related research conducted at the Langley Research Center (LaRC) often requires functional extension of a modern, full-featured FMC. Ideally, transport simulations would include an FMC simulation that could be tailored and extended for experiments. However, due to the complexity of a modern FMC, a large investment (millions of dollars over several years) and scarce domain knowledge are needed to create such a simulation for transport aircraft. As an intermediate alternative, the Flight Research Services Directorate (FRSD) at LaRC created a set of reusable software products to extend flight management functionality upstream of a Boeing-757 FMC, transparently simulating or sharing its operator interfaces. The paper details the design of these products and highlights their use on NASA projects.
NASA Astrophysics Data System (ADS)
Kamal, M. A.; Youlla, D.
2018-03-01
Municipal solid waste (MSW) transportation in Pontianak City has become an issue that needs to be tackled by the relevant agencies. The MSW transportation service in Pontianak City currently requires very high resources, especially in vehicle usage. Increasing the number of fleets has not been able to increase service levels, while garbage volume grows every year along with population growth. In this research, a vehicle routing optimization approach was used to find optimal and cost-efficient vehicle routes for transporting garbage from several Temporary Garbage Dumps (TGD) to the Final Garbage Dump (FGD). One of the problems of MSW transportation is that there is a TGD whose garbage volume exceeds the vehicle capacity, so it must be visited more than once. The optimal computation results suggest that the municipal authorities need only use 3 of the 5 vehicles provided, with a total minimum cost of IDR 778,870. The search for the optimal route and minimal cost is, however, very time consuming; this is influenced by the number of constraints and the number of integer-valued decision variables.
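The routing problem described, including a dump whose load exceeds one vehicle's capacity and therefore needs repeated visits, can be approximated with a simple greedy split-collection heuristic, sketched below. Distances, demands, and the vehicle capacity are hypothetical, and the heuristic is only a stand-in for the optimization model used in the paper.

```python
import math

DEPOT = (0.0, 0.0)                                # final garbage dump (FGD), hypothetical
TGD = {"A": ((2, 1), 4.0), "B": ((5, 3), 9.0),    # name: (location, tonnes of waste)
       "C": ((1, 6), 3.0), "D": ((6, 7), 5.0)}
CAPACITY = 6.0                                    # tonnes per trip, hypothetical

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_routes():
    """Repeatedly send a vehicle out, always collecting from the nearest dump with
    waste remaining; split a dump's load over trips if it exceeds the capacity."""
    remaining = {name: demand for name, (_, demand) in TGD.items()}
    routes = []
    while any(d > 1e-9 for d in remaining.values()):
        pos, load, route = DEPOT, 0.0, []
        while load < CAPACITY:
            candidates = [k for k, d in remaining.items() if d > 1e-9]
            if not candidates:
                break
            nxt = min(candidates, key=lambda k: dist(pos, TGD[k][0]))
            take = min(CAPACITY - load, remaining[nxt])
            remaining[nxt] -= take
            load += take
            route.append((nxt, take))
            pos = TGD[nxt][0]
        routes.append(route)
    return routes

for i, r in enumerate(greedy_routes(), 1):
    print(f"trip {i}: " + ", ".join(f"{k} ({t:.1f} t)" for k, t in r))
```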
Influence of Synaptic Depression on Memory Storage Capacity
NASA Astrophysics Data System (ADS)
Otsubo, Yosuke; Nagata, Kenji; Oizumi, Masafumi; Okada, Masato
2011-08-01
Synaptic efficacy between neurons is known to change within a short time scale dynamically. Neurophysiological experiments show that high-frequency presynaptic inputs decrease synaptic efficacy between neurons. This phenomenon is called synaptic depression, a short term synaptic plasticity. Many researchers have investigated how the synaptic depression affects the memory storage capacity. However, the noise has not been taken into consideration in their analysis. By introducing "temperature", which controls the level of the noise, into an update rule of neurons, we investigate the effects of synaptic depression on the memory storage capacity in the presence of the noise. We analytically compute the storage capacity by using a statistical mechanics technique called Self Consistent Signal to Noise Analysis (SCSNA). We find that the synaptic depression decreases the storage capacity in the case of finite temperature in contrast to the case of the low temperature limit, where the storage capacity does not change.
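The interplay between noise ("temperature") and synaptic depression can also be explored numerically. The sketch below runs a small Hopfield-type network with Tsodyks-Markram-style depression variables and finite-temperature Glauber updates, and tracks the overlap with one stored pattern; it is an illustrative simulation with arbitrary parameters, not the SCSNA calculation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 10                      # neurons, stored patterns (hypothetical sizes)
T, U, tau = 0.1, 0.3, 5.0           # temperature, depression strength, recovery time

xi = rng.choice([-1, 1], size=(P, N))          # random binary patterns
J = (xi.T @ xi) / N                            # Hebbian couplings
np.fill_diagonal(J, 0.0)

s = xi[0].copy()                               # start the network at the first pattern
x = np.ones(N)                                 # synaptic resources, fully recovered

for step in range(200):
    # Depression dynamics: resources recover slowly and deplete when the neuron fires.
    active = (s + 1) / 2
    x += (1.0 - x) / tau - U * x * active
    # Finite-temperature (Glauber) update of all neurons through depressed synapses.
    h = J @ (x * s)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * h / T))
    s = np.where(rng.random(N) < p_plus, 1, -1)

overlap = (xi[0] @ s) / N
print(f"overlap with the stored pattern after 200 steps: {overlap:.2f}")
```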
Quantifying relative importance: Computing standardized effects in models with binary outcomes
Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.
2018-01-01
Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
Multi-scaling modelling in financial markets
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.
2007-12-01
In recent years, a new wave of interest has spurred the involvement of complexity science in finance, which might provide a guideline to understand the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions by introducing new techniques and methodologies. In this paper, Markov-switching multifractal models (MSM) are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for the simulated data based on the MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
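The generalized Hurst exponent H(q) is defined through the scaling of the q-th order moments of increments, E|X(t+τ) − X(t)|^q ∝ τ^{qH(q)}, so it can be estimated from a log-log fit over a range of lags. The sketch below follows that textbook recipe on a simulated Gaussian random walk (for which H(q) ≈ 0.5); it is not the authors' exact estimator or the MSM simulation.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from the scaling E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))."""
    x = np.asarray(x, dtype=float)
    moments = [np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus]
    slope, _ = np.polyfit(np.log(list(taus)), np.log(moments), 1)
    return slope / q

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 20000))     # synthetic random-walk "price" series
for q in (1, 2, 3):
    print(f"H({q}) = {generalized_hurst(prices, q):.3f}")
```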
Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems
NASA Astrophysics Data System (ADS)
Dogan, Firat; Atilgan, Yasemin
Virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capacities at the information technology level. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization and all the other strong aspects of information technology bring the necessary instruments to almost every desk. The special software and sophisticated supercomputer environments of the last decade are now serving individual needs inside “tiny smart boxes” at reasonable prices. However, resistance to learning new computerized environments, insufficient training and all the other old habits prevent effective utilization of IT resources by the specialists of the health sector. In this paper, all aspects of former and current developments in surgery planning and simulation related tools are presented, and future directions and expectations are investigated for better electronic health care systems.
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.; Chamis, Christos C.
1992-01-01
The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents behavior. The simulation of the stress-strain behavior from the macromechanical and the micromechanical points of view, as well as the initiation and final failure of the constituents and the plies in the composite, were examined in detail. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
With the large capacity of the power system and the development trend towards larger units and higher voltages, dispatching operations are becoming more frequent and complicated, and the probability of operation errors increases. This paper addresses the lack of anti-error functions, the single scheduling function and the low working efficiency of the technical support system in regional regulation and integration, and proposes an integrated architecture for a cloud-computing-based power network dispatching anti-error system. An integrated anti-error system combining the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The system architecture has good scalability and adaptability, which can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability of regional regulation and anti-error checking, with broad development prospects.
Telehealth for "the digital illiterate"--elderly heart failure patients experiences.
Lind, Leili; Karlsson, Daniel
2014-01-01
Telehealth solutions should be available also for elderly patients with no interest in using, or capacity to use, computers and smartphones. Fourteen elderly, severely ill heart failure patients in home care participated in a telehealth study and used digital pens for daily reporting of their health state--a technology never used before by this patient group. After the study seven patients and two spouses were interviewed face-to-face. A qualitative content analysis of the interview material was performed. The informants had no experience of computers or the Internet and no interest in learning. Still, patients found the digital pen and the health diary form easy to use, thus effortlessly adopting to changes in care provision. They experienced an improved contact with the caregivers and had a sense of increased security despite a multimorbid state. Our study shows that, given that technologies are tailored to specific patient groups, even "the digital illiterate" may use the Internet.
The Development of Computational Biology in South Africa: Successes Achieved and Lessons Learnt
Mulder, Nicola J.; Christoffels, Alan; de Oliveira, Tulio; Gamieldien, Junaid; Hazelhurst, Scott; Joubert, Fourie; Kumuthini, Judit; Pillay, Ché S.; Snoep, Jacky L.; Tastan Bishop, Özlem; Tiffin, Nicki
2016-01-01
Bioinformatics is now a critical skill in many research and commercial environments as biological data are increasing in both size and complexity. South African researchers recognized this need in the mid-1990s and responded by working with the government as well as international bodies to develop initiatives to build bioinformatics capacity in the country. Significant injections of support from these bodies provided a springboard for the establishment of computational biology units at multiple universities throughout the country, which took on teaching, basic research and support roles. Several challenges were encountered, for example with unreliability of funding, lack of skills, and lack of infrastructure. However, the bioinformatics community worked together to overcome these, and South Africa is now arguably the leading country in bioinformatics on the African continent. Here we discuss how the discipline developed in the country, highlighting the challenges, successes, and lessons learnt. PMID:26845152
ERIC Educational Resources Information Center
Kim, Hye Yeong
2014-01-01
Effectively exploring the efficacy of synchronous computer-mediated communication (SCMC) for pedagogical purposes can be achieved through the careful investigation of potentially beneficial, inherent attributes of SCMC. This study provides empirical evidence for the capacity of task-based SCMC to draw learner attention to linguistic forms by…
Code of Federal Regulations, 2012 CFR
2012-10-01
... examiner means a person licensed as a doctor of medicine or doctor of osteopathy. A medical examiner can be..., braking capacity, and in-train force levels throughout the train; and (4) Is computer enhanced so that it... train; and (4) Is computer enhanced so that it can be programmed for specific train consists and the...
Code of Federal Regulations, 2011 CFR
2011-10-01
... examiner means a person licensed as a doctor of medicine or doctor of osteopathy. A medical examiner can be..., braking capacity, and in-train force levels throughout the train; and (4) Is computer enhanced so that it... train; and (4) Is computer enhanced so that it can be programmed for specific train consists and the...
Information Prosthetics for the Handicapped. Artificial Intelligence Memo No. 496.
ERIC Educational Resources Information Center
Papert, Seymour A.; Weir, Sylvia
The proposal outlines a study to assess the role of computers in assessing and instructing students with severe cerebral palsy in spatial and communication skills. The computer's capacity to make learning interesting and challenging to the severely disabled student is noted, along with its use as a diagnostic tool. Implications for theories on…
Program Manual for Estimating Use and Related Statistics on Developed Recreation Sites
Gary L. Tyre; Gene R. Welch
1972-01-01
This manual includes documentation of four computer programs and supporting subroutines for estimating use, visitor origin, patterns of use, and occupancy rates at developed recreation sites. The programs are written in Fortran IV and should be easily adapted to any computer arrangement that has the capacity to compile this language.
The History of the AutoChemist®: From Vision to Reality.
Peterson, H E; Jungner, I
2014-05-22
This paper discusses the early history and development of a clinical analyser system in Sweden (AutoChemist, 1965). It highlights the importance of such high capacity system both for clinical use and health care screening. The device was developed to assure the quality of results and to automatically handle the orders, store the results in digital form for later statistical analyses and distribute the results to the patients' physicians by using the computer used for the analyser. The most important result of the construction of an analyser able to produce analytical results on a mass scale was the development of a mechanical multi-channel analyser for clinical laboratories that handled discrete sample technology and could prevent carry-over to the next test samples while incorporating computer technology to improve the quality of test results. The AutoChemist could handle 135 samples per hour in an 8-hour shift and up to 24 possible analyses channels resulting in 3,200 results per hour. Later versions would double this capacity. Some customers used the equipment 24 hours per day. With a capacity of 3,000 to 6,000 analyses per hour, pneumatic driven pipettes, special units for corrosive liquids or special activities, and an integrated computer, the AutoChemist system was unique and the largest of its kind for many years. Its follower - The AutoChemist PRISMA (PRogrammable Individually Selective Modular Analyzer) - was smaller in size but had a higher capacity. Both analysers established new standards of operation for clinical laboratories and encouraged others to use new technologies for building new analysers.
Eide, Per Kristian
2016-12-01
OBJECTIVE The objective of this study was to examine how pulsatile and static intracranial pressure (ICP) scores correlate with indices of intracranial pressure-volume reserve capacity, i.e., intracranial elastance (ICE) and intracranial compliance (ICC), as determined during ventricular infusion testing. METHODS All patients undergoing ventricular infusion testing and overnight ICP monitoring during the 6-year period from 2007 to 2012 were included in the study. Clinical data were retrieved from a quality registry, and the ventricular infusion pressure data and ICP scores were retrieved from a pressure database. The ICE and ICC (= 1/ICE) were computed during the infusion phase of the infusion test. RESULTS During the period from 2007 to 2012, 82 patients with possible treatment-dependent hydrocephalus underwent ventricular infusion testing within the department of neurosurgery. The infusion tests revealed a highly significant positive correlation between ICE and the pulsatile ICP scores mean wave amplitude (MWA) and rise-time coefficient (RTC), and the static ICP score mean ICP. The ICE was negatively associated with linear measures of ventricular size. The overnight ICP recordings revealed significantly increased MWA (> 4 mm Hg) and RTC (> 20 mm Hg/sec) values in patients with impaired ICC (< 0.5 ml/mm Hg). CONCLUSIONS In this study cohort, there was a significant positive correlation between pulsatile ICP and ICE measured during ventricular infusion testing. In patients with impaired ICC during infusion testing (ICC < 0.5 ml/mm Hg), overnight ICP recordings showed increased pulsatile ICP (MWA > 4 mm Hg, RTC > 20 mm Hg/sec), but not increased mean ICP (< 10-15 mm Hg). The present data support the assumption that pulsatile ICP (MWA and RTC) may serve as substitute markers of pressure-volume reserve capacity, i.e., ICE and ICC.
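In an infusion test, elastance is the slope of the pressure-volume relationship during the constant-rate infusion phase (ICE = ΔP/ΔV, ICC = 1/ICE), so a first-pass estimate is a linear fit of ICP against infused volume. The sketch below uses synthetic numbers and a simple linear fit; it only illustrates the quantities discussed, not the clinical analysis pipeline.

```python
import numpy as np

infusion_rate_ml_min = 1.5                      # hypothetical constant infusion rate
t_min = np.arange(0, 15, 0.5)                   # infusion phase, minutes
volume_ml = infusion_rate_ml_min * t_min

rng = np.random.default_rng(2)
true_elastance = 0.8                            # mm Hg per ml (hypothetical)
icp_mmHg = 10.0 + true_elastance * volume_ml + rng.normal(0, 0.5, t_min.size)

elastance, intercept = np.polyfit(volume_ml, icp_mmHg, 1)   # ICE = dP/dV (linear fit)
compliance = 1.0 / elastance                                # ICC = 1/ICE
print(f"ICE ~ {elastance:.2f} mm Hg/ml, ICC ~ {compliance:.2f} ml/mm Hg")
```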
Prat, Chantel S; Mason, Robert A; Just, Marcel Adam
2012-03-01
This study used fMRI to investigate the neural correlates of analogical mapping during metaphor comprehension, with a focus on dynamic configuration of neural networks with changing processing demands and individual abilities. Participants with varying vocabulary sizes and working memory capacities read 3-sentence passages ending in nominal critical utterances of the form "X is a Y." Processing demands were manipulated by varying preceding contexts. Three figurative conditions manipulated difficulty by varying the extent to which preceding contexts mentioned relevant semantic features for relating the vehicle and topic of the critical utterance to one another. In the easy condition, supporting information was mentioned. In the neutral condition, no relevant information was mentioned. In the most difficult condition, opposite features were mentioned, resulting in an ironic interpretation of the critical utterance. A fourth, literal condition included context that supported a literal interpretation of the critical utterance. Activation in lateral and medial frontal regions increased with increasing contextual difficulty. Lower vocabulary readers also had greater activation across conditions in the right inferior frontal gyrus. In addition, volumetric analyses showed increased right temporo-parietal junction and superior medial frontal activation for all figurative conditions over the literal condition. The results from this experiment imply that the cortical regions are dynamically recruited in language comprehension as a function of the processing demands of a task. Individual differences in cognitive capacities were also associated with differences in recruitment and modulation of working memory and executive function regions, highlighting the overlapping computations in metaphor comprehension and general thinking and reasoning. 2012 APA, all rights reserved
A History of High-Performance Computing
NASA Technical Reports Server (NTRS)
2006-01-01
Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
Framework Resources Multiply Computing Power
NASA Technical Reports Server (NTRS)
2010-01-01
As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.
Experience with ethylene plant computer control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasi, M.; Darby, M.L.; Sourander, M.
This article discusses the control strategies, results, and the views of management and operations staff regarding a computer-based ethylene plant control system. The ethylene unit contains 9 cracking heaters, and its nameplate capacity is 200,000 tpa ethylene. The article reports on control performance at different unit loadings and with different feedstock types. Converting the yield and utility-consumption benefits of computer control into monetary units gives a system payback time of less than 2 years.
Vazquez, Alexei; de Menezes, Marcio A; Barabási, Albert-László; Oltvai, Zoltan N
2008-10-01
The cell's cytoplasm is crowded by its various molecular components, resulting in a limited solvent capacity for the allocation of new proteins, thus constraining various cellular processes such as metabolism. Here we study the impact of the limited solvent capacity constraint on the metabolic rate, enzyme activities, and metabolite concentrations using a computational model of Saccharomyces cerevisiae glycolysis as a case study. We show that given the limited solvent capacity constraint, the optimal enzyme activities and the metabolite concentrations necessary to achieve a maximum rate of glycolysis are in agreement with their experimentally measured values. Furthermore, the predicted maximum glycolytic rate determined by the solvent capacity constraint is close to that measured in vivo. These results indicate that the limited solvent capacity is a relevant constraint acting on S. cerevisiae at physiological growth conditions, and that a full kinetic model together with the limited solvent capacity constraint can be used to predict both metabolite concentrations and enzyme activities in vivo.
Vazquez, Alexei; de Menezes, Marcio A.; Barabási, Albert-László; Oltvai, Zoltan N.
2008-01-01
The cell's cytoplasm is crowded by its various molecular components, resulting in a limited solvent capacity for the allocation of new proteins, thus constraining various cellular processes such as metabolism. Here we study the impact of the limited solvent capacity constraint on the metabolic rate, enzyme activities, and metabolite concentrations using a computational model of Saccharomyces cerevisiae glycolysis as a case study. We show that given the limited solvent capacity constraint, the optimal enzyme activities and the metabolite concentrations necessary to achieve a maximum rate of glycolysis are in agreement with their experimentally measured values. Furthermore, the predicted maximum glycolytic rate determined by the solvent capacity constraint is close to that measured in vivo. These results indicate that the limited solvent capacity is a relevant constraint acting on S. cerevisiae at physiological growth conditions, and that a full kinetic model together with the limited solvent capacity constraint can be used to predict both metabolite concentrations and enzyme activities in vivo. PMID:18846199
Tools for Early Prediction of Drug Loading in Lipid-Based Formulations
2015-01-01
Identification of the usefulness of lipid-based formulations (LBFs) for delivery of poorly water-soluble drugs is to date mainly experimentally based. In this work we used a diverse drug data set and more than 2,000 solubility measurements to develop experimental and computational tools to predict the loading capacity of LBFs. Computational models were developed to enable in silico prediction of solubility, and hence drug loading capacity, in the LBFs. Drug solubility in mixed mono-, di-, triglycerides (Maisine 35-1 and Capmul MCM EP) correlated (R2 0.89), as did the drug solubility in Carbitol and other ethoxylated excipients (PEG400, R2 0.85; Polysorbate 80, R2 0.90; Cremophor EL, R2 0.93). A melting point below 150 °C was observed to result in a reasonable solubility in the glycerides. The loading capacity in LBFs was accurately calculated from solubility data in single excipients (R2 0.91). In silico models, without the demand for experimentally determined solubility, also gave good predictions of the loading capacity in these complex formulations (R2 0.79). The framework established here gives a better understanding of drug solubility in single excipients and of LBF loading capacity. The large data set studied revealed that experimental screening efforts can be rationalized by solubility measurements in key excipients or from solid-state information. For the first time it was shown that loading capacity in complex formulations can be accurately predicted using molecular information extracted from calculated descriptors and thermal properties of the crystalline drug. PMID:26568134
Tools for Early Prediction of Drug Loading in Lipid-Based Formulations.
Alskär, Linda C; Porter, Christopher J H; Bergström, Christel A S
2016-01-04
Identification of the usefulness of lipid-based formulations (LBFs) for delivery of poorly water-soluble drugs is to date mainly experimentally based. In this work we used a diverse drug data set and more than 2,000 solubility measurements to develop experimental and computational tools to predict the loading capacity of LBFs. Computational models were developed to enable in silico prediction of solubility, and hence drug loading capacity, in the LBFs. Drug solubility in mixed mono-, di-, triglycerides (Maisine 35-1 and Capmul MCM EP) correlated (R(2) 0.89), as did the drug solubility in Carbitol and other ethoxylated excipients (PEG400, R(2) 0.85; Polysorbate 80, R(2) 0.90; Cremophor EL, R(2) 0.93). A melting point below 150 °C was observed to result in a reasonable solubility in the glycerides. The loading capacity in LBFs was accurately calculated from solubility data in single excipients (R(2) 0.91). In silico models, without the demand for experimentally determined solubility, also gave good predictions of the loading capacity in these complex formulations (R(2) 0.79). The framework established here gives a better understanding of drug solubility in single excipients and of LBF loading capacity. The large data set studied revealed that experimental screening efforts can be rationalized by solubility measurements in key excipients or from solid-state information. For the first time it was shown that loading capacity in complex formulations can be accurately predicted using molecular information extracted from calculated descriptors and thermal properties of the crystalline drug.
Load-bearing capacity of all-ceramic posterior inlay-retained fixed dental prostheses.
Puschmann, Djamila; Wolfart, Stefan; Ludwig, Klaus; Kern, Matthias
2009-06-01
The purpose of this in vitro study was to compare the quasi-static load-bearing capacity of all-ceramic resin-bonded three-unit inlay-retained fixed dental prostheses (IRFDPs) made from computer-aided design/computer-aided manufacturing (CAD/CAM)-manufactured yttria-stabilized tetragonal zirconia polycrystals (Y-TZP) frameworks with two different connector dimensions, with and without fatigue loading. Twelve IRFDPs each were made with connector dimensions of 3 x 3 mm² (width x height) (control group) and 3 x 2 mm² (test group). The inlay-retained fixed dental prostheses were adhesively cemented on identical metal models using composite resin cement. Subgroups of six specimens each were fatigued with a maximum of 1,200,000 loading cycles in a chewing simulator with a weight load of 25 kg and a load frequency of 1.5 Hz. The load-bearing capacity was tested in a universal testing machine for IRFDPs without fatigue loading and for IRFDPs that had not already fractured during fatigue loading. During fatigue testing, one IRFDP (17%) of the test group failed. Under both loading conditions, IRFDPs of the control group exhibited statistically significantly higher load-bearing capacities than the test group. Fatigue loading reduced the load-bearing capacity in both groups. Considering the maximum chewing forces in the molar region, it seems possible to use zirconia ceramic as a core material for IRFDPs with a minimum connector dimension of 9 mm². A further reduction of the connector dimensions to 6 mm² results in a significant reduction of the load-bearing capacity.
Rybacka, Anna; Goździk-Spychalska, Joanna; Rybacki, Adam; Piorunek, Tomasz; Batura-Gabryel, Halina; Karmelita-Katulska, Katarzyna
2018-05-04
In cystic fibrosis, pulmonary function tests (PFTs) and computed tomography are used to assess lung function and structure, respectively. Although both techniques of assessment are congruent, there are lingering doubts about which PFT variables show the best congruence with computed tomography scoring. In this study we addressed the issue by reinvestigating the association between PFT variables and the score of changes seen in computed tomography scans in patients with cystic fibrosis with and without pulmonary exacerbation. This retrospective study comprised 40 patients in whom PFTs and computed tomography were performed no longer than 3 weeks apart. Images (inspiratory: 0.625 mm slice thickness, 0.625 mm interval; expiratory: 1.250 mm slice thickness, 10 mm interval) were evaluated with the Bhalla scoring system. The most frequent structural abnormalities found in the scans were bronchiectasis and peribronchial thickening. The strongest relationship was found between the Bhalla score and forced expiratory volume in 1 s (FEV1). The Bhalla score was also related to forced vital capacity (FVC), the FEV1/FVC ratio, residual volume (RV), and the RV/total lung capacity (TLC) ratio. We conclude that lung structural data obtained from the computed tomography examination are highly congruent with lung function data. Thus, computed tomography imaging may supersede functional assessment in cases of poor compliance with spirometry procedures in the elderly or children. Computed tomography also seems more sensitive than PFTs in the assessment of cystic fibrosis progression. Moreover, in early phases of cystic fibrosis, computed tomography, due to its excellent resolution, may be irreplaceable in monitoring pulmonary damage.
NASA Astrophysics Data System (ADS)
Fitch, W. Tecumseh
2014-09-01
Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
Fitch, W Tecumseh
2014-09-01
Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology. Copyright © 2014. Published by Elsevier B.V.
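The supra-regular claim in the abstract above can be made concrete with a toy example: the string set a^n b^n cannot be recognized by any finite-state machine, but a single counter (the simplest stand-in for a push-down stack) suffices. The Python sketch below is purely illustrative and is not drawn from Fitch's paper; the function name and string encoding are assumptions for the example.

    def accepts_anbn(s: str) -> bool:
        """Recognize the supra-regular language a^n b^n (n >= 1) with one counter.

        A finite-state machine cannot do this for unbounded n, because it would
        need a distinct state for every possible count of unmatched 'a's; a
        single push-down counter suffices.
        """
        count = 0          # plays the role of the push-down stack
        i = 0
        # Phase 1: push one token per leading 'a'
        while i < len(s) and s[i] == "a":
            count += 1
            i += 1
        if count == 0:
            return False
        # Phase 2: pop one token per 'b'
        while i < len(s) and s[i] == "b" and count > 0:
            count -= 1
            i += 1
        # Accept only if the input is exhausted and the counter is back to zero
        return i == len(s) and count == 0

    print(accepts_anbn("aaabbb"))   # True
    print(accepts_anbn("aabbb"))    # False
    print(accepts_anbn("abab"))     # False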
Influence of architecture and material properties on vanadium redox flow battery performance
NASA Astrophysics Data System (ADS)
Houser, Jacob; Clement, Jason; Pezeshki, Alan; Mench, Matthew M.
2016-01-01
This publication reports a design optimization study of all-vanadium redox flow batteries (VRBs), including performance testing, distributed current measurements, and flow visualization. Additionally, a computational flow simulation is used to support the conclusions made from the experimental results. This study demonstrates that optimal flow field design is not simply related to the best architecture, but is instead a more complex interplay between architecture, electrode properties, electrolyte properties, and operating conditions which combine to affect electrode convective transport. For example, an interdigitated design outperforms a serpentine design at low flow rates and with a thin electrode, accessing up to an additional 30% of discharge capacity; but a serpentine design can match the available discharge capacity of the interdigitated design by increasing the flow rate or the electrode thickness due to differing responses between the two flow fields. The results of this study should be useful to design engineers seeking to optimize VRB systems through enhanced performance and reduced pressure drop.
Iron and Manganese Pyrophosphates as Cathodes for Lithium-Ion Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Hui; Upreti, Shailesh; Chernova, Natasha A.
2015-10-15
The mixed-metal phases Li2Mn1-yFeyP2O7 (0 ≤ y ≤ 1) were synthesized using a 'wet method' and found to form a solid solution in the P21/a space group. Both thermogravimetric analysis and magnetic susceptibility measurements confirm the 2+ oxidation state for both the Mn and Fe. The electrochemical capacity improves as the Fe concentration increases, as do the intensities of the redox peaks of the cyclic voltammogram, indicating higher lithium-ion diffusivity in the iron phase. The two Li+ ions in the three-dimensional tunnel structure of the pyrophosphate phase allow for the cycling of more than one lithium per redox center. Cyclic voltammograms show a second oxidation peak at 5 V and 5.3 V, indicative of the extraction of the second lithium ion, in agreement with ab initio computational predictions. Thus, electrochemical capacities exceeding 200 Ah/kg may be achieved if a stable electrolyte is found.
Clothing creator trademark : Business plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stern, B.
SYMAGERY has developed a patented process to manufacture clothing without direct human labor. This CLOTHING CREATOR™ will have the ability to produce two (2) perfect garments every 45 seconds, or one (1) every 30 seconds. The process will combine Computer Integrated Manufacturing (CIM) technology with heat molding and ultrasonic bonding/cutting techniques. This system for garment production will have the capacity to produce garments of higher quality and at lower production costs than conventional cut-and-sew methods. ADVANTAGES of the process include: greatly reduced production costs; increased quality of garments; reduction in lead time; and the capacity to make new classes of garments. This technology will accommodate a variety of knit, woven, and nonwoven materials containing a majority of synthetic fibers. Among the many styles of garments that could be manufactured by this process are: work clothing, career apparel, athletic garments, medical disposables, health care products, activewear, haz/mat garments, military clothing, cleanroom clothing, outdoor wear, upholstery, and highly contoured stuffed toy shells. 3 refs.
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both a total power constraint and a constraint on the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
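As a rough illustration of the kind of subcarrier power allocation such algorithms build on, the sketch below implements classic water-filling under a total power constraint only; it is not the authors' algorithm, and the per-PU interference constraints central to the paper are deliberately omitted. The function name, channel gains, and power budget are invented for the example.

    import numpy as np

    def waterfill(gains, p_total, noise=1.0):
        """Classic water-filling power allocation across subcarriers.

        Maximizes sum(log2(1 + g_k * p_k / noise)) subject to sum(p_k) <= p_total.
        Only the total-power constraint is handled; the per-PU interference
        constraints treated in the paper would add per-subcarrier caps.
        """
        inv = noise / np.asarray(gains, dtype=float)   # "floor" height of each subcarrier
        lo, hi = inv.min(), inv.max() + p_total        # bracket for the water level
        for _ in range(100):                           # bisection on the water level mu
            mu = 0.5 * (lo + hi)
            power = np.clip(mu - inv, 0.0, None)
            if power.sum() > p_total:
                hi = mu                                # too much water: lower the level
            else:
                lo = mu                                # feasible: raise the level
        return np.clip(lo - inv, 0.0, None)

    # Example: 6 subcarriers with different gains and a total power budget of 10
    p = waterfill(gains=[0.3, 1.2, 0.8, 2.0, 0.1, 0.9], p_total=10.0)
    print(p, p.sum())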
Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing
Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong
2018-01-01
The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313
NASA Technical Reports Server (NTRS)
Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.
1991-01-01
Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures made in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, such as calculation speed and memory capacity, allows structures as complex as those of helicopters to be modeled with accuracy. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.
Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing.
Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong
2018-03-22
The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination.
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.
Outlet diffusers to increase culvert capacity.
DOT National Transportation Integrated Search
2016-06-01
Aging infrastructure and changing weather patterns present the need to increase the capacity of existing highway culverts. This research approaches this challenge through the use of diffuser outlet systems to increase pipe capacity and reduce outlet ...
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of the LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on the LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable because it involves millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear increase in runtime as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed that accommodates concurrent execution via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
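The traversal-time step described above (taking the mode of historical flight times estimated by Kernel Density Estimation) can be sketched as follows; this is not the dissertation's code, and the route history, grid resolution, and function name are invented for illustration.

    import numpy as np
    from scipy.stats import gaussian_kde

    def traversal_time_mode(times_minutes):
        """Estimate the mode of historical traversal times via kernel density estimation.

        The mode (rather than the mean) is robust to the long right tail that
        delayed flights put on the distribution.
        """
        times = np.asarray(times_minutes, dtype=float)
        kde = gaussian_kde(times)                      # Gaussian KDE, default bandwidth rule
        grid = np.linspace(times.min(), times.max(), 2000)
        return grid[np.argmax(kde(grid))]              # grid point of maximum estimated density

    # Hypothetical historical records for one origin-destination route (minutes)
    history = [128, 131, 130, 129, 133, 150, 127, 132, 131, 190, 129, 130]
    print(round(traversal_time_mode(history), 1))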
Maglio, Paul P; Wenger, Michael J; Copeland, Angelina M
2008-01-01
Epistemic actions are physical actions people take to simplify internal problem solving rather than to move closer to an external goal. When playing the video game Tetris, for instance, experts routinely rotate falling shapes more than is strictly needed to place the shapes. Maglio and Kirsh [Kirsh, D., & Maglio, P. (1994). On distinguishing epistemic from pragmatic action. Cognitive Science, 18, 513-549; Maglio, P. P. (1995). The computational basis of interactive skill. PhD thesis, University of California, San Diego] proposed that such actions might serve the purpose of priming memory by external means, reducing the need for internal computation (e.g., mental rotation), and resulting in performance improvements that exceed the cost of taking additional actions. The present study tests this priming hypothesis in a set of four experiments. The first three explored precisely the conditions under which priming produces benefits. Results showed that presentation of multiple orientations of a shape led to faster responses than did presentation of a single orientation, and that this effect depended on the interval between preview and test. The fourth explored whether the benefit of seeing shapes in multiple orientations outweighs the cost of taking the extra actions to rotate shapes physically. Benefits were measured using a novel statistical method for mapping reaction-time data onto an estimate of the increase in processing capacity afforded by seeing multiple orientations. Cost was measured using an empirical estimate of time needed to take action in Tetris. Results showed that indeed the increase in internal processing capacity obtained from seeing shapes in multiple orientations outweighed the time to take extra actions.
Biomolecular computing systems: principles, progress and potential.
Benenson, Yaakov
2012-06-12
The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.
NASA Astrophysics Data System (ADS)
Khan, Akhtar Nawaz
2017-11-01
Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities, because wavelength conversion is available at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities, due to the wavelength continuity constraint and the unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks, in which path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum-hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignment. Finally, numerical results are computed in different network scenarios and are approximately equal to the values obtained from simulations.
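A minimal sketch of the last-fit-first idea under the wavelength continuity constraint: intersect the sets of free wavelengths on every link of the route and take the highest common index. The data structures and function name below are assumptions for illustration, not the author's implementation.

    def last_fit_first(route_links, free_wavelengths):
        """Assign the highest-index wavelength that is free on every link of the route.

        route_links:      list of link identifiers along the candidate path
        free_wavelengths: dict mapping link id -> set of free wavelength indexes
                          (links may carry different numbers of channels)
        Returns the chosen wavelength index, or None if the request is blocked.
        """
        common = set.intersection(*(free_wavelengths[l] for l in route_links))
        if not common:
            return None                      # wavelength continuity cannot be satisfied
        w = max(common)                      # last-fit-first: highest index first
        for l in route_links:
            free_wavelengths[l].discard(w)   # reserve the channel on every hop
        return w

    # Heterogeneous link capacities: link C has only 4 channels, A and B have 8
    state = {"A": set(range(8)), "B": set(range(8)), "C": set(range(4))}
    print(last_fit_first(["A", "B", "C"], state))   # -> 3 (highest index common to all links)
    print(last_fit_first(["A", "B"], state))        # -> 7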
Design and characterization of an ultraresolution seamlessly tiled display for data visualization
NASA Astrophysics Data System (ADS)
Bordes, Nicole; Bleha, William P.; Pailthorpe, Bernard
2003-09-01
The demand for more pixels in digital displays is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers one solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed, as does the computer software required to drive such devices. In this paper we present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. The composite image on a 3 x 1 array is 3840 by 1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. A fiber optic beam-splitting system and a single set of red, green, and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted individually to set or change characteristics such as contrast, brightness, or gamma curves. The projectors were matched carefully and photometric variations were corrected, leading to a seamless tiled image. Photometric measurements were performed to characterize the display and the losses through the optical paths, and are reported here. This system is driven by a small PC cluster fitted with graphics cards and running Linux. The Chromium API can be used for tiling graphics across the display and interfacing to users' software applications. There is potential for scaling the design to accommodate larger arrays, up to 4 x 5 projectors, increasing display system capacity to 50 megapixels. Further increases, beyond 100 megapixels, can be anticipated with new-generation D-ILA chips capable of projecting QXGA (2k x 1.5k), with ongoing evolution as QUXGA (4k x 2k) becomes available.
White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R
2017-06-14
Despite growing concerns regarding increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that increasing environmental stochasticity resulting from extreme droughts was insufficient to create asymptotic scaling of time-to-extinction with carrying capacity in local populations as predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1 in 25 yr extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying capacity-related asymptotes in persistence under extreme climate variability reveals how small populations affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).
ERIC Educational Resources Information Center
Brink, Dan
1987-01-01
Reviews the current state of printing software and printing hardware compatibility and capacity. Discusses the changing relationship between author and publisher resulting from the advent of desktop publishing. (LMO)
Endres, Michael J; Donkin, Chris; Finn, Peter R
2014-04-01
Externalizing psychopathology (EXT) is associated with low executive working memory (EWM) capacity and problems with inhibitory control and decision-making; however, the specific cognitive processes underlying these problems are not well known. This study used a linear ballistic accumulator computational model of go/no-go associative-incentive learning conducted with and without a working memory (WM) load to investigate these cognitive processes in 510 young adults varying in EXT (lifetime problems with substance use, conduct disorder, ADHD, adult antisocial behavior). High scores on an EXT factor were associated with low EWM capacity and higher scores on a latent variable reflecting the cognitive processes underlying disinhibited decision-making (more false alarms, faster evidence accumulation rates for false alarms [vFA], and lower scores on a Response Precision Index [RPI] measure of information processing efficiency). The WM load increased disinhibited decision-making, decisional uncertainty, and response caution for all subjects. Higher EWM capacity was associated with lower scores on the latent disinhibited decision-making variable (lower false alarms, lower vFAs and RPI scores) in both WM load conditions. EWM capacity partially mediated the association between EXT and disinhibited decision-making under no-WM load, and completely mediated this association under WM load. The results underline the role that EWM has in associative-incentive go/no-go learning and indicate that common to numerous types of EXT are impairments in the cognitive processes associated with the evidence accumulation-evaluation-decision process. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Endres, Michael J.; Donkin, Chris; Finn, Peter R.
2014-01-01
Externalizing psychopathology (EXT) is associated with low executive working memory (EWM) capacity and problems with inhibitory control and decision-making; however, the specific cognitive processes underlying these problems are not well known. This study used a linear ballistic accumulator computational model of go/no-go associative-incentive learning conducted with and without a working memory (WM) load to investigate these cognitive processes in 510 young adults varying in EXT (lifetime problems with substance use, conduct disorder, ADHD, adult antisocial behavior). High scores on an EXT factor were associated with low EWM capacity and higher scores on a latent variable reflecting the cognitive processes underlying disinhibited decision making (more false alarms, faster evidence accumulation rates for false alarms (vFA), and lower scores on a Response Precision Index (RPI) measure of information processing efficiency). The WM load increased disinhibited decision making, decisional uncertainty, and response caution for all subjects. Higher EWM capacity was associated with lower scores on the latent disinhibited decision making variable (lower false alarms, lower vFAs and RPI scores) in both WM load conditions. EWM capacity partially mediated the association between EXT and disinhibited decision making under no-WM load, and completely mediated this association under WM load. The results underline the role that EWM has in associative – incentive go/no-go learning and indicate that common to numerous types of EXT are impairments in the cognitive processes associated with the evidence accumulation – evaluation – decision process. PMID:24611834
Game Design & Development: Using Computer Games as Creative and Challenging Assignments
ERIC Educational Resources Information Center
Seals, Cheryl; Hundley, Jacqueline; Montgomery, Lacey Strange
2008-01-01
This paper describes a game design and development course. The rationale for forming this class was to use student excitement with video games as an intrinsic motivation over traditional courses. Today's students have grown up exposed to gaming, interactive environments, and vivid 3D. Computer gaming has the capacity to attract many new students…
ERIC Educational Resources Information Center
Harsh, Matthew; Bal, Ravtosh; Wetmore, Jameson; Zachary, G. Pascal; Holden, Kerry
2018-01-01
The emergence of vibrant research communities of computer scientists in Kenya and Uganda has occurred in the context of neoliberal privatization, commercialization, and transnational capital flows from donors and corporations. We explore how this funding environment configures research culture and research practices, which are conceptualized as…
ERIC Educational Resources Information Center
Thomas, Shailendra Nelle
2010-01-01
Purpose, scope, and method of study: Although computer technology has been a part of the educational community for many years, it is still not used at its optimal capacity (Gosmire & Grady, 2007b; Trotter, 2007). While teachers were identified early as playing important roles in the success of technology implementation, principals were often…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... Rental Agent: the capacity to track the physical location of rented computers via WiFi hotspot locations. The information derived from WiFi hotspot contacts can frequently pinpoint a computer's location to a... a list of publicly available WiFi hotspots with the street addresses for the particular hotspots...
Editorial: Computational Creativity, Concept Invention, and General Intelligence
NASA Astrophysics Data System (ADS)
Besold, Tarek R.; Kühnberger, Kai-Uwe; Veale, Tony
2015-12-01
Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development of projects aiming at systems for artistic production or performance and of endeavours addressing creative problem-solving or models of creative cognitive capacities are diverging. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work trying to close the accrued gap.
Quantum computing with incoherent resources and quantum jumps.
Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R
2012-04-27
Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.
Computational modelling of memory retention from synapse to behaviour
NASA Astrophysics Data System (ADS)
van Rossum, Mark C. W.; Shippi, Maria
2013-03-01
One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.
High-performance heat pipes for heat recovery applications
NASA Technical Reports Server (NTRS)
Saaski, E. W.; Hartl, J. H.
1980-01-01
Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.
NASA Astrophysics Data System (ADS)
Zhou, Gan; An, Xin; Pu, Allen; Psaltis, Demetri; Mok, Fai H.
1999-11-01
The holographic disc is a high capacity, disk-based data storage device that can provide the performance for next generation mass data storage needs. With a projected capacity approaching 1 terabit on a single 12 cm platter, the holographic disc has the potential to become a highly efficient storage hardware for data warehousing applications. The high readout rate of holographic disc makes it especially suitable for generating multiple, high bandwidth data streams such as required for network server computers. Multimedia applications such as interactive video and HDTV can also potentially benefit from the high capacity and fast data access of holographic memory.
The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide
Anupama, Jigisha; Shanmugam, Avinash Kumar; Santos, Alberto; Michaut, Magali
2018-01-01
Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one’s field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one’s research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program. PMID:29346365
The 2015 global production capacity of seasonal and pandemic influenza vaccine.
McLean, Kenneth A; Goldin, Shoshanna; Nannei, Claudia; Sparrow, Erin; Torelli, Guido
2016-10-26
A global shortage of and inequitable access to influenza vaccines have been cause for concern for developing countries, which face dire consequences in the event of a pandemic. The Global Action Plan for Influenza Vaccines (GAP) was launched in 2006 to increase global capacity for influenza vaccine production and address these concerns. It is widely recognized that well-developed infrastructure to produce seasonal influenza vaccines leads to increased capacity to produce pandemic influenza vaccines. This article summarizes the results of a survey administered to 44 manufacturers to assess their production capacity for seasonal and pandemic influenza vaccines. When the GAP was launched in 2006, global production capacity for seasonal and pandemic vaccines was estimated to be 500 million and 1.5 billion doses respectively. Since 2006 there has been a significant increase in capacity, with the 2013 survey estimating global capacity at 1.5 billion seasonal and 6.2 billion pandemic doses. Results of the current survey showed that global seasonal influenza vaccine production capacity has decreased since 2013, from 1.504 billion doses to 1.467 billion doses. However, notwithstanding the overall global decrease in seasonal vaccine capacity, there were notable positive changes in the distribution of production capacity, with increases noted in the South East Asia (SEAR) and Western Pacific (WPR) regions, albeit on a small scale. Despite the decrease in seasonal capacity, there has been a global increase in pandemic influenza vaccine production capacity, from 6.2 billion doses in 2013 to 6.4 billion doses in 2015. This growth can be attributed to a shift towards more quadrivalent vaccine production and also to increased use of adjuvants. Pandemic influenza vaccine production capacity is at its highest recorded level; however, challenges remain in maintaining this capacity and in ensuring access to underserved regions in the event of a pandemic. Copyright © 2016. Published by Elsevier Ltd.
A sound advantage: Increased auditory capacity in autism.
Remington, Anna; Fairnie, Jake
2017-09-01
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Slot-like capacity and resource-like coding in a neural model of multiple-item working memory.
Standage, Dominic; Pare, Martin
2018-06-27
For the past decade, research on the storage limitations of working memory has been dominated by two fundamentally different hypotheses. On the one hand, the contents of working memory may be stored in a limited number of `slots', each with a fixed resolution. On the other hand, any number of items may be stored, but with decreasing resolution. These two hypotheses have been invaluable in characterizing the computational structure of working memory, but neither provides a complete account of the available experimental data, nor speaks to the neural basis of the limitations it characterizes. To address these shortcomings, we simulated a multiple-item working memory task with a cortical network model, the cellular resolution of which allowed us to quantify the coding fidelity of memoranda as a function of memory load, as measured by the discriminability, regularity and reliability of simulated neural spiking. Our simulations account for a wealth of neural and behavioural data from human and non-human primate studies, and they demonstrate that feedback inhibition lowers both capacity and coding fidelity. Because the strength of inhibition scales with the number of items stored by the network, increasing this number progressively lowers fidelity until capacity is reached. Crucially, the model makes specific, testable predictions for neural activity on multiple-item working memory tasks.
High temperature charging efficiency and degradation behavior of high capacity Ni-MH batteries
NASA Astrophysics Data System (ADS)
Choi, Jeon; Kim, Joong
2001-02-01
Recently the Ni/MH secondary battery has been studied extensively to achieve higher energy density, longer cycle life, and faster charging-discharging rates for electric vehicles, portable computers, etc. In this work, the charging efficiency of Ni-MH batteries using Ni electrodes with additions of various compounds, and the degradation behavior of a 90 Ah battery, were studied. The battery using the Ni electrode with Ca(OH)2 addition showed significantly better charging efficiency and utilization ratio than electrodes without added compounds. After 418 cycles, the residual capacities of the Ni electrode showed nearly the same values in the upper, middle, and lower regions. In the case of the MH electrode, the residual capacity in the upper region appeared lower than that in the other regions. ICP analysis showed that the amounts of dissolved elements in the three regions were almost the same. The faster degradation in the upper region of the MH electrode was caused by the TiO2 oxide film formed at the electrode surface because of overcharging. The thickness of the oxide film increases with cycling, so it forms a layer that prevents hydrogen from penetrating into the MH electrode.
Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gibson, Garth Alan
1990-01-01
During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
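To make the parity remark concrete, the sketch below shows single-erasure recovery with XOR parity, the simplest of the redundancy codes the thesis considers; the disk count and block contents are made up for the example.

    from functools import reduce

    def parity_block(data_blocks):
        """XOR all data blocks together to form the parity block."""
        return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_blocks))

    def reconstruct(surviving_blocks):
        """Recover one failed disk's block: XOR of every surviving block (data + parity).

        This works because the failure is self-identifying -- the array knows which
        disk is gone, so no extra redundancy is needed to locate the erasure.
        """
        return parity_block(surviving_blocks)

    # Hypothetical 4-disk stripe (3 data + 1 parity), 8-byte blocks
    d0, d1, d2 = b"01234567", b"abcdefgh", b"ABCDEFGH"
    p = parity_block([d0, d1, d2])
    assert reconstruct([d0, d2, p]) == d1     # disk 1 failed; its block is recovered
    print("recovered:", reconstruct([d0, d2, p]))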
Capacity of PPM on Gaussian and Webb Channels
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.; Hamkins, J.
2000-01-01
This paper computes and compares the capacities of M-ary PPM on various idealized channels that approximate the optical communication channel: (1) the standard additive white Gaussian noise (AWGN) channel; (2) a more general AWGN channel (AWGN2) allowing different variances in signal and noise slots; (3) a Webb-distributed channel (Webb2); (4) a Webb+Gaussian channel, modeling Gaussian thermal noise added to Webb-distributed channel outputs.
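For the standard AWGN case (channel 1), the PPM capacity can be estimated by Monte Carlo: by symmetry one conditions on the pulse occupying slot 0 and averages the log of the summed likelihood ratios across slots. The sketch below assumes unit noise variance and parameterizes the pulse amplitude through an assumed per-slot SNR; it illustrates the computation rather than reproducing the paper's exact channel models.

```python
import numpy as np

def ppm_awgn_capacity(M, snr, n_trials=200_000, seed=0):
    """Monte Carlo estimate of M-ary PPM capacity (bits/symbol) on an AWGN channel.

    Assumes unit noise variance per slot and pulse amplitude A, with
    snr = A**2 (a per-slot SNR convention adopted here for illustration).
    """
    rng = np.random.default_rng(seed)
    A = np.sqrt(snr)
    y = rng.standard_normal((n_trials, M))
    y[:, 0] += A                          # condition on the pulse being in slot 0
    llr = A * (y - y[:, [0]])             # log-likelihood of slot k relative to slot 0
    # I(X;Y) = log2(M) - E[ log2( sum_k exp(llr_k) ) ]
    return np.log2(M) - np.mean(np.log2(np.sum(np.exp(llr), axis=1)))

# Example: 16-PPM capacity at a few per-slot SNRs
for snr_db in (0, 5, 10):
    c = ppm_awgn_capacity(16, 10 ** (snr_db / 10))
    print(f"16-PPM at {snr_db} dB: ~{c:.2f} bits/symbol")
```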
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.
2016-04-15
Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
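The feasibility subproblem at the heart of this design (whether a feasible flow exists for one supply/demand scenario under candidate arc capacities) reduces to a max-flow check with a super-source and super-sink. The sketch below uses networkx for the max-flow call; the graph, capacities, and scenario are hypothetical, and this is not the authors' separation routine or greedy algorithm.

```python
import networkx as nx

def scenario_feasible(arcs, capacity, balance):
    """Return True if a feasible flow exists for one supply/demand scenario.

    arcs:     iterable of (u, v) directed arcs
    capacity: dict mapping (u, v) -> installed capacity
    balance:  dict mapping node -> net supply (+) or demand (-); must sum to 0
    """
    G = nx.DiGraph()
    for (u, v) in arcs:
        G.add_edge(u, v, capacity=capacity[u, v])
    total_supply = 0.0
    for node, b in balance.items():
        if b > 0:
            G.add_edge("SRC", node, capacity=b)
            total_supply += b
        elif b < 0:
            G.add_edge(node, "SNK", capacity=-b)
    flow_value, _ = nx.maximum_flow(G, "SRC", "SNK")
    return flow_value >= total_supply - 1e-9   # all supply routable to demand

# Hypothetical 3-node scenario
arcs = [("a", "b"), ("b", "c"), ("a", "c")]
cap = {("a", "b"): 5, ("b", "c"): 5, ("a", "c"): 2}
print(scenario_feasible(arcs, cap, {"a": 6, "b": 0, "c": -6}))  # True
```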
Model documentation renewable fuels module of the National Energy Modeling System
NASA Astrophysics Data System (ADS)
1995-06-01
This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1995 Annual Energy Outlook (AEO95) forecasts. The report catalogs and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. The RFM also reads in hydroelectric facility capacities and capacity factors from a data file for use by the NEMS Electricity Market Module (EMM). The purpose of the RFM is to define the technological, cost, and resource size characteristics of renewable energy technologies. These characteristics are used to compute a levelized cost to be competed against other similarly derived costs from other energy sources and technologies. The competition of these energy sources over the NEMS time horizon determines the market penetration of these renewable energy technologies. The characteristics include available energy capacity, capital costs, fixed operating costs, variable operating costs, capacity factor, heat rate, construction lead time, and fuel product price.
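The levelized-cost competition described above can be illustrated with a generic capital-recovery-factor form of the levelized cost of energy. This is a textbook-style sketch, not the NEMS/RFM implementation, and the example plant parameters are hypothetical.

```python
def levelized_cost(capital_cost, fixed_om, variable_om, fuel_cost,
                   capacity_factor, heat_rate, discount_rate, lifetime_yr):
    """Simplified levelized cost in $/MWh for 1 kW of installed capacity.

    capital_cost    $/kW installed
    fixed_om        $/kW-yr
    variable_om     $/MWh
    fuel_cost       $/MMBtu (zero for most renewables)
    capacity_factor fraction of the year at rated output
    heat_rate       Btu/kWh (zero if no fuel)
    """
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))   # capital recovery factor
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0
    annual_fixed = capital_cost * crf + fixed_om       # $/kW-yr
    fuel_per_mwh = fuel_cost * heat_rate / 1000.0      # $/MWh
    return annual_fixed / mwh_per_kw_yr + variable_om + fuel_per_mwh

# Hypothetical wind plant: $1,200/kW, $30/kW-yr fixed O&M, 35% capacity factor
print(round(levelized_cost(1200, 30, 0, 0, 0.35, 0, 0.07, 20), 1), "$/MWh")
```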
Vanlerberghe, G C; McIntosh, L
1992-09-01
Suspension cells of NT1 tobacco (Nicotiana tabacum L. cv bright yellow) have been used to study the effect of growth temperature on the CN-resistant, salicylhydroxamic acid-sensitive alternative pathway of respiration. Mitochondria isolated from cells maintained at 30 degrees C had a low capacity to oxidize succinate via the alternative pathway, whereas mitochondria isolated from cells 24 h after transfer to 18 degrees C displayed, on average, a 5-fold increase in this capacity (from 7 to 32 nanoatoms oxygen per milligram protein per minute). This represented an increase in alternative pathway capacity from 18 to 45% of the total capacity of electron transport. This increased capacity was lost upon transfer of cells back to 30 degrees C. A monoclonal antibody to the terminal oxidase of the alternative pathway (the alternative oxidase) from Sauromatum guttatum (T.E. Elthon, R.L. Nickels, L. McIntosh [1989] Plant Physiology 89: 1311-1317) recognized a 35-kilodalton mitochondrial protein in tobacco. There was an excellent correlation between the capacity of the alternative path in isolated tobacco mitochondria and the levels of this 35-kilodalton alternative oxidase protein. Cycloheximide could inhibit both the increased level of the 35-kilodalton alternative oxidase protein and the increased alternative pathway capacity normally seen upon transfer to 18 degrees C. We conclude that transfer of tobacco cells to the lower temperature increases the capacity of the alternative pathway due, at least in part, to de novo synthesis of the 35-kilodalton alternative oxidase protein.
Energy-constrained two-way assisted private and quantum capacities of quantum channels
NASA Astrophysics Data System (ADS)
Davis, Noah; Shirokov, Maksim E.; Wilde, Mark M.
2018-06-01
With the rapid growth of quantum technologies, knowing the fundamental characteristics of quantum systems and protocols is essential for their effective implementation. A particular communication setting that has received increased focus is related to quantum key distribution and distributed quantum computation. In this setting, a quantum channel connects a sender to a receiver, and their goal is to distill either a secret key or entanglement, along with the help of arbitrary local operations and classical communication (LOCC). In this work, we establish a general theory of energy-constrained, LOCC-assisted private and quantum capacities of quantum channels, which are the maximum rates at which an LOCC-assisted quantum channel can reliably establish a secret key or entanglement, respectively, subject to an energy constraint on the channel input states. We prove that the energy-constrained squashed entanglement of a channel is an upper bound on these capacities. We also explicitly prove that a thermal state maximizes a relaxation of the squashed entanglement of all phase-insensitive, single-mode input bosonic Gaussian channels, generalizing results from prior work. After doing so, we prove that a variation of the method introduced by Goodenough et al. [New J. Phys. 18, 063005 (2016), 10.1088/1367-2630/18/6/063005] leads to improved upper bounds on the energy-constrained secret-key-agreement capacity of a bosonic thermal channel. We then consider a multipartite setting and prove that two known multipartite generalizations of the squashed entanglement are in fact equal. We finally show that the energy-constrained, multipartite squashed entanglement plays a role in bounding the energy-constrained LOCC-assisted private and quantum capacity regions of quantum broadcast channels.
Supra-plasma expanders: the future of treating blood loss and anemia without red cell transfusions?
Tsai, Amy G; Vázquez, Beatriz Y Salazar; Hofmann, Axel; Acharya, Seetharama A; Intaglietta, Marcos
2015-01-01
Oxygen delivery capacity during profoundly anemic conditions depends on blood's oxygen-carrying capacity and cardiac output. Oxygen-carrying blood substitutes and blood transfusion augment oxygen-carrying capacity, but both have given rise to safety concerns, and their efficacy remains unresolved. Anemia decreases oxygen-carrying capacity and blood viscosity. Present studies show that correcting the decrease of blood viscosity by increasing plasma viscosity with newly developed plasma expanders significantly improves tissue perfusion. These new plasma expanders promote tissue perfusion, increasing oxygen delivery capacity without increasing blood oxygen-carrying capacity, thus treating the effects of anemia while avoiding the transfusion of blood.
Adsorption and separation of n/iso-pentane on zeolites: A GCMC study.
Fu, Hui; Qin, Hansong; Wang, Yajun; Liu, Yibin; Yang, Chaohe; Shan, Honghong
2018-03-01
Separation of branched chain hydrocarbons and straight chain hydrocarbons is very important in the isomerization process. Grand canonical ensemble Monte Carlo simulations were used to investigate the adsorption and separation of iso-pentane and n-pentane in four types of zeolites: MWW, BOG, MFI, and LTA. Computations for the pure components indicate that adsorption capacity is governed by physical properties of the zeolite, such as pore size and structure, and by the isosteric heat of adsorption. In BOG, MFI and LTA, the adsorbed amount of n-pentane is higher than that of iso-pentane, whereas the order is reversed in MWW. For a given zeolite, a stronger adsorption heat corresponds to a higher loading. In the binary mixture simulations, the separation capacity for n- and iso-pentane increases with elevated pressure and with increasing iso-pentane composition. The adsorption mechanism and competition process have been examined. Preferential adsorption dominates at low pressure; however, the size effect becomes important with increasing pressure, and the relatively smaller n-pentane gradually wins the competition in binary adsorption. Among these zeolites, MFI has the best separation performance due to its high shape selectivity. This work helps to better understand the adsorption and separation performance of n- and iso-pentane in different zeolites and to explain the relationship between zeolite structure and adsorption performance.
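Grand canonical (μVT) Monte Carlo of the kind used here hinges on particle insertion and deletion moves accepted with the standard criteria (as in Frenkel and Smit). The sketch below shows only those acceptance probabilities; the energy model, chemical potential, and thermal wavelength are hypothetical placeholders, and this is not the simulation code used in the study.

```python
import math

def accept_insertion(mu, beta, volume, n_particles, delta_u, lambda3):
    """Acceptance probability for inserting one particle in the mu-V-T ensemble.

    delta_u = U(N+1) - U(N); lambda3 is the cubed thermal de Broglie wavelength.
    """
    return min(1.0, volume / (lambda3 * (n_particles + 1))
               * math.exp(beta * mu - beta * delta_u))

def accept_deletion(mu, beta, volume, n_particles, delta_u, lambda3):
    """Acceptance probability for deleting one particle; delta_u = U(N-1) - U(N)."""
    return min(1.0, lambda3 * n_particles / volume
               * math.exp(-beta * mu - beta * delta_u))

# Hypothetical numbers just to exercise the formulas
print(accept_insertion(mu=-10.0, beta=0.4, volume=8000.0,
                       n_particles=32, delta_u=-6.0, lambda3=1.0))
print(accept_deletion(mu=-10.0, beta=0.4, volume=8000.0,
                      n_particles=32, delta_u=5.0, lambda3=1.0))
```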
Noise facilitation in associative memories of exponential capacity.
Karbasi, Amin; Salavati, Amir Hesam; Shokrollahi, Amin; Varshney, Lav R
2014-11-01
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns that satisfy certain subspace constraints. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as hippocampus and olfactory cortex. Here we consider associative memories with boundedly noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprising, we show that internal noise improves the performance of the recall phase while the pattern retrieval capacity remains intact: the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
ERIC Educational Resources Information Center
Flanary, Dick
2009-01-01
The NASSP "Breaking Ranks" framework lays out multiple strategies for building capacity within a school, beginning with the leaders. To change an organization and increase its capacity to produce greater results, the people within the organization must change and increase their capacity. School change begins with changes in the principal, the…
Hubble Systems Optimize Hospital Schedules
NASA Technical Reports Server (NTRS)
2009-01-01
Don Rosenthal, a former Ames Research Center computer scientist who helped design the Hubble Space Telescope's scheduling software, co-founded Allocade Inc. of Menlo Park, California, in 2004. Allocade's OnCue software helps hospitals reclaim unused capacity and optimize constantly changing schedules for imaging procedures. After starting to use the software, one medical center soon reported noticeable improvements in efficiency, including a 12 percent increase in procedure volume, 35 percent reduction in staff overtime, and significant reductions in backlog and technician phone time. Allocade now offers versions for outpatient and inpatient magnetic resonance imaging (MRI), ultrasound, interventional radiology, nuclear medicine, Positron Emission Tomography (PET), radiography, radiography-fluoroscopy, and mammography.
Cytobank: providing an analytics platform for community cytometry data analysis and collaboration.
Chen, Tiffany J; Kotecha, Nikesh
2014-01-01
Cytometry is used extensively in clinical and laboratory settings to diagnose and track cell subsets in blood and tissue. High-throughput, single-cell approaches leveraging cytometry are developed and applied in the computational and systems biology communities by researchers, who seek to improve the diagnosis of human diseases, map the structures of cell signaling networks, and identify new cell types. Data analysis and management present a bottleneck in the flow of knowledge from bench to clinic. Multi-parameter flow and mass cytometry enable identification of signaling profiles of patient cell samples. Currently, this process is manual, requiring hours of work to summarize multi-dimensional data and translate these data for input into other analysis programs. In addition, the increase in the number and size of collaborative cytometry studies as well as the computational complexity of analytical tools require the ability to assemble sufficient and appropriately configured computing capacity on demand. There is a critical need for platforms that can be used by both clinical and basic researchers who routinely rely on cytometry. Recent advances provide a unique opportunity to facilitate collaboration and analysis and management of cytometry data. Specifically, advances in cloud computing and virtualization are enabling efficient use of large computing resources for analysis and backup. An example is Cytobank, a platform that allows researchers to annotate, analyze, and share results along with the underlying single-cell data.
Climate Modeling Computing Needs Assessment
NASA Astrophysics Data System (ADS)
Petraska, K. E.; McCabe, J. D.
2011-12-01
This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.
Computation Directorate Annual Report 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L; McGraw, J R; Ashby, S F
Big computers are icons: symbols of the culture, and of the larger computing infrastructure that exists at Lawrence Livermore. Through the collective effort of Laboratory personnel, they enable scientific discovery and engineering development on an unprecedented scale. For more than three decades, the Computation Directorate has supplied the big computers that enable the science necessary for Laboratory missions and programs. Livermore supercomputing is uniquely mission driven. The high-fidelity weapon simulation capabilities essential to the Stockpile Stewardship Program compel major advances in weapons codes and science, compute power, and computational infrastructure. Computation's activities align with this vital mission of the Department of Energy. Increasingly, non-weapons Laboratory programs also rely on computer simulation. World-class achievements have been accomplished by LLNL specialists working in multi-disciplinary research and development teams. In these teams, Computation personnel employ a wide array of skills, from desktop support expertise, to complex applications development, to advanced research. Computation's skilled professionals make the Directorate the success that it has become. These individuals know the importance of the work they do and the many ways it contributes to Laboratory missions. They make appropriate and timely decisions that move the entire organization forward. They make Computation a leader in helping LLNL achieve its programmatic milestones. I dedicate this inaugural Annual Report to the people of Computation in recognition of their continuing contributions. I am proud that we perform our work securely and safely. Despite increased cyber attacks on our computing infrastructure from the Internet, advanced cyber security practices ensure that our computing environment remains secure. Through Integrated Safety Management (ISM) and diligent oversight, we address safety issues promptly and aggressively. The safety of our employees, whether at work or at home, is a paramount concern. Even as the Directorate meets today's supercomputing requirements, we are preparing for the future. We are investigating open-source cluster technology, the basis of our highly successful Multiprogrammatic Capability Resource (MCR). Several breakthrough discoveries have resulted from MCR calculations coupled with theory and experiment, prompting Laboratory scientists to demand ever-greater capacity and capability. This demand is being met by a new 23-TF system, Thunder, with architecture modeled on MCR. In preparation for the "after-next" computer, we are researching technology even farther out on the horizon--cell-based computers. Assuming that the funding and the technology hold, we will acquire the cell-based machine BlueGene/L within the next 12 months.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions might appear as a candidate technology, since although power use is higher compared with lower-power devices, execution time is reduced, so overall energy could be reduced. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings can be achieved for certain higher-end applications. Finally, comprehensive tests have been carried out to validate the platform in terms of performance and power consumption, and to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
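The energy argument in this abstract, that higher instantaneous power can still mean lower energy if execution is fast enough, is just E = P × t. The numbers below are purely hypothetical and only illustrate the comparison the authors make experimentally.

```python
def energy_uj(power_mw, time_ms):
    """Energy in microjoules: (mW) * (ms) = uJ."""
    return power_mw * time_ms

# Hypothetical encryption task on two platforms
mcu  = energy_uj(power_mw=15,  time_ms=800)   # slow but frugal
fpga = energy_uj(power_mw=120, time_ms=40)    # power-hungry but fast
print(f"MCU:  {mcu:6.0f} uJ")                 # 12000 uJ
print(f"FPGA: {fpga:6.0f} uJ")                # 4800 uJ: less energy despite more power
```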
NASA Astrophysics Data System (ADS)
Ali, Saima; Rashid, Muhammad; Hassan, M.; Noor, N. A.; Mahmood, Q.; Laref, A.; Haq, Bakhtiar Ul
2018-05-01
Owing to their large energy storage capacity and higher working voltage, the spinel oxides LiV2O4 and LiCr2O4 have remained under intense research attention for use as electrode materials in lithium-ion batteries. In this study, we explore the half-metallic nature and thermoelectric response of both LiV2O4 and LiCr2O4 spinel oxides using ab-initio density functional theory (DFT) based computations. The ground-state energies of these compounds have been studied at the optimized structural parameters in the ferromagnetic phase. In order to obtain a correct picture of the electronic structure and magnetic properties, the modified Becke-Johnson (mBJ) potential is applied to compute the electronic structures. The half-metallic behavior is confirmed by the spin-polarized electronic band structures and density-of-states plots. The magnetic nature is elucidated by computing the Jahn-Teller energy, the direct and indirect exchange energies, and the crystal-field splitting energies. Our computations indicate strong hybridization that decreases the V/Cr site magnetic moments and increases the magnetic moments at the nonmagnetic atomic sites. We also present the computed parameters significant for expressing the thermoelectric response, namely the electrical conductivity, thermal conductivity, Seebeck coefficient, and power factor. The computed properties are of immense interest owing to the potential spintronics and Li-ion battery applications of the studied spinel materials.
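The thermoelectric quantities reported at the end of the abstract are tied together by the standard power-factor and figure-of-merit relations, PF = S^2 * sigma and ZT = S^2 * sigma * T / kappa. The sketch below simply evaluates these relations for hypothetical values; it does not reproduce the paper's computed coefficients.

```python
def power_factor(seebeck_v_per_k, sigma_s_per_m):
    """Power factor PF = S^2 * sigma, in W/(m K^2)."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temperature_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return power_factor(seebeck_v_per_k, sigma_s_per_m) * temperature_k / kappa_w_per_mk

# Hypothetical values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 2 W/(m K), T = 300 K
print(power_factor(200e-6, 1e5))             # 0.004 W/(m K^2)
print(figure_of_merit(200e-6, 1e5, 2.0, 300))  # ZT = 0.6
```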
Leung, Janice M; Malagoli, Andrea; Santoro, Antonella; Besutti, Giulia; Ligabue, Guido; Scaglioni, Riccardo; Dai, Darlene; Hague, Cameron; Leipsic, Jonathon; Sin, Don D.; Man, SF Paul; Guaraldi, Giovanni
2016-01-01
Background: Chronic obstructive pulmonary disease (COPD) and emphysema are common amongst patients with human immunodeficiency virus (HIV). We sought to determine the clinical factors that are associated with emphysema progression in HIV. Methods: 345 HIV-infected patients enrolled in an outpatient HIV metabolic clinic with ≥2 chest computed tomography scans made up the study cohort. Images were qualitatively scored for emphysema based on percentage involvement of the lung. Emphysema progression was defined as any increase in emphysema score over the study period. Univariate analyses of clinical, respiratory, and laboratory data, as well as multivariable logistic regression models, were performed to determine clinical features significantly associated with emphysema progression. Results: 17.4% of the cohort were emphysema progressors. Emphysema progression was most strongly associated with having a low baseline diffusion capacity of carbon monoxide (DLCO) and having combination centrilobular and paraseptal emphysema distribution. In adjusted models, the odds ratio (OR) for emphysema progression for every 10% increase in DLCO percent predicted was 0.58 (95% confidence interval [CI] 0.41–0.81). The equivalent OR (95% CI) for centrilobular and paraseptal emphysema distribution was 10.60 (2.93–48.98). Together, these variables had an area under the curve (AUC) statistic of 0.85 for predicting emphysema progression. This was an improvement over the performance of spirometry (forced expiratory volume in 1 second to forced vital capacity ratio), which predicted emphysema progression with an AUC of only 0.65. Conclusion: Combined paraseptal and centrilobular emphysema distribution and low DLCO could identify HIV patients who may experience emphysema progression. PMID:27902753
Ma, Shuang-Chen; Yao, Juan-Juan; Gao, Li; Ma, Xiao-Ying; Zhao, Yi
2012-09-01
Experimental studies on desulfurization and denitrification were carried out using activated carbon irradiated by microwave. The influences of the concentrations of nitric oxide (NO) and sulfur dioxide (SO2) and of the coexisting flue gas compositions on the adsorption properties of the activated carbon and on the desulfurization and denitrification efficiencies were investigated. The results show that the adsorption capacity and removal efficiency of NO decrease with increasing SO2 concentration in the flue gas; the adsorption capacity of NO first increases slightly and then drops to 12.79 mg/g, and the desulfurization efficiency decreases with increasing SO2 concentration. The adsorption capacity of SO2 declines with increasing O2 content in the flue gas, but the adsorption capacity of NO increases, and the removal efficiencies of NO and SO2 can exceed 99%. The adsorption capacity of NO declines with increasing moisture in the flue gas, but the adsorption capacity of SO2 increases, and the removal efficiencies of NO and SO2 remain relatively stable. The adsorption capacities of both NO and SO2 decrease with increasing CO2 content; the desulfurization and denitrification efficiencies rise at first, then begin to fall when the CO2 content exceeds 12.4%. The mechanisms of this process are also discussed.
Computational characterization of lightweight multilayer MXene Li-ion battery anodes
NASA Astrophysics Data System (ADS)
Ashton, Michael; Hennig, Richard G.; Sinnott, Susan B.
2016-01-01
MXenes, a class of two-dimensional transition metal carbides and nitrides, have shown promise experimentally and computationally for use in energy storage applications. In particular, the most lightweight members of the monolayer MXene family (M = Sc, Ti, V, or Cr) are predicted to have gravimetric capacities above 400 mAh/g, higher than graphite. Additionally, intercalation of ions into multilayer MXenes can be accomplished at low voltages, and low diffusion barriers exist for Li diffusing across monolayer MXenes. However, large discrepancies have been observed between the calculated and experimental reversible capacities of MXenes. Here, dispersion-corrected density functional theory calculations are employed to predict reversible capacities and other battery-related properties for six of the most promising members of the MXene family (O-functionalized Ti- and V-based carbide MXenes) as bilayer structures. The calculated reversible capacities of the V2CO2 and Ti2CO2 bilayers agree more closely with experiment than do previous calculations for monolayers. Additionally, the minimum energy paths and corresponding energy barriers along the in-plane [1000] and [0100] directions for Li travelling between neighboring MXene layers are determined. V4C3O2 exhibits the lowest diffusion barrier of the compositions considered, at 0.42 eV, but its reversible capacity (148 mAh/g) is dragged down by its heavy formula unit. Conversely, the V2CO2 MXene shows good reversible capacity (276 mAh/g), but a high diffusion barrier (0.82 eV). We show that the diffusion barriers of all bilayer structures are significantly higher than those calculated for the corresponding monolayers, advocating the use of dispersed monolayer MXenes instead of multilayers in high performance anodes.
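The gravimetric capacities discussed here follow from the standard theoretical relation C = x·F/(3.6·M), with x the number of Li (one electron each) per formula unit, F the Faraday constant, and M the formula mass. In the sketch below, the assumed Li content per Ti2CO2 formula unit is an illustrative assumption, not the value derived in the paper.

```python
FARADAY_MAH_PER_MOL = 96485.33 / 3.6   # ~26801.5 mAh per mole of electrons

# Approximate atomic masses (g/mol)
MASS = {"Ti": 47.867, "V": 50.942, "C": 12.011, "O": 15.999}

def gravimetric_capacity(formula, x_li):
    """Theoretical capacity in mAh/g for x_li Li (one electron each) per formula unit."""
    molar_mass = sum(MASS[el] * n for el, n in formula.items())
    return x_li * FARADAY_MAH_PER_MOL / molar_mass

# Example: Ti2CO2 host with an assumed one or two Li per formula unit
ti2co2 = {"Ti": 2, "C": 1, "O": 2}
print(round(gravimetric_capacity(ti2co2, 1)))  # ~192 mAh/g
print(round(gravimetric_capacity(ti2co2, 2)))  # ~384 mAh/g
```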
Computational chemistry and cheminformatics: an essay on the future.
Glen, Robert Charles
2012-01-01
Computers have changed the way we do science. Surrounded by a sea of data and with phenomenal computing capacity, the methodology and approach to scientific problems is evolving into a partnership between experiment, theory and data analysis. Given the pace of change of the last twenty-five years, it seems folly to speculate on the future, but along with unpredictable leaps of progress there will be a continuous evolution of capability, which points to opportunities and improvements that will certainly appear as our discipline matures.
Towards optimizing server performance in an educational MMORPG for teaching computer programming
NASA Astrophysics Data System (ADS)
Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios
2013-10-01
Web-based games have become significantly popular during the last few years. This is due to the gradual increase in internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Games (MMORPG) field. In parallel, similar technologies called educational games have started to be developed for use in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the game's overall resource load is essential, so that server administrators can configure them and ensure the proper operation of educational games during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be anticipated and provisioned without overloading the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyonnais, Marc; Smith, Matt; Mace, Kate P.
SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere down to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computational resources and allows for variability in the power contained in the frozen-flow and stochastic components of the atmosphere. Users have the flexibility either to generate atmosphere datacubes in advance of runs, where memory constraints allow, to save computation time, or to compute the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
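A minimal sketch of an autoregressive phase-evolution step of the general kind described: each Fourier mode of the screen is given Kolmogorov-weighted power and advanced with an AR(1) recursion whose coefficient trades deterministic (frozen-flow-like) evolution against a stochastic innovation. Parameter names, the normalization, and the purely real AR coefficient are illustrative assumptions rather than the authors' implementation (in practice a complex per-mode coefficient can encode wind translation).

```python
import numpy as np

def kolmogorov_amplitude(n, r0_pix):
    """Square root of a (non-normalized) Kolmogorov spatial PSD on an n x n grid."""
    fx = np.fft.fftfreq(n)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                       # suppress the undefined piston mode
    return f ** (-11.0 / 6.0) / r0_pix ** (5.0 / 6.0)

def ar1_phase_screens(n=128, r0_pix=20.0, alpha=0.995, steps=10, seed=0):
    """Yield phase screens whose Fourier modes follow an AR(1) recursion:

        a[t+1] = alpha * a[t] + sqrt(1 - |alpha|**2) * w[t],  w ~ complex white noise

    alpha near 1 gives nearly deterministic evolution; smaller alpha adds 'boiling'.
    """
    rng = np.random.default_rng(seed)
    amp = kolmogorov_amplitude(n, r0_pix)

    def white():
        return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    modes = amp * white()
    for _ in range(steps):
        modes = alpha * modes + np.sqrt(1.0 - abs(alpha) ** 2) * amp * white()
        yield np.real(np.fft.ifft2(modes)) * n   # screen in arbitrary phase units

# The AR(1) update preserves per-mode variance, so the rms phase stays roughly stable
screens = list(ar1_phase_screens(steps=5))
print([round(float(np.std(s)), 3) for s in screens])
```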
DOT National Transportation Integrated Search
2001-11-01
The Department of Transportation's new Intelligent Transportation System (ITS) program mandates that computing, communications, electronics, and other advanced technologies be applied to improving the capacity and safety of the nation's transportatio...
Individual differences in working memory capacity and workload capacity.
Yu, Ju-Chi; Chang, Ting-Yun; Yang, Cheng-Ta
2014-01-01
We investigated the relationship between working memory capacity (WMC) and workload capacity (WLC). Each participant performed an operation span (OSPAN) task to measure his/her WMC and three redundant-target detection tasks to measure his/her WLC. WLC was computed non-parametrically (Experiments 1 and 2) and parametrically (Experiment 2). Both levels of analyses showed that participants high in WMC had larger WLC than those low in WMC only when redundant information came from visual and auditory modalities, suggesting that high-WMC participants had superior processing capacity in dealing with redundant visual and auditory information. This difference was eliminated when multiple processes required processing for only a single working memory subsystem in a color-shape detection task and a double-dot detection task. These results highlighted the role of executive control in integrating and binding information from the two working memory subsystems for perceptual decision making.
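Workload capacity in this literature is usually quantified with Townsend's capacity coefficient, C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -ln S(t) is the integrated hazard of the response-time distribution at time t. The sketch below estimates it from raw response times via the empirical survivor function; the study's exact estimator (and its parametric counterpart in Experiment 2) may differ.

```python
import numpy as np

def integrated_hazard(rts, t):
    """H(t) = -ln S(t), with S(t) taken from the empirical survivor function."""
    rts = np.asarray(rts, dtype=float)
    surv = np.mean(rts > t)
    return -np.log(max(surv, 1e-12))      # guard against log(0)

def capacity_coefficient(rt_redundant, rt_single_a, rt_single_b, t):
    """C(t) > 1 indicates supercapacity, C(t) < 1 limited capacity."""
    return integrated_hazard(rt_redundant, t) / (
        integrated_hazard(rt_single_a, t) + integrated_hazard(rt_single_b, t))

# Hypothetical response-time samples (seconds)
rng = np.random.default_rng(1)
rt_ab = rng.gamma(4, 0.08, 500)           # redundant-target trials
rt_a = rng.gamma(4, 0.10, 500)            # single-target (e.g., visual) trials
rt_b = rng.gamma(4, 0.10, 500)            # single-target (e.g., auditory) trials
print(round(capacity_coefficient(rt_ab, rt_a, rt_b, t=0.35), 2))
```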
Koltun, G.F.
2014-01-01
This report presents the results of a study to assess potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data (where available) and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for Charles Mill, Clendening, and Piedmont Lakes to 74 calendar years for Pleasant Hill, Senecaville, and Wills Creek Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate typically increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. 
Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
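Under the reading above, the daily bookkeeping reduces to: withdraw, up to the pumping capacity, whatever outflow exceeds the target minimum flow-by, and pass the rest downstream. The sketch below encodes that rule; the units and example outflows are assumptions for illustration.

```python
def daily_withdrawal_and_flowby(outflow_mgd, target_flowby_mgd, pump_capacity_mgd):
    """Split one day's lake outflow into a potential withdrawal and the flow-by.

    All quantities in million gallons per day (Mgal/d).
    """
    surplus = max(0.0, outflow_mgd - target_flowby_mgd)
    withdrawal = min(pump_capacity_mgd, surplus)
    flowby = outflow_mgd - withdrawal
    return withdrawal, flowby

# Hypothetical week of outflows, 2 Mgal/d pump, 3 Mgal/d flow-by target
outflows = [1.5, 2.8, 3.2, 6.0, 10.4, 4.1, 2.2]
for q in outflows:
    w, f = daily_withdrawal_and_flowby(q, 3.0, 2.0)
    print(f"outflow {q:5.1f}  withdrawal {w:4.1f}  flow-by {f:5.1f}")
```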
Grammatical Analysis as a Distributed Neurobiological Function
Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D
2015-01-01
Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880
USDA-ARS?s Scientific Manuscript database
The capacity of US agriculture to increase the output of specific foods to accommodate increased demand is not well documented. This research uses geospatial modeling to examine the capacity of the US agricultural land base to increase the per capita availability of an example set of nutrient-dense ...
Zhang, Lun; Zhang, Meng; Yang, Wenchen; Dong, Decun
2015-01-01
This paper presents the modelling and analysis of the capacity expansion of an urban road traffic network (ICURTN). The bilevel programming model is first employed to model the ICURTN, in which the utility of the entire network is maximized at the upper level while travelers' route choices are optimized at the lower level. Then, an improved hybrid genetic algorithm integrated with the golden ratio (HGAGR) is developed to enhance the local search of simple genetic algorithms, and the proposed capacity expansion model is solved by the combination of the HGAGR and the Frank-Wolfe algorithm. Taking a traditional one-way network and a bidirectional network as the study cases, three numerical calculations are conducted to validate the presented model and algorithm, and the primary factors influencing the extended capacity model are analyzed. The calculation results indicate that capacity expansion of the road network is an effective measure to enlarge the capacity of an urban road network, especially under a limited construction budget; the average computation time of the HGAGR is 122 seconds, which meets the real-time demand for evaluation of road network capacity. PMID:25802512
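The "golden ratio" ingredient of the HGAGR is commonly realized as a golden-section line search, for example for the step size inside a Frank-Wolfe traffic-assignment iteration. The sketch below shows a generic golden-section minimizer on a unimodal objective; it is not the authors' HGAGR, and the toy objective is hypothetical.

```python
import math

def golden_section_minimize(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                       # minimum lies in [a, old d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                       # minimum lies in [old c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: optimal step size alpha in [0, 1] for a toy quadratic line-search objective
step = golden_section_minimize(lambda alpha: (alpha - 0.3) ** 2 + 1.0, 0.0, 1.0)
print(round(step, 4))  # ~0.3
```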
Changing how and what children learn in school with computer-based technologies.
Roschelle, J M; Pea, R D; Hoadley, C M; Gordin, D N; Means, B M
2000-01-01
Schools today face ever-increasing demands in their attempts to ensure that students are well equipped to enter the workforce and navigate a complex world. Research indicates that computer technology can help support learning, and that it is especially useful in developing the higher-order skills of critical thinking, analysis, and scientific inquiry. But the mere presence of computers in the classroom does not ensure their effective use. Some computer applications have been shown to be more successful than others, and many factors influence how well even the most promising applications are implemented. This article explores the various ways computer technology can be used to improve how and what children learn in the classroom. Several examples of computer-based applications are highlighted to illustrate ways technology can enhance how children learn by supporting four fundamental characteristics of learning: (1) active engagement, (2) participation in groups, (3) frequent interaction and feedback, and (4) connections to real-world contexts. Additional examples illustrate ways technology can expand what children learn by helping them to understand core concepts in subjects like math, science, and literacy. Research indicates, however, that the use of technology as an effective learning tool is more likely to take place when embedded in a broader education reform movement that includes improvements in teacher training, curriculum, student assessment, and a school's capacity for change. To help inform decisions about the future role of computers in the classroom, the authors conclude that further research is needed to identify the uses that most effectively support learning and the conditions required for successful implementation.
Effects of strategy on visual working memory capacity
Bengson, Jesse J.; Luck, Steven J.
2015-01-01
Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity. PMID:26139356
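Estimated capacity in change-detection tasks of this kind is commonly computed with Cowan's K, K = N × (hit rate - false-alarm rate) for set size N. The sketch below implements that estimator; the study may use a variant, so treat it as illustrative.

```python
def cowans_k(set_size, hits, misses, false_alarms, correct_rejections):
    """Cowan's K estimate of visual working memory capacity for one set size."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical counts from a set-size-6 change-detection block
print(round(cowans_k(set_size=6, hits=80, misses=20,
                     false_alarms=15, correct_rejections=85), 2))  # 3.9
```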