DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, T.
2014-08-29
Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.
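A minimal conceptual sketch of the idea behind such a tool (this is not Cram's actual interface, which the abstract does not describe): with mpi4py, one large MPI allocation can be split into many independent small sub-jobs via MPI_Comm_split. The rank count and job size below are hypothetical.

```python
# Conceptual sketch only (not Cram's API): split one large MPI allocation
# into many small, independent "jobs" with MPI_Comm_split.
# Assumes mpi4py is installed and the script is launched with many ranks,
# e.g. mpirun -n 1024 python split_jobs.py
from mpi4py import MPI

world = MPI.COMM_WORLD
ranks_per_job = 8                                   # each small job uses 8 processes
job_id = world.Get_rank() // ranks_per_job          # which small job this rank belongs to

# Every rank joins the sub-communicator of its own small job.
job_comm = world.Split(color=job_id, key=world.Get_rank())

# From here on, each small job behaves as if it had its own MPI_COMM_WORLD.
print(f"world rank {world.Get_rank()} -> job {job_id}, "
      f"local rank {job_comm.Get_rank()} of {job_comm.Get_size()}")
```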
Maximizing User Satisfaction With Office Practice Data Processing Systems
O'Flaherty, Thomas; Jussim, Judith
1980-01-01
Significant numbers of physicians are using data processing services and a large number of firms are offering an increasing variety of services. This paper quantifies user dissatisfaction with office practice data processing systems and analyzes factors affecting dissatisfaction in large group practices. Based on this analysis, a proposal is made for a more structured approach to obtaining data processing services in order to lower the risks and increase satisfaction with data processing.
1981-12-01
…processors has led to the possibility of implementing a large number of image processing functions in near real time, a result which is essential to establishing … for example, … rapid image handling for near real-time interaction by a user at a display. For example, for a large resolution image, say …
[Dual process in large number estimation under uncertainty].
Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento
2016-08-01
According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants' verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.
Small and Large Number Processing in Infants and Toddlers with Williams Syndrome
ERIC Educational Resources Information Center
Van Herwegen, Jo; Ansari, Daniel; Xu, Fei; Karmiloff-Smith, Annette
2008-01-01
Previous studies have suggested that typically developing 6-month-old infants are able to discriminate between small and large numerosities. However, discrimination between small numerosities in young infants is only possible when variables continuous with number (e.g. area or circumference) are confounded. In contrast, large number discrimination…
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
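To make the Markov-modulated construction concrete, here is a hedged sketch of a joint generator over (rate class, nucleotide) states: the character evolves with a rate-scaled substitution generator inside each class, while classes switch independently of the character. The parameter values and the exact parameterization are illustrative, not taken from Galtier and Jean-Marie.

```python
import numpy as np
from scipy.linalg import expm

# Toy Markov-modulated Markov chain generator. Joint states are
# (rate class i, nucleotide x); within class i the character evolves with
# generator r[i] * Q, and classes switch with generator S.
Q = np.array([[-3., 1., 1., 1.],      # Jukes-Cantor-like nucleotide generator
              [1., -3., 1., 1.],
              [1., 1., -3., 1.],
              [1., 1., 1., -3.]]) / 3.0
S = np.array([[-0.1, 0.1],            # two-class rate-switching generator
              [0.1, -0.1]])
r = np.array([0.2, 1.8])              # rate multipliers of the two classes

n, m = Q.shape[0], S.shape[0]
# (m*n) x (m*n) joint generator: class switching + rate-scaled substitution
G = np.kron(S, np.eye(n)) + np.kron(np.diag(r), Q)

P = expm(G * 0.5)                     # transition probabilities over branch length 0.5
assert np.allclose(P.sum(axis=1), 1.0)  # rows of a proper transition matrix sum to 1
```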
Dissociations and interactions between time, numerosity and space processing
Cappelletti, Marinella; Freeman, Elliot D.; Cipolotti, Lisa
2009-01-01
This study investigated time, numerosity and space processing in a patient (CB) with a right hemisphere lesion. We tested whether these magnitude dimensions share a common magnitude system or whether they are processed by dimension-specific magnitude systems. Five experimental tasks were used: Tasks 1–3 assessed time and numerosity independently and time and numerosity jointly. Tasks 4 and 5 investigated space processing independently and space and numbers jointly. Patient CB was impaired at estimating time and at discriminating between temporal intervals, his errors being underestimations. In contrast, his ability to process numbers and space was normal. A unidirectional interaction between numbers and time was found in both the patient and the control subjects. Strikingly, small numbers were perceived as lasting shorter and large numbers as lasting longer. In contrast, number processing was not affected by time, i.e. short durations did not result in perceiving fewer numbers and long durations in perceiving more numbers. Numbers and space also interacted, with small numbers answered faster when presented on the left side of space, and the reverse for large numbers. Our results demonstrate that time processing can be selectively impaired. This suggests that mechanisms specific for time processing may be partially independent from those involved in processing numbers and space. However, the interaction between numbers and time and between numbers and space also suggests that, although independent, there may be some overlap between time, numbers and space. These data suggest a partly shared mechanism between time, numbers and space which may be involved in magnitude processing or may be recruited to perform cognitive operations on magnitude dimensions. PMID:19501604
NASA Astrophysics Data System (ADS)
Gilra, D. P.; Pwa, T. H.; Arnal, E. M.; de Vries, J.
1982-06-01
In order to process and analyze high resolution IUE data on a large number of interstellar lines in a large number of images for a large number of stars, computer programs were developed for 115 lines in the short wavelength range and 40 in the long wavelength range. Programs include extraction, processing, plotting, averaging, and profile fitting. Wavelength calibration in high resolution spectra, fixed pattern noise, instrument profile and resolution, and the background problem in the region where orders are crowding are discussed. All the expected lines are detected in at least one spectrum.
Visual analysis of inter-process communication for large-scale parallel computing.
Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu
2009-01-01
In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
Lepton number violation in theories with a large number of standard model copies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-03-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
Incremental terrain processing for large digital elevation models
NASA Astrophysics Data System (ADS)
Ye, Z.
2012-12-01
Incremental terrain processing for large digital elevation models Zichuan Ye, Dean Djokic, Lori Armstrong Esri, 380 New York Street, Redlands, CA 92373, USA (E-mail: zye@esri.com, ddjokic@esri.com, larmstrong@esri.com) Efficient analyses of large digital elevation models (DEM) require generation of additional DEM artifacts such as flow direction, flow accumulation and other DEM derivatives. When the DEMs to analyze have a large number of grid cells (usually > 1,000,000,000) the generation of these DEM derivatives is either impractical (it takes too long) or impossible (software is incapable of processing such a large number of cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach where the overall DEM is partitioned into smaller processing units that can be efficiently processed. The processed DEM derivatives for each partition can then be either mosaicked back into a single large entity or managed on partition level. For dendritic terrain morphologies, the way in which partitions are to be derived and the order in which they are to be processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established upfront due to the size issues. This paper describes a procedure that solves this problem: (1) Resample the original large DEM grid so that the total number of cells is reduced to a level for which the drainage pattern can be established. (2) Run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system. (3) Define the processing units and their processing order based on the river and catchment system created in step (2). (4) Based on the processing order, apply the analysis (i.e., the flow accumulation operation) to each of the processing units at the full-resolution DEM. (5) As each processing unit is processed based on the processing order defined in (3), compare the resulting drainage pattern with the drainage pattern established at the coarser scale and adjust the drainage boundaries and rivers if necessary.
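To make the per-partition derivative computation of step (4) concrete, here is a toy D8 flow-direction and flow-accumulation sketch on a tiny NumPy grid. It is a generic illustration under simple assumptions (no pit filling, nodata handling, or partition stitching), not the Esri implementation the abstract refers to; the helper names are hypothetical.

```python
import numpy as np

# Neighbor offsets for the eight D8 directions.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_direction(dem):
    """Index (0-7) of the steepest downslope neighbor of each cell, or -1 for pits."""
    rows, cols = dem.shape
    fdir = -np.ones((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_drop, best_k = 0.0, -1
            for k, (dr, dc) in enumerate(OFFSETS):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, best_k = drop, k
            fdir[r, c] = best_k
    return fdir

def d8_flow_accumulation(dem, fdir):
    """Number of cells (including itself) draining through each cell."""
    rows, cols = dem.shape
    acc = np.ones((rows, cols))
    # Process cells from high to low elevation so upstream cells come first.
    for idx in np.argsort(dem, axis=None)[::-1]:
        r, c = divmod(int(idx), cols)
        k = fdir[r, c]
        if k >= 0:
            dr, dc = OFFSETS[k]
            acc[r + dr, c + dc] += acc[r, c]
    return acc

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(d8_flow_accumulation(dem, d8_flow_direction(dem)))
```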
Beyond left and right: Automaticity and flexibility of number-space associations.
Antoine, Sophie; Gevers, Wim
2016-02-01
Close links exist between the processing of numbers and the processing of space: relatively small numbers are preferentially associated with a left-sided response while relatively large numbers are associated with a right-sided response (the SNARC effect). Previous work demonstrated that the SNARC effect is triggered in an automatic manner and is highly flexible. Besides the left-right dimension, numbers associate with other spatial response mappings such as close/far responses, where small numbers are associated with a close response and large numbers with a far response. In two experiments we investigate the nature of this association. Associations between magnitude and close/far responses were observed using a magnitude-irrelevant task (Experiment 1: automaticity) and using a variable referent task (Experiment 2: flexibility). While drawing a strong parallel between both response mappings, the present results are also informative with regard to the question about what type of processing mechanism underlies both the SNARC effect and the association between numerical magnitude and close/far response locations.
Spatio-temporal dynamics of processing non-symbolic number: An ERP source localization study
Hyde, Daniel C.; Spelke, Elizabeth S.
2013-01-01
Coordinated studies with adults, infants, and nonhuman animals provide evidence for two distinct systems of non-verbal number representation. The ‘parallel individuation’ system selects and retains information about 1–3 individual entities and the ‘numerical magnitude’ system establishes representations of the approximate cardinal value of a group. Recent ERP work has demonstrated that these systems reliably evoke functionally and temporally distinct patterns of brain response that correspond to established behavioral signatures. However, relatively little is known about the neural generators of these ERP signatures. To address this question, we targeted known ERP signatures of these systems, by contrasting processing of small versus large non-symbolic numbers, and used a source localization algorithm (LORETA) to identify their cortical origins. Early processing of small numbers, showing the signature effects of parallel individuation on the N1 (∼150 ms), was localized primarily to extrastriate visual regions. In contrast, qualitatively and temporally distinct processing of large numbers, showing the signatures of approximate number representation on the mid-latency P2p (∼200–250 ms), was localized primarily to right intraparietal regions. In comparison, mid-latency small number processing was localized to the right temporal-parietal junction and left-lateralized intraparietal regions. These results add spatial information to the emerging ERP literature documenting the process by which we represent number. Furthermore, these results substantiate recent claims that early attentional processes determine whether a collection of objects will be represented through parallel individuation or as an approximate numerical magnitude by providing evidence that downstream processing diverges to distinct cortical regions. PMID:21830257
Hyde, Daniel C; Spelke, Elizabeth S
2012-09-01
Coordinated studies with adults, infants, and nonhuman animals provide evidence for two distinct systems of nonverbal number representation. The "parallel individuation" (PI) system selects and retains information about one to three individual entities and the "numerical magnitude" system establishes representations of the approximate cardinal value of a group. Recent event-related potential (ERP) work has demonstrated that these systems reliably evoke functionally and temporally distinct patterns of brain response that correspond to established behavioral signatures. However, relatively little is known about the neural generators of these ERP signatures. To address this question, we targeted known ERP signatures of these systems, by contrasting processing of small versus large nonsymbolic numbers, and used a source localization algorithm (LORETA) to identify their cortical origins. Early processing of small numbers, showing the signature effects of PI on the N1 (∼150 ms), was localized primarily to extrastriate visual regions. In contrast, qualitatively and temporally distinct processing of large numbers, showing the signatures of approximate number representation on the mid-latency P2p (∼200-250 ms), was localized primarily to right intraparietal regions. In comparison, mid-latency small number processing was localized to the right temporal-parietal junction and left-lateralized intraparietal regions. These results add spatial information to the emerging ERP literature documenting the process by which we represent number. Furthermore, these results substantiate recent claims that early attentional processes determine whether a collection of objects will be represented through PI or as an approximate numerical magnitude by providing evidence that downstream processing diverges to distinct cortical regions. Copyright © 2011 Wiley Periodicals, Inc.
Krause, Florian; Lindemann, Oliver; Toni, Ivan; Bekkering, Harold
2014-04-01
A dominant hypothesis on how the brain processes numerical size proposes a spatial representation of numbers as positions on a "mental number line." An alternative hypothesis considers numbers as elements of a generalized representation of sensorimotor-related magnitude, which is not obligatorily spatial. Here we show that individuals' relative use of spatial and nonspatial representations has a cerebral counterpart in the structural organization of the posterior parietal cortex. Interindividual variability in the linkage between numbers and spatial responses (faster left responses to small numbers and right responses to large numbers; spatial-numerical association of response codes effect) correlated with variations in gray matter volume around the right precuneus. Conversely, differences in the disposition to link numbers to force production (faster soft responses to small numbers and hard responses to large numbers) were related to gray matter volume in the left angular gyrus. This finding suggests that numerical cognition relies on multiple mental representations of analogue magnitude using different neural implementations that are linked to individual traits.
NASA Astrophysics Data System (ADS)
Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi
2016-08-01
Processing astronomical data to science readiness was and remains a challenge, in particular in the case of multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework combining the best of both high-performance (large number of nodes, internal communication) and high-throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool to the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing, commodity shared-use hardware, we can process data with data throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
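A minimal sketch of the server/manager/worker division of labor described above, using in-process queues instead of an AMQP broker (the paper's framework uses AMQP; all names and the toy "calibration" step here are hypothetical).

```python
import multiprocessing as mp

def worker(task_queue, result_queue):
    # Pull frames until the None shutdown signal arrives.
    for frame in iter(task_queue.get, None):
        result_queue.put(f"calibrated {frame}")   # stand-in for reduction/calibration

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    pool_size = 4                                  # a manager could grow/shrink this pool
    pool = [mp.Process(target=worker, args=(tasks, results)) for _ in range(pool_size)]
    for p in pool:
        p.start()

    frames = [f"raw_frame_{i:03d}.fits" for i in range(12)]
    for f in frames:                               # the "server" directs the work flow
        tasks.put(f)
    for _ in pool:                                 # one shutdown signal per worker
        tasks.put(None)

    done = [results.get() for _ in frames]
    for p in pool:
        p.join()
    print(len(done), "frames processed")
```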
Landerl, Karin
2013-01-01
Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge on the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period from beginning of Grade 2, when children were 7;6 years old, to beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing while within-task effects remained largely constant and showed low long-term stability before middle of Grade 3. Children with dyscalculia showed less efficient numerical processing reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an untypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for identification of children who struggle in their numerical development. PMID:23898310
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O.; Gelfand, Alan E.
2018-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online. PMID:29720777
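A hedged sketch of the NNGP construction described in the abstracts above: after fixing an ordering of the locations, each one conditions only on its m nearest previously ordered neighbors, so the joint density factors into small Gaussian conditionals and the implied precision matrix is sparse. This is a conceptual illustration (exponential covariance, naive loops), not the authors' MCMC implementation.

```python
import numpy as np

def exp_cov(a, b, sigma2=1.0, phi=3.0):
    """Exponential covariance between two sets of 2-D coordinates."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

def nngp_logpdf(w, coords, m=10, sigma2=1.0, phi=3.0, jitter=1e-8):
    """Log-density of w under an NNGP with at most m neighbors per location."""
    n, logp = len(w), 0.0
    for i in range(n):
        if i == 0:
            mean, var = 0.0, sigma2
        else:
            # m nearest neighbors among the previously ordered locations 0..i-1
            d = np.linalg.norm(coords[:i] - coords[i], axis=1)
            nb = np.argsort(d)[:m]
            C_nn = exp_cov(coords[nb], coords[nb], sigma2, phi) + jitter * np.eye(len(nb))
            c_in = exp_cov(coords[i:i + 1], coords[nb], sigma2, phi).ravel()
            b = np.linalg.solve(C_nn, c_in)
            mean, var = b @ w[nb], sigma2 - b @ c_in
        logp += -0.5 * (np.log(2 * np.pi * var) + (w[i] - mean) ** 2 / var)
    return logp

rng = np.random.default_rng(0)
coords = rng.uniform(size=(200, 2))
w = rng.normal(size=200)
print(nngp_logpdf(w, coords, m=10))
```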
Chan, Winnie Wai Lan; Wong, Terry Tin-Yau
2016-08-01
People map numbers onto space. The well-replicated SNARC (spatial-numerical association of response codes) effect indicates that people have a left-sided bias when responding to small numbers and a right-sided bias when responding to large numbers. This study examined whether such spatial codes were tagged to the ordinal or magnitude information of numbers among kindergarteners and whether it was related to early numerical abilities. Based on the traditional magnitude judgment task, we developed two variant tasks, namely the month judgment task and the dot judgment task, to elicit ordinal and magnitude processing of numbers, respectively. Results showed that kindergarteners oriented small numbers toward the left side and large numbers toward the right side when processing the ordinal information of numbers in the month judgment task but not when processing the magnitude information in the number judgment task and dot judgment task, suggesting that the left-to-right spatial bias was probably tagged to the ordinal but not magnitude property of numbers. Moreover, the strength of the SNARC effect was not related to early numerical abilities. These findings have important implications for the early spatial representation of numbers and its role in numerical performance among kindergarteners. Copyright © 2016 Elsevier Inc. All rights reserved.
Euglena Transcript Processing.
McWatters, David C; Russell, Anthony G
2017-01-01
RNA transcript processing is an important stage in the gene expression pathway of all organisms and is subject to various mechanisms of control that influence the final levels of gene products. RNA processing involves events such as nuclease-mediated cleavage, removal of intervening sequences referred to as introns and modifications to RNA structure (nucleoside modification and editing). In Euglena, RNA transcript processing was initially examined in chloroplasts because of historical interest in the secondary endosymbiotic origin of this organelle in this organism. More recent efforts to examine mitochondrial genome structure and RNA maturation have been stimulated by the discovery of unusual processing pathways in other Euglenozoans such as kinetoplastids and diplonemids. Eukaryotes containing large genomes are now known to typically contain large collections of introns and regulatory RNAs involved in RNA processing events, and Euglena gracilis in particular has a relatively large genome for a protist. Studies examining the structure of nuclear genes and the mechanisms involved in nuclear RNA processing have revealed that indeed Euglena contains large numbers of introns in the limited set of genes so far examined and also possesses large numbers of specific classes of regulatory and processing RNAs, such as small nucleolar RNAs (snoRNAs). Most interestingly, these studies have also revealed that Euglena possesses novel processing pathways generating highly fragmented cytosolic ribosomal RNAs and subunits and non-conventional intron classes removed by unknown splicing mechanisms. This unexpected diversity in RNA processing pathways emphasizes the importance of identifying the components involved in these processing mechanisms and their evolutionary emergence in Euglena species.
Klein, Elise; Moeller, Korbinian; Kiechl-Kohlendorfer, Ursula; Kremser, Christian; Starke, Marc; Cohen Kadosh, Roi; Pupp-Peglow, Ulrike; Schocke, Michael; Kaufmann, Liane
2014-01-01
This study examined the neural correlates of intentional and automatic number processing (indexed by number comparison and physical Stroop task, respectively) in 6- and 7-year-old children born prematurely. Behavioral results revealed significant numerical distance and size congruity effects. Imaging results disclosed (1) largely overlapping fronto-parietal activation for intentional and automatic number processing, (2) a frontal to parietal shift of activation upon considering the risk factors gestational age and birth weight, and (3) a task-specific link between math proficiency and functional magnetic resonance imaging (fMRI) signal within distinct regions of the parietal lobes—indicating commonalities but also specificities of intentional and automatic number processing. PMID:25090014
Rank and Sparsity in Language Processing
ERIC Educational Resources Information Center
Hutchinson, Brian
2013-01-01
Language modeling is one of many problems in language processing that have to grapple with naturally high ambient dimensions. Even in large datasets, the number of unseen sequences is overwhelmingly larger than the number of observed ones, posing clear challenges for estimation. Although existing methods for building smooth language models tend to…
Massively parallel processor computer
NASA Technical Reports Server (NTRS)
Fung, L. W. (Inventor)
1983-01-01
An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
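A conceptual sketch (not the patented hardware) of the bit-plane style of operation described above: a 128 x 128 array of one-bit processing elements modeled as a NumPy array, with spatial translation expressed as shifting the whole plane to a neighboring element. Real hardware would typically shift in zeros at the array edge rather than wrap around as np.roll does.

```python
import numpy as np

# One bit slice held by a 128 x 128 array of "processing elements".
rng = np.random.default_rng(0)
bit_plane = rng.integers(0, 2, size=(128, 128), dtype=np.uint8)

# Spatial translation: every element passes its bit to a neighbor.
shift_right = np.roll(bit_plane, 1, axis=1)   # horizontal slide (wraps at the edge)
shift_down = np.roll(bit_plane, 1, axis=0)    # vertical slide

# A SIMD-style logical operation applied simultaneously by all elements.
result = bit_plane & shift_right & shift_down
print(result.shape, result.sum())
```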
1983-12-30
…support among the scientific community. In the absence of some agreed criteria, the economic impact and legal aspects could be overwhelming. The…processing large numbers of people. Guidance on CCS operations needs to include release limits and procedures for receipting for articles held for…contaminated articles and the re-clothing of personnel. …Better procedures and equipment with which to rapidly process large numbers of…
Quality Assurance in Large Scale Online Course Production
ERIC Educational Resources Information Center
Holsombach-Ebner, Cinda
2013-01-01
The course design and development process (often referred to here as the "production process") at Embry-Riddle Aeronautical University (ERAU-Worldwide) aims to produce turnkey style courses to be taught by a highly-qualified pool of over 800 instructors. Given the high number of online courses and tremendous number of live sections…
A Comparative Analysis of Extract, Transformation and Loading (ETL) Process
NASA Astrophysics Data System (ADS)
Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.
2018-02-01
The current growth of data and information occurs rapidly and in varying amounts and media. This development eventually produces large volumes of data, better known as Big Data. Business Intelligence (BI) utilizes large amounts of data and information for analysis so that important information can be obtained. This type of information can be used to support the decision-making process. In practice, a process integrating existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). In practice, many applications have been developed to carry out the ETL process, but selecting which applications are more time-, cost- and power-effective and efficient may become a challenge. Therefore, the objective of the study was to provide a comparative analysis of the ETL process using Microsoft SQL Server Integration Services (SSIS) and the ETL process using Pentaho Data Integration (PDI).
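For readers unfamiliar with the three ETL steps, a generic extract-transform-load sketch in plain Python follows; it is not SSIS or Pentaho Data Integration, the two tools compared in the paper, and the file, table, and column names are hypothetical.

```python
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows from a source CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: clean fields and derive the values the warehouse needs."""
    return [(r["customer_id"].strip(), float(r["amount"])) for r in rows]

def load(rows, db="warehouse.db"):
    """Load: write the transformed rows into the data warehouse table."""
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer_id TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("sales_export.csv")))
```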
Grade, Stéphane; Badets, Arnaud; Pesenti, Mauro
2017-05-01
Numerical magnitude and specific grasping action processing have been shown to interfere with each other because some aspects of numerical meaning may be grounded in sensorimotor transformation mechanisms linked to finger grip control. However, how specific these interactions are to grasping actions is still unknown. The present study tested the specificity of the number-grip relationship by investigating how the observation of different closing-opening stimuli that might or might not refer to prehension-releasing actions was able to influence a random number generation task. Participants had to randomly produce numbers after they observed action stimuli representing either closure or aperture of the fingers, the hand or the mouth, or a colour change used as a control condition. Random number generation was influenced by the prior presentation of finger grip actions, whereby observing a closing finger grip led participants to produce small rather than large numbers, whereas observing an opening finger grip led them to produce large rather than small numbers. Hand actions had reduced or no influence on number production; mouth action influence was restricted to opening, with an overproduction of large numbers. Finally, colour changes did not influence number generation. These results show that some characteristics of observed finger, hand and mouth grip actions automatically prime number magnitude, with the strongest effect for finger grasping. The findings are discussed in terms of the functional and neural mechanisms shared between hand actions and number processing, but also between hand and mouth actions. The present study provides converging evidence that part of number semantics is grounded in sensory-motor mechanisms.
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Numbers Defy the Law of Large Numbers
ERIC Educational Resources Information Center
Falk, Ruma; Lann, Avital Lavie
2015-01-01
As the number of independent tosses of a fair coin grows, the rates of heads and tails tend to equality. This is misinterpreted by many students as being true also for the absolute numbers of the two outcomes, which, conversely, depart unboundedly from each other in the process. Eradicating that misconception, as by coin-tossing experiments,…
Towards the understanding of network information processing in biology
NASA Astrophysics Data System (ADS)
Singh, Vijay
Living organisms perform incredibly well in detecting a signal present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from the first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by underlying biological networks. We find that long-range connections in the brain allow for global scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single unit signal processing.
24 CFR 103.205 - Systemic processing.
Code of Federal Regulations, 2010 CFR
2010-04-01
... are pervasive or institutional in nature, or that the processing of the complaint will involve complex issues, novel questions of fact or law, or will affect a large number of persons, the Assistant Secretary...
On the origins of logarithmic number-to-position mapping.
Dotan, Dror; Dehaene, Stanislas
2016-11-01
The number-to-position task, in which children and adults are asked to place numbers on a spatial number line, has become a classic measure of number comprehension. We present a detailed experimental and theoretical dissection of the processing stages that underlie this task. We used a continuous finger-tracking technique, which provides detailed information about the time course of processing stages. When adults map the position of 2-digit numbers onto a line, their final mapping is essentially linear, but intermediate finger locations show a transient logarithmic mapping. We identify the origins of this log effect: Small numbers are processed faster than large numbers, so the finger deviates toward the target position earlier for small numbers than for large numbers. When the trajectories are aligned on the finger deviation onset, the log effect disappears. The small-number advantage and the log effect are enhanced in a dual-task setting and are further enhanced when the delay between the 2 tasks is shortened, suggesting that these effects originate from a central stage of quantification and decision making. We also report cases of logarithmic mapping, by children and by a brain-injured individual, which cannot be explained by faster responding to small numbers. We show that these findings are captured by an ideal-observer model of the number-to-position mapping task, comprising 3 distinct stages: a quantification stage, whose duration is influenced by both exact and approximate representations of numerical quantity; a Bayesian accumulation-of-evidence stage, leading to a decision about the target location; and a pointing stage. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Speech Perception as a Cognitive Process: The Interactive Activation Model.
ERIC Educational Resources Information Center
Elman, Jeffrey L.; McClelland, James L.
Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive--but computationally primitive--elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…
An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers
NASA Technical Reports Server (NTRS)
Wallace, James M.; Ong, L.; Balint, J.-L.
1993-01-01
The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number to provide the necessary separation of scales in order unambiguously to provide the possibility of local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.
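For reference, the universal inertial-range form implied by Kolmogorov's hypothesis is the standard result below (added here for context; it is not a statement from the report itself):

```latex
E(k) = C_K \,\varepsilon^{2/3}\, k^{-5/3},
\qquad \frac{1}{L} \ll k \ll \frac{1}{\eta},
\qquad \eta = \left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4},
```

where \varepsilon is the mean dissipation rate, \nu the kinematic viscosity, L the energy-containing (integral) scale, \eta the Kolmogorov dissipation scale, and C_K \approx 1.5 the Kolmogorov constant; local isotropy can only be expected at wave numbers well above 1/L, which is why a large separation of scales (high Reynolds number) is required to test the hypothesis.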
USAF solar thermal applications overview
NASA Technical Reports Server (NTRS)
Hauger, J. S.; Simpson, J. A.
1981-01-01
Process heat applications were compared to solar thermal technologies. The generic process heat applications were analyzed for solar thermal technology utilization, using SERI's PROSYS/ECONOMAT model in an end use matching analysis, and a separate analysis was made for solar ponds. Solar technologies appear attractive in a large number of applications. Low temperature applications at sites with high insolation and high fuel costs were found to be most attractive. No one solar thermal technology emerges as a clearly universal or preferred technology; however, solar ponds offer a potential high payoff in a few selected applications. It was shown that troughs and flat plate systems are cost effective in a large number of applications.
Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2016-12-01
The recent increased focus on small-scale seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations primarily for wastewater disposal and hydraulic fracturing for oil and gas recovery and for geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a small number of stations. However, with large-N arrays we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.
ERIC Educational Resources Information Center
Shim, Eunjae; Shim, Minsuk K.; Felner, Robert D.
Automation of the survey process has proved successful in many industries, yet it is still underused in educational research. This is largely due to the facts (1) that number crunching is usually carried out using software that was developed before information technology existed, and (2) that the educational research is to a great extent trapped…
Localization of multiple defects using the compact phased array (CPA) method
NASA Astrophysics Data System (ADS)
Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.
2018-01-01
Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
Act on Numbers: Numerical Magnitude Influences Selection and Kinematics of Finger Movement
Rugani, Rosa; Betti, Sonia; Ceccarini, Francesco; Sartori, Luisa
2017-01-01
In the past decade hand kinematics has been reliably adopted for investigating cognitive processes and disentangling debated topics. One of the most controversial issues in numerical cognition literature regards the origin – cultural vs. genetically driven – of the mental number line (MNL), oriented from left (small numbers) to right (large numbers). To date, the majority of studies have investigated this effect by means of response times, whereas studies considering more culturally unbiased measures such as kinematic parameters are rare. Here, we present a new paradigm that combines a “free response” task with the kinematic analysis of movement. Participants were seated in front of two little soccer goals placed on a table, one on the left and one on the right side. They were presented with left- or right-directed arrows and they were instructed to kick a small ball with their right index finger toward the goal indicated by the arrow. In a few test trials participants were also presented with a small (2) or a large (8) number, and they were allowed to choose the kicking direction. Participants performed more left responses with the small number and more right responses with the large number. The whole kicking movement was segmented into two temporal phases in order to allow a fine-grained analysis of hand kinematics. The Kick Preparation and Kick Finalization phases were selected on the basis of peak trajectory deviation from the virtual midline between the two goals. Results show an effect of both small and large numbers on action execution timing. Participants were faster to finalize the action when responding to small numbers toward the left and to large numbers toward the right. Here, we provide the first experimental demonstration which highlights how numerical processing affects action execution in a new and not-overlearned context. The employment of this innovative and unbiased paradigm will make it possible to disentangle the role of nature and culture in shaping the direction of the MNL and the role of fingers in the acquisition of numerical skills. Last but not least, similar paradigms will allow researchers to determine how cognition can influence action execution. PMID:28912743
An Assessment of Decision-Making Processes in Dual-Career Marriages.
ERIC Educational Resources Information Center
Kingsbury, Nancy M.
As large numbers of women enter the labor force, decision making and power processes have assumed greater importance in marital relationships. A sample of 51 (N=101) dual-career couples were interviewed to assess independent variables predictive of process power, process outcome, and subjective outcomes of decision making in dual-career families.…
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. Computational cost and complexity associated with these models increases with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes needs a smaller computational load. But this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraint. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also the types and the time scales of hydrologic processes which are dominant in different parts of the basin are different. Part of the snow melt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with least computational burden. However, performing these simulations and related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires use of increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors. Also discussed are the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.
A factor involved in efficient breakdown of supersonic streamwise vortices
NASA Astrophysics Data System (ADS)
Hiejima, Toshihiko
2015-03-01
Spatially developing processes in supersonic streamwise vortices were numerically simulated at Mach number 5.0. The vortex evolution largely depended on the azimuthal vorticity thickness of the vortices, which governs the negative helicity profile. Large vorticity thickness greatly enhanced the centrifugal instability, with consequent development of perturbations with competing wavenumbers outside the vortex core. During the transition process, supersonic streamwise vortices could generate large-scale spiral structures and a number of hairpin like vortices. Remarkably, the transition caused a dramatic increase in the total fluctuation energy of hypersonic flows, because the negative helicity profile destabilizes the flows due to helicity instability. Unstable growth might also relate to the correlation length between the axial and azimuthal vorticities of the streamwise vortices. The knowledge gained in this study is important for realizing effective fuel-oxidizer mixing in supersonic combustion engines.
Lumping of degree-based mean-field and pair-approximation equations for multistate contact processes
NASA Astrophysics Data System (ADS)
Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca
2018-01-01
Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
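A hedged sketch of the degree-based mean-field idea for a two-state (SIS-type) contact process follows: one ordinary differential equation per degree class, so the system size scales with the maximum degree rather than with the network's full state space. The rates, degree range, and degree distribution below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = np.arange(1, 51)            # degree classes 1..50
pk = k ** -2.5                  # toy power-law degree distribution
pk /= pk.sum()
lam, mu = 0.3, 1.0              # infection and recovery rates

def dbmf_rhs(t, rho):
    """DBMF equations: rho[i] is the infected fraction in degree class k[i]."""
    # Probability that a randomly chosen edge points to an infected node.
    theta = np.sum(k * pk * rho) / np.sum(k * pk)
    return -mu * rho + lam * k * (1.0 - rho) * theta

rho0 = np.full(k.size, 0.01)    # 1% initially infected in every degree class
sol = solve_ivp(dbmf_rhs, (0.0, 50.0), rho0)
rho_end = sol.y[:, -1]
print("stationary prevalence:", float(np.sum(pk * rho_end)))
```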
Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.
1984-01-01
An analytical instrument and particularly a time-of-flight mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.
Winiecki, Alan L.; Kroop, David C.; McGee, Marilyn K.; Lenkszus, Frank R.
1986-01-01
An analytical instrument and particularly a time-of-flight mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.
Process for producing fine and ultrafine filament superconductor wire
Kanithi, H.C.
1992-02-18
A process for producing a superconductor wire made up of a large number of round monofilament rods is provided for, comprising assembling a multiplicity of round monofilaments inside each of a multiplicity of thin wall hexagonal tubes and then assembling a number of said thin wall hexagonal tubes within an extrusion can and subsequently consolidating, extruding and drawing the entire assembly down to the desired wire size. 8 figs.
Process for producing fine and ultrafine filament superconductor wire
Kanithi, Hem C.
1992-01-01
A process for producing a superconductor wire made up of a large number of round monofilament rods is provided for, comprising assembling a multiplicity of round monofilaments inside each of a multiplicity of thin wall hexagonal tubes and then assembling a number of said thin wall hexagonal tubes within an extrusion can and subsequently consolidating, extruding and drawing the entire assembly down to the desired wire size.
High-Dimensional Bayesian Geostatistics
Banerjee, Sudipto
2017-01-01
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity has ~ n floating point operations (flops), where n is the number of spatial locations (per iteration). We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
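As a rough illustration of the nearest-neighbor idea behind the NNGP (not the article's implementation), the sketch below builds the sparse factors of an NNGP-style approximation for an assumed exponential covariance: each location is conditioned only on a small set of previously ordered neighbors, so the implied precision matrix is sparse and the cost per location is fixed. Locations, covariance parameters, and the neighbor count are assumptions for the example.

import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 10
locs = rng.uniform(size=(n, 2))              # spatial locations in the unit square
sigma2, phi = 1.0, 5.0                       # assumed exponential-covariance parameters

def cov(a, b):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)          # exponential covariance function

order = np.argsort(locs[:, 0])                # simple coordinate-based ordering
locs = locs[order]

B_rows, F = [(0, np.array([], dtype=int), np.array([]))], np.empty(n)
F[0] = sigma2
for i in range(1, n):
    # m nearest previously-ordered neighbors of location i
    d = np.linalg.norm(locs[:i] - locs[i], axis=1)
    nbrs = np.argsort(d)[:m]
    C_nn = cov(locs[nbrs], locs[nbrs])
    c_in = cov(locs[i:i + 1], locs[nbrs]).ravel()
    b = np.linalg.solve(C_nn, c_in)           # B_i = C(s_i, N(i)) C(N(i), N(i))^{-1}
    B_rows.append((i, nbrs, b))
    F[i] = sigma2 - c_in @ b                  # conditional variance given the neighbors

nnz = sum(len(nbrs) for _, nbrs, _ in B_rows) + n
print(f"sparse representation stores about {nnz} numbers instead of {n * n}")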
High-Dimensional Bayesian Geostatistics.
Banerjee, Sudipto
2017-06-01
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity has ~ n floating point operations (flops), where n is the number of spatial locations (per iteration). We compare these methods and provide some insight into their methodological underpinnings.
Stationary States of Boundary Driven Exclusion Processes with Nonreversible Boundary Dynamics
NASA Astrophysics Data System (ADS)
Erignoux, C.; Landim, C.; Xu, T.
2018-05-01
We prove a law of large numbers for the empirical density of one-dimensional, boundary driven, symmetric exclusion processes with different types of non-reversible dynamics at the boundary. The proofs rely on duality techniques.
Subitizing Reflects Visuo-Spatial Object Individuation Capacity
ERIC Educational Resources Information Center
Piazza, Manuela; Fumarola, Antonia; Chinello, Alessandro; Melcher, David
2011-01-01
Subitizing is the immediate apprehension of the exact number of items in small sets. Despite more than 100 years of research around this phenomenon, its nature and origin are still unknown. One view posits that it reflects a number estimation process common for small and large sets, whose precision decreases as the number of items increases,…
Unified Approximations: A New Approach for Monoprotic Weak Acid-Base Equilibria
ERIC Educational Resources Information Center
Pardue, Harry; Odeh, Ihab N.; Tesfai, Teweldemedhin M.
2004-01-01
The unified approximations reduce the conceptual complexity by combining solutions for a relatively large number of different situations into just two similar sets of processes. Processes used to solve problems by either the unified or classical approximations require similar degrees of understanding of the underlying chemical processes.
NASA Astrophysics Data System (ADS)
Guervilly, C.; Cardin, P.
2017-12-01
Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ~ 10^4 and Ro ~ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ~ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.
Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.
Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J
2017-01-01
There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost material and equipment. Results show that sample preparation and handling of samples during screening are the most time consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling-up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.
Unger, Melissa D; Aldrich, Alison M; Hefner, Jennifer L; Rizer, Milisa K
2014-01-01
Successfully reporting meaningful use of electronic health records to the Centers for Medicare and Medicaid Services can be a challenging process, particularly for healthcare organizations with large numbers of eligible professionals. This case report describes a successful meaningful use attestation process undertaken at a major academic medical center. It identifies best practices in the areas of leadership, administration, communication, ongoing support, and technological implementation.
NASA Technical Reports Server (NTRS)
Globus, Al; Biegel, Bryan A.; Traugott, Steve
2004-01-01
AsterAnts is a concept calling for a fleet of solar sail powered spacecraft to retrieve large numbers of small (1/2-1 meter diameter) Near Earth Objects (NEOs) for orbital processing. AsterAnts could use the International Space Station (ISS) for NEO processing, solar sail construction, and to test NEO capture hardware. Solar sails constructed on orbit are expected to have substantially better performance than their ground built counterparts [Wright 1992]. Furthermore, solar sails may be used to hold geosynchronous communication satellites out-of-plane [Forward 1981], increasing the total number of slots by at least a factor of three, potentially generating $2 billion worth of orbital real estate over North America alone. NEOs are believed to contain large quantities of water, carbon, other life-support materials and metals. Thus, with proper processing, NEO materials could in principle be used to resupply the ISS, produce rocket propellant, manufacture tools, and build additional ISS working space. Unlike proposals that require massive facilities, such as lunar bases, before returning any extraterrestrial material, AsterAnts requires little more than a typical inter-planetary mission. Furthermore, AsterAnts could be scaled up to deliver large amounts of material by building many copies of the same spacecraft, thereby achieving manufacturing economies of scale. Because AsterAnts would capture NEOs whole, NEO composition details, which are generally poorly characterized, are relatively unimportant and no complex extraction equipment is necessary. In combination with a materials processing facility at the ISS, AsterAnts might inaugurate an era of large-scale orbital construction using extraterrestrial materials.
NASA Astrophysics Data System (ADS)
Hasegawa, K.; Lim, C. S.; Ogure, K.
2003-09-01
We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.
Landy, David; Silbert, Noah; Goldin, Aleah
2013-07-01
Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions predict a log-to-linear shift: People will either place numbers linearly or will place numbers according to a compressive logarithmic or power-shaped function (Barth & Paladino, ; Siegler & Opfer, ). While about half of people did estimate numbers linearly over this range, nearly all the remaining participants placed 1 million approximately halfway between 1 thousand and 1 billion, but placed numbers linearly across each half, as though they believed that the number words "thousand, million, billion, trillion" constitute a uniformly spaced count list. Participants in this group also tended to be optimistic in evaluations of largely ineffective political strategies, relative to linear number-line placers. The results indicate that the surface structure of number words can heavily influence processes for dealing with numbers in this range, and it can amplify the possibility that analogous surface regularities are partially responsible for parallel phenomena in children. In addition, these results have direct implications for lawmakers and scientists hoping to communicate effectively with the public. Copyright © 2013 Cognitive Science Society, Inc.
Burgess, Annette; Roberts, Chris; Sureshkumar, Premala; Mossman, Karyn
2018-01-25
Multiple Mini Interviews (MMIs) are being used by a growing number of postgraduate training programs and medical schools as their interview process for selection entry. The Australian General Practice Training (AGPT) program used a National Assessment Centre (NAC) approach to selection into General Practice (GP) Training, which includes MMIs. Interviewing is a resource-intensive process, and implementation of the MMI requires a large number of interviewers, with a number of candidates being interviewed simultaneously. In 2015, 308 interviewers participated in the MMI process - a decrease from 340 interviewers in 2014, and 310 in 2013. At the same time, the number of applicants has steadily increased, with 1930 applications received in 2013; 2254 in 2014; and 2360 in 2015. This has raised concerns regarding the increasing recruitment needs, and the need to retain interviewers for subsequent years of MMIs. In order to investigate interviewers' reasons for participating in MMIs, we utilised self-determination theory (SDT) to consider interviewers' motivation to take part in MMIs at national selection centres. In 2015, 308 interviewers were recruited from 17 Regional Training Providers (RTPs) to participate in the MMI process at one of 15 NACs. For this study, a convenience sample of NAC sites was used. Forty interviewers were interviewed (n = 40; 40/308 = 13%) from five NACs. Framework analysis was used to code and categorise data into themes. Interviewers' motivation to take part was largely related to their desire to contribute their expertise to the process and to have input into the selection of GP Registrars; a sense of duty to their profession; and an opportunity to meet with colleagues and future trainees. Interviewers also highlighted factors hindering motivation, which sometimes included the large number of candidates seen in one day. Interviewers' motivation for contributing to the MMIs was largely related to their desire to contribute to their profession, and ultimately to improve future patient care. Interviewers recognised the importance of interviewing, and felt their individual roles made a crucial contribution to the profession of general practice. Good administration and leadership at each NAC are needed. By gaining an understanding of interviewers' motivation, and enhancing this, engagement and retention of interviewers may be increased.
Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca
2018-01-01
Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree k_{max} of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large k_{max}. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
PRACTICAL EXPERIENCES WITH TECHNOLOGIES FOR DECONTAMINATION OF B. ANTHRACIS IN LARGE BUILDINGS.
In the Fall of 2001 a number of buildings were contaminated with B. anthracis (B.A.) from letters processed through United States Postal Service and other mail handling facilities. All of the buildings have now been decontaminated using a variety of technologies. In a number of...
Assuring Quality in Large-Scale Online Course Development
ERIC Educational Resources Information Center
Parscal, Tina; Riemer, Deborah
2010-01-01
Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…
ERIC Educational Resources Information Center
Stifle, Jack
The PLATO IV computer-based instructional system consists of a large scale centrally located CDC 6400 computer and a large number of remote student terminals. This is a brief and general description of the proposed input/output hardware necessary to interface the student terminals with the computer's central processing unit (CPU) using available…
Laboratory and modeling studies of chemistry in dense molecular clouds
NASA Technical Reports Server (NTRS)
Huntress, W. T., Jr.; Prasad, S. S.; Mitchell, G. F.
1980-01-01
A chemical evolutionary model with a large number of species and a large chemical library is used to examine the principal chemical processes in interstellar clouds. Simple chemical equilibrium arguments show the potential for synthesis of very complex organic species by ion-molecule radiative association reactions.
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of points that represent the exchange of data. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges calculations inside the transform, and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. In evaluations on multi-core CPUs, our scheme consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
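For reference, a single level of the baseline separable lifting scheme (the LeGall/CDF 5/3 wavelet) on a 1D signal can be sketched as below; the 2D transform applies it along rows and then columns. This is only the conventional scheme the paper starts from, not the rearranged, parallel-friendly variant it proposes, and it assumes an even-length input with a simple boundary extension.

import numpy as np

def dwt53_level(x):
    # One level of the CDF 5/3 (LeGall) lifting scheme on an even-length 1D signal.
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])        # crude symmetric-style extension at the boundary
    d = odd - 0.5 * (even + even_next)               # predict step -> detail (high-pass) coefficients
    d_prev = np.insert(d[:-1], 0, d[0])
    s = even + 0.25 * (d_prev + d)                   # update step -> approximation (low-pass) coefficients
    return s, d

s, d = dwt53_level(np.arange(16.0))
print(np.round(s, 3), np.round(d, 3))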
Characterizing and Optimizing the Performance of the MAESTRO 49-Core Processor
2014-03-27
process large volumes of data, it is necessary during testing to vary the dimensions of the inbound data matrix to determine what effect this has on the...needed that can process the extra data these systems seek to collect. However, the space environment presents a number of threats, such as ambient or...induced faults, and that also have sufficient computational power to handle the large flow of data they encounter. This research investigates one
Unger, Melissa D.; Aldrich, Alison M.; Hefner, Jennifer L.; Rizer, Milisa K.
2014-01-01
Successfully reporting meaningful use of electronic health records to the Centers for Medicare and Medicaid Services can be a challenging process, particularly for healthcare organizations with large numbers of eligible professionals. This case report describes a successful meaningful use attestation process undertaken at a major academic medical center. It identifies best practices in the areas of leadership, administration, communication, ongoing support, and technological implementation. PMID:25593572
The application of waste fly ash and construction-waste in cement filling material in goaf
NASA Astrophysics Data System (ADS)
Chen, W. X.; Xiao, F. K.; Guan, X. H.; Cheng, Y.; Shi, X. P.; Liu, S. M.; Wang, W. W.
2018-01-01
As the process of urbanization accelerated, resulting in a large number of abandoned fly ash and construction waste, which have occupied the farmland and polluted the environment. In this paper, a large number of construction waste and abandoned fly ash are mixed into the filling material in goaf, the best formula of the filling material which containing a large amount of abandoned fly ash and construction waste is obtained, and the performance of the filling material is analyzed. The experimental results show that the cost of filling material is very low while the performance is very good, which have a good prospect in goaf.
Microbiological profile of selected mucks
NASA Astrophysics Data System (ADS)
Dąbek-Szreniawska, M.; Wyczółkowski, A. I.
2009-04-01
INTRODUCTION Matyka-Sarzynska and Sokolowska (2000) emphasize that peats and peat soils comprise large areas of Poland. The creation of soil begins when the formation of swamp has ended. Gawlik (2000) states that the degree of influence of the mucky process of organic soils on the differentiation of the conditions of growth and development of plants is mainly connected with the changes of the moisture-retentive properties of mucks, which constitute the material for these soils, and the loss of their wetting capacities. The above-mentioned changes, which usually occur gradually and show a clear connection with the extent of dehydration and, at times, with its duration, intensify significantly when the soils are under cultivation. The mucky process of peat soils leads to transformations of their physical, chemical and biological properties. The main ingredient of peat soils is organic substance. The substance is maintained inside them by the protective activity of water. The process of land improvement reduces the humidity of the environment, and that intensifies the pace of the activity of soil microorganisms which cause the decay of organic substance. The decay takes place in the direction of two parallel processes: mineralization and humification. All groups of chemical substances constituting peat undergo mineralization. Special attention should be called to the mineralization of carbon and nitrogen compounds, which constitute a large percentage of the organic substance of the peat organic mass. Okruszko (1976) has examined scientific bases of the classification of peat soils depending on the intensity of the muck process. The aim of this publication was to conduct a microbiological characterization of selected mucky material. METHODS AND MATERIALS Soil samples used in the experiments were acquired from the Leczynsko-Wlodawski Lake Region, a large area of which constitutes a part of the Poleski National Park, which is covered to a large extent with high peat bogs. It was a mucky-peat soil with different degrees of muck process, described by Gawlik (2000) as MtI - first step of muck process, and MtII - second step of muck process. The numbers of selected groups of microorganisms were established using the cultivation method. The total number of microorganisms, zymogenic, aerobic and anaerobic microorganisms (Fred, Waksman 1928), oligotrophic microorganisms, the number of fungi (Parkinson 1982), ammonifiers (Parkinson et al. 1971), nitrogen reducers and amylolytic microorganisms (Pochon and Tardieux 1962), were determined. RESULTS The interpretation of the obtained results should take into consideration not only the characteristics of the studied objects, but also the characteristics of the methods used and of the examined microorganisms. As a result of the experiments that were carried out, significant differences in the numbers of the examined groups of microorganisms, depending on the degree of the muck process, have been observed. The number of the examined groups was significantly higher in the soil at the first step of the muck process than at the second step. Amylolytic bacteria were an exception. Probably, during the muck process, ammonification, nitrification and nitrogen reduction processes take place at the same time, which is indicated by the numbers of the individual groups of examined microorganisms. CONCLUSIONS During the muck process, the number of microorganisms in the soil decreases.
It can be presupposed that during the muck process, the basic process realized by microorganisms is the degradation of organic substance, using nitrates as oxidizers. REFERENCES Dąbek-Szreniawska M.: 1992. Results of microbiological analysis related to soil physical properties. Zesz. Probl. Post. Nauk Roln., 398, 1-6. Fred E.B., Waksman S.A.: 1928. Laboratory manual of general microbiology. McGraw-Hill Book Company, New York - London, pp. 145. Gawlik J.: 2000. Division of differently silted peat formations into classes according to their state of secondary transformations. Acta Agrophysica, 26, 17-24. Maciak F.: 1985. Materiały do ćwiczeń z rekultywacji teren
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) that results largely from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to, or combined with, conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish that the empirical evidence accumulation approach provides parameter results that are informative for leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
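A toy simulation conveys the basic evidence-accumulation idea for lexical selection: each candidate word accumulates noisy evidence at a drift proportional to its activation, and the first accumulator to reach a threshold determines the selected word and the response time. All words, activations, and parameters below are illustrative assumptions, not values or models from the paper.

import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drifts, threshold=1.0, noise=0.35, dt=0.001, t0=0.2):
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros(len(drifts))                          # evidence for each candidate word
    t = 0.0
    while x.max() < threshold:
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        t += dt
    return int(np.argmax(x)), t0 + t                   # (index of selected word, reaction time in s)

# Assumed activations for a target word and two competitors (illustrative only).
words = ["cat", "dog", "cap"]
drifts = [1.2, 0.6, 0.8]
trials = [simulate_trial(drifts) for _ in range(2000)]
for i, w in enumerate(words):
    rts = [rt for c, rt in trials if c == i]
    if rts:
        print(f"{w}: selected on {len(rts) / len(trials):.2%} of trials, mean RT {np.mean(rts):.3f} s")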
Tuning the thickness of electrochemically grafted layers in large area molecular junctions
NASA Astrophysics Data System (ADS)
Fluteau, T.; Bessis, C.; Barraud, C.; Della Rocca, M. L.; Martin, P.; Lacroix, J.-C.; Lafarge, P.
2014-09-01
We have investigated the thickness, the surface roughness, and the transport properties of oligo(1-(2-bisthienyl)benzene) (BTB) thin films grafted on evaporated Au electrodes, thanks to a diazonium-based electro-reduction process. The thickness of the organic film is tuned by varying the number of electrochemical cycles during the growth process. Atomic force microscopy measurements reveal the evolution of the thickness in the range of 2-27 nm. Its variation displays a linear dependence with the number of cycles followed by a saturation attributed to the insulating behavior of the organic films. Both ultrathin (2 nm) and thin (12 and 27 nm) large area BTB-based junctions have then been fabricated using standard CMOS processes and finally electrically characterized. The electronic responses are fully consistent with a tunneling barrier in case of ultrathin BTB film whereas a pronounced rectifying behavior is reported for thicker molecular films.
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
Multiplicative Forests for Continuous-Time Processes.
Weiss, Jeremy C; Natarajan, Sriraam; Page, David
2012-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.
Efficient collective influence maximization in cascading processes with first-order transitions
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-01-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988
Efficient collective influence maximization in cascading processes with first-order transitions
NASA Astrophysics Data System (ADS)
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-03-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches.
NASA Astrophysics Data System (ADS)
Petrila, S.; Brabie, G.; Chirita, B.
2016-08-01
The analysis of manufacturing flows within industrial enterprises producing hydrostatic components was made on a number of factors that influence the smooth running of production, such as: the distance between pieces; the waiting time from one operation to another; the time needed to set up CNC machines; tool changing in the case of a large number of operators; and the manufacturing complexity of large files [2]. To optimize the manufacturing flow, the Tecnomatix software was used. This software is a complete portfolio of digital manufacturing solutions produced by Siemens. It provides innovation by linking all stages of a product's production, from process design and process simulation through validation to the manufacturing process itself. Among its many capabilities for creating a wide range of simulations, the program offers various demonstrations of the behavior of manufacturing cycles. This program allows the simulation and optimization of production systems and processes in several areas, such as automotive suppliers, production of industrial equipment, electronics manufacturing, and the design and production of aerospace and defense parts.
Large Data at Small Universities: Astronomical processing using a computer classroom
NASA Astrophysics Data System (ADS)
Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen
2016-06-01
The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller Universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low.By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. Statistical significance-tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
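The "embarrassingly parallel" pattern on a single multi-core lab machine can be sketched with the Python standard library as below; spreading the work across many classroom machines, as described in the abstract, additionally requires a dispatch layer (for example a simple job queue or ssh), which is not shown. The directory name and the per-image routine are placeholders, not the poster's actual code.

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def measure_image(path):
    """Stand-in for a per-image photometry routine; returns (file, result)."""
    data = Path(path).read_bytes()                     # placeholder for loading a FITS frame
    return path, len(data)                             # placeholder "measurement"

if __name__ == "__main__":
    images = sorted(Path("frames").glob("*.fits"))     # hypothetical directory of image files
    with ProcessPoolExecutor() as pool:                # one worker per available core
        for path, result in pool.map(measure_image, images):
            print(path, result)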
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the...application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly...the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time
Functional Network Architecture of Reading-Related Regions across Development
ERIC Educational Resources Information Center
Vogel, Alecia C.; Church, Jessica A.; Power, Jonathan D.; Miezin, Fran M.; Petersen, Steven E.; Schlaggar, Bradley L.
2013-01-01
Reading requires coordinated neural processing across a large number of brain regions. Studying relationships between reading-related regions informs the specificity of information processing performed in each region. Here, regions of interest were defined from a meta-analysis of reading studies, including a developmental study. Relationships…
NASA Astrophysics Data System (ADS)
El, Andrej; Muronga, Azwinndini; Xu, Zhe; Greiner, Carsten
2010-12-01
Relativistic dissipative hydrodynamic equations are extended by taking into account particle number changing processes in a gluon system, which expands in one dimension boost-invariantly. Chemical equilibration is treated by a rate equation for the particle number density based on the Boltzmann equation and Grad's ansatz for the off-equilibrium particle phase space distribution. We find that not only the particle production, but also the temperature and the momentum spectra of the gluon system, obtained from the hydrodynamic calculations, are sensitive to the rates of particle number changing processes. Comparisons of the hydrodynamic calculations with the transport ones employing the parton cascade BAMPS show the inaccuracy of the rate equation at large shear viscosity to entropy density ratios. To improve the rate equation, Grad's ansatz has to be modified beyond the second moments in momentum.
Forde, C G; van Kuijk, N; Thaler, T; de Graaf, C; Martin, N
2013-01-01
The modern food supply is often dominated by a large variety of energy-dense, softly textured foods that can be eaten quickly. Previous studies suggest that particular oral processing characteristics such as large bite size and lack of chewing activity contribute to the low satiating efficiency of these foods. To better design meals that promote greater feelings of satiation, we need an accurate picture of the oral processing characteristics of a range of solid food items that could be used to replace softer textures during a normal hot meal. The primary aim of this study was to establish an accurate picture of the oral processing characteristics of a set of solid savoury meal components. The secondary aim was to determine the associations between oral processing characteristics, food composition, sensory properties, and expected satiation. In a within-subjects design, 15 subjects consumed 50 g of 35 different savoury food items over 5 sessions. The 35 foods represented various staples, vegetables and protein-rich foods such as meat and fish. Subjects were video-recorded during consumption and measures included observed number of bites, number of chews, number of swallows and derived measures such as chewing rate, eating rate, bite size, and oral exposure time. Subjects rated expected satiation for a standard 200 g portion of each food using a 100 mm scale, and the sensory differences between foods were quantified using descriptive analysis with a trained sensory panel. Statistical analysis focussed on the oral processing characteristics and associations between nutritional, sensory and expected satiation parameters of each food. Average number of chews for 50 g of food varied from 27 for mashed potatoes to 488 for tortilla chips. Oral exposure time was highly correlated with the total number of chews, and varied from 27 s for canned tomatoes to 350 s for tortilla chips. Chewing rate was relatively constant with an overall average chewing rate of approximately 1 chew/s. Differences in oral processing were not correlated with any macronutrients specifically. Expected satiation was positively related to protein and the sensory attributes chewiness and saltiness. Foods that were consumed in smaller bites were chewed more and for longer, and were expected to impart higher satiation. This study shows a large and reliable variation in oral exposure time, number of required chews before swallowing and expected satiation across a wide variety of foods. We conclude that bite size and oral-sensory exposure time could contribute to higher satiation within a meal for equal calories. Copyright © 2012 Elsevier Ltd. All rights reserved.
Turbulence and entrainment length scales in large wind farms.
Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F
2017-04-13
A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control.This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Turbulence and entrainment length scales in large wind farms
2017-01-01
A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265028
RF Environment Sensing Using Transceivers in Motion
2014-05-02
...Crossing Information in Wireless Networks, 2013 IEEE Global Conference on Signal and Information Processing, 03-DEC-13, Dustin Maas, Joey Wilson ...transceivers may be required to cover the entire monitored area. Second, and very importantly, there may not be sufficient time to deploy a large number of
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1992-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT), whereby an 8-bit data signal enables the display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value, whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
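The end result described in the abstract, a 256-entry LUT plus one 8-bit pointer per pixel, can be sketched as follows. The LUT here is built by naive random sampling and averaging purely for illustration; the patented construction procedure itself is not reproduced, and the image is a placeholder.

import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)   # placeholder 24-bit image
pixels = image.reshape(-1, 3).astype(float)

# "Sampling and averaging" (simplistic illustration): draw random pixels and
# average small groups of them to form the 256 LUT colors.
samples = pixels[rng.choice(len(pixels), size=256 * 16, replace=False)]
lut = samples.reshape(256, 16, 3).mean(axis=1)

# Assign every pixel an 8-bit pointer to its closest LUT entry (in chunks to
# keep the distance matrix small).
pointers = np.empty(len(pixels), dtype=np.uint8)
for start in range(0, len(pixels), 4096):
    chunk = pixels[start:start + 4096]
    d2 = ((chunk[:, None, :] - lut[None, :, :]) ** 2).sum(axis=2)
    pointers[start:start + 4096] = d2.argmin(axis=1)

reconstructed = lut[pointers].astype(np.uint8).reshape(image.shape)  # 24-bit colors recovered via 8-bit pointers
print(pointers.dtype, lut.shape, reconstructed.shape)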
EFL Students' Perceptions of Corpus-Tools as Writing References
ERIC Educational Resources Information Center
Lai, Shu-Li
2015-01-01
A number of studies have suggested the potentials of corpus tools in vocabulary learning. However, there are still some concerns. Corpus tools might be too complicated to use; example sentences retrieved from corpus tools might be too difficult to understand; processing large number of sample sentences could be challenging and time-consuming;…
USDA-ARS?s Scientific Manuscript database
Glycosylation is a common post-translational modification of plant proteins that impacts a large number of important biological processes. Nevertheless, the impacts of differential site occupancy and the nature of specific glycoforms are obscure. Historically, characterization of glycoproteins has b...
Microwave Readout Techniques for Very Large Arrays of Nuclear Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullom, Joel
During this project, we transformed the use of microwave readout techniques for nuclear sensors from a speculative idea to reality. The core of the project consisted of the development of a set of microwave electronics able to generate and process large numbers of microwave tones. The tones can be used to probe a circuit containing a series of electrical resonances whose frequency locations and widths depend on the state of a network of sensors, with one sensor per resonance. The amplitude and phase of the tones emerging from the circuit are processed by the same electronics and are reduced to the sensor signals after two demodulation steps. This approach allows a large number of sensors to be interrogated using a single pair of coaxial cables. We successfully developed hardware, firmware, and software to complete a scalable implementation of these microwave control electronics and demonstrated their use in two areas. First, we showed that the electronics can be used at room temperature to read out a network of diverse sensor types relevant to safeguards or process monitoring. Second, we showed that the electronics can be used to measure large numbers of ultrasensitive cryogenic sensors such as gamma-ray microcalorimeters. In particular, we demonstrated the undegraded readout of up to 128 channels and established a path to even higher multiplexing factors. These results have transformed the prospects for gamma-ray spectrometers based on cryogenic microcalorimeter arrays by enabling spectrometers whose collecting areas and count rates can be competitive with high purity germanium but with 10x better spectral resolution.
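A heavily simplified, single-tone illustration of the readout principle (not the project's electronics or firmware): a probe tone whose phase is modulated by a slow sensor signal is mixed to baseband and low-pass filtered, recovering that signal. All frequencies and rates below are assumed values.

import numpy as np

fs = 1_000_000.0                        # sample rate (Hz), assumed
f_tone = 100_000.0                      # probe tone frequency (Hz), assumed
t = np.arange(0, 0.05, 1 / fs)

sensor = 0.3 * np.sin(2 * np.pi * 50 * t)              # slow "sensor" signal to recover
received = np.cos(2 * np.pi * f_tone * t + sensor)      # tone whose phase carries the signal

# Demodulate: mix with a complex local oscillator, then low-pass by block averaging.
baseband = received * np.exp(-2j * np.pi * f_tone * t)
block = 1000                                             # gives a 1 kHz output rate
iq = baseband[: len(t) // block * block].reshape(-1, block).mean(axis=1)
recovered_phase = np.angle(iq)                           # approximately the sensor signal, block-averaged

print(recovered_phase[:5])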
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
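The constant-Froude scale-up rule can be illustrated with a short worked example. Using one common definition of the blending Froude number, Fr = N^2 D / g, holding Fr constant when the vessel diameter grows requires reducing the rotation speed by the square root of the diameter ratio; the diameters and speed below are illustrative assumptions, not the equipment used in the study.

import math

g = 9.81                           # m/s^2
D_small, D_large = 0.20, 0.80      # vessel diameters (m), assumed
N_small = 0.5                      # small-scale rotation speed (rev/s), assumed

Fr_small = N_small ** 2 * D_small / g
N_large = N_small * math.sqrt(D_small / D_large)   # keeps Fr constant at the large scale
Fr_large = N_large ** 2 * D_large / g

print(f"Fr(small) = {Fr_small:.4f}, N(large) = {N_large:.3f} rev/s, Fr(large) = {Fr_large:.4f}")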
Wood, Fiona; Kowalczuk, Jenny; Elwyn, Glyn; Mitchell, Clive; Gallacher, John
2011-08-01
Population based genetics studies are dependent on large numbers of individuals in the pursuit of small effect sizes. Recruiting and consenting a large number of participants is both costly and time consuming. We explored whether an online consent process for large-scale genetics studies is acceptable for prospective participants using an example online genetics study. We conducted semi-structured interviews with 42 members of the public stratified by age group, gender and newspaper readership (a measure of social status). Respondents were asked to use a website designed to recruit for a large-scale genetic study. After using the website a semi-structured interview was conducted to explore opinions and any issues they would have. Responses were analysed using thematic content analysis. The majority of respondents said they would take part in the research (32/42). Those who said they would decline to participate saw fewer benefits from the research, wanted more information and expressed a greater number of concerns about the study. Younger respondents had concerns over time commitment. Middle aged respondents were concerned about privacy and security. Older respondents were more altruistic in their motivation to participate. Common themes included trust in the authenticity of the website, security of personal data, curiosity about their own genetic profile, operational concerns and a desire for more information about the research. Online consent to large-scale genetic studies is likely to be acceptable to the public. The online consent process must establish trust quickly and effectively by asserting authenticity and credentials, and provide access to a range of information to suit different information preferences.
Optimizing Web-Based Instruction: A Case Study Using Poultry Processing Unit Operations
ERIC Educational Resources Information Center
O' Bryan, Corliss A.; Crandall, Philip G.; Shores-Ellis, Katrina; Johnson, Donald M.; Ricke, Steven C.; Marcy, John
2009-01-01
Food companies and supporting industries need inexpensive, revisable training methods for large numbers of hourly employees due to continuing improvements in Hazard Analysis Critical Control Point (HACCP) programs, new processing equipment, and high employee turnover. HACCP-based food safety programs have demonstrated their value by reducing the…
Residual Ductility and Microstructural Evolution in Continuous-Bending-under-Tension of AA-6022-T4
Zecevic, Milovan; Roemer, Timothy J.; Knezevic, Marko; Korkolis, Yannis P.; Kinsey, Brad L.
2016-01-01
A ubiquitous experiment to characterize the formability of sheet metal is the simple tension test. Past research has shown that if the material is repeatedly bent and unbent during this test (i.e., Continuous-Bending-under-Tension, CBT), the percent elongation at failure can significantly increase. In this paper, this phenomenon is evaluated in detail for AA-6022-T4 sheets using a custom-built CBT device. In particular, the residual ductility of specimens that are subjected to CBT processing is investigated. This is achieved by subjecting a specimen to CBT processing and then creating subsize tensile test and microstructural samples from the specimens after varying numbers of CBT cycles. Interestingly, the engineering stress initially increases after CBT processing to a certain number of cycles, but then decreases with less elongation achieved for increasing numbers of CBT cycles. Additionally, a detailed microstructure and texture characterization are performed using standard scanning electron microscopy and electron backscattered diffraction imaging. The results show that the material under CBT preserves high integrity to large plastic strains due to a uniform distribution of damage formation and evolution in the material. The ability to delay ductile fracture during the CBT process to large plastic strains, results in formation of a strong <111> fiber texture throughout the material. PMID:28773257
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.
2015-12-01
The quality of automatic detections from seismic sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. Yet, achieving superior automatic detection of seismic events is closely related to these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historic data. AST learns to test the raw signal against all event-settings and automatically self-tunes to an emerging event in real-time. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Reducing false alarms early in the seismic pipeline processing will have a significant impact on this goal. Applicable both for existing sensor performance boosting and new sensor deployment, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.
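The flavor of learning detection-parameter settings from ground-truthed historic data can be conveyed with a toy bandit-style sketch; this is not the AST algorithm, and the STA/LTA thresholds and reward model are invented for illustration. Each candidate configuration is scored on archived segments, and the running value estimates steer the search toward settings that detect more events while raising fewer false alarms.

import random

random.seed(0)
candidate_thresholds = [2.0, 3.0, 4.0, 5.0]         # hypothetical STA/LTA trigger ratios
values = {th: 0.0 for th in candidate_thresholds}    # running mean reward per setting
counts = {th: 0 for th in candidate_thresholds}
epsilon = 0.1

def replay_segment(threshold):
    """Stand-in for running the detector on one archived, ground-truthed segment:
    +1 for a true detection, -1 for a false alarm (probabilities are assumptions)."""
    p_detect = min(1.0, 1.5 / threshold)              # lower thresholds catch more events...
    p_false = max(0.0, 0.6 - 0.1 * threshold)         # ...but also produce more false alarms
    return (1 if random.random() < p_detect else 0) - (1 if random.random() < p_false else 0)

for _ in range(5000):
    th = random.choice(candidate_thresholds) if random.random() < epsilon else max(values, key=values.get)
    r = replay_segment(th)
    counts[th] += 1
    values[th] += (r - values[th]) / counts[th]       # incremental mean update

print("learned values:", {th: round(v, 3) for th, v in values.items()})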
New trends in logic synthesis for both digital designing and data processing
NASA Astrophysics Data System (ADS)
Borowik, Grzegorz; Łuba, Tadeusz; Poźniak, Krzysztof
2016-09-01
FPGA devices are equipped with memory-based structures. These memories act as very large logic cells where the number of inputs equals the number of address lines. At the same time, there is a huge demand in the Internet of Things market for devices implementing virtual routers, intrusion detection systems, etc., where such memories are crucial for realizing pattern matching circuits, IP address tables, and other functions. Unfortunately, existing CAD tools are not well suited to utilize the capabilities that such large memory blocks offer, due to the lack of appropriate synthesis procedures. This paper presents methods which are useful for memory-based implementations: minimization of the number of input variables and functional decomposition.
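The "minimization of the number of input variables" step can be illustrated with a small sketch: given a function's truth table, detect inputs the function does not actually depend on and re-implement it as a smaller memory addressed only by the remaining inputs. The example function is arbitrary, and the code is a conceptual illustration of the idea rather than the paper's synthesis procedure.

from itertools import product

def f(x):                                   # 5-input example; two of the inputs are redundant
    return (x[0] and x[3]) ^ x[4]

n = 5
truth = {bits: f(bits) for bits in product((0, 1), repeat=n)}

support = []
for i in range(n):
    # input i matters iff flipping it changes the output for some assignment
    if any(truth[bits] != truth[bits[:i] + (1 - bits[i],) + bits[i + 1:]] for bits in truth):
        support.append(i)

# Smaller memory: one word per combination of the supporting inputs only.
small_memory = {tuple(bits[i] for i in support): truth[bits] for bits in truth}
print(f"original memory: {2 ** n} words; after support minimization: {2 ** len(support)} words "
      f"(inputs kept: {support})")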
Exploration of multiphoton entangled states by using weak nonlinearities
He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting
2016-01-01
We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044
NASA Astrophysics Data System (ADS)
de Boer, D. H.; Hassan, M. A.; MacVicar, B.; Stone, M.
2005-01-01
Contributions by Canadian fluvial geomorphologists between 1999 and 2003 are discussed under four major themes: sediment yield and sediment dynamics of large rivers; cohesive sediment transport; turbulent flow structure and sediment transport; and bed material transport and channel morphology. The paper concludes with a section on recent technical advances. During the review period, substantial progress has been made in investigating the details of fluvial processes at relatively small scales. Examples of this emphasis are the studies of flow structure, turbulence characteristics and bedload transport, which continue to form central themes in fluvial research in Canada. Translating the knowledge of small-scale, process-related research to an understanding of the behaviour of large-scale fluvial systems, however, continues to be a formidable challenge. Models play a prominent role in elucidating the link between small-scale processes and large-scale fluvial geomorphology, and, as a result, a number of papers describing models and modelling results have been published during the review period. In addition, a number of investigators are now approaching the problem by directly investigating changes in the system of interest at larger scales, e.g. a channel reach over tens of years, and attempting to infer what processes may have led to the result. It is to be expected that these complementary approaches will contribute to an increased understanding of fluvial systems at a variety of spatial and temporal scales. Copyright
NASA Technical Reports Server (NTRS)
Jensen, E. J.; Toon, O. B.
1994-01-01
We have investigated the processes that control ice crystal nucleation in the upper troposphere using a numerical model. Nucleation of ice resulting from cooling was simulated for a range of aerosol number densities, initial temperatures, and cooling rates. In contrast to observations of stratus clouds, we find that the number of ice crystals that nucleate in cirrus is relatively insensitive to the number of aerosols present. The ice crystal size distribution at the end of the nucleation process is unaffected by the assumed initial aerosol number density. Essentially, nucleation continues until enough ice crystals are present such that their deposition growth rapidly depletes the vapor and shuts off any further nucleation. However, the number of ice crystals nucleated increases rapidly with decreasing initial temperature and increasing cooling rate. This temperature dependence alone could explain the large ice crystal number density observed in very cold tropical cirrus.
Asteroid Systems: Binaries, Triples, and Pairs
NASA Astrophysics Data System (ADS)
Margot, J.-L.; Pravec, P.; Taylor, P.; Carry, B.; Jacobson, S.
In the past decade, the number of known binary near-Earth asteroids has more than quadrupled and the number of known large main-belt asteroids with satellites has doubled. Half a dozen triple asteroids have been discovered, and the previously unrecognized populations of asteroid pairs and small main-belt binaries have been identified. The current observational evidence confirms that small (≲20 km) binaries form by rotational fission and establishes that the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect powers the spin-up process. A unifying paradigm based on rotational fission and post-fission dynamics can explain the formation of small binaries, triples, and pairs. Large (≳20 km) binaries with small satellites are most likely created during large collisions.
Using a Model of Analysts' Judgments to Augment an Item Calibration Process
ERIC Educational Resources Information Center
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling
2015-01-01
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
2018-01-01
Background: Electronic health (eHealth) and mobile health (mHealth) tools can support and improve the whole process of workplace health promotion (WHP) projects. However, several challenges and opportunities have to be considered while integrating these tools in WHP projects. Currently, a large number of eHealth tools are developed for changing health behavior, but these tools can support the whole WHP process, including group administration, information flow, assessment, intervention development process, or evaluation. Objective: To support a successful implementation of eHealth tools in the whole WHP process, we introduce a concept of WHP (life cycle model of WHP) with 7 steps and present critical and success factors for the implementation of eHealth tools in each step. Methods: We developed a life cycle model of WHP based on the World Health Organization (WHO) model of the healthy workplace continual improvement process. We suggest adaptations to the WHO model to demonstrate the large number of possibilities to implement eHealth tools in WHP as well as possible critical points in the implementation process. Results: eHealth tools can enhance the efficiency of WHP in each of the 7 steps of the presented life cycle model of WHP. Specifically, eHealth tools can support by offering easier administration, providing an information and communication platform, supporting assessments, presenting and discussing assessment results in a dashboard, and offering interventions to change individual health behavior. Important success factors include the possibility to give automatic feedback about health parameters, create incentive systems, or bring together a large number of health experts in one place. Critical factors such as data security, anonymity, or lack of management involvement have to be addressed carefully to prevent nonparticipation and dropouts. Conclusions: Using eHealth tools can support WHP, but clear regulations for the usage and implementation of these tools at the workplace are needed to secure quality and reach sustainable results. PMID:29475828
NASA Astrophysics Data System (ADS)
Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan
2014-08-01
With expanding sizes and increasing complexity of large astronomical observatories on remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help to keep the number of operating staff at an efficient level and to significantly reduce the consumption of valuable consumables and equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units. We further show how valuable staff resources can be freed from continuous repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting, and the exchange of line replaceable units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences by considering the required cost efficiency.
A full picture of large lepton number asymmetries of the Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barenboim, Gabriela; Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr
A large lepton number asymmetry of O(0.1−1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high scale lepton number asymmetry to be larger than about 03. Therefore a mild entropy release causing an O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^-2 - 10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector, such as the mass and the vacuum expectation value of the saxion field, to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.
Aegerter, Philippe; Bendersky, Noelle; Tran, Thi-Chien; Ropers, Jacques; Taright, Namik; Chatellier, Gilles
2014-01-01
Recruitment of large samples of patients is crucial for the evidence level and efficacy of clinical trials (CT). Clinical Trial Recruitment Support Systems (CTRSS) used to estimate patient recruitment are generally specific to Hospital Information Systems, and few have been evaluated on a large number of trials. Our aim was to assess, on a large number of CT, the usefulness of commonly available data such as Diagnosis Related Groups (DRG) databases in order to estimate potential recruitment. We used the DRG database of a large French multicenter medical institution (1.2 million inpatient stays and 400 new trials each year). Eligibility criteria of protocols were broken down into atomic entities (diagnosis, procedures, treatments...) and then translated into codes and operators recorded in a standardized form. A program parsed the forms and generated requests on the DRG database. A large majority of selection criteria could be coded, and final estimations of the number of eligible patients were close to observed ones (median difference = 25). Such a system could be part of the feasibility evaluation and center selection process before the start of a clinical trial.
Numerical predictors of arithmetic success in grades 1-6.
Lyons, Ian M; Price, Gavin R; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel
2014-09-01
Math relies on mastery and integration of a wide range of simpler numerical processes and concepts. Recent work has identified several numerical competencies that predict variation in math ability. We examined the unique relations between eight basic numerical skills and early arithmetic ability in a large sample (N = 1391) of children across grades 1-6. In grades 1-2, children's ability to judge the relative magnitude of numerical symbols was most predictive of early arithmetic skills. The unique contribution of children's ability to assess ordinality in numerical symbols steadily increased across grades, overtaking all other predictors by grade 6. We found no evidence that children's ability to judge the relative magnitude of approximate, nonsymbolic numbers was uniquely predictive of arithmetic ability at any grade. Overall, symbolic number processing was more predictive of arithmetic ability than nonsymbolic number processing, though the relative importance of symbolic number ability appears to shift from cardinal to ordinal processing. © 2014 John Wiley & Sons Ltd.
During the last decade, a number of initiatives have been undertaken to create systematic national and global data sets of processed satellite imagery. An important application of these data is the derivation of large area (i.e. multi-scene) land cover products. Such products, ho...
Participation and Collaborative Learning in Large Class Sizes: Wiki, Can You Help Me?
ERIC Educational Resources Information Center
de Arriba, Raúl
2017-01-01
Collaborative learning has a long tradition within higher education. However, its application in classes with a large number of students is complicated, since it is a teaching method that requires a high level of participation from the students and careful monitoring of the process by the educator. This article presents an experience in…
Criminal Intent with Property: A Study of Real Estate Fraud Prediction and Detection
ERIC Educational Resources Information Center
Blackman, David H.
2013-01-01
The large number of real estate transactions across the United States, combined with closing process complexity, creates extremely large data sets that conceal anomalies indicative of fraud. The quantitative amount of damage due to fraud is immeasurable to the lives of individuals who are victims, not to mention the financial impact to…
NASA Astrophysics Data System (ADS)
Bier, A.; Burkhardt, U.; Bock, L.
2017-11-01
The atmospheric state, aircraft emissions, and engine properties determine the formation and initial properties of contrails. The synoptic situation controls microphysical and dynamical processes and causes a wide variability of contrail cirrus life cycles. A reduction of soot particle number emissions, resulting, for example, from the use of alternative fuels, strongly impacts initial ice crystal numbers and the microphysical process rates of contrail cirrus. We use the European Centre/Hamburg (ECHAM) climate model version 5, including a contrail cirrus module, to study process rates, properties, and life cycles of contrail cirrus clusters within different synoptic situations. The impact of reduced soot number emissions is approximated by a reduction in the initial ice crystal number, studied here for an example reduction of 80%. Contrail cirrus microphysical and macrophysical properties can depend much more strongly on the synoptic situation than on the initial ice crystal number. They can attain a large cover, optical depth, and ice water content in long-lived and large-scale ice-supersaturated areas, making them particularly climate-relevant. In those synoptic situations, the accumulated ice crystal loss due to sedimentation is increased by around 15%, and the volume of contrail cirrus exceeding an optical depth of 0.02, as well as their short-wave radiative impact, is strongly decreased due to reduced soot emissions. These reductions are of little consequence in short-lived and small-scale ice-supersaturated areas, where contrail cirrus stay optically very thin and attain a low cover. The synoptic situations in which long-lived and climate-relevant contrail cirrus clusters can be found over the eastern U.S. occur in around 25% of cases.
Chemical Reactions in Turbulent Mixing Flows.
1986-06-15
length from Reynolds and Schmidt numbers at high Reynolds number, 2. the linear dependence of flame length on the stoichiometric mixture ratio, and, 3...processes are unsteady and the observed large scale flame length fluctuations are the best evidence of the individual cascade. A more detailed examination...Damköhler number. When the same ideas are used in a model of fuel jets burning in air, it explains (Broadwell 1982): 1. the independence of flame
MicroRNAs in large herpesvirus DNA genomes: recent advances.
Sorel, Océane; Dewals, Benjamin G
2016-08-01
MicroRNAs (miRNAs) are small non-coding RNAs (ncRNAs) that regulate gene expression. They alter mRNA translation through base-pair complementarity, leading to regulation of genes during both physiological and pathological processes. Viruses have evolved mechanisms to take advantage of the host cells to multiply and/or persist over the lifetime of the host. Herpesviridae are a large family of double-stranded DNA viruses that are associated with a number of important diseases, including lymphoproliferative diseases. Herpesviruses establish lifelong latent infections through modulation of the interface between the virus and its host. A number of reports have identified miRNAs in a very large number of human and animal herpesviruses suggesting that these short non-coding transcripts could play essential roles in herpesvirus biology. This review will specifically focus on the recent advances on the functions of herpesvirus miRNAs in infection and pathogenesis.
Deckard, Gloria J; Borkowski, Nancy; Diaz, Deisell; Sanchez, Carlos; Boisette, Serge A
2010-01-01
Designated primary care clinics largely serve low-income and uninsured patients who present a disproportionate number of chronic illnesses and face great difficulty in obtaining the medical care they need, particularly the access to specialty physicians. With limited capacity for providing specialty care, these primary care clinics generally refer patients to safety net hospitals' specialty ambulatory care clinics. A large public safety net health system successfully improved the effectiveness and efficiency of the specialty clinic referral process through application of Lean Six Sigma, an advanced process-improvement methodology and set of tools driven by statistics and engineering concepts.
Doubled heterogeneous crystal nucleation in sediments of hard sphere binary-mass mixtures
NASA Astrophysics Data System (ADS)
Löwen, Hartmut; Allahyarov, Elshad
2011-10-01
Crystallization during the sedimentation process of a binary colloidal hard spheres mixture is explored by Brownian dynamics computer simulations. The two species are different in buoyant mass but have the same interaction diameter. Starting from a completely mixed system in a finite container, gravity is suddenly turned on, and the crystallization process in the sample is monitored. If the Peclet numbers of the two species are both not too large, crystalline layers are formed at the bottom of the cell. The composition of lighter particles in the sedimented crystal is non-monotonic in the altitude: it is first increasing, then decreasing, and then increasing again. If one Peclet number is large and the other is small, we observe the occurrence of a doubled heterogeneous crystal nucleation process. First, crystalline layers are formed at the bottom container wall which are separated from an amorphous sediment. At the amorphous-fluid interface, a secondary crystal nucleation of layers is identified. This doubled heterogeneous nucleation can be verified in real-space experiments on colloidal mixtures.
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often unfeasible to perform the very large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
Scheme for Entering Binary Data Into a Quantum Computer
NASA Technical Reports Server (NTRS)
Williams, Colin
2005-01-01
A quantum algorithm provides for the encoding of an exponentially large number of classical data bits by use of a smaller (polynomially large) number of quantum bits (qubits). The development of this algorithm was prompted by the need, heretofore not satisfied, for a means of entering real-world binary data into a quantum computer. The data format provided by this algorithm is suitable for subsequent ultrafast quantum processing of the entered data. Potential applications lie in disciplines (e.g., genomics) in which one needs to search for matches between parts of very long sequences of data. For example, the algorithm could be used to encode the N-bit-long human genome in only log2N qubits. The resulting log2N-qubit state could then be used for subsequent quantum data processing - for example, to perform rapid comparisons of sequences.
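To put the log2 N scaling in perspective, a quick back-of-the-envelope check follows; the genome size and the 2-bits-per-base encoding are assumptions for illustration, not figures from the abstract.

```python
import math

# Assumed figures: ~3.2e9 base pairs at 2 bits per base (not taken from the abstract).
genome_bits = 2 * 3_200_000_000
print(math.ceil(math.log2(genome_bits)))   # -> 33, i.e. a few dozen qubits for N ~ 6.4e9 bits
```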
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu
The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because, in stochastic chemical kinetics, the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
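For readers unfamiliar with the SSA referred to above, a minimal Gillespie direct-method sketch for a toy birth-death system is given below. This is the standard textbook algorithm, not the CLE-based averaging reduction developed in the paper, and the rate constants are arbitrary.

```python
import math
import random

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0):
    """Direct-method SSA for the reactions 0 -> X (rate k_birth) and X -> 0 (rate k_death * x)."""
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k_birth, k_death * x               # propensities of the two reactions
        a0 = a1 + a2
        if a0 == 0:
            break
        t += -math.log(1.0 - random.random()) / a0  # exponentially distributed waiting time
        x += 1 if random.random() * a0 < a1 else -1 # pick a reaction proportionally to propensity
        trajectory.append((t, x))
    return trajectory

traj = gillespie_birth_death()
print("final copy number:", traj[-1][1])  # fluctuates around k_birth / k_death = 100
```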
NASA Technical Reports Server (NTRS)
Hawke, Veronica; Gage, Peter; Manning, Ted
2007-01-01
ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.
NASA Astrophysics Data System (ADS)
Duffy, Ken; Lobunets, Olena; Suhov, Yuri
2007-05-01
We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
Statistical distributions of earthquake numbers: consequence of branching process
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.
2010-03-01
We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
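To make the Poisson/gamma-mixture representation of the NBD mentioned above concrete, a small simulation sketch follows; the parameter values are arbitrary and chosen only for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r, p = 2.0, 0.3                      # NBD shape and success-probability parameters (arbitrary)

# Mixture construction: Poisson counts whose rate is itself gamma-distributed
rates = rng.gamma(shape=r, scale=(1 - p) / p, size=100_000)
counts = rng.poisson(rates)

# Compare the mixture against the negative binomial distribution directly
print("mixture mean/var :", counts.mean(), counts.var())
print("nbinom  mean/var :", stats.nbinom.mean(r, p), stats.nbinom.var(r, p))
```

With this parameterisation the gamma scale (1 - p)/p makes the mixture mean r(1 - p)/p and variance r(1 - p)/p^2 match the negative binomial exactly, which is the overdispersion property exploited for earthquake counts.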
Viscous versus inviscid exact coherent states in high Reynolds number wall flows
NASA Astrophysics Data System (ADS)
Montemuro, Brandon; Klewicki, Joe; White, Chris; Chini, Greg
2017-11-01
Streamwise-averaged motions consisting of streamwise-oriented streaks and vortices are key components of exact coherent states (ECS) arising in incompressible wall-bounded shear flows. These invariant solutions are believed to provide a scaffold in phase space for the turbulent dynamics realized at large Reynolds number Re. Nevertheless, many ECS, including upper-branch states, have a large-Re asymptotic structure in which the effective Reynolds number governing the streak and roll dynamics is order unity. Although these viscous ECS very likely play a role in the dynamics of the near-wall region, they cannot be relevant to the inertial layer, where the leading-order mean dynamics are known to be inviscid. In particular, viscous ECS cannot account for the observed regions of quasi-uniform streamwise momentum and interlaced internal shear layers (or `vortical fissures') within the inertial layer. In this work, a large-Re asymptotic analysis is performed to extend the existing self-sustaining-process/vortex-wave-interaction theory to account for largely inviscid ECS. The analysis highlights feedback mechanisms between the fissures and uniform momentum zones that can enable their self-sustenance at extreme Reynolds number. NSF CBET Award 1437851.
NASA Technical Reports Server (NTRS)
Kolesar, C. E.
1987-01-01
Research activity on an airfoil designed for a large airplane capable of very long endurance times at a low Mach number of 0.22 is examined. Airplane mission objectives and design optimization resulted in requirements for a very high design lift coefficient and a large amount of laminar flow at high Reynolds number to increase the lift/drag ratio and reduce the loiter lift coefficient. Natural laminar flow was selected instead of distributed mechanical suction for the measurement technique. A design lift coefficient of 1.5 was identified as the highest which could be achieved with a large extent of laminar flow. A single element airfoil was designed using an inverse boundary layer solution and inverse airfoil design computer codes to create an airfoil section that would achieve performance goals. The design process and results, including airfoil shape, pressure distributions, and aerodynamic characteristics are presented. A two dimensional wind tunnel model was constructed and tested in a NASA Low Turbulence Pressure Tunnel which enabled testing at full scale design Reynolds number. A comparison is made between theoretical and measured results to establish accuracy and quality of the airfoil design technique.
Decay of homogeneous two-dimensional quantum turbulence
NASA Astrophysics Data System (ADS)
Baggaley, Andrew W.; Barenghi, Carlo F.
2018-03-01
We numerically simulate the free decay of two-dimensional quantum turbulence in a large, homogeneous Bose-Einstein condensate. The large number of vortices, the uniformity of the density profile, and the absence of boundaries (where vortices can drift out of the condensate) isolate the annihilation of vortex-antivortex pairs as the only mechanism which reduces the number of vortices, Nv, during the turbulence decay. The results clearly reveal that vortex annihilation is a four-vortex process, confirming the decay law Nv ~ t^(-1/3), where t is time, which was inferred from experiments with relatively few vortices in small harmonically trapped condensates.
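The quoted decay law follows directly from the four-vortex annihilation kinetics described in the abstract; as a brief sketch, with an unspecified rate constant Γ:

```latex
\frac{dN_v}{dt} = -\Gamma N_v^4
\;\;\Longrightarrow\;\;
N_v^{-3}(t) = N_v^{-3}(0) + 3\Gamma t
\;\;\Longrightarrow\;\;
N_v(t) \sim (3\Gamma t)^{-1/3} \quad \text{for large } t.
```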
Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations
2013-01-01
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
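For reference, one common form of the deterministic Wilson–Cowan neural field equation that such microscopic models converge to is (the notation here is generic and not necessarily the paper's):

```latex
\tau \,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + F\!\left(\int_{\Omega} w(x,y)\,u(y,t)\,dy + I(x,t)\right),
```

where u is the population activity, w the connectivity kernel, F a sigmoidal gain function, and I an external input.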
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, D. H.
1985-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
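As a rough illustration of the Monte Carlo approach mentioned above, here is a toy bank-contention simulation; the bank count, reservation time, and random access stream are assumptions for the sketch, not parameters from the report.

```python
import random

def simulate_bank_contention(n_banks=16, busy_ticks=8, n_accesses=100_000, seed=1):
    """Issue one random-bank access per tick; stall while the target bank is still reserved."""
    rng = random.Random(seed)
    free_at = [0] * n_banks                   # tick at which each bank becomes free again
    tick = 0
    for _ in range(n_accesses):
        bank = rng.randrange(n_banks)
        tick = max(tick + 1, free_at[bank])   # stall if the bank is still busy
        free_at[bank] = tick + busy_ticks
    return n_accesses / tick                  # achieved throughput in accesses per tick

for banks in (16, 64, 256):
    print(banks, "banks ->", round(simulate_bank_contention(n_banks=banks), 3), "accesses/tick")
```

Running the sketch shows throughput climbing toward one access per tick as the number of independent banks grows relative to the reservation time, which is the qualitative conclusion of the analysis.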
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, David H.
1987-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
NASA Astrophysics Data System (ADS)
Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.
2010-04-01
In the vehicle design process, design decisions are more and more based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, to reduce time-to-market and to launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and allow to benefit from the availability of parallel computing environments. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface degree of freedom (DOF). Since large interfaces are common in vehicles (e.g. the continuous line connections to connect the body with the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. Therefore there is a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which allows reducing the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus enables efficient structural modification predictions of the global modes, so that efficient vibro-acoustic design modification, optimization and robust design become possible. The results show that wave-based substructuring offers a clear benefit for vehicle design modifications, by improving both the speed of component reduction processes and the efficiency and accuracy of design iteration predictions, as compared to conventional substructuring approaches.
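The interface-reduction idea can be illustrated with a generic Galerkin projection of a coupling matrix onto a few smooth basis vectors. The sketch below uses low-order cosine shapes as the "wave" basis and a toy banded stiffness matrix; it is a plain projection example under those assumptions, not the authors' WBS implementation.

```python
import numpy as np

n_dof, n_waves = 200, 8                      # interface DOFs vs. retained basis functions
x = np.linspace(0.0, 1.0, n_dof)

# Smooth "wave" basis along the interface (assumed cosine shapes, for illustration only)
W = np.column_stack([np.cos(k * np.pi * x) for k in range(n_waves)])

# Toy interface stiffness/coupling matrix (banded, symmetric)
K = 2.0 * np.eye(n_dof) - np.eye(n_dof, k=1) - np.eye(n_dof, k=-1)

K_reduced = W.T @ K @ W                      # 8 x 8 instead of 200 x 200
print(K.shape, "->", K_reduced.shape)
```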
2000-06-01
As the number of sensors, platforms, exploitation sites, and command and control nodes continues to grow in response to Joint Vision 2010 information dominance requirements, Commanders and analysts will have an ever increasing need to collect and process vast amounts of data over wide areas using a large number of disparate sensors and information gathering sources.
The gating effect by thousands of bubble-propelled micromotors in macroscale channels
NASA Astrophysics Data System (ADS)
Teo, Wei Zhe; Wang, Hong; Pumera, Martin
2015-07-01
Increasing interest in the utilization of self-propelled micro-/nanomotors for environmental remediation requires the examination of their efficiency at the macroscale level. As such, we investigated the effect of micro-/nanomotors' propulsion and bubbling on the rate of sodium hydroxide dissolution and the subsequent dispersion of OH- ions across more than 30 cm, so as to understand how these factors might affect the dispersion of remediation agents in real systems which might require these agents to travel long distances to reach the pollutants. Experimental results showed that the presence of large numbers of active bubble-propelled tubular bimetallic Cu/Pt micromotors (4.5 × 104) induced a gating effect on the dissolution and dispersion process, slowing down the change in pH of the solution considerably. The retardation was found to be dependent on the number of active micromotors present in the range of 1.5 × 104 to 4.5 × 104 micromotors. At lower numbers (0.75 × 104), however, propelling micromotors did speed up the dissolution and dispersion process. The understanding of the combined effects of large number of micro-/nanomotors' motion and bubbling on its macroscale mixing behavior is of significant importance for future applications of these devices.
2008-12-01
Figure 4 shows B4C plates formed via hot pressing with a curved shape. Commercial B4C shows a large number of lenticular graphitic inclusions, which act as crack initiation points in flexure testing (Figure 5: SEM micrograph of large lenticular graphitic inclusions in commercial material).
Extension of electronic speckle correlation interferometry to large deformations
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Sciammarella, Federico M.
1998-07-01
The process of fringe formation under simultaneous illumination in two orthogonal directions is analyzed. Procedures to extend the applicability of this technique to large deformation and high density of fringes are introduced. The proposed techniques are applied to a number of technical problems. Good agreement is obtained when the experimental results are compared with results obtained by other methods.
Superconducting Optoelectronic Circuits for Neuromorphic Computing
NASA Astrophysics Data System (ADS)
Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo
2017-03-01
Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm^2 at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
Computations on Wings With Full-Span Oscillating Control Surfaces Using Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2013-01-01
A dual-level parallel procedure is presented for computing large databases to support aerospace vehicle design. This procedure has been developed as a single Unix script within the Parallel Batch Submission environment utilizing MPIexec, and runs MPI-based analysis software. It has been developed to provide a process for aerospace designers to generate data for large numbers of cases with the highest possible fidelity and reasonable wall clock time. A single job submission environment has been created to avoid keeping track of multiple jobs and the associated system administration overhead. The process has been demonstrated for computing large databases for the design of typical aerospace configurations, a launch vehicle and a rotorcraft.
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2000-01-01
The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.
ERIC Educational Resources Information Center
Cheek, Kim A.
2013-01-01
Research about geologic time conceptions generally focuses on the placement of events on the geologic timescale, with few studies dealing with the duration of geologic processes or events. Those studies indicate that students often have very poor conceptions about temporal durations of geologic processes, but the reasons for that are relatively…
Regulatory agencies are confronted with a daunting task of developing fish consumption advisories for a large number of lakes and rivers with little resources. A feasible mechanism to develop region-wide fish advisories is by using a process-based mathematical model. One model of...
A Process for Reviewing and Evaluating Generated Test Items
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis
2016-01-01
Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, evoke a process that is both time-consuming and expensive because each item is written,…
Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm
NASA Astrophysics Data System (ADS)
Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.
2018-03-01
Prime numbers appeal to researchers because of their computational complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can be used to generate prime numbers from randomly generated or sequentially numbered inputs. The testing in this study aims to find out which algorithm is better suited to generating large primes in terms of time complexity. The tests are supported by applications designed in the Java language, with code optimization and a fixed maximum memory usage, so that the testing process can run under the same conditions and the results obtained are objective.
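For reference, minimal Python sketches of the two sieves compared in the study follow; the study's own implementations were written in Java, and these are plain textbook versions.

```python
def sieve_of_eratosthenes(n):
    """Return all primes <= n by crossing off multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

def sieve_of_sundaram(n):
    """Return all primes <= n; marks i + j + 2ij, then maps each surviving k to 2k + 1."""
    if n < 2:
        return []
    m = (n - 1) // 2
    marked = [False] * (m + 1)
    for i in range(1, m + 1):
        j = i
        while i + j + 2 * i * j <= m:
            marked[i + j + 2 * i * j] = True
            j += 1
    return [2] + [2 * k + 1 for k in range(1, m + 1) if not marked[k]]

print(sieve_of_eratosthenes(50))
print(sieve_of_sundaram(50))
```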
Auditory Processing of Complex Sounds Across Frequency Channels.
1992-06-26
towards gaining an understanding of how the auditory system processes complex sounds. The results of binaural psychophysical experiments in human subjects...suggest (1) that spectrally synthetic binaural processing is the rule when the number of components in the tone complex is relatively few (less than...10) and there are no dynamic binaural cues to aid segregation of the target from the background, and (2) that waveforms having large effective
An Improved Data Collection and Processing System
1988-05-01
of-use of Turbo made it the compiler of choice. These applications included data storage, processing and output. Thus, those programs...changed. As it was not envisioned that the settings would remain constant for a large number of tests in a row, the update process is executed every...A repeat of the above reading process is done for the file containing Ic. The only difference is that the first line of this
Problems of Automation and Management Principles Information Flow in Manufacturing
NASA Astrophysics Data System (ADS)
Grigoryuk, E. N.; Bulkin, V. V.
2017-07-01
Automated process control systems are complex systems characterized by the presence of elements with an overall purpose, the systemic nature of the implemented algorithms for the exchange and processing of information, and a large number of functional subsystems. The article gives examples of automatic control systems and automated process control systems, drawing parallels between them and identifying their strengths and weaknesses. A non-standard control system for the technological process is also proposed.
Theoretical and experimental study of a new algorithm for factoring numbers
NASA Astrophysics Data System (ADS)
Tamma, Vincenzo
The security of codes, for example in credit card and government information, relies on the fact that the factorization of a large integer N is a rather costly process on a classical digital computer. Such security is endangered by Shor's algorithm, which employs entangled quantum systems to find, with a polynomial number of resources, the period of a function which is connected with the factors of N. We can surely expect a possible future realization of such a method for large numbers, but so far the period of Shor's function has been computed only for the number 15. Inspired by Shor's idea, our work aims at methods of factorization based on the periodicity measurement of a given continuous periodic "factoring function" which is physically implementable using an analogue computer. In particular, we have focused on both the theoretical and the experimental analysis of Gauss sums with continuous arguments, leading to a new factorization algorithm. The procedure allows, for the first time, the factoring of several numbers by measuring the periodicity of Gauss sums performing first-order "factoring" interference processes. We experimentally implemented this idea by exploiting polychromatic optical interference in the visible range with a multi-path interferometer, and achieved the factorization of seven digit numbers. The physical principle behind this "factoring" interference procedure can be potentially exploited also on entangled systems, such as multi-photon entangled states, in order to achieve a polynomial scaling in the number of resources.
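As a numerical illustration of how a truncated Gauss sum discriminates factors from non-factors, the sketch below evaluates the standard truncated sum used in such factoring experiments. The test number and truncation order are arbitrary choices for the sketch, and this is not the authors' specific optical implementation.

```python
import cmath

def gauss_sum_signal(N, trial, M=20):
    """|A_N^M(trial)| = |(1/(M+1)) * sum_{m=0..M} exp(-2*pi*i*m^2*N/trial)|.
    The magnitude is exactly 1 when `trial` divides N and is typically suppressed otherwise."""
    s = sum(cmath.exp(-2j * cmath.pi * m * m * N / trial) for m in range(M + 1))
    return abs(s) / (M + 1)

N = 23331          # = 3 * 7 * 11 * 101, chosen only for illustration
for trial in range(2, 35):
    signal = gauss_sum_signal(N, trial)
    marker = "  <-- factor" if N % trial == 0 else ""
    print(f"{trial:3d}  {signal:.3f}{marker}")
```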
Judgement of discrete and continuous quantity in adults: number counts!
Nys, Julie; Content, Alain
2012-01-01
Three experiments involving a Stroop-like paradigm were conducted. In Experiment 1, adults received a number comparison task in which large sets of dots, orthogonally varying along a discrete dimension (number of dots) and a continuous dimension (cumulative area), were presented. Incongruent trials were processed more slowly and with less accuracy than congruent trials, suggesting that continuous dimensions such as cumulative area are automatically processed and integrated during a discrete quantity judgement task. Experiment 2, in which adults were asked to perform area comparison on the same stimuli, revealed the reciprocal interference from number on the continuous quantity judgements. Experiment 3, in which participants received both the number and area comparison tasks, confirmed the results of Experiments 1 and 2. Contrasting with earlier statements, the results support the view that number acts as a more salient cue than continuous dimensions in adults. Furthermore, the individual predisposition to automatically access approximate number representations was found to correlate significantly with adults' exact arithmetical skills.
Velasco, J Marquez; Giamini, S A; Kelaidis, N; Tsipas, P; Tsoutsou, D; Kordas, G; Raptis, Y S; Boukos, N; Dimoulas, A
2015-10-09
Controlling the number of layers of graphene grown by chemical vapor deposition is crucial for large scale graphene application. We propose here an etching process of graphene which can be applied immediately after growth to control the number of layers. We use nickel (Ni) foil at high temperature (T = 900 °C) to produce multilayer-AB-stacked-graphene (MLG). The etching process is based on annealing the samples in a hydrogen/argon atmosphere at a relatively low temperature (T = 450 °C) inside the growth chamber. The extent of etching is mainly controlled by the annealing process duration. Using Raman spectroscopy we demonstrate that the number of layers was reduced, changing from MLG to few-layer-AB-stacked-graphene and in some cases to randomly oriented few layer graphene near the substrate. Furthermore, our method offers the significant advantage that it does not introduce defects in the samples, maintaining their original high quality. This fact and the low temperature our method uses make it a good candidate for controlling the layer number of already grown graphene in processes with a low thermal budget.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
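A minimal sketch of a deterministically constructed simple-cycle reservoir of the kind described follows; the cycle weight, input weight, sign pattern, and the toy delayed-recall task are arbitrary choices for illustration, not the paper's benchmark settings.

```python
import numpy as np

def simple_cycle_reservoir(inputs, n_res=50, r=0.9, v=0.5):
    """Run a reservoir whose recurrent weights form a single cycle with weight r.
    Input weights all have magnitude v; signs follow a fixed (here: alternating) pattern."""
    W = np.zeros((n_res, n_res))
    for i in range(n_res):
        W[(i + 1) % n_res, i] = r                 # unidirectional cycle topology
    w_in = v * np.array([1.0 if i % 2 == 0 else -1.0 for i in range(n_res)])
    states, x = [], np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

# Toy task: train a ridge-regression readout to recall the input delayed by 5 steps
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 1000)
X, y = simple_cycle_reservoir(u)[5:], u[:-5]
beta = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
print("train MSE:", float(np.mean((X @ beta - y) ** 2)))
```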
NASA Technical Reports Server (NTRS)
Nagaraja, K. S.; Kraft, R. H.
1999-01-01
The HSCT Flight Controls Group has developed longitudinal control laws, utilizing PTC aeroelastic flexible models to minimize aeroservoelastic interaction effects, for a number of flight conditions. The control law design process resulted in a higher order controller and utilized a large number of sensors distributed along the body for minimizing the flexibility effects. Processes were developed to implement these higher order control laws for performing the dynamic gust loads and flutter analyses. The processes and its validation were documented in Reference 2, for selected flight condition. The analytical results for additional flight conditions are presented in this document for further validation.
Dwivedi, M; Shetty, K D; Nath, L Narendra
2009-01-01
An anthropometric device (AD) was designed and developed to collect data on foot and knee of locomotor disabled people. The aim was to standardize the sizes of knee-ankle-foot orthoses (KAFOs) in a standard modular form so that they can be mass produced to cater for fitting to a large number of locomotor disabled people. The anthropometric data collected on large numbers of locomotor disabled people were processed, with the help of a computer programme, to arrive at standard sizes for three modules, i.e. a foot plate (seven sizes), knee pieces (six sizes) and a lateral upright in a universal size. These modules were produced by plastic injection moulding and compression moulding processes using glass-reinforced polypropylene. KAFOs were assembled and fitted to locomotor disabled people. Feedback obtained was encouraging and this vindicated the concept, design and utility of the AD.
The Burn Wound Microenvironment
Rose, Lloyd F.; Chan, Rodney K.
2016-01-01
Significance: While the survival rate of the severely burned patient has improved significantly, relatively little progress has been made in treatment or prevention of burn-induced long-term sequelae, such as contraction and fibrosis. Recent Advances: Our knowledge of the molecular pathways involved in burn wounds has increased dramatically, and technological advances now allow large-scale genomic studies, providing a global view of wound healing processes. Critical Issues: Translating findings from a large number of in vitro and preclinical animal studies into clinical practice represents a gap in our understanding, and the failures of a number of clinical trials suggest that targeting single pathways or cytokines may not be the best approach. Significant opportunities for improvement exist. Future Directions: Study of the underlying molecular influences of burn wound healing progression will undoubtedly continue as an active research focus. Increasing our knowledge of these processes will identify additional therapeutic targets, supporting informed clinical studies that translate into clinical relevance and practice. PMID:26989577
SignalPlant: an open signal processing software platform.
Plesinger, F; Jurco, J; Halamek, J; Jurak, P
2016-07-01
The growing technical standard of acquisition systems allows the acquisition of large records, often reaching gigabytes or more in size as is the case with whole-day electroencephalograph (EEG) recordings, for example. Although current 64-bit software for signal processing is able to process (e.g. filter, analyze, etc.) such data, visual inspection and labeling will probably suffer from rather long latency during the rendering of large portions of recorded signals. For this reason, we have developed SignalPlant-a stand-alone application for signal inspection, labeling and processing. The main motivation was to supply investigators with a tool allowing fast and interactive work with large multichannel records produced by EEG, electrocardiograph and similar devices. The rendering latency was compared with EEGLAB and proves significantly faster when displaying an image from a large number of samples (e.g. 163-times faster for 75 × 10^6 samples). The presented SignalPlant software is available free and does not depend on any other computation software. Furthermore, it can be extended with plugins by third parties ensuring its adaptability to future research tasks and new data formats.
A Theory For The Variability of The Baroclinic Quasi-geostrophic Wind Driven Circulation.
NASA Astrophysics Data System (ADS)
Ben Jelloul, M.; Huck, T.
We propose a theory of the wind driven circulation based on the large scale (i.e. small Burger number) quasi-geostrophic assumptions retained in the Rhines and Young (1982) classical study of the steady baroclinic flow. We therefore use multiple time scale and asymptotic expansions to separate the steady and the time dependent components of the flow. The barotropic flow is given by the Sverdrup balance. At first order in Burger number, the baroclinic flow can be decomposed into two parts. A steady contribution ensures no flow in the deep layer, which is at rest in the absence of dissipative processes. Since baroclinic instability is inhibited at large scale, a spectrum of neutral modes also arises. These are of three types: classical Rossby basin modes deformed through advection by the barotropic flow, recirculating modes localized in the recirculation gyre, and blocked modes corresponding to closed potential vorticity contours. At the next order in Burger number, amplitude equations for the baroclinic modes are derived. If dissipative processes are included at this order, the system adjusts towards the Rhines and Young solution with a homogenized potential vorticity pool.
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2010-01-01
One of the major objectives of large-scale educational surveys is reporting trends in academic achievement. For this purpose, a substantial number of items are carried from one assessment cycle to the next. The linking process that places academic abilities measured in different assessments on a common scale is usually based on a concurrent…
Leading the Way: Changing the Focus from Teaching to Learning in Large Subjects with Limited Budgets
ERIC Educational Resources Information Center
Fildes, Karen; Kuit, Tracey; O'Brien, Glennys; Keevers, Lynne; Bedford, Simon
2015-01-01
To lead positive change in the teaching practice of teams that service large numbers of diverse students from multiple degree programs provides many challenges. The primary aim of this study was to provide a clear framework on which to plan the process of change that can be utilized by academic departments sector wide. Barriers to change were…
Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas
2016-09-19
Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
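The tissue- and process-specific partitioning described above amounts to grouping chips by curated annotations before any normalization or filtering; a minimal sketch of that bookkeeping step (hypothetical column names, not the authors' pipeline):

```python
import pandas as pd

# Hypothetical metadata: one row per ATH1 chip with curated annotations.
meta = pd.DataFrame({
    "chip_id": ["GSM1001", "GSM1002", "GSM1003", "GSM1004"],
    "tissue":  ["leaf", "leaf", "root", "seed"],
    "process": ["abiotic_stress", "development", "abiotic_stress", "development"],
})

# Partition chips into tissue- and process-specific categories so that each
# category is normalized, filtered and used for network inference separately.
partitions = {
    key: group["chip_id"].tolist()
    for key, group in meta.groupby(["tissue", "process"])
}
for key, chips in partitions.items():
    print(key, chips)
```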
Real-time fast physical random number generator with a photonic integrated circuit.
Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu
2017-03-20
Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
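The statistical evaluation mentioned above relies on the NIST SP 800-22 suite; its simplest member, the frequency (monobit) test, is easy to sketch and gives a feel for how such bit sequences are judged (illustrative only, not the authors' implementation):

```python
import math
import random

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Maps bits to +/-1, sums them and returns a p-value; the sequence is
    accepted as random at the 1% significance level when p >= 0.01.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# Example on a short pseudo-random sequence; real evaluations use much longer
# (Gbit- to Tbit-scale) physical random bit sequences.
bits = [random.getrandbits(1) for _ in range(10**6)]
print(monobit_frequency_test(bits))
```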
Some anomalies between wind tunnel and flight transition results
NASA Technical Reports Server (NTRS)
Harvey, W. D.; Bobbitt, P. J.
1981-01-01
A review of environmental disturbance influences and boundary layer transition measurements on a large collection of reference sharp cone tests in wind tunnels, together with recent transonic-supersonic cone flight results, has previously demonstrated the dominance of the free-stream disturbance level on the transition process from beginning to end. The variation with Mach number of the ratio of the transition Reynolds number at onset to that at the end of transition has been shown to be consistently different between flight and wind tunnels. Previous correlations of the end of transition with disturbance level give good results for flight and for a large number of tunnels; however, anomalies occur for similar correlations based on transition onset. Present cone results obtained with a tunnel sonic throat reduced the disturbance level by an order of magnitude, yielding transition values comparable to flight.
Study on road surface source pollution controlled by permeable pavement
NASA Astrophysics Data System (ADS)
Zheng, Chaocheng
2018-06-01
The increase in impermeable pavement in urban construction not only increases road surface runoff, but also generates a large amount of non-point source pollution. When permeable pavement is used to control road surface runoff, a large amount of particulate matter is retained as rainwater infiltrates, so that the pollution is controlled at its source. In this laboratory experiment, we determined the ability of permeable pavement to remove heavy pollutants and discussed the factors that affect non-point source pollution control by permeable pavement, providing a theoretical basis for the application of permeable pavement.
Acrolein Microspheres Are Bonded To Large-Area Substrates
NASA Technical Reports Server (NTRS)
Rembaum, Alan; Yen, Richard C. K.
1988-01-01
Reactive cross-linked microspheres produced under influence of ionizing radiation in aqueous solutions of unsaturated aldehydes, such as acrolein, with sodium dodecyl sulfate. Diameters of spheres depend on concentrations of ingredients. If polystyrene, polymethylmethacrylate, or polypropylene object immersed in solution during irradiation, microspheres become attached to surface. Resulting modified surface has grainy coating with reactivity similar to free microspheres. Aldehyde-substituted-functional microspheres react under mild conditions with number of organic reagents and with most proteins. Microsphere-coated macrospheres or films used to immobilize high concentrations of proteins, enzymes, hormones, viruses, cells, and large number of organic compounds. Applications include separation techniques, clinical diagnostic tests, catalytic processes, and battery separators.
Jimenez, Paulino; Bregenzer, Anita
2018-02-23
Electronic health (eHealth) and mobile health (mHealth) tools can support and improve the whole process of workplace health promotion (WHP) projects. However, several challenges and opportunities have to be considered while integrating these tools in WHP projects. Currently, a large number of eHealth tools are developed for changing health behavior, but these tools can support the whole WHP process, including group administration, information flow, assessment, intervention development process, or evaluation. To support a successful implementation of eHealth tools in the whole WHP processes, we introduce a concept of WHP (life cycle model of WHP) with 7 steps and present critical and success factors for the implementation of eHealth tools in each step. We developed a life cycle model of WHP based on the World Health Organization (WHO) model of healthy workplace continual improvement process. We suggest adaptations to the WHO model to demonstrate the large number of possibilities to implement eHealth tools in WHP as well as possible critical points in the implementation process. eHealth tools can enhance the efficiency of WHP in each of the 7 steps of the presented life cycle model of WHP. Specifically, eHealth tools can support by offering easier administration, providing an information and communication platform, supporting assessments, presenting and discussing assessment results in a dashboard, and offering interventions to change individual health behavior. Important success factors include the possibility to give automatic feedback about health parameters, create incentive systems, or bring together a large number of health experts in one place. Critical factors such as data security, anonymity, or lack of management involvement have to be addressed carefully to prevent nonparticipation and dropouts. Using eHealth tools can support WHP, but clear regulations for the usage and implementation of these tools at the workplace are needed to secure quality and reach sustainable results. ©Paulino Jimenez, Anita Bregenzer. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 23.02.2018.
A Critical Review of Automated Photogrammetric Processing of Large Datasets
NASA Astrophysics Data System (ADS)
Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F.
2017-08-01
The paper reports comparisons between commercial software packages able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability and redundancy. Different datasets are employed, each featuring a different number of images, GSDs at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A summary of photogrammetric terms is also included, in order to establish rigorous terms of reference for comparisons and critical analyses.
Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John
2014-01-01
Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Practice (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze, viabilities of 93.4%±7.4%, viable cell numbers of 14.3±1.7 million nuclei/mL alginate, and protein secretion of 10.5±1.7 μg/mL/24 h were obtained, which compared well with control ELS (viability 98.1%±0.9%; viable cell numbers 18.3±1.0 million nuclei/mL alginate; protein secretion 18.7±1.8 μg/mL/24 h). Large volume GMP cryopreservation of ELS is possible with good functional recovery using the VIA Freeze and may also be applied to other regenerative medicine applications. PMID:24410575
Technology development in support of the TWRS process flowsheet. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washenfelder, D.J.
1995-10-11
The Tank Waste Remediation System (TWRS) is to treat and dispose of Hanford's single-shell and double-shell tank waste. The TWRS Process Flowsheet (WHC-SD-WM-TI-613, Rev. 1) described a flowsheet based on a large number of assumptions and engineering judgements that require verification or further definition through process and technology development activities. This document builds on the TWRS Process Flowsheet to identify and prioritize tasks that should be completed to strengthen the technical foundation of the flowsheet.
Population attribute compression
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1995-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). The color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume containing only a relatively small number of LUT values, from which a nearest neighbor is selected. Image color values are assigned 8-bit pointers to their closest LUT value, whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
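The successive subdivision of color space resembles a median-cut style quantizer: split the box with the widest extent at its median until every box holds at most a preselected number of colors, then represent each box by a single LUT entry. A minimal sketch of that idea (illustrative, not the patented implementation):

```python
import numpy as np

def subdivide(colors, max_per_volume):
    """Recursively split a set of RGB colors until each volume holds at most
    max_per_volume colors; each final volume is summarized by its mean color."""
    if len(colors) <= max_per_volume:
        return [colors.mean(axis=0)]
    spans = colors.max(axis=0) - colors.min(axis=0)
    axis = int(np.argmax(spans))                # split along the widest axis
    median = np.median(colors[:, axis])
    lower = colors[colors[:, axis] <= median]
    upper = colors[colors[:, axis] > median]
    if len(lower) == 0 or len(upper) == 0:      # degenerate split: stop here
        return [colors.mean(axis=0)]
    return subdivide(lower, max_per_volume) + subdivide(upper, max_per_volume)

# Build a small LUT, then store each pixel as a pointer to its nearest entry
# (an 8-bit pointer suffices whenever the LUT has at most 256 entries).
pixels = np.random.randint(0, 256, size=(10_000, 3)).astype(float)
lut = np.array(subdivide(pixels, max_per_volume=len(pixels) // 256))
dists = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=-1)
pointers = dists.argmin(axis=1)
```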
ERIC Educational Resources Information Center
Hirça, Necati
2015-01-01
Although science experiments are the basis of teaching science process skills (SPS), it has been observed that a large number of prospective primary teachers (PPTs), by virtue of their background, feel anxious about doing science experiments. To overcome this problem, a proposal was suggested for primary school teachers (PSTs) to teach science and…
ERIC Educational Resources Information Center
Gustafson, Robert L.; Thomsen, Steven R.
Induction and mentoring have been described as the processes during which new professors become integrated into the teaching profession. Both are particularly important in advertising and public relations education, where a large number of new faculty hires are former practitioners. A survey of 113 Association of Schools of Journalism and Mass…
Pharmacology of Ischemia-Reperfusion. Translational Research Considerations.
Prieto-Moure, Beatriz; Lloris-Carsí, José M; Barrios-Pitarque, Carlos; Toledo-Pereyra, Luis-H; Lajara-Romance, José María; Berda-Antolí, M; Lloris-Cejalvo, J M; Cejalvo-Lapeña, Dolores
2016-08-01
Ischemia-reperfusion injury (IRI) is a complex physiopathological mechanism involving a large number of metabolic processes that can eventually lead to cell apoptosis and ultimately tissue necrosis. Treatment approaches intended to reduce or palliate the effects of IRI are varied, and are aimed basically at: inhibiting cell apoptosis and the complement system in the inflammatory process deriving from IRI, modulating calcium levels, maintaining mitochondrial membrane integrity, reducing the oxidative effects of IRI and levels of inflammatory cytokines, or minimizing the action of macrophages, neutrophils, and other cell types. This study involved an extensive, up-to-date review of the bibliography on the currently most widely used active products in the treatment and prevention of IRI, and their mechanisms of action, in an aim to obtain an overview of current and potential future treatments for this pathological process. The importance of IRI is clearly reflected by the large number of studies published year after year, and by the variety of pathophysiological processes involved in this major vascular problem. A quick study of the evolution of IRI-related publications in PubMed shows that in a single month in 2014, 263 articles were published, compared to 806 articles in all of 1990.
Garcea, Frank E.; Dombovy, Mary; Mahon, Bradford Z.
2013-01-01
A number of studies have observed that the motor system is activated when processing the semantics of manipulable objects. Such phenomena have been taken as evidence that simulation over motor representations is a necessary and intermediary step in the process of conceptual understanding. Cognitive neuropsychological evaluations of patients with impairments for action knowledge permit a direct test of the necessity of motor simulation in conceptual processing. Here, we report the performance of a 47-year-old male individual (Case AA) and six age-matched control participants on a number of tests probing action and object knowledge. Case AA had a large left-hemisphere frontal-parietal lesion and hemiplegia affecting his right arm and leg. Case AA presented with impairments for object-associated action production, and his conceptual knowledge of actions was severely impaired. In contrast, his knowledge of objects such as tools and other manipulable objects was largely preserved. The dissociation between action and object knowledge is difficult to reconcile with strong forms of the embodied cognition hypothesis. We suggest that these, and other similar findings, point to the need to develop tractable hypotheses about the dynamics of information exchange among sensory, motor and conceptual processes. PMID:23641205
Pair production processes and flavor in gauge-invariant perturbation theory
NASA Astrophysics Data System (ADS)
Egger, Larissa; Maas, Axel; Sondenheimer, René
2017-12-01
Gauge-invariant perturbation theory is an extension of ordinary perturbation theory which describes strictly gauge-invariant states in theories with a Brout-Englert-Higgs effect. Such gauge-invariant states are composite operators which have necessarily only global quantum numbers. As a consequence, flavor is exchanged for custodial quantum numbers in the Standard Model, recreating the fermion spectrum in the process. Here, we study the implications of such a description, possibly also for the generation structure of the Standard Model. In particular, this implies that scattering processes are essentially bound-state-bound-state interactions, and require a suitable description. We analyze the implications for the pair-production process e⁺e⁻ → f̄f at a linear collider to leading order. We show how ordinary perturbation theory is recovered as the leading contribution. Using a PDF-type language, we also assess the impact of sub-leading contributions. To lowest order, we find that the result is mainly influenced by how large the contribution of the Higgs at large x is. This gives an interesting, possibly experimentally testable, scenario for the formal field theory underlying the electroweak sector of the Standard Model.
NASA Astrophysics Data System (ADS)
Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy
2017-09-01
An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components, including subgrid-scale turbulence, combustion, soot and radiation models, which are fully coupled. It is designed to simulate the temporal and fluid dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
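For context, the Smagorinsky subgrid-scale model to which the quoted constant refers is conventionally written as (standard formulation, not quoted from the paper):

\[
\nu_t = (C_s \Delta)^2\,|\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \alpha_t = \frac{\nu_t}{Pr_t}, \qquad D_t = \frac{\nu_t}{Sc_t},
\]

where \(\Delta\) is the filter width, \(\bar{S}_{ij}\) the resolved strain-rate tensor, and the turbulent Prandtl and Schmidt numbers \(Pr_t\) and \(Sc_t\) convert the eddy viscosity \(\nu_t\) into eddy diffusivities for heat and species.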
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
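In the large-deviation language referenced here, the probability of a time-extensive trajectory observable \(A\) taking an atypical value is characterized by a rate function (standard form, assumed rather than quoted):

\[
P_T\!\left(\frac{A}{T}=a\right) \asymp e^{-T\,I(a)}, \qquad I(a) = -\lim_{T\to\infty}\frac{1}{T}\ln P_T\!\left(\frac{A}{T}=a\right),
\]

and umbrella sampling in trajectory space biases the generated trajectories so that the tails of this distribution, which would otherwise almost never be visited, are sampled with useful frequency.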
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Component Analysis (PCA) of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective: higher-quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
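A minimal sketch of the PCA step described above (numpy-based; the function and its defaults are assumptions, not the authors' code): the approximation set is mean-centred, its principal axes become the new search directions, and directions carrying negligible variance can be dropped to shrink the design space.

```python
import numpy as np

def pca_search_directions(approximation_set, variance_kept=0.99):
    """Return re-aligned search directions from the current approximation set.

    approximation_set: (n_solutions, n_variables) array holding the design
    vectors of the non-dominated solutions found so far.
    """
    X = np.asarray(approximation_set, dtype=float)
    Xc = X - X.mean(axis=0)                       # centre the cloud of solutions
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)               # variance explained per axis
    k = int(np.searchsorted(np.cumsum(explained), variance_kept)) + 1
    return Vt[:k]                                  # rows = new search directions

# The pattern-search moves would then perturb designs along +/- each returned
# direction instead of along the original coordinate axes.
directions = pca_search_directions(np.random.rand(50, 20))
```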
The SNARC effect is not a unitary phenomenon.
Basso Moro, Sara; Dell'Acqua, Roberto; Cutini, Simone
2018-04-01
Models of the spatial-numerical association of response codes (SNARC) effect (faster responses to small numbers with left effectors, and the converse for large numbers) diverge substantially in localizing the root cause of this effect along the number-processing chain. One class of models ascribes the cause of the SNARC effect to the inherently spatial nature of the semantic representation of numerical magnitude. A different class of models ascribes the effect's cause to the processing dynamics taking place during response selection. To disentangle these opposing views, we devised a paradigm combining magnitude comparison and stimulus-response switching in order to monitor modulations of the SNARC effect while concurrently tapping both semantic and response-related processing stages. We observed that the SNARC effect varied nonlinearly as a function of both manipulated factors, a result that can hardly be reconciled with a unitary cause of the SNARC effect.
Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.
Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter
2016-01-01
We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.
Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI
NASA Astrophysics Data System (ADS)
Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia
MFC provides a large number of image-processing-related API functions together with object-oriented class mechanisms, which gives image processing technology strong support under Windows. In embedded systems, however, image processing cannot rely on the MFC environment of Windows because of hardware and software restrictions. This paper therefore draws on the image processing technology of Windows and transplants it to the MiniGUI embedded platform. The results show that MiniGUI/Embedded graphical user interface applications for image processing achieve good results when used in an embedded image processing system.
NASA Astrophysics Data System (ADS)
Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid
Here we present an approach for processing neural cell images to analyze their growth process in a culture environment. We have applied several image processing techniques for: (1) environmental noise reduction, (2) neural cell segmentation, (3) neural cell classification based on dendrite growth conditions, and (4) extraction and measurement of neuron features (e.g., cell body area, number of dendrites, axon length). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.
Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba
2017-12-23
Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study in 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.
ERIC Educational Resources Information Center
Larsson, Ken
2014-01-01
This paper looks at the process of managing large numbers of exams efficiently and securely with the use of dedicated IT support. The system integrates regulations on different levels, from national to local (even down to departments), and ensures that the rules are applied in all stages of handling the exams. The system has a proven record of…
Short-Term Uplift Rates and the Mountain Building Process in Southern Alaska
NASA Technical Reports Server (NTRS)
Sauber, Jeanne; Herring, Thomas A.; Meigs, Andrew; Meigs, Andrew
1998-01-01
We have used GPS at 10 stations in southern Alaska with three epochs of measurements to estimate short-term uplift rates. A number of great earthquakes as well as recent large earthquakes characterize the seismicity of the region this century. To reliably estimate uplift rates from GPS data, numerical models that included both the slip distribution in recent large earthquakes and the general slab geometry were constructed.
Large-Scale Production of Nanographite by Tube-Shear Exfoliation in Water
Engström, Ann-Christine; Hummelgård, Magnus; Andres, Britta; Forsberg, Sven; Olin, Håkan
2016-01-01
The number of applications based on graphene, few-layer graphene, and nanographite is rapidly increasing. A large-scale process for production of these materials is critically needed to achieve cost-effective commercial products. Here, we present a novel process to mechanically exfoliate industrial quantities of nanographite from graphite in an aqueous environment with low energy consumption and under controlled shear conditions. This process, based on hydrodynamic tube shearing, produced nanometer-thick and micrometer-wide flakes of nanographite at a production rate exceeding 500 g h⁻¹ with an energy consumption of about 10 Wh g⁻¹. In addition, to facilitate large-area coating, we show that the nanographite can be mixed with nanofibrillated cellulose in the process to form highly conductive, robust and environmentally friendly composites. This composite has a sheet resistance below 1.75 Ω/sq and an electrical resistivity of 1.39 × 10⁻⁴ Ω m and may find use in several applications, from supercapacitors and batteries to printed electronics and solar cells. A 100-liter batch was processed in less than 4 hours. The design of the process allows scaling to even larger volumes, and the low energy consumption indicates a low-cost process. PMID:27128841
Multi-scale structures of turbulent magnetic reconnection
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Narita, Y.; Baumjohann, W.; Daughton, W.
2016-05-01
We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining this with the Fourier-based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA magnetospheric multiscale (MMS) mission.
Fragment size distribution in viscous bag breakup of a drop
NASA Astrophysics Data System (ADS)
Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.
2015-11-01
In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of the Weber number, We, and the Ohnesorge number, Oh, on the disintegration process. The breakup regime considered is observed for 12 ≤ We ≤ 16 and Oh ≤ 0.1. Experiments are conducted using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh we observe a larger number of small-diameter drops and a smaller number of large-diameter drops. With increasing Oh for a fixed We the opposite is seen.
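The two governing dimensionless groups have their usual definitions for aerodynamic drop breakup (assumed here, not quoted from the abstract):

\[
We = \frac{\rho_g U^2 d_0}{\sigma}, \qquad Oh = \frac{\mu_l}{\sqrt{\rho_l\,\sigma\,d_0}},
\]

where \(\rho_g\) is the gas density, \(U\) the relative gas-drop velocity, \(d_0\) the initial drop diameter, \(\sigma\) the surface tension, and \(\mu_l\), \(\rho_l\) the liquid viscosity and density.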
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
Taking OSCE examiner training on the road: reaching the masses.
Reid, Katharine; Smallwood, David; Collins, Margo; Sutherland, Ruth; Dodds, Agnes
2016-01-01
Background To ensure the rigour of objective structured clinical examinations (OSCEs) in assessing medical students, medical school educators must educate examiners with a view to standardising examiner assessment behaviour. Delivering OSCE examiner training is a necessary yet challenging part of the OSCE process. A novel approach to implementing training for current and potential OSCE examiners was trialled by delivering large-group education sessions at major teaching hospitals. Methods The 'OSCE Roadshow' comprised a short training session delivered in the context of teaching hospital 'Grand Rounds' to current and potential OSCE examiners. The training was developed to educate clinicians about OSCE processes, clarify the examiners' role and required behaviours, and to review marking guides and mark allocation in an effort to standardise OSCE processes and encourage consistency in examiner marking behaviour. A short exercise allowed participants to practise marking a mock OSCE to investigate examiner marking behaviour after the training. Results OSCE Roadshows at four metropolitan and one rural teaching hospital were well received and well attended by 171 clinicians across six sessions. Unexpectedly, medical students also attended in large numbers (n=220). After training, participants' average scores for the mock OSCE clustered closely around the ideal score of 28 (out of 40), and the average scores did not differ according to the levels of clinical experience. Conclusion The OSCE Roadshow demonstrated the potential of brief familiarisation training in reaching large numbers of current and potential OSCE examiners in a time and cost-effective manner to promote standardisation of OSCE processes.
Spatial Distribution of Small Water Body Types in Indiana Ecoregions
Due to their large numbers and biogeochemical activity, small water bodies (SWBs), such as ponds and wetlands, can have substantial cumulative effects on hydrologic and biogeochemical processes. Using updated National Wetland Inventory data, we describe the spatial distribution o...
LANDSCAPE ASSESSMENT TOOLS FOR WATERSHED CHARACTERIZATION
A combination of process-based, empirical and statistical models has been developed to assist states in their efforts to assess water quality, locate impairments over large areas, and calculate TMDL allocations. By synthesizing outputs from a number of these tools, LIPS demonstr...
Quantitative Assessment of Neurite Outgrowth in PC12 Cells
In vitro test methods can provide a rapid approach for the screening of large numbers of chemicals for their potential to produce toxicity. In order to identify potential developmental neurotoxicants, assessment of critical neurodevelopmental processes such as neuronal differenti...
NASA Astrophysics Data System (ADS)
Manrubia, S. C.; Prieto Ballesteros, O.; González Kessler, C.; Fernández Remolar, D.; Córdoba-Jabonero, C.; Selsis, F.; Bérczi, S.; Gánti, T.; Horváth, A.; Sik, A.; Szathmáry, E.
2004-03-01
We carry out a comparative analysis of the morphological and seasonal features of two regions in the Martian Southern Polar Region: the Inca City (82S 65W) and the Pityusa Patera zone (66S 37E). These two sites are representative of a large number of areas that are subject to dynamical, seasonal processes that deeply modify the local conditions of those regions. Due to variations in sunlight, seasonal CO2 accumulates during autumn and winter and starts defrosting in spring. By mid summer the seasonal ice has disappeared. Despite a number of relevant differences in the morphology of the seasonal features observed, they seem to result from similar processes.
Customer Service Analysis of Tactical Air Command Base Level Supply Support
1990-09-01
A large number of respondents described customer service as an activity such as order processing, handling of complaints, or troubleshooting. Reliability coefficients reported for the survey scales included: General Service, .69; Demeanor of Supply Representatives, .86; Order Processing, .82-.83; Order Cycle Time, .75-.84; Item Availability, .80; and Responsiveness, .86.
Machine learning for fab automated diagnostics
NASA Astrophysics Data System (ADS)
Giollo, Manuel; Lam, Auguste; Gkorou, Dimitra; Liu, Xing Lan; van Haren, Richard
2017-06-01
Process optimization depends largely on field engineers' knowledge and expertise. However, this practice is becoming less sustainable as fab complexity continuously increases to support the extreme miniaturization of integrated circuits. On the one hand, process optimization and root cause analysis of tools are necessary for smooth fab operation. On the other hand, the growth in the number of wafer processing steps adds a considerable new source of noise, which may have a significant impact at the nanometer scale. This paper explores the ability of historical process data and machine learning to support field engineers in production analysis and monitoring. We implement an automated workflow to analyze a large volume of information and build a predictive model of overlay variation. The proposed workflow addresses significant problems that are typical in fab production, such as missing measurements, small numbers of samples, confounding effects due to heterogeneity of the data, and subpopulation effects. We evaluate the proposed workflow on a real use case and show that it is able to predict overlay excursions observed in integrated circuit manufacturing. The chosen design focuses on linear and interpretable models of the wafer history, which highlight the process steps that are causing defective products. This is a fundamental feature for diagnostics, as it supports process engineers in the continuous improvement of the production line.
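A minimal sketch of the kind of linear, interpretable model described above (scikit-learn based; the feature layout and the choice of a cross-validated Lasso are illustrative assumptions, not the authors' workflow):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per wafer, one column per wafer-history feature (e.g. which tool
# or chamber processed the wafer at each step); y: measured overlay metric.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                    # placeholder wafer histories
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + rng.normal(scale=0.3, size=500)

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X, y)

# Non-zero coefficients point at the process steps most associated with
# overlay excursions, which is what makes the model useful for diagnostics.
coef = model.named_steps["lassocv"].coef_
suspect_steps = np.flatnonzero(np.abs(coef) > 1e-3)
print(suspect_steps)
```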
Nonlinear Acoustic Processes in a Solid Rocket Engine
1994-03-29
The analysis treats a low Mach number (M), weakly viscous internal flow sustained by mass injection. The formulation and results provide a conceptual framework for the study of solid rocket engine chamber flow dynamics and of injected cylinder flows, supplementing traditional approaches. Until recently, conceptual understanding of the flow turning process, by which the injected flow is turned towards the axial direction, has been based largely on the viscous properties of the flow.
NASA Astrophysics Data System (ADS)
Simoni, Daniele; Lengani, Davide; Ubaldi, Marina; Zunino, Pietro; Dellacasagrande, Matteo
2017-06-01
The effects of free-stream turbulence intensity (FSTI) on the transition process of a pressure-induced laminar separation bubble have been studied for different Reynolds numbers (Re) by means of time-resolved (TR) PIV. Measurements have been performed along a flat plate installed within a double-contoured test section, designed to produce an adverse pressure gradient typical of ultra-high-lift turbine blade profiles. A test matrix spanning 3 FSTI levels and 3 Reynolds numbers has been considered, allowing estimation of the cross effects of these parameters on the instability mechanisms driving the separated-flow transition process. Boundary layer integral parameters, spatial growth rate and saturation level of velocity fluctuations are discussed for the different cases in order to characterize the base flow response as well as the time-mean properties of the Kelvin-Helmholtz instability. The inspection of the instantaneous velocity vector maps highlights the dynamics of the large-scale structures shed near the bubble maximum displacement, as well as the low-frequency motion of the fore part of the separated shear layer. Proper Orthogonal Decomposition (POD) has been implemented to reduce the large amount of data for each condition, allowing a rapid evaluation of the group velocity, spatial wavelength and dominant frequency of the vortex shedding process. The dimensionless shedding wave number parameter makes it evident that the modification of the shear layer thickness at separation due to Reynolds number variation mainly drives the length scale of the rollup vortices, while higher FSTI levels force the onset of the shedding phenomenon to occur upstream due to the higher velocity fluctuations penetrating into the separating boundary layer.
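A minimal sketch of snapshot POD as it is commonly applied to TR-PIV data (numpy-based assumption, not the authors' code): the fluctuating velocity fields are stacked as columns of a matrix and a singular value decomposition yields spatial modes, modal energies and temporal coefficients, from which quantities such as the shedding frequency and wavelength can be read off.

```python
import numpy as np

def snapshot_pod(snapshots):
    """snapshots: (n_points, n_snapshots) matrix of velocity fields."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove the mean field
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)       # relative energy content of each mode
    return U, energy, Vt               # spatial modes, energies, temporal coefficients

# Example with synthetic data: 2000 grid points, 400 PIV snapshots.
modes, energy, coeffs = snapshot_pod(np.random.randn(2000, 400))
print(energy[:5])
```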
Characterizing and Assessing a Large-Scale Software Maintenance Organization
NASA Technical Reports Server (NTRS)
Briand, Lionel; Melo, Walcelio; Seaman, Carolyn; Basili, Victor
1995-01-01
One important component of a software process is the organizational context in which the process is enacted. This component is often missing or incomplete in current process modeling approaches. One technique for modeling this perspective is the Actor-Dependency (AD) Model. This paper reports on a case study which used this approach to analyze and assess a large software maintenance organization. Our goal was to identify the approach's strengths and weaknesses while providing practical recommendations for improvement and research directions. The AD model was found to be very useful in capturing the important properties of the organizational context of the maintenance process, and aided in the understanding of the flaws found in this process. However, a number of opportunities for extending and improving the AD model were identified. Among others, there is a need to incorporate quantitative information to complement the qualitative model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schell, Daniel J
The goal of this work is to use the large fermentation vessels in the National Renewable Energy Laboratory's (NREL) Integrated Biorefinery Research Facility (IBRF) to scale up Lygos' biologically based process for producing malonic acid and to generate performance data. Initially, work at the 1 L scale validated successful transfer of Lygos' fermentation protocols to NREL using a glucose substrate. Outside the scope of the CRADA with NREL, Lygos tested their process on lignocellulosic sugars produced by NREL at Lawrence Berkeley National Laboratory's (LBNL) Advanced Biofuels Process Development Unit (ABPDU). NREL produced these cellulosic sugar solutions from corn stover using a separate cellulose/hemicellulose process configuration. Finally, NREL performed fermentations using glucose in large fermentors (1,500- and 9,000-L vessels) to produce intermediate product and to demonstrate successful performance of Lygos' technology at larger scales.
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.
Flow and Acoustic Properties of Low Reynolds Number Underexpanded Supersonic Jets. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hu, Tieh-Feng
1981-01-01
Jet noise from underexpanded supersonic jets is studied, with emphasis on determining the role played by large-scale organized flow fluctuations in the flow and acoustic processes. The experimental conditions of the study were chosen as low Reynolds number (Re = 8,000) Mach 1.4 and 2.1, and moderate Reynolds number (Re = 68,000) Mach 1.6 underexpanded supersonic jets exhausting from convergent nozzles. At these chosen conditions, detailed experimental measurements were performed to improve the understanding of the flow and acoustic properties of underexpanded supersonic jets.
Placement-aware decomposition of a digital standard cells library for double patterning lithography
NASA Astrophysics Data System (ADS)
Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif
2012-11-01
To continue scaling circuit features down, Double Patterning (DP) technology is needed at the 22 nm technology node and below. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed in two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard cell block can result in native conflicts inside the cells (internal conflicts) or native conflicts on the boundary between two cells (boundary conflicts). Resolving native conflicts requires a redesign and/or multiple iterations of the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before the final placed cell block is obtained. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., decompositions that consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell significantly increases the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of combinations processed to obtain the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for the library processing. To generate and verify the proposed flow and methodologies, a prototype for a placement-aware DP-ready cell library is developed with an optimized number of cell views.
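For background, assigning features to two masks is equivalent to two-coloring a conflict graph whose nodes are layout features and whose edges connect features spaced closer than the minimum same-mask distance; a native conflict corresponds to an odd cycle in that graph. A minimal sketch of such a check (illustrative only, not the paper's flow):

```python
from collections import deque

def two_color(n_features, conflicts):
    """Assign features to two masks. conflicts is a list of (i, j) pairs of
    features too close to share a mask. Returns the mask assignment, or None
    when an odd cycle (a native conflict) makes decomposition impossible."""
    adj = [[] for _ in range(n_features)]
    for i, j in conflicts:
        adj[i].append(j)
        adj[j].append(i)
    mask = [None] * n_features
    for start in range(n_features):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None          # odd cycle: native conflict
    return mask

print(two_color(3, [(0, 1), (1, 2), (2, 0)]))   # triangle of conflicts -> None
```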
Examination of turbulent entrainment-mixing mechanisms using a combined approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, C.; Liu, Y.; Niu, S.
2011-10-01
Turbulent entrainment-mixing mechanisms are investigated by applying a combined approach to the aircraft measurements of three drizzling and two nondrizzling stratocumulus clouds collected over the U.S. Department of Energy's Atmospheric Radiation Measurement Southern Great Plains site during the March 2000 cloud Intensive Observation Period. Microphysical analysis shows that the inhomogeneous entrainment-mixing process occurs much more frequently than the homogeneous counterpart, and most cases of the inhomogeneous entrainment-mixing process are close to the extreme scenario, having drastically varying cloud droplet concentration but roughly constant volume-mean radius. It is also found that the inhomogeneous entrainment-mixing process can occur both near the cloud top and in the middle level of a cloud, and in both the nondrizzling clouds and nondrizzling legs in the drizzling clouds. A new dimensionless number, the scale number, is introduced as a dynamical measure for different entrainment-mixing processes, with a larger scale number corresponding to a higher degree of homogeneous entrainment mixing. Further empirical analysis shows that the scale number that separates the homogeneous from the inhomogeneous entrainment-mixing process is around 50, and most legs have smaller scale numbers. Thermodynamic analysis shows that sampling average of filament structures finer than the instrumental spatial resolution also contributes to the dominance of the inhomogeneous entrainment-mixing mechanism. The combined microphysical-dynamical-thermodynamic analysis sheds new light on developing parameterization of entrainment-mixing processes and their microphysical and radiative effects in large-scale models.
Rotor assembly and method for automatically processing liquids
Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.
1992-01-01
A rotor assembly for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water, includes a rotor body for rotation about an axis and including a network of chambers within which various processing steps are performed upon the sample and diluent and passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body by the influence of a magnetic field generated adjacent the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses.
The semantic richness of abstract concepts
Recchia, Gabriel; Jones, Michael N.
2012-01-01
We contrasted the predictive power of three measures of semantic richness—number of features (NFs), contextual dispersion (CD), and a novel measure of number of semantic neighbors (NSN)—for a large set of concrete and abstract concepts on lexical decision and naming tasks. NSN (but not NF) facilitated processing for abstract concepts, while NF (but not NSN) facilitated processing for the most concrete concepts, consistent with claims that linguistic information is more relevant for abstract concepts in early processing. Additionally, converging evidence from two datasets suggests that when NSN and CD are controlled for, the features that most facilitate processing are those associated with a concept's physical characteristics and real-world contexts. These results suggest that rich linguistic contexts (many semantic neighbors) facilitate early activation of abstract concepts, whereas concrete concepts benefit more from rich physical contexts (many associated objects and locations). PMID:23205008
ERIC Educational Resources Information Center
Pettersson, Rune
2014-01-01
Information design has practical and theoretical components. As an academic discipline we may view information design as a combined discipline, a practical theory, or as a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…
Spatial Distribution of Small Water Body Types across Indiana Ecoregions
Due to their large numbers and biogeochemical activity, small water bodies (SWB), such as ponds and wetlands, can have substantial cumulative effects on hydrologic, biogeochemical, and biological processes; yet the spatial distributions of various SWB types are often unknown. Usi...
Microcomputers in the Anesthesia Library.
ERIC Educational Resources Information Center
Wright, A. J.
The combination of computer technology and library operation is helping to alleviate such library problems as escalating costs, increasing collection size, deteriorating materials, unwieldy arrangement schemes, poor subject control, and the acquisition and processing of large numbers of rarely used documents. Small special libraries such as…
"Large"- vs Small-scale friction control in turbulent channel flow
NASA Astrophysics Data System (ADS)
Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp
2017-11-01
We reconsider the "large-scale" control scheme proposed by Hussain and co-workers (Phys. Fluids 10, 1049-1051, 1998, and Phys. Rev. Fluids 2, 62601, 2017), using new direct numerical simulations (DNS). The DNS are performed in a turbulent channel at friction Reynolds numbers Reτ of up to 550 in order to eliminate low-Reynolds-number effects. The purpose of the present contribution is to re-assess this control method in the light of more modern developments in the field, in particular also related to the discovery of (very) large-scale motions. The goals of the paper are as follows: First, we want to better characterise the physics of the control, and assess which external contributions (vortices, forcing, wall motion) are actually needed. Then, we investigate the optimal parameters and, finally, determine which aspects of this control technique actually scale in outer units and can therefore be of use in practical applications. In addition to discussing the mentioned drag-reduction effects, the present contribution will also address the potential effect of the naturally occurring large-scale motions on frictional drag, and give indications on the physical processes for potential drag reduction possible at all Reynolds numbers.
Processing Satellite Images on Tertiary Storage: A Study of the Impact of Tile Size on Performance
NASA Technical Reports Server (NTRS)
Yu, JieBing; DeWitt, David J.
1996-01-01
Before raw data from a satellite can be used by an Earth scientist, it must first undergo a number of processing steps including basic processing, cleansing, and geo-registration. Processing actually expands the volume of data collected by a factor of 2 or 3 and the original data is never deleted. Thus processing and storage requirements can exceed 2 terabytes/day. Once processed data is ready for analysis, a series of algorithms (typically developed by the Earth scientists) is applied to a large number of images in a data set. The focus of this paper is how best to handle such images stored on tape using the following assumptions: (1) all images of interest to a scientist are stored on a single tape, (2) images are accessed and processed in the order that they are stored on tape, and (3) the analysis requires access to only a portion of each image and not the entire image.
Testing and checkout experiences in the National Transonic Facility since becoming operational
NASA Technical Reports Server (NTRS)
Bruce, W. E., Jr.; Gloss, B. B.; Mckinney, L. W.
1988-01-01
The U.S. National Transonic Facility, constructed by NASA to meet the national needs for High Reynolds Number Testing, has been operational in a checkout and test mode since the operational readiness review (ORR) in late 1984. During this time, there have been problems centered around the effect of large temperature excursions on the mechanical movement of large components, the reliable performance of instrumentation systems, and an unexpected moisture problem with dry insulation. The more significant efforts since the ORR are reviewed and NTF status concerning hardware, instrumentation and process controls systems, operating constraints imposed by the cryogenic environment, and data quality and process controls is summarized.
Mitigating Large Fires in Drossel-Schwabl Forest Fire Models
NASA Astrophysics Data System (ADS)
Yoder, M.; Turcotte, D.; Rundle, J.; Morein, G.
2008-12-01
We employ variations of the traditional Drossel-Schwabl cellular automata Forest Fire Models (FFM) to study wildfire dynamics. The traditional FFM produces a very robust power law distribution of events, as a function of size, with frequency-size slope very close to -1. Observed data from Australia, the US and northern Mexico suggest that real wild fires closely follow power laws in frequency size with slopes ranging from close to -2 to -1.3 (B.D. Malamud et al. 2005). We suggest two models that, by fracturing and trimming large clusters, reduce the number of large fires while maintaining scale invariance. These fracturing and trimming processes can be justified in terms of real physical processes. For each model, we achieve slopes in the frequency-size relation ranging from approximately -1.77 to -1.06.
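For readers unfamiliar with the baseline model being modified, the sketch below is a minimal Drossel-Schwabl cellular automaton in Python; the lattice size and parameters are arbitrary, and the paper's variants would additionally fracture or trim large clusters before they burn, which this sketch does not attempt.

```python
import numpy as np
from collections import deque

def burn_cluster(grid, start):
    """Flood-fill the connected tree cluster (4-neighbours) and return its size."""
    q = deque([start])
    grid[start] = False
    size = 1
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1] and grid[ni, nj]:
                grid[ni, nj] = False
                size += 1
                q.append((ni, nj))
    return size

def drossel_schwabl(n=128, p=0.05, f=2e-4, steps=50_000, seed=0):
    """Minimal Drossel-Schwabl forest fire model: trees grow with
    probability p on empty sites; with probability f a lightning strike
    hits a random site and, if it is occupied, burns its whole cluster."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=bool)   # True = tree
    sizes = []
    for _ in range(steps):
        grid |= (~grid) & (rng.random((n, n)) < p)
        if rng.random() < f:
            i, j = rng.integers(0, n, size=2)
            if grid[i, j]:
                sizes.append(burn_cluster(grid, (i, j)))
    return sizes   # the histogram of sizes approximates the power-law tail
```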
A role for autophagic protein beclin 1 early in lymphocyte development.
Arsov, Ivica; Adebayo, Adeola; Kucerova-Levisohn, Martina; Haye, Joanna; MacNeil, Margaret; Papavasiliou, F Nina; Yue, Zhenyu; Ortiz, Benjamin D
2011-02-15
Autophagy is a highly regulated and evolutionarily conserved process of cellular self-digestion. Recent evidence suggests that this process plays an important role in regulating T cell homeostasis. In this study, we used Rag1(-/-) (recombination activating gene 1(-/-)) blastocyst complementation and in vitro embryonic stem cell differentiation to address the role of Beclin 1, one of the key autophagic proteins, in lymphocyte development. Beclin 1-deficient Rag1(-/-) chimeras displayed a dramatic reduction in thymic cellularity compared with control mice. Using embryonic stem cell differentiation in vitro, we found that the inability to maintain normal thymic cellularity is likely caused by impaired maintenance of thymocyte progenitors. Interestingly, despite drastically reduced thymocyte numbers, the peripheral T cell compartment of Beclin 1-deficient Rag1(-/-) chimeras is largely normal. Peripheral T cells displayed normal in vitro proliferation despite significantly reduced numbers of autophagosomes. In addition, these chimeras had greatly reduced numbers of early B cells in the bone marrow compared with controls. However, the peripheral B cell compartment was not dramatically impacted by Beclin 1 deficiency. Collectively, our results suggest that Beclin 1 is required for maintenance of undifferentiated/early lymphocyte progenitor populations. In contrast, Beclin 1 is largely dispensable for the initial generation and function of the peripheral T and B cell compartments. This indicates that normal lymphocyte development involves Beclin 1-dependent, early-stage and distinct, Beclin 1-independent, late-stage processes.
Electronic Structure Methods Based on Density Functional Theory
2010-01-01
0188 The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing...chapter in the ASM Handbook , Volume 22A: Fundamentals of Modeling for Metals Processing, 2010. PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul...are represented using a linear combination, or basis, of plane waves. Over time several methods were developed to avoid the large number of planewaves
A robust real-time abnormal region detection framework from capsule endoscopy images
NASA Astrophysics Data System (ADS)
Cheng, Yanfen; Liu, Xu; Li, Huiping
2009-02-01
In this paper we present a novel method to detect abnormal regions from capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' reviewing process expensive. The reviewing process involves identifying images containing abnormal regions (tumor, bleeding, etc.) in this large image sequence. In this paper we construct a novel framework for robust and real-time abnormal region detection from large numbers of capsule endoscopy images. The detected potential abnormal regions can be labeled automatically for physicians to review further, thereby shortening the overall reviewing process. The framework has the following advantages: 1) Trainable. Users can define and label any type of abnormal region they want to find; the abnormal regions, such as tumor, bleeding, etc., can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient. Due to the large number of images, detection speed is very important; our system can detect very efficiently at different scales due to the integral image features we use. 3) Robust. After feature selection we use a cascade of classifiers to further enforce the detection accuracy.
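The "integral image features" mentioned in advantage 2 rest on the summed-area table trick, which makes any rectangular region sum a constant-time operation regardless of scale. A minimal sketch (independent of the authors' actual feature set) is:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return np.cumsum(np.cumsum(np.asarray(img, float), axis=0), axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] (half-open) in O(1) using the
    integral image; this is what makes region features cheap to evaluate
    at any scale."""
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3), img[1:3, 1:3].sum())   # both 28
```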
Efficient development and processing of thermal math models of very large space truss structures
NASA Technical Reports Server (NTRS)
Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.
1993-01-01
As the spacecraft moves along its orbit, the truss members are subjected to direct and reflected solar, albedo, and planetary infra-red (IR) heating rates, as well as IR heating and shadowing from other spacecraft components. This is a transient process with continuously changing heating loads and shadowing effects. The resulting nonuniform temperature distribution may cause nonuniform thermal expansion, deflection and stress in the truss elements, truss warping and thermal distortions. There are three challenges in the thermal-structural analysis of large truss structures. The first is the development of the thermal and structural math models, the second is model processing, and the third is the data transfer between the models. All three tasks require considerable time and computer resources because of the very large number of components involved. To address these challenges, a series of techniques for automated thermal math modeling and efficient processing of very large space truss structures was developed. In the process, the finite element and finite difference methods are interfaced. A very substantial reduction in the quantity of computations was achieved while assuring the desired accuracy of the results. The techniques are illustrated on the thermal analysis of a segment of the Space Station main truss.
Three dimensional hair model by means particles using Blender
NASA Astrophysics Data System (ADS)
Alvarez-Cedillo, Jesús Antonio; Almanza-Nieto, Roberto; Herrera-Lozada, Juan Carlos
2010-09-01
The simulation and modeling of human hair is a process of very large computational complexity, due to the large number of factors that must be calculated to give a realistic appearance. Generally, the method used in the film industry to simulate hair is based on particle handling graphics. In this paper we present a simple approximation of how to model human hair using particles in Blender.
A scalable moment-closure approximation for large-scale biochemical reaction networks
Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan
2017-01-01
Abstract Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
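To make the idea of moment equations concrete, here is a minimal, hedged example for a single-species birth-death process; for such a linear network the mean and variance equations close exactly, and it is only for nonlinear propensities that closure schemes such as the sMA described above become necessary. Rate constants are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Birth-death process: 0 -> X with rate k1, X -> 0 with rate k2*x.
k1, k2 = 10.0, 0.5

def moment_odes(t, m):
    """ODEs for the mean and variance of the copy number."""
    mean, var = m
    dmean = k1 - k2 * mean
    dvar = k1 + k2 * mean - 2.0 * k2 * var
    return [dmean, dvar]

sol = solve_ivp(moment_odes, (0.0, 20.0), [0.0, 0.0], dense_output=True)
print(sol.y[:, -1])   # both approach k1/k2 = 20 (Poisson stationary state)
```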
Cost effective technologies and renewable substrates for biosurfactants’ production
Banat, Ibrahim M.; Satpute, Surekha K.; Cameotra, Swaranjit S.; Patil, Rajendra; Nyayanit, Narendra V.
2014-01-01
Diverse types of microbial surface-active amphiphilic molecules are produced by a range of microbial communities. The extraordinary properties of biosurfactants/bioemulsifiers (BS/BE) as surface-active products allow them to play key roles in various fields of application such as bioremediation, biodegradation, enhanced oil recovery, pharmaceutics, and food processing, among many others. This leads to a vast number of potential applications of these BS/BE in different industrial sectors. Despite the huge number of reports and patents describing BS and BE applications and advantages, commercialization of these compounds remains difficult, costly, and to a large extent irregular. This is mainly due to the use of chemically synthesized media for growing the producing microorganisms and, in turn, for obtaining products of the preferred quality. It is important to note that although a number of developments have taken place in the field of BS industries, large-scale production remains economically challenging for many types of these products. This is mainly due to the huge monetary difference between the investment and the achievable productivity from the commercial point of view. This review discusses low-cost, renewable raw substrates and fermentation technology in BS/BE production processes and their role in reducing the production cost. PMID:25566213
Mental Imagery, Impact, and Affect: A Mediation Model for Charitable Giving
Dickert, Stephan; Kleber, Janet; Västfjäll, Daniel; Slovic, Paul
2016-01-01
One of the puzzling phenomena in philanthropy is that people can show strong compassion for identified individual victims but remain unmoved by catastrophes that affect large numbers of victims. Two prominent findings in research on charitable giving reflect this idiosyncrasy: The (1) identified victim and (2) victim number effects. The first of these suggests that identifying victims increases donations and the second refers to the finding that people’s willingness to donate often decreases as the number of victims increases. While these effects have been documented in the literature, their underlying psychological processes need further study. We propose a model in which identified victim and victim number effects operate through different cognitive and affective mechanisms. In two experiments we present empirical evidence for such a model and show that different affective motivations (donor-focused vs. victim-focused feelings) are related to the cognitive processes of impact judgments and mental imagery. Moreover, we argue that different mediation pathways exist for identifiability and victim number effects. PMID:26859848
Mental Imagery, Impact, and Affect: A Mediation Model for Charitable Giving.
Dickert, Stephan; Kleber, Janet; Västfjäll, Daniel; Slovic, Paul
2016-01-01
One of the puzzling phenomena in philanthropy is that people can show strong compassion for identified individual victims but remain unmoved by catastrophes that affect large numbers of victims. Two prominent findings in research on charitable giving reflect this idiosyncrasy: The (1) identified victim and (2) victim number effects. The first of these suggests that identifying victims increases donations and the second refers to the finding that people's willingness to donate often decreases as the number of victims increases. While these effects have been documented in the literature, their underlying psychological processes need further study. We propose a model in which identified victim and victim number effects operate through different cognitive and affective mechanisms. In two experiments we present empirical evidence for such a model and show that different affective motivations (donor-focused vs. victim-focused feelings) are related to the cognitive processes of impact judgments and mental imagery. Moreover, we argue that different mediation pathways exist for identifiability and victim number effects.
Design and Development of a Prototype Organizational Effectiveness Information System
1984-11-01
information from a large number of people. The existing survey support process for the GOQ is not satisfactory. Most OESOs elect not to use it, because... reporting process uses screen queries and menus to simplify data entry, it is estimated that only 4-6 hours of data entry time would be required for... description for the file named EVEDIR. The Resource System allows users of the Event Directory to select from the following processing options: Add a new
Rate laws of the self-induced aggregation kinetics of Brownian particles
NASA Astrophysics Data System (ADS)
Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra
2016-03-01
In this paper we have studied the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise. There is then an interplay between the two drift terms, which may qualitatively account for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, then the variation of cluster number with time is fitted well by a mono-exponentially decaying function of time. For the additive-noise-driven case, the decrease of cluster number can be described by a power law. But in the case of the process driven by multiplicative colored noise, the cluster number decays multi-exponentially. However, we have explored how the rate constant (in the case of mono-exponential decay of the cluster number) depends on the strength of the interference of the noises and on their intensity. We have also explored how the structure factor at long times depends on the strength of the cross correlation (CC) between the additive and the multiplicative noises.
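Given a measured cluster-number time series, a simple way to discriminate between the decay laws reported above is to fit each candidate form and compare residuals; the functional forms and starting guesses below are illustrative, not the paper's fitted models.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate rate laws for the cluster number n(t).
laws = {
    'gaussian':    lambda t, n0, tau: n0 * np.exp(-(t / tau) ** 2),
    'exponential': lambda t, n0, tau: n0 * np.exp(-t / tau),
    'power':       lambda t, n0, tau, a: n0 * (1.0 + t / tau) ** (-a),
}

def best_law(t, n):
    """Fit each candidate law to a measured cluster-number series and
    return the name of the one with the smallest squared residuals."""
    t, n = np.asarray(t, float), np.asarray(n, float)
    scores = {}
    for name, f in laws.items():
        p0 = [n[0], t[-1] / 2] + ([1.0] if name == 'power' else [])
        try:
            popt, _ = curve_fit(f, t, n, p0=p0, maxfev=10_000)
            scores[name] = float(np.sum((n - f(t, *popt)) ** 2))
        except RuntimeError:
            scores[name] = np.inf
    return min(scores, key=scores.get)
```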
Zhao, Shanrong; Prenger, Kurt; Smith, Lance
2013-01-01
RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets. PMID:25937948
Zhao, Shanrong; Prenger, Kurt; Smith, Lance
2013-01-01
RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets.
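A back-of-the-envelope check of the figures quoted above (roughly $3.50 and 6-8 hours per 100-million-read sample, 178 samples); the ideal parallel-scaling line is an assumption about cloud elasticity, not a measured result from the paper.

```python
# Cost and wall-clock estimate from the quoted per-sample figures.
samples = 178
cost_per_sample = 3.50          # USD
hours_per_sample = (6 + 8) / 2  # midpoint of the quoted range

total_cost = samples * cost_per_sample               # ~$623
serial_days = samples * hours_per_sample / 24        # ~52 days if run serially

# With k samples processed in parallel, wall clock divides by k while
# total cost stays roughly flat (assuming ideal elasticity).
for k in (1, 10, 50, 178):
    print(f'{k:4d} workers: {serial_days / k:5.1f} days, ${total_cost:.0f}')
```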
Recommendations for the design and the installation of large laser scanning microscopy systems
NASA Astrophysics Data System (ADS)
Helm, P. Johannes
2012-03-01
Laser Scanning Microscopy (LSM) has, since the inventions of the Confocal Scanning Laser Microscope (CLSM) and the Multi-Photon Laser Scanning Microscope (MPLSM), developed into an essential tool in contemporary life science and materials science. The market provides an increasing number of turn-key and hands-off commercial LSM systems, unproblematic to purchase, set up and integrate even into minor research groups. However, the successful definition, financing, acquisition, installation and effective use of one or more large laser scanning microscopy systems, possibly of core-facility character, often requires major efforts by senior staff members of large academic or industrial units. Here, a set of recommendations is presented that are helpful during the process of establishing large systems for confocal or non-linear laser scanning microscopy as an effective operational resource in the scientific or industrial production process. Besides describing technical difficulties and possible pitfalls, the article also illuminates some seemingly "less scientific" processes, i.e. the definition of specific laboratory demands, advertisement of the intention to purchase one or more large systems, evaluation of quotations, establishment of contracts, and preparation of the local environment and laboratory infrastructure.
Relating xylem cavitation to gas exchange in cotton
USDA-ARS?s Scientific Manuscript database
Acoustic emissions (AEs) from xylem cavitation events are characteristic of transpiration processes. Though a body of work using AE exists with a large number of species, cotton and other agronomically important crops have either not been investigated, or limited information exists. The objective of...
Relating xylem cavitation to transpiration in cotton
USDA-ARS?s Scientific Manuscript database
Acoustic emmisions (AEs) from xylem cavitation events are characteristic of transpiration processes. Even though a body of work employing AE exists with a large number of species, cotton and other agronomically important crops have either not been investigated, or limited information exists. A few s...
Conservation--In the People's Hands.
ERIC Educational Resources Information Center
Foster, A.B.; And Others
Because of increasing population and industrialization and decreasing supplies of natural resources, the need for sound conservation practices has become paramount, particularly within the last decade. In addition, the process of urbanization has limited the contacts large numbers of children have with the outdoors. Education must assume a…
Plasma surface figuring of large optical components
NASA Astrophysics Data System (ADS)
Jourdain, R.; Castelli, M.; Morantz, P.; Shore, P.
2012-04-01
Fast figuring of large optical components is well known as a highly challenging manufacturing issue. Different manufacturing technologies, including magnetorheological finishing, loose abrasive polishing, and ion beam figuring, are presently employed. Yet these technologies are slow and lead to expensive optics. This explains why plasma-based processes operating at atmospheric pressure have been researched as a cost-effective means for figure correction of metre-scale optical surfaces. In this paper, fast figure correction of a large optical surface is reported using the Reactive Atom Plasma (RAP) process. Achievements are shown following the scaling-up of the RAP figuring process to a 400 mm diameter area of a substrate made of Corning ULE®. The pre-processing spherical surface is characterized by a 3 metre radius of curvature, 2.3 μm PVr (373 nm RMS), and 1.2 nm Sq roughness. The nanometre-scale figure correction system used for this work, named HELIOS 1200, is equipped with a unique plasma torch driven by a dedicated tool-path algorithm. Topography map measurements were carried out using a vertical workstation instrumented with a Zygo DynaFiz interferometer. Figuring results, together with the processing times, convergence levels and number of iterations, are reported. The results illustrate the significant potential and advantage of plasma processing for figure correction of large silicon-based optical components.
ChIP-seq reveals broad roles of SARD1 and CBP60g in regulating plant immunity.
Sun, Tongjun; Zhang, Yaxi; Li, Yan; Zhang, Qian; Ding, Yuli; Zhang, Yuelin
2015-12-18
Recognition of pathogens by host plants leads to rapid transcriptional reprogramming and activation of defence responses. The expression of many defence regulators is induced in this process, but the mechanisms of how they are controlled transcriptionally are largely unknown. Here we use chromatin immunoprecipitation sequencing to show that the transcription factors SARD1 and CBP60g bind to the promoter regions of a large number of genes encoding key regulators of plant immunity. Among them are positive regulators of systemic immunity and signalling components for effector-triggered immunity and PAMP-triggered immunity, which is consistent with the critical roles of SARD1 and CBP60g in these processes. In addition, SARD1 and CBP60g target a number of genes encoding negative regulators of plant immunity, suggesting that they are also involved in negative feedback regulation of defence responses. Based on these findings we propose that SARD1 and CBP60g function as master regulators of plant immune responses.
Statistical and clustering analysis for disturbances: A case study of voltage dips in wind farms
Garcia-Sanchez, Tania; Gomez-Lazaro, Emilio; Muljadi, Eduard; ...
2016-01-28
This study proposes and evaluates an alternative statistical methodology to analyze a large number of voltage dips. For a given voltage dip, a set of lengths is first identified to characterize the root mean square (rms) voltage evolution along the disturbance, deduced from partial linearized time intervals and trajectories. Principal component analysis and K-means clustering processes are then applied to identify rms-voltage patterns and propose a reduced number of representative rms-voltage profiles from the linearized trajectories. This reduced group of averaged rms-voltage profiles enables the representation of a large amount of disturbances, offering a visual and graphical representation of their evolution along the events, aspects that were not previously considered in other contributions. The complete process is evaluated on real voltage dips collected in intense field-measurement campaigns carried out in a wind farm in Spain over several years. The results are included in this paper.
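A skeletal version of the statistical pipeline described above, assuming the dips have already been linearized and resampled onto a common length; the component and cluster counts are placeholders rather than the study's chosen values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def representative_profiles(rms_profiles, n_components=3, n_clusters=5, seed=0):
    """Project linearized rms-voltage trajectories (one row per dip) onto
    a few principal components, cluster them with K-means, and return one
    averaged profile per cluster together with the cluster labels."""
    X = np.asarray(rms_profiles, float)
    scores = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(scores)
    centres = np.array([X[labels == k].mean(axis=0) for k in range(n_clusters)])
    return centres, labels
```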
Statistical mechanics of complex economies
NASA Astrophysics Data System (ADS)
Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo
2017-04-01
In the pursuit of ever increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but also imply that well-developed economies can collapse if too many intermediate goods are introduced.
Proteomic Analysis of the Mediator Complex Interactome in Saccharomyces cerevisiae.
Uthe, Henriette; Vanselow, Jens T; Schlosser, Andreas
2017-02-27
Here we present the most comprehensive analysis of the yeast Mediator complex interactome to date. Particularly gentle cell lysis and co-immunopurification conditions allowed us to preserve even transient protein-protein interactions and to comprehensively probe the molecular environment of the Mediator complex in the cell. Metabolic 15N-labeling thereby enabled stringent discrimination between bona fide interaction partners and nonspecifically captured proteins. Our data indicate a functional role for Mediator beyond transcription initiation. We identified a large number of Mediator-interacting proteins and protein complexes, such as RNA polymerase II, general transcription factors, a large number of transcriptional activators, the SAGA complex, chromatin remodeling complexes, histone chaperones, highly acetylated histones, as well as proteins playing a role in co-transcriptional processes, such as splicing, mRNA decapping and mRNA decay. Moreover, our data provide clear evidence that the Mediator complex interacts not only with RNA polymerase II, but also with RNA polymerases I and III, and indicate a functional role of the Mediator complex in rRNA processing and ribosome biogenesis.
TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis
Frazier, Zachary; Xu, Min; Alber, Frank
2017-01-01
Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in a close-to-native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample, due to variations in the structure and composition of a complex in its in situ form or because the particles are a mixture of different complexes. In this case subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open-source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study compares the conventional coding and sorting algorithm most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
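A toy illustration of ranking by percentage of matches, the core idea of the optimized algorithm described above; the record representation (a dict of tooth number to a single-character code) and the scoring rule are simplifications for illustration, not the published coding scheme.

```python
def rank_candidates(postmortem, antemortem_db):
    """Rank antemortem records by the fraction of mutually charted teeth
    whose simplified codes match the postmortem record. Records are dicts
    mapping tooth number -> code."""
    def pct_match(am):
        shared = [t for t in postmortem if t in am]
        if not shared:
            return 0.0
        return sum(postmortem[t] == am[t] for t in shared) / len(shared)
    return sorted(antemortem_db.items(), key=lambda kv: pct_match(kv[1]), reverse=True)

# Example with three hypothetical antemortem candidates
pm = {8: 'V', 9: 'V', 14: 'F', 19: 'M', 30: 'C'}
am_db = {'AM-001': {8: 'V', 9: 'V', 14: 'F', 19: 'M', 30: 'M'},
         'AM-002': {8: 'U', 9: 'V', 14: 'C', 19: 'M', 30: 'C'},
         'AM-003': {8: 'V', 14: 'F', 30: 'C'}}
print([name for name, _ in rank_candidates(pm, am_db)])
```

Note that a pure percentage score favours sparse records (AM-003 ranks first here despite fewer charted teeth), which is the kind of behaviour a production sorting algorithm has to weigh against record completeness.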
ERIC Educational Resources Information Center
Bailey, Anthony
2013-01-01
The nominal group technique (NGT) is a structured process to gather information from a group. The technique was first described in 1975 and has since become a widely-used standard to facilitate working groups. The NGT is effective for generating large numbers of creative new ideas and for group priority setting. This paper describes the process of…
Survey of selective solar absorbers and their limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mattox, D.M.; Sowell, R.R.
1980-01-01
A number of selective absorber coating systems with high solar absorptance exist which may be used in the mid-temperature range. Some of the systems are more chemically and thermally stable than others. Unfortunately, there are large gaps in the stability data for a large number of the systems. In an inert environment, the principal degradation mechanisms are interdiffusion between the layers or phases and changes in surface morphology. These degradation mechanisms would be minimized by using refractory metals and compounds for the absorbing layer and using refractory materials or diffusion barriers for the underlayer. For use in a reactive environment, the choice of materials is much more restrictive, since internal chemical reactions can change phase compositions and interfacial reactions can lead to loss of adhesion. For a coating process to be useful, it is necessary to determine what parameters influence the performance of the coating and the limits to these parameters. This process sensitivity has a direct influence on the production process controls necessary to produce a good product. Experience with electroplated black chrome has been rather disappointing. Electroplating should be a low-cost deposition process, but the extensive bath analysis and optical monitoring necessary to produce a thermally stable product for use to 320 °C have increased cost significantly. 49 references.
Interferometry-based free space communication and information processing
NASA Astrophysics Data System (ADS)
Arain, Muzammil Arshad
This dissertation studies, analyzes, and experimentally demonstrates the innovative use of interference phenomena in the field of opto-electronic information processing and optical communications. A number of optical systems using interferometric techniques, in both the optical and the electronic domains, have been demonstrated in the fields of signal transmission and processing, optical metrology, defense, and physical sensors. Specifically, it has been shown that the interference of waves in the form of holography can be exploited to realize a novel optical scanner called the Code Multiplexed Optical Scanner (C-MOS). The C-MOS features a large aperture, wide scan angles, 3-D beam control, no moving parts, and high beam scanning resolution. A C-MOS-based free-space optical transceiver for bi-directional communication has also been experimentally demonstrated. For high-speed, large-bandwidth, and high-frequency operation, an optically implemented reconfigurable RF transversal filter design is presented that implements a wide range of filtering algorithms. A number of techniques using heterodyne interferometry via acousto-optic devices for optical path length measurements are described. Finally, a whole new class of interferometric sensors for optical metrology and sensing applications is presented. A non-traditional interferometric output signal processing scheme has been developed. Applications include, for example, temperature sensors for harsh environments over a wide temperature range from room temperature to 1000°C.
NASA Astrophysics Data System (ADS)
Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon
2016-01-01
Multiple data writing-based multi-level non-volatile memory has gained strong attention for next-generation memory devices to quickly accommodate an extremely large number of data bits because it is capable of storing multiple data bits in a single memory cell at once. However, all previously reported devices have failed to store a large number of data bits due to the macroscale cell size and have not allowed fast access to the stored data due to slow single data writing. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure, successfully operating as an individual cell. In one cell, each floor has its own piezoresponse and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means simultaneously written data bits in both floors can be identified. This could achieve multi-level memory through a multiple data writing process.
Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J
2017-11-01
Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. Development of these computational approaches is likely to be required for further progress in understanding and treating AD. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Square Kilometre Array Science Data Processing
NASA Astrophysics Data System (ADS)
Nikolic, Bojan; SDP Consortium, SKA
2014-04-01
The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of this greater reliance and demand on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further derived data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of SDP are: - Identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput - Managing efficiently the high internal data flow rates - A conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases - System management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system In my talk I will present possible initial architectures for the SDP system that attempt to address these and other challenges.
Forecasting distribution of numbers of large fires
Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.
2014-01-01
Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
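The second forecast product described above can be illustrated with a binomial survival calculation; the ignition count, the per-ignition probability, and the independence assumption are placeholders for illustration, not the paper's fitted statistical model.

```python
from scipy.stats import binom

def prob_at_least(n_ignitions, p_large, k_values=(1, 2, 3, 4)):
    """If a Predictive Services Area expects n_ignitions one-acre ignitions
    in the coming week and each grows to 100+ acres with probability
    p_large (e.g. estimated from the Fire Potential Index), the chance of
    at least k large fires follows from the binomial survival function."""
    return {k: float(binom.sf(k - 1, n_ignitions, p_large)) for k in k_values}

print(prob_at_least(n_ignitions=40, p_large=0.03))
```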
Relationships between number and space processing in adults with and without dyscalculia.
Mussolin, Christophe; Martin, Romain; Schiltz, Christine
2011-09-01
A large body of evidence indicates clear relationships between number and space processing in healthy and brain-damaged adults, as well as in children. The present paper addressed this issue regarding atypical math development. Adults with a diagnosis of dyscalculia (DYS) during childhood were compared to adults with average or high abilities in mathematics across two bisection tasks. Participants were presented with Arabic number triplets and had to judge either the number magnitude or the spatial location of the middle number relative to the two outer numbers. For the numerical judgment, adults with DYS were slower than both groups of control peers. They were also more strongly affected by the factors related to number magnitude such as the range of the triplets or the distance between the middle number and the real arithmetical mean. By contrast, adults with DYS were as accurate and fast as adults who never experienced math disability when they had to make a spatial judgment. Moreover, number-space congruency affected performance similarly in the three experimental groups. These findings support the hypothesis of a deficit of number magnitude representation in DYS with a relative preservation of some spatial mechanisms in DYS. Results are discussed in terms of direct and indirect number-space interactions. Copyright © 2011 Elsevier B.V. All rights reserved.
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.
Semantic orchestration of image processing services for environmental analysis
NASA Astrophysics Data System (ADS)
Ranisavljević, Élisabeth; Devin, Florent; Laffly, Dominique; Le Nir, Yannick
2013-09-01
In order to analyze environmental dynamics, a major task is the classification of the different phenomena at the site (e.g. ice and snow for a glacier). When using in situ pictures, this classification requires data pre-processing. Not all the pictures need the same sequence of processes; it depends on the disturbances present. Until now, these sequences have been assembled manually, which restricts the processing of large amounts of data. In this paper, we present how to realize a semantic orchestration to automate the sequencing for the analysis. It combines two advantages: solving the problem of the amount of processing, and diversifying the possibilities in the data processing. We define a BPEL description to express the sequences. This BPEL uses web services to run the data processing. Each web service is semantically annotated using an ontology of image processing. The dynamic modification of the BPEL is done using SPARQL queries on these annotated web services. The results obtained by a prototype implementing this method validate the construction of the different workflows, which can be applied to a large number of pictures.
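To make the "SPARQL queries on annotated web services" step concrete, the following Python/rdflib sketch builds a tiny, hypothetical service ontology and selects the services that correct a picture's disturbances; the namespace, property names, and services are invented for illustration and are not the authors' ontology.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

# Hypothetical mini-ontology: each web service is annotated with the
# disturbance it corrects.
EX = Namespace('http://example.org/imgproc#')
g = Graph()
for name, fixes in [('deblur_service', 'blur'),
                    ('defog_service', 'fog'),
                    ('snow_mask_service', 'snow_on_lens')]:
    service = EX[name]
    g.add((service, RDF.type, EX.WebService))
    g.add((service, EX.corrects, Literal(fixes)))

# An orchestrator could run a query like this for one picture's list of
# disturbances and splice the matching services into the BPEL sequence.
query = """
PREFIX ex: <http://example.org/imgproc#>
SELECT ?service WHERE {
  ?service a ex:WebService ;
           ex:corrects ?d .
  FILTER (?d IN ("fog", "blur"))
}
"""
for row in g.query(query):
    print(row.service)
```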
The Automatic Recognition of the Abnormal Sky-subtraction Spectra Based on Hadoop
NASA Astrophysics Data System (ADS)
An, An; Pan, Jingchang
2017-10-01
Skylines superimpose on the target spectrum as a major source of noise. If the spectrum still contains a large number of high-strength skylight residuals after sky-subtraction processing, it will not be conducive to the follow-up analysis of the target spectrum. At the same time, LAMOST observes a large quantity of spectroscopic data every night, so an efficient platform is needed to recognize the large numbers of abnormal sky-subtraction spectra quickly. Hadoop, as a distributed parallel data computing platform, can deal with large amounts of data effectively. In this paper, we first conduct continuum normalization and then present a simple and effective method to automatically recognize abnormal sky-subtraction spectra on the Hadoop platform. The experiments show that the Hadoop platform can implement the recognition with greater speed and efficiency, and that the simple method can recognize abnormal sky-subtraction spectra and find abnormal skyline positions of different residual strengths effectively; it can be applied to the automatic detection of abnormal sky subtraction in large numbers of spectra.
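A single-spectrum sketch of the two steps named above (continuum normalization, then flagging strong residuals at known skyline wavelengths); in a Hadoop deployment each map task would apply something like this to one spectrum. The polynomial continuum model and the thresholds are assumptions for illustration, not the authors' method.

```python
import numpy as np

def normalize_continuum(wave, flux, order=3):
    """Crude continuum normalization: divide by a low-order polynomial
    fit to the spectrum (a stand-in for the pipeline's own method)."""
    wave, flux = np.asarray(wave, float), np.asarray(flux, float)
    coeffs = np.polyfit(wave, flux, order)
    return flux / np.polyval(coeffs, wave)

def abnormal_sky_residuals(wave, norm_flux, skyline_waves, window=3.0, k=5.0):
    """Flag known skyline positions where the normalized flux deviates
    from 1 by more than k robust sigma; a spectrum with many flagged
    positions would be reported as an abnormal sky subtraction."""
    wave, norm_flux = np.asarray(wave, float), np.asarray(norm_flux, float)
    sigma = 1.4826 * np.median(np.abs(norm_flux - np.median(norm_flux)))
    flagged = []
    for w0 in skyline_waves:
        sel = np.abs(wave - w0) < window
        if sel.any() and np.max(np.abs(norm_flux[sel] - 1.0)) > k * sigma:
            flagged.append(w0)
    return flagged
```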
Xu, Jun; Bu, Fan-Xing; Guo, Yi-Fei; Zhang, Wei; Hu, Ming; Jiang, Ji-Sen
2018-05-01
Radioactive cesium pollution has received considerable attention due to the increasing risks associated with the development of nuclear power plants around the world. Although various functional porous materials are utilized to adsorb Cs+ ions in water, Prussian blue analogues (PBAs) are an impressive class of candidates because of their superior affinity for Cs+ ions. The adsorption ability of PBAs strongly relates to their mesostructure and interstitial sites. To design a hollow PBA with a large number of interstitial sites, traditional hollowing methods are not suitable owing to the difficulty of processing the specific PBAs that have a large number of interstitial sites. In this work, we employed a rational strategy: a "metal oxide"@"PBA" core-shell structure was first formed via coordination replication, then a mild etching removed the metal oxide core, finally yielding a hollow PBA. The obtained hollow PBAs were of high crystallinity and possessed a large number of interstitial sites, showing a superior adsorption performance for Cs+ ions (221.6 mg/g) within a short period (10 min).
Magnitude knowledge: the common core of numerical development.
Siegler, Robert S
2016-05-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development. © 2016 John Wiley & Sons Ltd.
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
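For readers unfamiliar with the operator, the naive pandas sketch below enumerates the 2^d views of a data cube over d dimension attributes; the benchmark's parallel algorithm instead derives each view from its smallest already-computed parent, which this sketch does not attempt. All names and the toy data are illustrative.

```python
from itertools import combinations
import pandas as pd

def all_views(df, dims, measure):
    """One aggregate per subset of the dimension attributes: 2**len(dims)
    views in total, including the apex (grand total)."""
    views = {}
    for r in range(len(dims) + 1):
        for subset in combinations(dims, r):
            if subset:
                views[subset] = df.groupby(list(subset))[measure].sum()
            else:
                views[subset] = df[measure].sum()   # the apex view
    return views

df = pd.DataFrame({'a': [1, 1, 2], 'b': [0, 1, 1], 'c': [2, 2, 2], 'x': [5, 7, 9]})
views = all_views(df, dims=['a', 'b', 'c'], measure='x')
print(len(views))   # 2**3 = 8 views
```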
Chini, G P; Montemuro, B; White, C M; Klewicki, J
2017-03-13
Field observations and laboratory experiments suggest that at high Reynolds numbers Re the outer region of turbulent boundary layers self-organizes into quasi-uniform momentum zones (UMZs) separated by internal shear layers termed 'vortical fissures' (VFs). Motivated by this emergent structure, a conceptual model is proposed with dynamical components that collectively have the potential to generate a self-sustaining interaction between a single VF and adjacent UMZs. A large-Re asymptotic analysis of the governing incompressible Navier-Stokes equation is performed to derive reduced equation sets for the streamwise-averaged and streamwise-fluctuating flow within the VF and UMZs. The simplified equations reveal the dominant physics within, and isolate possible coupling mechanisms among, these different regions of the flow. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
Reducing hospital admissions of healthy children with functional constipation: a quality initiative
Deneau, Mark; Mutyala, Ramakrishna; Sandweiss, David; Harnsberger, Janet; Varier, Raghu; Pohl, John F; Allen, Lauren; Thackeray, Callie; Zobell, Sarah; Maloney, Christopher
2017-01-01
Functional constipation (FC) is a common medical problem in children, with minimal risk of long-term complications. We determined that a large number of children were being admitted to our children’s hospital for FC in which there was no neurological or anatomical cause. Our hospital experienced a patient complication in which a patient died after inpatient treatment of FC. Subsequently, we developed a standardised approach to determine when paediatric patients needed hospitalisation for FC, as well as to develop a regimented outpatient therapeutic approach for such children to prevent hospitalisation. Our quality improvement initiative resulted in a large decrease in the number of children with FC admitted into the hospital as well as a decrease in the number of children needing faecal disimpaction in the operating room. Our quality improvement process can be used to decrease hospitalisations, decrease healthcare costs and improve patient care for paediatric FC. PMID:29450284
Constraining the astrophysical origin of the p-nuclei through nuclear physics and meteoritic data.
Rauscher, T; Dauphas, N; Dillmann, I; Fröhlich, C; Fülöp, Zs; Gyürky, Gy
2013-06-01
A small number of naturally occurring, proton-rich nuclides (the p-nuclei) cannot be made in the s- and r-processes. Their origin is not well understood. Massive stars can produce p-nuclei through photodisintegration of pre-existing intermediate and heavy nuclei. This so-called γ-process requires high stellar plasma temperatures and occurs mainly in explosive O/Ne burning during a core-collapse supernova. Although the γ-process in massive stars has been successful in producing a large range of p-nuclei, significant deficiencies remain. An increasing number of processes and sites has been studied in recent years in search of viable alternatives replacing or supplementing the massive star models. A large number of unstable nuclei, however, with only theoretically predicted reaction rates are included in the reaction network and thus the nuclear input may also bear considerable uncertainties. The current status of astrophysical models, nuclear input and observational constraints is reviewed. After an overview of currently discussed models, the focus is on the possibility to better constrain those models through different means. Meteoritic data not only provide the actual isotopic abundances of the p-nuclei but can also put constraints on the possible contribution of proton-rich nucleosynthesis. The main part of the review focuses on the nuclear uncertainties involved in the determination of the astrophysical reaction rates required for the extended reaction networks used in nucleosynthesis studies. Experimental approaches are discussed together with their necessary connection to theory, which is especially pronounced for reactions with intermediate and heavy nuclei in explosive nuclear burning, even close to stability.
Sub-grid-scale description of turbulent magnetic reconnection in magnetohydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widmer, F., E-mail: widmer@mps.mpg.de; Institut für Astrophysik, Georg-August-Universität, Friedrich-Hund-Platz 1, 37077 Göttingen; Büchner, J.
Magnetic reconnection requires, at least locally, a non-ideal plasma response. In collisionless space and astrophysical plasmas, turbulence could transport energy from large to small scales where binary particle collisions are rare. We have investigated the influence of small scale magnetohydrodynamics (MHD) turbulence on the reconnection rate in the framework of a compressible MHD approach including sub-grid-scale (SGS) turbulence. For this purpose, we considered Harris-type and force-free current sheets with finite guide magnetic fields directed out of the reconnection plane. The goal is to find out whether MHD turbulence unresolved by conventional simulations can enhance the reconnection process in high-Reynolds-number astrophysical plasmas. Together with the MHD equations, we solve evolution equations for the SGS energy and cross-helicity due to turbulence according to a Reynolds-averaged turbulence model. The SGS turbulence is self-generated and -sustained through the inhomogeneities of the mean fields. In this way, the feedback of the unresolved turbulence into the MHD reconnection process is taken into account. It is shown that the turbulence controls the regimes of reconnection by its characteristic timescale τ_t. The dependence on resistivity was investigated for large-Reynolds-number plasmas for Harris-type as well as force-free current sheets with guide field. We found that magnetic reconnection depends on the relation between the molecular and the apparent effective turbulent resistivity. We found that the turbulence timescale τ_t decides whether fast reconnection takes place or whether the stored energy is just diffused away to small scale turbulence. If the amount of energy transferred from large to small scales is enhanced, fast reconnection can take place. Energy spectra allowed us to characterize the different regimes of reconnection. It was found that reconnection is even faster for larger Reynolds numbers controlled by the molecular resistivity η, as long as the initial level of turbulence is not too large. This implies that turbulence plays an important role in reaching the limit of fast reconnection in large-Reynolds-number plasmas even for smaller amounts of turbulence.
Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.
Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A
2017-06-15
Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes, requiring a large number of evolutionary parameters that have to be estimated, leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multipartite analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these multipartite parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves by >14-fold. Our implementation is part of the BEAST code base, a widely used open source software package to perform Bayesian phylogenetic inference. guy.baele@kuleuven.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
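As background, the sketch below shows the general idea of an adaptive multivariate transition kernel: a random-walk Metropolis proposal whose covariance is learned from the chain's own history and scaled by the common 2.38^2/d heuristic. It is a generic illustration under those assumptions, not the BEAST kernel; the target density and all tuning constants are hypothetical.

```python
import numpy as np

def adaptive_rw_metropolis(log_post, x0, n_iter=10000, adapt_start=1000, eps=1e-6):
    """Adaptive random-walk Metropolis with an empirical-covariance proposal.

    After a fixed warm-up with an identity-covariance proposal, the proposal
    covariance is re-estimated periodically from the samples drawn so far.
    Illustration only; not the BEAST implementation.
    """
    rng = np.random.default_rng(0)
    d = len(x0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_iter, d))
    prop_cov = 0.1 * np.eye(d)                      # fixed warm-up proposal
    for i in range(n_iter):
        if i >= adapt_start and i % 100 == 0:
            emp = np.cov(samples[:i].T) + eps * np.eye(d)
            prop_cov = (2.38 ** 2 / d) * emp        # adaptive multivariate proposal
        prop = rng.multivariate_normal(x, prop_cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy usage: roughly recover a correlated 2-D Gaussian target.
target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(target_cov)
log_post = lambda x: -0.5 * np.asarray(x) @ prec @ np.asarray(x)
draws = adaptive_rw_metropolis(log_post, [0.0, 0.0])
print(np.cov(draws[1000:].T))   # should be roughly close to target_cov
```

The point of the multivariate kernel is that correlated parameters (such as relative rates across partitions) are proposed jointly, which is what improves mixing over a mixture of univariate kernels.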
Taking OSCE examiner training on the road: reaching the masses
Reid, Katharine; Smallwood, David; Collins, Margo; Sutherland, Ruth; Dodds, Agnes
2016-01-01
Background To ensure the rigour of objective structured clinical examinations (OSCEs) in assessing medical students, medical school educators must educate examiners with a view to standardising examiner assessment behaviour. Delivering OSCE examiner training is a necessary yet challenging part of the OSCE process. A novel approach to implementing training for current and potential OSCE examiners was trialled by delivering large-group education sessions at major teaching hospitals. Methods The ‘OSCE Roadshow’ comprised a short training session delivered in the context of teaching hospital ‘Grand Rounds’ to current and potential OSCE examiners. The training was developed to educate clinicians about OSCE processes, clarify the examiners’ role and required behaviours, and to review marking guides and mark allocation in an effort to standardise OSCE processes and encourage consistency in examiner marking behaviour. A short exercise allowed participants to practise marking a mock OSCE to investigate examiner marking behaviour after the training. Results OSCE Roadshows at four metropolitan and one rural teaching hospital were well received and well attended by 171 clinicians across six sessions. Unexpectedly, medical students also attended in large numbers (n=220). After training, participants’ average scores for the mock OSCE clustered closely around the ideal score of 28 (out of 40), and the average scores did not differ according to the levels of clinical experience. Conclusion The OSCE Roadshow demonstrated the potential of brief familiarisation training in reaching large numbers of current and potential OSCE examiners in a time and cost-effective manner to promote standardisation of OSCE processes. PMID:27687287
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.
Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan
2013-06-27
Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html.
Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine
2018-01-01
Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children's access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults ( n = 24), but also in 7- to 11-year-old elementary school children ( n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed. Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes.
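For concreteness, a small Python sketch of the descriptive measures named above (mean generated number, first-order differences, their sign counts, and their absolute sizes); the example response sequence and the function name are hypothetical.

```python
import numpy as np

def rng_spatial_stats(responses):
    """Summary statistics for a random number generation (RNG) sequence.

    Returns the mean generated number, the mean first-order difference (FOD),
    the counts of negative and positive FODs, and the mean absolute FOD size
    for each sign, mirroring the measures described in the abstract.
    """
    r = np.asarray(responses, dtype=float)
    fod = np.diff(r)                       # each response minus its predecessor
    neg, pos = fod[fod < 0], fod[fod > 0]
    return {
        "mean_number": r.mean(),
        "mean_fod": fod.mean(),
        "n_negative_fod": neg.size,
        "n_positive_fod": pos.size,
        "mean_abs_negative_fod": np.abs(neg).mean() if neg.size else np.nan,
        "mean_abs_positive_fod": np.abs(pos).mean() if pos.size else np.nan,
    }

# Hypothetical sequence produced during one rotation condition.
print(rng_spatial_stats([3, 7, 2, 9, 4, 4, 8, 1]))
```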
Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin
2015-02-01
When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
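A minimal simulation sketch of the contrast described above, under a hypothetical scenario (initial contamination level, a large log-inactivation followed by large growth, and an exponential dose-response): the concentration-based calculation applies the same arithmetic to every unit, whereas the number-based calculation leaves most units with zero bacteria, so the two can differ by more than an order of magnitude in mean risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 200_000
volume_ml = 100.0
initial_conc = 0.05            # CFU/mL, hypothetical
log_inactivation = 4.0         # drastic inactivation step ...
log_growth = 5.0               # ... followed by large growth
r = 0.01                       # hypothetical exponential dose-response parameter

def p_ill(dose):
    return 1.0 - np.exp(-r * dose)

# Concentration-based model: deterministic arithmetic on a concentration.
conc = initial_conc * 10 ** (-log_inactivation) * 10 ** log_growth
risk_conc = p_ill(conc * volume_ml)

# Number-based model: integer bacteria tracked per unit.
counts = rng.poisson(initial_conc * volume_ml, size=n_units)
survivors = rng.binomial(counts, 10 ** (-log_inactivation))
doses = survivors * 10 ** log_growth           # growth only where bacteria survive
risk_numbers = p_ill(doses).mean()

print(f"concentration model: mean risk per unit = {risk_conc:.4f}")
print(f"number model:        mean risk per unit = {risk_numbers:.4f}")
```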
2017-02-01
In this technical note, a number of different measures implemented as functions in both MATLAB and Python are used to quantify similarity/distance between two vector-based datasets. The measures described in this technical note are widely used and may have an important role when computing the distance and similarity of large datasets and when considering high-throughput processes.
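The note's own MATLAB/Python functions are not reproduced here; the following is a generic Python sketch of a few widely used vector similarity/distance measures of the kind discussed.

```python
import numpy as np

def distance_measures(x, y):
    """A few common vector similarity/distance measures (illustrative only)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return {
        "euclidean": float(np.linalg.norm(x - y)),
        "manhattan": float(np.abs(x - y).sum()),
        "chebyshev": float(np.abs(x - y).max()),
        "cosine_similarity": float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y))),
        "pearson_correlation": float(np.corrcoef(x, y)[0, 1]),
    }

print(distance_measures([1.0, 2.0, 3.0], [2.0, 2.0, 5.0]))
```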
Solutal Convection in Porous Media
NASA Astrophysics Data System (ADS)
Liang, Y.; Wen, B.; DiCarlo, D. A.; Hesse, M. A.
2017-12-01
Atmospheric CO2 is an important greenhouse gas that can greatly affect the temperature of the Earth. There are four trapping mechanisms for CO2 sequestration: structural and stratigraphic trapping, residual trapping, dissolution trapping, and mineral trapping. Leakage potential is a serious problem for storage efficiency, and dissolution trapping is a method that can prevent such leakage effectively. The convective dissolution trapping process can be simplified to an interesting physical problem: in porous media, dissolution can initiate convection, and the convection in turn affects the dissolution dynamics. However, it is difficult to predict whether convective dissolution will take place, as well as how fast and in what pattern. Previous studies have established a model and related scaling (Rayleigh number and Sherwood number) to describe this physical problem. To test this model over a large range of Rayleigh numbers, we conducted a series of convective dissolution experiments in porous media. In addition, this large experimental assembly allows us to quantify, for the first time, the relation between the wavenumber of the convective motion and the controlling factors of the system. The results of our laboratory experiments are revolutionary: on one hand, they show that the previous scaling of convective dissolution becomes invalid once the permeability is large enough; on the other hand, the relation between wavenumber and Rayleigh number demonstrates a trend opposite to the classic model. Based on our experimental results, we propose a new model to describe solutal convection in porous media; this model can describe and explain our experimental observations. Simulation work has also been conducted to confirm the model. In the future, our model and the related insights can be scaled up to industrial applications relevant to the convective dissolution process.
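For reference, one common convention for the Rayleigh number governing solutal (CO2-driven) convection in a porous layer is shown below; exact definitions vary between studies, so the symbols here are only one plausible choice.

```latex
\mathrm{Ra} \;=\; \frac{k\,\Delta\rho\,g\,H}{\phi\,\mu\,D}
```

Here k is the permeability, Δρ the density increase of brine due to dissolved CO2, g the gravitational acceleration, H the layer thickness, φ the porosity, μ the dynamic viscosity, and D the effective solute diffusivity; the Sherwood number then expresses the dimensionless dissolution flux.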
Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura Earthquake
NASA Astrophysics Data System (ADS)
Shebalin, P.; Baranov, S.
2017-10-01
We study aftershock sequences of six major earthquakes in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island earthquakes. For the Kaikoura earthquake, we assess the expected number of long-delayed large aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shock, using 75 days of available data. We compare the results with those obtained for the other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and corresponding periods of data completeness and consistency. To avoid overestimation of the expected rates of large aftershocks, we presume a break of slope of the magnitude-frequency relation in the aftershock sequences, and compare two models, with and without the break of slope. Comparing estimations to the actual number of long-delayed large aftershocks, we observe, in general, a significant underestimation of their expected number. We suppose that the long-delayed aftershocks may reflect larger-scale processes, including interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate large events months after the major earthquake. We adapt the EAST algorithm, which studies the statistics of early aftershocks, to the case of secondary aftershocks within aftershock sequences of major earthquakes. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect in advance long-delayed aftershocks both in time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura earthquake zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone in Cook Strait.
Set size, individuation, and attention to shape
Cantrell, Lisa; Smith, Linda B.
2013-01-01
Much research has demonstrated a shape bias in categorizing and naming solid objects. This research has shown that when an entity is conceptualized as an individual object, adults and children attend to the object’s shape. Separate research in the domain of numerical cognition suggests that there are distinct processes for quantifying small and large sets of discrete items. This research shows that small set discrimination, comparison, and apprehension is often precise for 1–3 and sometimes 4 items; however, large numerosity representation is imprecise. Results from three experiments suggest a link between the processes for small and large number representation and the shape bias in a forced choice categorization task using naming and non-naming procedures. Experiment 1 showed that adults generalized a newly learned name for an object to new instances of the same shape only when those instances were presented in sets of less than 3 or 4. Experiment 2 showed that preschool children who were monolingual speakers of three different languages were also influenced by set size when categorizing objects in sets. Experiment 3 extended these results and showed the same effect in a non-naming task and when the novel noun was presented in a count-noun syntax frame. The results are discussed in terms of a relation between the precision of object representation and the precision of small and large number representation. PMID:23167969
Resistivity Problems in Electrostatic Precipitation
ERIC Educational Resources Information Center
White, Harry J.
1974-01-01
The process of electrostatic precipitation has ever-increasing application in more efficient collection of fine particles from industrial air emissions. This article details a large number of new developments in the field. The emphasis is on high resistivity particles which are a common cause of poor precipitator performance. (LS)
The Stream Table in Physical Geography Instruction.
ERIC Educational Resources Information Center
Wikle, Thomas A.; Lightfoot, Dale R.
1997-01-01
Outlines a number of activities to be conducted with a stream table (large wooden box filled with sediment and designed for water to pass through) in class. Activities illustrate such fluvial processes as stream meandering, erosion, transportation, and deposition. Includes a diagram for constructing a stream table. (MJP)
Development of Multimedia Computer Applications for Clinical Pharmacy Training.
ERIC Educational Resources Information Center
Schlict, John R.; Livengood, Bruce; Shepherd, John
1997-01-01
Computer simulations in clinical pharmacy education help expose students to clinical patient management earlier and enable training of large numbers of students outside conventional clinical practice sites. Multimedia instruction and its application to pharmacy training are described, the general process for developing multimedia presentations is…
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
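A hedged Python sketch of the standard way to dodge both failure modes: accumulate the Poisson terms in log space with a term-to-term recurrence, so neither the factorial nor the power is ever formed. This is a generic log-space approach, not the CUMPOIS program itself.

```python
import math

def cumulative_poisson(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space.

    Each term log p_i = i*log(lam) - lam - log(i!) is built incrementally from
    the previous one, so lam**i and i! are never formed directly; this avoids
    the overflow and underflow that defeat naive implementations.
    """
    log_term = -lam              # log of the i = 0 term
    log_sum = log_term
    for i in range(1, k + 1):
        log_term += math.log(lam) - math.log(i)   # recurrence p_i = p_{i-1} * lam / i
        # log-sum-exp update of the running total
        hi, lo = max(log_sum, log_term), min(log_sum, log_term)
        log_sum = hi + math.log1p(math.exp(lo - hi))
    return math.exp(log_sum)

print(cumulative_poisson(3, 2.5))        # about 0.7576
print(cumulative_poisson(900, 1000.0))   # extreme parameters stay finite
```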
ERIC Educational Resources Information Center
Schreyer, Paul R.
2012-01-01
Given our aging and crowded schools, today's administrators have to focus their attention on modernizing facilities. The Job Order Contracting (JOC) procurement method allows school administrators to complete a large number of high quality maintenance projects quickly with a single, competitively bid contract. The JOC process fits schools' unique…
ASSESSMENT OF SYNAPSE FORMATION IN RAT PRIMARY NEURAL CELL CULTURE USING HIGH CONTENT MICROSCOPY.
Cell-based assays can model neurodevelopmental processes including neurite growth and synaptogenesis, and may be useful for screening and evaluation of large numbers of chemicals for developmental neurotoxicity. This work describes the use of high content screening (HCS) to dete...
Experimental Studies on Hypersonic Stagnation Point Chemical Environment
2006-02-01
Having this complete definition, we will focus on the chemical environment produced in the SPR. Flow chemistry involves a very large number of processes and microscopic phenomena; they are usually summarized in a set of chemical reactions, with their own…
Recent patents on the extraction of carotenoids.
Riggi, Ezio
2010-01-01
This article reviews the patents that have been presented during the last decade related to the extraction of carotenoids from various forms of organic matter (fruit, vegetables, animals), with an emphasis on the methods and mechanisms exploited by these technologies, and on technical solutions for the practical problems related to these technologies. I present and classify 29 methods related to the extraction processes (physical, mechanical, chemical, and enzymatic). The large number of processes for extraction by means of supercritical fluids and the growing number of large-scale industrial plants suggest a positive trend towards using this technique that is currently slowed by its cost. This trend should be reinforced by growing restrictions imposed on the use of most organic solvents for extraction of food products and by increasingly strict waste management regulations that are indirectly promoting the use of extraction processes that leave the residual (post-extraction) matrix substantially free from solvents and compounds that must subsequently be removed or treated. None of the reviewed approaches is the best answer for every extractable compound and source, so each should be considered as one of several alternatives, including the use of a combination of extraction approaches.
A simple and rapid microplate assay for glycoprotein-processing glycosidases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, M.S.; Zwolshen, J.H.; Harry, B.S.
1989-08-15
A simple and convenient microplate assay for glycosidases involved in the glycoprotein-processing reactions is described. The assay is based on specific binding of high-mannose-type oligosaccharide substrates to concanavalin A-Sepharose, while monosaccharides liberated by enzymatic hydrolysis do not bind to concanavalin A-Sepharose. By the use of radiolabeled substrates ((3H)glucose for glucosidases and (3H)mannose for mannosidases), the radioactivity in the liberated monosaccharides can be determined as a measure of the enzymatic activity. This principle was employed earlier in previously reported assays for glycosidases. These authors reported the separation of substrate from the product by concanavalin A-Sepharose column chromatography. This procedure is handicapped by the fact that it cannot be used for a large number of samples and is time consuming. We have simplified this procedure and adapted it to the use of a microplate (96-well plate). This would help in processing a large number of samples in a short time. In this report we show that the assay is comparable to the column assay previously reported. It is linear with time and enzyme concentration and shows expected kinetics with castanospermine, a known inhibitor of alpha-glucosidase I.
DECOVALEX Project: from 1992 to 2007
NASA Astrophysics Data System (ADS)
Tsang, Chin-Fu; Stephansson, Ove; Jing, Lanru; Kautsky, Fritz
2009-05-01
The DECOVALEX project is a unique international research collaboration, initiated in 1992, for advancing the understanding and mathematical modelling of coupled thermo-hydro-mechanical (THM) and thermo-hydro-mechanical-chemical (THMC) processes in geological systems—subjects of importance for performance assessment of radioactive waste repositories in geological formations. From 1992 up to 2007, the project has made important progress and played a key role in the development of numerical modelling of coupled processes in fractured rocks and buffer/backfill materials. The project has been conducted by research teams supported by a large number of radioactive-waste-management organizations and regulatory authorities, including those of Canada, China, Finland, France, Japan, Germany, Spain, Sweden, UK, and the USA. Through this project, in-depth knowledge has been gained of coupled THM and THMC processes associated with nuclear waste repositories, as well as numerical simulation models for their quantitative analysis. The knowledge accumulated from this project, in the form of a large number of research reports and international journal and conference papers in the open literature, has been applied effectively in the implementation and review of national radioactive-waste-management programmes in the participating countries. This paper presents an overview of the project.
Rotor assembly and method for automatically processing liquids
Burtis, C.A.; Johnson, W.F.; Walker, W.A.
1992-12-22
A rotor assembly is described for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water. It includes a rotor body for rotation about an axis and includes a network of chambers within which various processing steps are performed upon the sample and diluent and passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body by the influence of a magnetic field generated adjacent the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses. 34 figs.
A road map for multi-way calibration models.
Escandar, Graciela M; Olivieri, Alejandro C
2017-08-07
A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.
Friesner, Dan; Neufelder, Donna; Raisor, Janet; Bozman, Carl S
2009-01-01
The authors present a methodology that measures improvement in customer satisfaction scores when those scores are already high and the production process is slow and thus does not generate a large amount of useful data in any given time period. The authors used these techniques with data from a midsized rehabilitation institute affiliated with a regional, nonprofit medical center. Thus, this article functions as a case study, the findings of which may be applicable to a large number of other healthcare providers that share both the mission and challenges faced by this facility. The methodology focused on 2 factors: use of the unique characteristics of panel data to overcome the paucity of observations and a dynamic benchmarking approach to track process variability over time. By focusing on these factors, the authors identify some additional areas for process improvement despite the institute's past operational success.
The Friction Force Determination of Large-Sized Composite Rods in Pultrusion
NASA Astrophysics Data System (ADS)
Grigoriev, S. N.; Krasnovskii, A. N.; Kazakov, I. A.
2014-08-01
Simple pull-force models of the pultrusion process are not suitable for large-sized rods because they do not consider the chemical shrinkage and thermal expansion acting in the cured material inside the die. Yet the pulling force on the resin-impregnated fibers as they travel through the heated die is an essential factor in the pultrusion process. In order to minimize the number of trial-and-error experiments, a new mathematical approach to determine the frictional force is presented. The governing equations of the model are stated in general terms, and various simplifications are implemented in order to obtain solutions without extensive numerical effort. The influence of different pultrusion parameters on the frictional force is investigated. The results obtained by the model can establish a foundation by which process control parameters are selected to achieve an appropriate pull-force, and can be used for optimizing the pultrusion process.
Electronics manufacturing and assembly in Japan
NASA Technical Reports Server (NTRS)
Kukowski, John A.; Boulton, William R.
1995-01-01
In the consumer electronics industry, precision processing technology is the basis for enhancing product functions and for minimizing components and end products. Throughout Japan, manufacturing technology is seen as critical to the production and assembly of advanced products. While its population has increased less than 30 percent over twenty-five years, Japan's gross national product has increased thirtyfold; this growth has resulted in large part from rapid replacement of manual operations with innovative, high-speed, large-scale, continuously running, complex machines that process a growing number of miniaturized components. The JTEC panel found that introduction of next-generation electronics products in Japan goes hand-in-hand with introduction of new and improved production equipment. In the panel's judgment, Japan's advanced process technologies and equipment development and its highly automated factories are crucial elements of its domination of the consumer electronics marketplace - and Japan's expertise in manufacturing consumer electronics products gives it potentially unapproachable process expertise in all electronics markets.
NASA Technical Reports Server (NTRS)
Williams, G. M.; Fraser, J. C.
1991-01-01
The objective was to examine state-of-the-art optical sensing and processing technology applied to control the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface. A similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used for designing both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that outlines a logical, effective way to develop and integrate advancing technologies.
Biometric Attendance and Big Data Analysis for Optimizing Work Processes.
Verma, Neetu; Xavier, Teenu; Agrawal, Deepak
2016-01-01
Although biometric attendance management is available, large healthcare organizations have difficulty with the big data analysis needed to optimize work processes. The aim of this project was to assess the implementation of a biometric attendance system and its utility following big data analysis. In this prospective study, the implementation of the biometric system was evaluated over a 3-month period at our institution. Software integration with other existing systems for data analysis was also evaluated. Implementation of the biometric system could be completed over a two-month period with enrollment of 10,000 employees into the system. However, generating reports and taking action for this large number of staff was a challenge. For this purpose, software was developed to capture the duty roster of each employee, integrate it with the biometric system, and add an SMS gateway. This helped in automating the process of sending SMSs to each employee who had not signed in. Standalone biometric systems have limited functionality in large organizations unless they are meshed with the employee duty roster.
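As a sketch of the roster/sign-in integration step described above (all names, fields, and the SMS call are hypothetical placeholders, not the institution's actual software), the idea is simply to cross-check the day's duty roster against the biometric log and notify anyone missing.

```python
from datetime import date

# Hypothetical duty roster and biometric sign-in log for one day.
roster = {
    "E001": {"name": "A. Kumar", "phone": "+91xxxxxxxxxx", "shift": "morning"},
    "E002": {"name": "B. Singh", "phone": "+91xxxxxxxxxx", "shift": "morning"},
    "E003": {"name": "C. Das",   "phone": "+91xxxxxxxxxx", "shift": "evening"},
}
signins = {"E001", "E003"}   # employee IDs captured by the biometric system today

def absentees(roster, signins, shift):
    """Employees rostered for the given shift with no biometric sign-in."""
    return [(eid, info) for eid, info in roster.items()
            if info["shift"] == shift and eid not in signins]

def send_sms(phone, message):
    # Placeholder for the SMS-gateway call used in an actual deployment.
    print(f"SMS to {phone}: {message}")

for eid, info in absentees(roster, signins, "morning"):
    send_sms(info["phone"], f"{info['name']}: no sign-in recorded on {date.today()}")
```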
Extensive Error in the Number of Genes Inferred from Draft Genome Assemblies
Denton, James F.; Lugo-Martinez, Jose; Tucker, Abraham E.; Schrider, Daniel R.; Warren, Wesley C.; Hahn, Matthew W.
2014-01-01
Current sequencing methods produce large amounts of data, but genome assemblies based on these data are often woefully incomplete. These incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. In this paper we investigate the magnitude of the problem, both in terms of total gene number and the number of copies of genes in specific families. To do this, we compare multiple draft assemblies against higher-quality versions of the same genomes, using several new assemblies of the chicken genome based on both traditional and next-generation sequencing technologies, as well as published draft assemblies of chimpanzee. We find that upwards of 40% of all gene families are inferred to have the wrong number of genes in draft assemblies, and that these incorrect assemblies both add and subtract genes. Using simulated genome assemblies of Drosophila melanogaster, we find that the major cause of increased gene numbers in draft genomes is the fragmentation of genes onto multiple individual contigs. Finally, we demonstrate the usefulness of RNA-Seq in improving the gene annotation of draft assemblies, largely by connecting genes that have been fragmented in the assembly process. PMID:25474019
Pollution prevention at Air Force Plant #4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniels, E.D.; Brown, C.J.; Strukely, T.
1999-07-01
Air Force Plant #4 in Fort Worth, Texas is home to Lockheed Martin Tactical Aircraft Systems (LMTAS). This multi-million square foot facility provides all of the design, fabrication, assembly, and testing capabilities necessary to produce the F-16 fighter and the center fuselage of the F-22 fighter. A large number of chemical products and processes are required to achieve the complex manufacturing goals. Since the early 1980s, a pollution prevention program has been in place at LMTAS to eliminate or minimize the use of hazardous chemicals and processes. The structure involves an interdepartmental teaming arrangement to determine satisfactory alternatives to existing procedures as well as development of environmentally friendly methods for new design. Many of the successes are a result of teaming arrangements between LMTAS and the USAF.
Onsite aerosol measurements for various engineered nanomaterials at industrial manufacturing plants
NASA Astrophysics Data System (ADS)
Ogura, I.; Sakurai, H.; Gamo, M.
2011-07-01
Evaluation of the health impact of and control over exposure to airborne engineered nanomaterials (ENMs) requires information on, inter alia, the magnitude of environmental release during various industrial processes, as well as the size distribution and morphology of the airborne ENM particles. In this study, we performed onsite aerosol measurements for various ENMs at industrial manufacturing plants. The industrial processes investigated were the collection of SiC from synthesis reactors, synthesis and bagging of LiFePO4, and bagging of ZnO. Real-time aerosol monitoring using condensation particle counters, optical particle counters, and an electrical low-pressure impactor revealed frequent increases in the number concentrations of submicron- and micron-sized aerosol particles, but few increases in the number concentrations of nanoparticles. In the SEM observations, a large number of submicron- and micron-sized agglomerated ENM particles were observed.
Exploration of laser-driven electron-multirescattering dynamics in high-order harmonic generation
Li, Peng-Cheng; Sheu, Yae-Lin; Jooya, Hossein Z.; ...
2016-09-06
Multiple rescattering processes play an important role in high-order harmonic generation (HHG) in an intense laser field. However, the underlying multi-rescattering dynamics are still largely unexplored. Here we investigate the dynamical origin of multiple rescattering processes in HHG associated with the odd and even number of returning times of the electron to the parent ion. We perform fully ab initio quantum calculations and extend the empirical mode decomposition method to extract the individual multiple scattering contributions in HHG. We find that the tunneling ionization regime is responsible for the odd number times of rescattering and the corresponding short trajectories are dominant. On the other hand, the multiphoton ionization regime is responsible for the even number times of rescattering and the corresponding long trajectories are dominant. Moreover, we discover that the multiphoton- and tunneling-ionization regimes in multiple rescattering processes occur alternatively. Our results uncover the dynamical origin of multiple rescattering processes in HHG for the first time. As a result, it also provides new insight regarding the control of the multiple rescattering processes for the optimal generation of ultrabroad band supercontinuum spectra and the production of single ultrashort attosecond laser pulse.
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a, LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate our variants of LSH achieve the robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.
2014-01-01
Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
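For reference, the Gutenberg-Richter law mentioned above is usually written in its frequency-magnitude form:

```latex
\log_{10} N(\geq M) \;=\; a - b\,M
```

where N(≥M) is the number of earthquakes with magnitude at least M, a measures the overall seismicity rate, and b (typically near 1) controls the relative abundance of large events.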
Emerging behavior in electronic bidding.
Yang, I; Jeong, H; Kahng, B; Barabási, A-L
2003-07-01
We characterize the statistical properties of a large number of agents on two major online auction sites. The measurements indicate that the total number of bids placed in a single category and the number of distinct auctions frequented by a given agent follow power-law distributions, implying that a few agents are responsible for a significant fraction of the total bidding activity on the online market. We find that these agents exert a disproportionate influence on the final price of the auctioned items. This domination of online auctions by an unusually active minority may be a generic feature of all online mercantile processes.
Precise Control of the Number of Layers of Graphene by Picosecond Laser Thinning.
Lin, Zhe; Ye, Xiaohui; Han, Jinpeng; Chen, Qiao; Fan, Peixun; Zhang, Hongjun; Xie, Dan; Zhu, Hongwei; Zhong, Minlin
2015-06-26
The properties of graphene can vary as a function of the number of layers (NOL). Controlling the NOL in large area graphene is still challenging. In this work, we demonstrate a picosecond (ps) laser thinning removal of graphene layers from multi-layered graphene to obtain desired NOL when appropriate pulse threshold energy is adopted. The thinning process is conducted in atmosphere without any coating and it is applicable for graphene films on arbitrary substrates. This method provides many advantages such as one-step process, non-contact operation, substrate and environment-friendly, and patternable, which will enable its potential applications in the manufacturing of graphene-based electronic devices.
Encryption and decryption algorithm using algebraic matrix approach
NASA Astrophysics Data System (ADS)
Thiagarajan, K.; Balasubramanian, P.; Nagaraj, J.; Padmashree, J.
2018-04-01
Cryptographic algorithms provide security of data against attacks during encryption and decryption. However, they are computationally intensive processes which consume large amounts of CPU time and space during encryption and decryption. The goal of this paper is to study the encryption and decryption algorithm and to find the space complexity of the encrypted and decrypted data produced by the algorithm. In this paper, we encrypt and decrypt the message using a key with the help of a cyclic square matrix; the approach is applicable to any number of words, including long words with many characters. We also discuss the time complexity of the algorithm. The proposed algorithm is simple, but the process is difficult to break.
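The paper's cyclic-square-matrix construction is not reproduced here; as a stand-in, the following Python sketch shows the same general idea with a classical Hill-cipher-style scheme: an invertible key matrix applied blockwise modulo the alphabet size, with decryption via the modular matrix inverse. The key and message are hypothetical.

```python
import numpy as np

ALPHABET = 26  # work modulo 26 over A-Z

def text_to_vec(msg, block):
    nums = [ord(c) - ord('A') for c in msg.upper() if c.isalpha()]
    nums += [ord('X') - ord('A')] * (-len(nums) % block)   # pad to full blocks
    return np.array(nums).reshape(-1, block)

def vec_to_text(blocks):
    return ''.join(chr(int(n) + ord('A')) for n in blocks.ravel())

def mod_inverse_matrix(K, m=ALPHABET):
    """Inverse of K modulo m via the adjugate; requires gcd(det K, m) == 1."""
    det = int(round(np.linalg.det(K))) % m
    det_inv = pow(det, -1, m)                               # Python >= 3.8
    adj = np.round(np.linalg.det(K) * np.linalg.inv(K)).astype(int) % m
    return (det_inv * adj) % m

def encrypt(msg, K):
    return vec_to_text(text_to_vec(msg, K.shape[0]) @ K % ALPHABET)

def decrypt(cipher, K):
    return vec_to_text(text_to_vec(cipher, K.shape[0]) @ mod_inverse_matrix(K) % ALPHABET)

K = np.array([[3, 3], [2, 5]])        # hypothetical 2x2 key, invertible mod 26
c = encrypt("MATRIXCIPHER", K)
print(c, decrypt(c, K))
```

The space cost is easy to read off from such a scheme: the ciphertext has the same length as the padded plaintext, plus the key matrix of fixed size.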
Apparatus and process for microbial detection and enumeration
NASA Technical Reports Server (NTRS)
Wilkins, J. R.; Grana, D. (Inventor)
1982-01-01
An apparatus and process for detecting and enumerating specific microorganisms from large-volume samples containing small numbers of the microorganisms is presented. The large-volume samples are filtered through a membrane filter to concentrate the microorganisms. The filter is positioned between two absorbent pads, previously moistened with a growth medium for the microorganisms. A pair of electrodes is placed against the filter, and the pad-electrode-filter assembly is retained within a petri dish by a retainer ring. The cover is positioned on the base of the petri dish and sealed at the edges with a parafilm seal before being electrically connected via connectors to a strip chart recorder for detecting and enumerating the microorganisms collected on the filter.
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1991-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
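For readers unfamiliar with the underlying recurrence, the sketch below implements the standard two-sided Lanczos biorthogonalization without look-ahead; the breakdown test marks the point where the look-ahead variant described above would take a longer step instead of aborting. This is an illustrative reconstruction, not the authors' implementation.

```python
# Standard (no look-ahead) two-sided Lanczos biorthogonalization sketch.
import numpy as np

def lanczos_biorth(A, v, w, m, tol=1e-12):
    n = len(v)
    v = v / (w @ v)                      # enforce w^T v = 1
    v_prev = w_prev = np.zeros(n)
    alphas, betas, deltas = [], [], []
    beta = delta = 0.0
    for _ in range(m):
        alpha = w @ (A @ v)
        v_hat = A @ v - alpha * v - beta * v_prev
        w_hat = A.T @ w - alpha * w - delta * w_prev
        alphas.append(alpha)
        dot = w_hat @ v_hat
        if abs(dot) < tol:               # (near-)breakdown: a look-ahead step would go here
            break
        delta_new = np.sqrt(abs(dot))
        beta_new = dot / delta_new
        v_prev, w_prev = v, w
        v, w = v_hat / delta_new, w_hat / beta_new
        beta, delta = beta_new, delta_new
        betas.append(beta)
        deltas.append(delta)
    # tridiagonal matrix whose eigenvalues (Ritz values) approximate those of A
    k = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(deltas[:k - 1], -1)
    return T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
T = lanczos_biorth(A, rng.standard_normal(50), rng.standard_normal(50), m=30)
print(np.sort_complex(np.linalg.eigvals(T))[-3:])   # rough estimates of extremal eigenvalues
```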
NASA Astrophysics Data System (ADS)
Aria, Adrianus Indrat
In this thesis, dry chemical modification methods involving UV/ozone, oxygen plasma, and vacuum annealing treatments are explored to precisely control the wettability of CNT arrays. The effect of oxidation using UV/ozone and oxygen plasma treatments is highly reversible as long as the O/C ratio of the CNT arrays is kept below 18%. At O/C ratios higher than 18%, the effect of oxidation is no longer reversible. This irreversible oxidation is caused by irreversible changes to the CNT atomic structure during the oxidation process. During the oxidation process, CNT arrays undergo three different processes. For CNT arrays with O/C ratios lower than 40%, the oxidation process results in the functionalization of CNT outer walls by oxygenated groups. Although this functionalization process introduces defects, vacancies and micropore openings, the graphitic structure of the CNT is still largely intact. For CNT arrays with O/C ratios between 40% and 45%, the oxidation process results in the etching of CNT outer walls. This etching process introduces large scale defects and holes that can be clearly seen under TEM at high magnification. Most of these holes are found to be several layers deep and, in some cases, a large portion of the CNT side walls are cut open. For CNT arrays with O/C ratios higher than 45%, the oxidation process results in the exfoliation of the CNT walls and amorphization of the remaining CNT structure. This amorphization process can be inferred from the disappearance of the C-C sp2 peak in the XPS spectra associated with the pi-bond network. The impact behavior of water droplets impinging on superhydrophobic CNT arrays in a low viscosity regime is investigated for the first time. Here, the experimental data are presented in the form of several important impact behavior characteristics including critical Weber number, volume ratio, restitution coefficient, and maximum spreading diameter. As observed experimentally, three different impact regimes are identified while another impact regime is proposed. These regimes are partitioned by three critical Weber numbers, two of which are experimentally observed. The volume ratio between the primary and the secondary droplets is found to decrease with the increase of Weber number in all impact regimes other than the first one. In the first impact regime, this ratio is found to be independent of Weber number since the droplet remains intact during and subsequent to the impingement. Experimental data show that the coefficient of restitution decreases with the increase of Weber number in all impact regimes. The rate of decrease of the coefficient of restitution in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Experimental data also show that the maximum spreading factor increases with the increase of Weber number in all impact regimes. The rate of increase of the maximum spreading factor in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Phenomenological approximations and interpretations of the experimental data, as well as brief comparisons to the previously proposed scaling laws, are shown here. Dry oxidation methods are used for the first time to characterize the influence of oxidation on the capacitive behavior of CNT array EDLCs. The capacitive behavior of CNT array EDLCs can be tailored by varying their oxygen content, represented by their O/C ratio.
The specific capacitance of these CNT arrays increases with the increase of their oxygen content in both KOH and Et4NBF4/PC electrolytes. As a result, their gravimetric energy density increases with the increase of their oxygen content. However, their gravimetric power density decreases with the increase of their oxygen content. The optimally oxidized CNT arrays are able to withstand more than 35,000 charge/discharge cycles in Et4NBF4/PC at a current density of 5 A/g while only losing 10% of their original capacitance. (Abstract shortened by UMI.)
Can microbes economically remove sulfur
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, J.L.
Researchers have reported that refiners who now rely on costly physico-chemical procedures to desulfurize petroleum will soon have an alternative, microbial-enzyme-based approach to this process. This new approach is still under development, and a considerable number of chemical engineering problems need to be solved before it is ready for large-scale use. This paper reviews several research projects dedicated to solving the problems that keep a biotechnology-based alternative from competing with chemical desulfurization.
Casado-Sánchez, Antonio; Gómez-Ballesteros, Rocío; Tato, Francisco; Soriano, Francisco J; Pascual-Coca, Gustavo; Cabrera, Silvia; Alemán, José
2016-07-12
A new catalytic system for the photooxidation of sulfides based on Pt(ii) complexes is presented. The catalyst is capable of oxidizing a large number of sulfides containing aryl, alkyl, allyl, benzyl, as well as more complex structures such as heterocycles and methionine amino acid, with complete chemoselectivity. In addition, the first sulfur oxidation in a continuous flow process has been developed.
Information Model for Reusability in Clinical Trial Documentation
ERIC Educational Resources Information Center
Bahl, Bhanu
2013-01-01
In clinical research, New Drug Application (NDA) to health agencies requires generation of a large number of documents throughout the clinical development life cycle, many of which are also submitted to public databases and external partners. Current processes to assemble the information, author, review and approve the clinical research documents,…
The development of methods and processes to mass produce nanocomponents, materials with characteristic lengths less than 100 nm, has led to the emergence of a large number of consumer goods (nanoproducts) containing these materials. The unknown health effects and risks associate...
Group to Use Chemistry to Solve Developing Countries' Ills.
ERIC Educational Resources Information Center
O'Sullivan, Dermot A.
1983-01-01
Chemical engineers have begun savoring the first fruits of a massive effort to gather, determine, and evaluate data of physical properties and predictive methods for large numbers of compounds and mixtures processed in the chemical industry. The use of this centralized data source is highlighted. (Author/JN)
Simulating Multi-Scale Mercury Fate and Transport in a Coastal Plain Watershed
Mercury is the toxicant responsible for the largest number of fish advisories across the United States, with 1.1 million river miles under advisory. The processes governing fate, transport, and transformation of mercury in streams and rivers are not well understood, in large part...
In order to screen large numbers of chemicals for their potential to produce developmental neurotoxicity new, in vitro methods are needed. One approach is to develop methods based on the biologic processes which underlie brain development including the growth and maturation of ce...
A Description of Instructional Coaching and Its Relationship to Consultation
ERIC Educational Resources Information Center
Denton, Carolyn A.; Hasbrouck, Jan
2009-01-01
In large numbers of elementary and secondary schools across the United States teachers are being called upon to provide support to colleagues through a process called "instructional coaching." Despite widespread implementation of this role, resulting in part from federal initiatives, there is little consensus regarding its operational…
Thyroid hormone (TH) is essential for a number of physiological processes and is particularly critical during nervous system development. The hippocampus is a structure strongly implicated in cognition and is sensitive to developmental hypothyroidism. The impact of TH insuffici...
Learning a Foreign Language: A New Path to Enhancement of Cognitive Functions
ERIC Educational Resources Information Center
Shoghi Javan, Sara; Ghonsooly, Behzad
2018-01-01
The complicated cognitive processes involved in natural (primary) bilingualism lead to significant cognitive development. Executive functions as a fundamental component of human cognition are deemed to be affected by language learning. To date, a large number of studies have investigated how natural (primary) bilingualism influences executive…
NASA Astrophysics Data System (ADS)
Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram
2010-05-01
Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate how effectively the different approaches deal with such hotspots and compare alternative strategies for handling them.
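As a rough illustration of the three partitioning strategies mentioned above, the sketch below assigns hypothetical tile files to storage nodes using round-robin, hash, and range rules; the file names and node count are invented for the example and do not reflect the GEON or OpenTopography implementations.

```python
# Illustrative declustering strategies: round-robin, hash, and range partitioning.
import hashlib

NODES = 4
tiles = [f"tile_{x:03d}_{y:03d}.las" for x in range(8) for y in range(8)]

def round_robin(tiles, nodes=NODES):
    return {t: i % nodes for i, t in enumerate(tiles)}

def hash_partition(tiles, nodes=NODES):
    return {t: int(hashlib.md5(t.encode()).hexdigest(), 16) % nodes for t in tiles}

def range_partition(tiles, nodes=NODES):
    # contiguous ranges preserve spatial locality but can concentrate "hotspots"
    chunk = -(-len(tiles) // nodes)          # ceiling division
    return {t: i // chunk for i, t in enumerate(sorted(tiles))}

for name, fn in [("round-robin", round_robin),
                 ("hash", hash_partition),
                 ("range", range_partition)]:
    counts = [0] * NODES
    for node in fn(tiles).values():
        counts[node] += 1
    print(f"{name:12s} tiles per node: {counts}")
```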
Medical students perceive better group learning processes when large classes are made to seem small.
Hommes, Juliette; Arah, Onyebuchi A; de Grave, Willem; Schuwirth, Lambert W T; Scherpbier, Albert J J A; Bos, Gerard M J
2014-01-01
Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar to fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Formal group learning processes were perceived more positive in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>-0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Better group learning processes can be achieved in large medical schools by making large classes seem small.
Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small
Hommes, Juliette; Arah, Onyebuchi A.; de Grave, Willem; Schuwirth, Lambert W. T.; Scherpbier, Albert J. J. A.; Bos, Gerard M. J.
2014-01-01
Objective Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar to fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. Setting The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Main Outcome Measures Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Results Formal group learning processes were perceived more positive in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>−0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Conclusion Better group learning processes can be achieved in large medical schools by making large classes seem small. PMID:24736272
High-speed visualization of fuel spray impingement in the near-wall region using a DISI injector
NASA Astrophysics Data System (ADS)
Kawahara, N.; Kintaka, K.; Tomita, E.
2017-02-01
We used a multi-hole injector to spray isooctane under atmospheric conditions and observed droplet impingement behaviors. It is generally known that droplet impact regimes such as splashing, deposition, or bouncing are governed by the Weber number. However, owing to its complexity, little has been reported on microscopic visualization of poly-dispersed spray. During the spray impingement process, a large number of droplets approach, hit, then interact with the wall. It is therefore difficult to focus on a single droplet and observe the impingement process. We solved this difficulty using high-speed microscopic visualization. The spray/wall interaction processes were recorded by a high-speed camera (Shimadzu HPV-X2) with a long-distance microscope. We captured several impinging microscopic droplets. After optimizing the magnification and frame rate, the atomization behaviors, splashing and deposition, were recorded. Then, we processed the images obtained to determine droplet parameters such as the diameter, velocity, and impingement angle. Based on this information, the critical threshold between splashing and deposition was investigated in terms of the normal and parallel components of the Weber number with respect to the wall. The results suggested that, on a dry wall, we should set the normal critical Weber number to 300.
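The decomposition used above splits the impact velocity into components normal and parallel to the wall and forms a Weber number from each; a minimal sketch follows, with approximate isooctane properties and a droplet size, speed, and angle chosen purely for illustration (none of these values are taken from the paper).

```python
# Weber-number decomposition for a droplet hitting a wall; fluid properties
# are approximate isooctane values and the droplet parameters are illustrative.
import math

rho   = 692.0     # kg/m^3, approximate isooctane density
sigma = 0.0187    # N/m, approximate isooctane surface tension

def weber_components(diameter_m, speed_m_s, impingement_angle_deg):
    """Return (We_normal, We_parallel); the angle is measured from the wall plane,
    so 90 degrees is a head-on (purely normal) impact."""
    v_n = speed_m_s * math.sin(math.radians(impingement_angle_deg))
    v_t = speed_m_s * math.cos(math.radians(impingement_angle_deg))
    we = lambda v: rho * v**2 * diameter_m / sigma
    return we(v_n), we(v_t)

# e.g. a 20 micron droplet at 60 m/s hitting the wall at 45 degrees
we_n, we_t = weber_components(20e-6, 60.0, 45.0)
print(f"We_normal = {we_n:.0f}, We_parallel = {we_t:.0f}")
# splashing would be expected if We_normal exceeds the ~300 threshold cited above
print("splashing expected" if we_n > 300 else "deposition expected")
```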
Park, KeeHyun; Lim, SeungHyeon
2015-01-01
In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.
Lim, SeungHyeon
2015-01-01
In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic. PMID:26247034
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
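As a toy illustration of the difference-imaging idea behind OIS, the sketch below blurs a reference frame to roughly match the science frame's PSF, subtracts, and locates the brightest residual; the real OIS solves for a second-order spatially varying kernel, which this fixed Gaussian stand-in does not attempt to reproduce, and all frames here are synthetic.

```python
# Toy difference imaging: PSF-match a reference frame, subtract, find residual.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def fake_frame(shape=(256, 256), seeing=1.5, transient=None):
    img = rng.normal(100.0, 3.0, shape)              # sky plus read noise
    if transient is not None:
        y, x, flux = transient
        img[y, x] += flux
    return gaussian_filter(img, seeing)

reference = fake_frame(seeing=1.5)
science   = fake_frame(seeing=2.0, transient=(128, 64, 500.0))

# stand-in for the kernel fit: blur the sharper reference to the wider science PSF
matched = gaussian_filter(reference, np.sqrt(2.0**2 - 1.5**2))
diff = science - matched

y, x = np.unravel_index(np.argmax(diff), diff.shape)
print(f"brightest residual near (y={y}, x={x})")     # close to the injected transient
```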
Data-driven CT protocol review and management—experience from a large academic hospital.
Zhang, Da; Savage, Cristy A; Li, Xinhua; Liu, Bob
2015-03-01
Protocol review plays a critical role in CT quality assurance, but large numbers of protocols and inconsistent protocol names on scanners and in exam records make thorough protocol review formidable. In this investigation, we report on a data-driven cataloging process that can be used to assist in the reviewing and management of CT protocols. We collected lists of scanner protocols, as well as 18 months of recent exam records, for 10 clinical scanners. We developed computer algorithms to automatically deconstruct the protocol names on the scanner and in the exam records into core names and descriptive components. Based on the core names, we were able to group the scanner protocols into a much smaller set of "core protocols," and to easily link exam records with the scanner protocols. We calculated the percentage of usage for each core protocol, from which the most heavily used protocols were identified. From the percentage-of-usage data, we found that, on average, 18, 33, and 49 core protocols per scanner covered 80%, 90%, and 95%, respectively, of all exams. These numbers are one order of magnitude smaller than the typical numbers of protocols that are loaded on a scanner (200-300, as reported in the literature). Duplicated, outdated, and rarely used protocols on the scanners were easily pinpointed in the cataloging process. The data-driven cataloging process can facilitate the task of protocol review. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
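A minimal sketch of the cataloging idea follows: free-text protocol names are split into tokens, descriptive modifiers are stripped to leave a core name, and exam records are grouped and ranked by core-protocol usage. The example names, modifier list, and splitting rule are hypothetical, not the authors' rules.

```python
# Deconstruct free-text protocol names into core names and rank them by usage.
import re
from collections import Counter

MODIFIERS = {"wo", "w", "contrast", "routine", "lowdose", "peds", "repeat"}

def core_name(protocol):
    tokens = re.split(r"[\s_/-]+", protocol.lower())
    return " ".join(t for t in tokens if t and t not in MODIFIERS)

exam_records = [
    "CHEST wo contrast", "Chest_w_contrast", "CHEST routine",
    "Head-wo", "HEAD w contrast", "Abdomen Pelvis w", "chest lowdose repeat",
]

usage = Counter(core_name(name) for name in exam_records)
total = sum(usage.values())
for core, n in usage.most_common():
    print(f"{core:16s} {n:3d} exams  ({100 * n / total:.0f}% of workload)")
```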
Families of phosphoinositide-specific phospholipase C: structure and function.
Katan, M
1998-12-08
A large number of extracellular signals stimulate hydrolysis of phosphatidylinositol 4,5-bisphosphate by phosphoinositide-specific phospholipase C (PI-PLC). PI-PLC isozymes have been found in a broad spectrum of organisms and although they have common catalytic properties, their regulation involves different signalling pathways. A number of recent studies provided an insight into domain organisation of PI-PLC isozymes and contributed towards better understanding of the structural basis for catalysis, cellular localisation and molecular changes that could underlie the process of their activation.
Extrasolar planets: constraints for planet formation models.
Santos, Nuno C; Benz, Willy; Mayor, Michel
2005-10-14
Since 1995, more than 150 extrasolar planets have been discovered, most of them in orbits quite different from those of the giant planets in our own solar system. The number of discovered extrasolar planets demonstrates that planetary systems are common but also that they may possess a large variety of properties. As the number of detections grows, statistical studies of the properties of exoplanets and their host stars can be conducted to unravel some of the key physical and chemical processes leading to the formation of planetary systems.
Future Facilities for Gamma-Ray Pulsar Studies
NASA Technical Reports Server (NTRS)
Thompson, D. J.
2003-01-01
Pulsars seen at gamma-ray energies offer insight into particle acceleration to very high energies, along with information about the geometry and interaction processes in the magnetospheres of these rotating neutron stars. During the next decade, a number of new gamma-ray facilities will become available for pulsar studies. This brief review describes the motivation for gamma-ray pulsar studies, the opportunities for such studies, and some specific discussion of the capabilities of the Gamma-ray Large Area Space Telescope (GLAST) Large Area Telescope (LAT) for pulsar measurements.
Current Fluctuations in a Semiconductor Quantum Dot with Large Energy Spacing
NASA Astrophysics Data System (ADS)
Jeong, Heejun
2014-12-01
We report on measurements of the current noise properties of electron tunneling through a split-gate GaAs quantum dot with large energy level spacing and a small number of electrons. Shot noise is full Poissonian or suppressed in the Coulomb-blockaded regime, while it is enhanced to super-Poissonian values when an excited energy level becomes involved at finite source-drain bias. The results can be explained by multiple Poissonian processes through multilevel sequential tunneling.
SCP -- A Simple CCD Processing Package
NASA Astrophysics Data System (ADS)
Lewis, J. R.
This note describes a small set of programs, written at RGO, which deal with basic CCD frame processing (e.g. bias subtraction, flat fielding, trimming etc.). The need to process large numbers of CCD frames from devices such as FOS or ISIS in order to extract spectra has prompted the writing of routines which will do the basic hack-work with a minimal amount of interaction from the user. Although they were written with spectral data in mind, there are no ``spectrum-specific'' features in the software which means they can be applied to any CCD data.
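The basic reduction steps the note refers to (bias subtraction, flat fielding, trimming) can be sketched in a few lines of NumPy; the toy frames and trim region below are placeholders and are not part of the SCP package itself.

```python
# Minimal CCD reduction sketch: bias subtraction, flat fielding, trimming.
import numpy as np

def reduce_frame(raw, bias, flat, trim=(slice(10, -10), slice(10, -10))):
    """Return a bias-subtracted, flat-fielded, trimmed CCD frame."""
    debiased = raw.astype(float) - bias
    norm_flat = flat / np.median(flat)          # normalise the flat field
    corrected = debiased / norm_flat
    return corrected[trim]

# toy frames standing in for FOS/ISIS exposures
rng = np.random.default_rng(2)
bias = np.full((100, 100), 300.0)
flat = rng.normal(1.0, 0.02, (100, 100)) * 20000
raw  = bias + rng.poisson(500, (100, 100))

print(reduce_frame(raw, bias, flat).shape)      # (80, 80)
```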
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
Exploring the unconscious using faces.
Axelrod, Vadim; Bar, Moshe; Rees, Geraint
2015-01-01
Understanding the mechanisms of unconscious processing is one of the most substantial endeavors of cognitive science. While there are many different empirical ways to address this question, the use of faces in such research has proven exceptionally fruitful. We review here what has been learned about unconscious processing through the use of faces and face-selective neural correlates. A large number of cognitive systems can be explored with faces, including emotions, social cueing and evaluation, attention, multisensory integration, and various aspects of face processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
Time Processing in Dyscalculia
Cappelletti, Marinella; Freeman, Elliot D.; Butterworth, Brian L.
2011-01-01
To test whether atypical number development may affect other types of quantity processing, we investigated temporal discrimination in adults with developmental dyscalculia (DD). This also allowed us to test whether number and time may be sub-served by a common quantity system or decision mechanisms: if they do, both should be impaired in dyscalculia, but if number and time are distinct they should dissociate. Participants judged which of two successively presented horizontal lines was longer in duration, the first line being preceded by either a small or a large number prime (“1” or “9”) or by a neutral symbol (“#”), or in a third task participants decided which of two Arabic numbers (either “1,” “5,” “9”) lasted longer. Results showed that (i) DD’s temporal discriminability was normal as long as numbers were not part of the experimental design, even as task-irrelevant stimuli; however (ii) task-irrelevant numbers dramatically disrupted DD’s temporal discriminability the more their salience increased, though the actual magnitude of the numbers had no effect; in contrast (iii) controls’ time perception was robust to the presence of numbers but modulated by numerical quantity: therefore small number primes or numerical stimuli seemed to make durations appear shorter than veridical, but longer for larger numerical prime or numerical stimuli. This study is the first to show spared temporal discrimination – a dimension of continuous quantity – in a population with a congenital number impairment. Our data reinforce the idea of a partially shared quantity system across numerical and temporal dimensions, which supports both dissociations and interactions among dimensions; however, they suggest that impaired number in DD is unlikely to originate from systems initially dedicated to continuous quantity processing like time. PMID:22194731
Time processing in dyscalculia.
Cappelletti, Marinella; Freeman, Elliot D; Butterworth, Brian L
2011-01-01
To test whether atypical number development may affect other types of quantity processing, we investigated temporal discrimination in adults with developmental dyscalculia (DD). This also allowed us to test whether number and time may be sub-served by a common quantity system or decision mechanisms: if they do, both should be impaired in dyscalculia, but if number and time are distinct they should dissociate. Participants judged which of two successively presented horizontal lines was longer in duration, the first line being preceded by either a small or a large number prime ("1" or "9") or by a neutral symbol ("#"), or in a third task participants decided which of two Arabic numbers (either "1," "5," "9") lasted longer. Results showed that (i) DD's temporal discriminability was normal as long as numbers were not part of the experimental design, even as task-irrelevant stimuli; however (ii) task-irrelevant numbers dramatically disrupted DD's temporal discriminability the more their salience increased, though the actual magnitude of the numbers had no effect; in contrast (iii) controls' time perception was robust to the presence of numbers but modulated by numerical quantity: therefore small number primes or numerical stimuli seemed to make durations appear shorter than veridical, but longer for larger numerical prime or numerical stimuli. This study is the first to show spared temporal discrimination - a dimension of continuous quantity - in a population with a congenital number impairment. Our data reinforce the idea of a partially shared quantity system across numerical and temporal dimensions, which supports both dissociations and interactions among dimensions; however, they suggest that impaired number in DD is unlikely to originate from systems initially dedicated to continuous quantity processing like time.
Priority-setting and hospital strategic planning: a qualitative case study.
Martin, Douglas; Shulman, Ken; Santiago-Sorrell, Patricia; Singer, Peter
2003-10-01
To describe and evaluate the priority-setting element of a hospital's strategic planning process. Qualitative case study and evaluation against the conditions of 'accountability for reasonableness' of a strategic planning process at a large urban university-affiliated hospital. The hospital's strategic planning process met the conditions of 'accountability for reasonableness' in large part. Specifically: the hospital based its decisions on reasons (both information and criteria) that the participants felt were relevant to the hospital; the number and type of participants were very extensive; the process, decisions and reasons were well communicated throughout the organization, using multiple communication vehicles; and the process included an ethical framework linked to an effort to evaluate and improve the process. However, there were opportunities to improve the process, particularly by giving participants more time to absorb the information relevant to priority-setting decisions, more time to take difficult decisions and some means to appeal or revise decisions. A case study linked to an evaluation using 'accountability for reasonableness' can serve to improve priority-setting in the context of hospital strategic planning.
Market structure and competition in the healthcare industry : Results from a transition economy.
Lábaj, Martin; Silanič, Peter; Weiss, Christoph; Yontcheva, Biliana
2018-02-14
The present paper provides first empirical evidence on the relationship between market size and the number of firms in the healthcare industry for a transition economy. We estimate market-size thresholds required to support different numbers of suppliers (firms) for three occupations in the healthcare industry in a large number of distinct geographic markets in Slovakia, taking into account the spatial interaction between local markets. The empirical analysis is carried out for three time periods (1995, 2001 and 2010) which characterise different stages of the transition process. Our results suggest that the relationship between market size and the number of firms differs both across industries and across periods. In particular, we find that pharmacies, as the only completely liberalised market in our dataset, experience the largest change in competitive behaviour during the transition process. Furthermore, we find evidence for correlation in entry decisions across administrative borders, suggesting that future market analysis should aim to capture these regional effects.
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
An efficient parallel-processing method for transposing large matrices in place.
Portnoff, M R
1999-01-01
We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
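A simplified sketch of the square-matrix case follows: the transpose proceeds block by block so that each swap touches only two cache-sized tiles, and the per-block work is independent and could be distributed across processors. This is an illustration of the blocking idea only, not the published algorithm, which also handles rectangular and virtual-memory-sized matrices.

```python
# Cache-blocked in-place transpose of a square matrix (illustrative sketch).
import numpy as np

def transpose_in_place_blocked(a, block=64):
    n = a.shape[0]
    assert a.shape == (n, n), "square matrices only in this sketch"
    for i in range(0, n, block):
        for j in range(i, n, block):
            bi, bj = slice(i, min(i + block, n)), slice(j, min(j + block, n))
            if i == j:
                a[bi, bj] = a[bi, bj].T.copy()             # diagonal block
            else:
                upper, lower = a[bi, bj].copy(), a[bj, bi].copy()
                a[bi, bj], a[bj, bi] = lower.T, upper.T    # swap off-diagonal blocks
    return a

m = np.arange(1000 * 1000, dtype=np.float64).reshape(1000, 1000)
assert np.array_equal(transpose_in_place_blocked(m.copy()), m.T)
```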
Kuich, P. Henning J. L.; Hoffmann, Nils; Kempa, Stefan
2015-01-01
A current bottleneck in GC–MS metabolomics is the processing of raw machine data into a final datamatrix that contains the quantities of identified metabolites in each sample. While there are many bioinformatics tools available to aid the initial steps of the process, their use requires both significant technical expertise and a subsequent manual validation of identifications and alignments if high data quality is desired. The manual validation is tedious and time consuming, becoming prohibitively so as sample numbers increase. We have, therefore, developed Maui-VIA, a solution based on a visual interface that allows experts and non-experts to simultaneously and quickly process, inspect, and correct large numbers of GC–MS samples. It allows for the visual inspection of identifications and alignments, facilitating a unique and, due to its visualization and keyboard shortcuts, very fast interaction with the data. Therefore, Maui-Via fills an important niche by (1) providing functionality that optimizes the component of data processing that is currently most labor intensive to save time and (2) lowering the threshold of expertise required to process GC–MS data. Maui-VIA projects are initiated with baseline-corrected raw data, peaklists, and a database of metabolite spectra and retention indices used for identification. It provides functionality for retention index calculation, a targeted library search, the visual annotation, alignment, correction interface, and metabolite quantification, as well as the export of the final datamatrix. The high quality of data produced by Maui-VIA is illustrated by its comparison to data attained manually by an expert using vendor software on a previously published dataset concerning the response of Chlamydomonas reinhardtii to salt stress. In conclusion, Maui-VIA provides the opportunity for fast, confident, and high-quality data processing validation of large numbers of GC–MS samples by non-experts. PMID:25654076
From drop impact physics to spray cooling models: a critical review
NASA Astrophysics Data System (ADS)
Breitenbach, Jan; Roisman, Ilia V.; Tropea, Cameron
2018-03-01
Spray-wall interaction is an important process encountered in a large number of existing and emerging technologies and is the underlying phenomenon associated with spray cooling. Spray cooling is a very efficient technology, surpassing all other conventional cooling methods, especially those not involving phase change and not exploiting the latent heat of vaporization. However, the effectiveness of spray cooling is dependent on a large number of parameters, including spray characteristics like drop size, velocity and number density, the surface morphology, but also on the temperature range and thermal properties of the materials involved. Indeed, the temperature of the substrate can have significant influence on the hydrodynamics of drop and spray impact, an aspect which is seldom considered in model formulation. This process is extremely complex, thus most design rules to date are highly empirical in nature. On the other hand, significant theoretical progress has been made in recent years about the interaction of single drops with heated walls and improvements to the fundamentals of spray cooling can now be anticipated. The present review has the objective of summarizing some of these recent advances and to establish a framework for future development of more reliable and universal physics-based correlations to describe quantities involved in spray cooling.
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulations in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software will help 3D tumor spheroids become a routine in vitro model for drug screens in industry and academia.
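The measurement itself (major and minor axial lengths plus a spheroid-style volume estimate) can be sketched with simple thresholding and region properties, as below; SpheroidSizer's actual pipeline is built on an active-contour (Snakes) segmentation, which this sketch does not reproduce, and the volume formula is a generic prolate-spheroid estimate rather than the software's exact calculation.

```python
# Simplified spheroid measurement: threshold, take the largest region, report axes.
import numpy as np
from skimage import filters, measure

def spheroid_size(gray_image, microns_per_pixel=1.0):
    mask = gray_image > filters.threshold_otsu(gray_image)
    labels = measure.label(mask)
    region = max(measure.regionprops(labels), key=lambda r: r.area)
    a = region.major_axis_length * microns_per_pixel / 2.0    # semi-major axis
    b = region.minor_axis_length * microns_per_pixel / 2.0    # semi-minor axis
    volume = (4.0 / 3.0) * np.pi * a * b * b                  # prolate-spheroid estimate
    return 2 * a, 2 * b, volume

# a synthetic bright ellipse standing in for an imaged spheroid
yy, xx = np.mgrid[:200, :200]
img = ((xx - 100) ** 2 / 60 ** 2 + (yy - 100) ** 2 / 40 ** 2 < 1).astype(float)
print(spheroid_size(img, microns_per_pixel=2.5))
```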
Toyota, M; Canzian, F; Ushijima, T; Hosoya, Y; Kuramoto, T; Serikawa, T; Imai, K; Sugimura, T; Nagao, M
1996-01-01
Representational difference analysis (RDA) was applied to isolate chromosomal markers in the rat. Four series of RDA [restriction enzymes, BamHI and HindIII; subtraction of ACI/N (ACI) amplicon from BUF/Nac (BUF) amplicon and vice versa] yielded 131 polymorphic markers; 125 of these markers were mapped to all chromosomes except for chromosome X. This was done by using a mapping panel of 105 ACI x BUF F2 rats. To complement the relative paucity of chromosomal markers in the rat, genetically directed RDA, which allows isolation of polymorphic markers in the specific chromosomal region, was performed. By changing the F2 driver-DNA allele frequency around the region, four markers were isolated from the D1Ncc1 locus. Twenty-five of 27 RDA markers were informative regarding the dot blot analysis of amplicons, hybridizing only with tester amplicons. Dot blot analysis at a high density per unit of area made it possible to process a large number of samples. Quantitative trait loci can now be mapped in the rat genome by processing a large number of samples with RDA markers and then by isolating markers close to the loci of interest by genetically directed RDA. PMID:8632989
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method, and it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized, although, with consequence, such consideration is often an afterthought that occurs during the data analysis process.
A finite-volume ELLAM for three-dimensional solute-transport modeling
Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.
2003-01-01
A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.
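For reference, the two dimensionless numbers invoked above are the grid Courant number (advective displacement per time step, in cells) and the grid Peclet number (advection relative to dispersion per cell); the short sketch below computes both for arbitrary illustrative parameter values, which are not taken from the MODFLOW test cases.

```python
# Grid Courant and Peclet numbers for an advection-dispersion transport problem.
def courant_number(velocity, dt, dx):
    return velocity * dt / dx

def peclet_number(velocity, dx, dispersion_coeff):
    return velocity * dx / dispersion_coeff

v, dx, dt, D = 1.0, 10.0, 50.0, 0.5     # m/d, m, d, m^2/d (illustrative values)
print(f"Courant number Cr = {courant_number(v, dt, dx):.1f}")   # > 1: large time steps
print(f"Peclet number  Pe = {peclet_number(v, dx, D):.1f}")     # >> 2: advection-dominated
```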
Algorithms and programming tools for image processing on the MPP, part 2
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1986-01-01
A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
Position measurement of the direct drive motor of Large Aperture Telescope
NASA Astrophysics Data System (ADS)
Li, Ying; Wang, Daxing
2010-07-01
Along with the development of space science and astronomy, the production of large aperture and very large aperture telescopes will certainly become the trend. Direct-drive technology, with a unified design of the electromagnetic and mechanical structure, is one way to achieve the precise drive of a large aperture telescope. A precise direct-drive rotary table with a diameter of 2.5 meters, researched and produced by us, is a typical mechanical and electrical integration design. This paper mainly introduces the position measurement control system of the direct drive motor. In the design of this motor, the position measurement control system must have high resolution, precisely align with and measure the position of the rotor shaft, and convert the position information into commutation information corresponding to the required number of motor poles. The system uses a high-precision metal band encoder and an absolute encoder; their outputs are processed in software on a 32-bit RISC CPU to obtain a high-resolution composite encoder. Relevant laboratory test results are given at the end, indicating that the position measurement is applicable to large aperture telescope control systems. This project is subsidized by the Chinese National Natural Science Funds (10833004).
An actual load forecasting methodology by interval grey modeling based on the fractional calculus.
Yang, Yang; Xue, Dingyü
2017-07-17
The operation processes of a thermal power plant are measured as real-time data, and a large amount of historical interval data can be obtained from the dataset. Within defined periods of time, the interval information can provide important input for decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, with interval grey numbers, the modeling and prediction process is more complicated than with real numbers. In order not to lose any information, this paper uses geometric coordinate features, namely the coordinates of the area and middle-point lines, which are proved to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation is proposed. Compared with integer-order models, the proposed method has more degrees of freedom and better modeling and prediction performance, and it can be widely used for modeling and prediction from small samples of historical interval sequences in industry. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
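The fractional-order accumulation that such grey models build on can be written, in one common formulation, as x^(r)(k) = sum over i of C(k-i+r-1, k-i) x(i); the sketch below implements it and checks that r = 1 recovers the ordinary cumulative sum used by the classical GM(1,1) model. It is illustrative only and does not reproduce the paper's interval-valued model.

```python
# Fractional-order accumulated generating operation (r-AGO), common formulation.
from math import gamma

def fractional_accumulate(x, r):
    """r-order accumulation of a sequence x."""
    def coeff(k, i):                      # generalized binomial C(k-i+r-1, k-i)
        return gamma(r + k - i) / (gamma(k - i + 1) * gamma(r))
    return [sum(coeff(k, i) * x[i] for i in range(k + 1)) for k in range(len(x))]

x = [2.0, 3.0, 5.0, 4.0, 6.0]
print(fractional_accumulate(x, 1.0))      # [2, 5, 10, 14, 20]: plain cumulative sum
print(fractional_accumulate(x, 0.5))      # a "softer" accumulation between r=0 and r=1
```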
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
NASA Astrophysics Data System (ADS)
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, the number of customers waiting for service at a certain time, and so on. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data using a Non-Negative Integer-valued Autoregressive (INAR) process. The modeling of count data is based on the binomial thinning operator. In this paper we illustrate the modeling of count data using the monthly number of poliomyelitis cases in the United States from January 1970 to December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
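The binomial thinning operator mentioned above defines the INAR(1) recursion X_t = alpha ∘ X_{t-1} + e_t, where alpha ∘ X keeps each of the previous X counts independently with probability alpha and e_t is a count-valued innovation; a minimal simulation sketch follows, with Poisson innovations and parameter values chosen for illustration rather than estimated from the polio series.

```python
# Simulate an INAR(1) process via binomial thinning with Poisson innovations.
import numpy as np

def simulate_inar1(n, alpha=0.5, lam=1.0, x0=2, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning of the last count
        x[t] = survivors + rng.poisson(lam)         # plus new arrivals
    return x

series = simulate_inar1(168, alpha=0.4, lam=0.8)    # 14 years of monthly counts
print(series[:24])
print("mean =", series.mean())                      # theory: lam / (1 - alpha) = 1.33
```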
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
Intentional Voice Command Detection for Trigger-Free Speech Interface
NASA Astrophysics Data System (ADS)
Obuchi, Yasunari; Sumiyoshi, Takashi
In this paper we introduce a new framework of audio processing, which is essential to achieve a trigger-free speech interface for home appliances. If the speech interface works continually in real environments, it must extract occasional voice commands and reject everything else. It is extremely important to reduce the number of false alarms because the number of irrelevant inputs is much larger than the number of voice commands even for heavy users of appliances. The framework, called Intentional Voice Command Detection, is based on voice activity detection, but enhanced by various speech/audio processing techniques such as emotion recognition. The effectiveness of the proposed framework is evaluated using a newly-collected large-scale corpus. The advantages of combining various features were tested and confirmed, and the simple LDA-based classifier demonstrated acceptable performance. The effectiveness of various methods of user adaptation is also discussed.
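As a rough sketch of the simple LDA-based classifier stage, the example below trains a linear discriminant on synthetic feature vectors standing in for the combined prosodic and spectral features; the feature dimensionality and data are invented for illustration and bear no relation to the corpus described in the paper.

```python
# Linear discriminant classifier separating "intentional command" from "other audio".
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n, d = 2000, 8                                   # utterances x acoustic features (synthetic)
commands = rng.normal(1.0, 1.0, (n // 2, d))     # intentional voice commands
other    = rng.normal(0.0, 1.0, (n // 2, d))     # TV, chatter, appliance noise
X = np.vstack([commands, other])
y = np.array([1] * (n // 2) + [0] * (n // 2))

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])      # train on half the data
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```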
Inductive reasoning and implicit memory: evidence from intact and impaired memory systems.
Girelli, Luisa; Semenza, Carlo; Delazer, Margarete
2004-01-01
In this study, we modified a classic problem solving task, number series completion, in order to explore the contribution of implicit memory to inductive reasoning. Participants were required to complete number series sharing the same underlying algorithm (e.g., +2), differing in both constituent elements (e.g., 2468 versus 57911) and correct answers (e.g., 10 versus 13). In Experiment 1, reliable priming effects emerged, whether primes and targets were separated by four or ten fillers. Experiment 2 provided direct evidence that the observed facilitation arises at central stages of problem solving, namely the identification of the algorithm and its subsequent extrapolation. The observation of analogous priming effects in a severely amnesic patient strongly supports the hypothesis that the facilitation in number series completion was largely determined by implicit memory processes. These findings demonstrate that the influence of implicit processes extends to higher-level cognitive domains such as inductive reasoning.
Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.
Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H
2011-06-06
We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.
Fracture Tests of Etched Components Using a Focused Ion Beam Machine
NASA Technical Reports Server (NTRS)
Kuhn, Jonathan, L.; Fettig, Rainer K.; Moseley, S. Harvey; Kutyrev, Alexander S.; Orloff, Jon; Powers, Edward I. (Technical Monitor)
2000-01-01
Many optical MEMS device designs involve large arrays of thin (0.5 to 1 micron) components subjected to high stresses due to cyclic loading. These devices are fabricated from a variety of materials, and the properties strongly depend on size and processing. Our objective is to develop standard and convenient test methods that can be used to measure the properties of large numbers of witness samples, for every device we build. In this work we explore a variety of fracture test configurations for 0.5 micron thick silicon nitride membranes machined using the Reactive Ion Etching (RIE) process. Testing was completed using an FEI 620 dual focused ion beam milling machine. Static loads were applied using a probe, and dynamic loads were applied through a piezo-electric stack mounted at the base of the probe. Results from the tests are presented and compared, and applications for predicting the fracture probability of large arrays of devices are considered.
2018-01-01
Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
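For readers unfamiliar with LSH, here is a minimal single-machine sketch of the random-hyperplane ("vanilla") variant for cosine similarity in Python; it is not the distributed Hadoop implementation or the optimized variants evaluated in the paper.

```python
import numpy as np
from collections import defaultdict

class CosineLSH:
    """Minimal random-hyperplane LSH: items whose bit signatures collide in any
    table are candidate near neighbours under cosine similarity."""
    def __init__(self, dim, n_bits=32, n_tables=4, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, vec):
        # one signature per table: the sign pattern of projections onto random planes
        return [tuple((planes @ vec > 0).astype(int)) for planes in self.planes]

    def index(self, item_id, vec):
        for table, key in zip(self.tables, self._keys(vec)):
            table[key].append(item_id)

    def query(self, vec):
        candidates = set()
        for table, key in zip(self.tables, self._keys(vec)):
            candidates.update(table[key])
        return candidates

db = CosineLSH(dim=64)
vectors = np.random.default_rng(1).standard_normal((1000, 64))
for i, v in enumerate(vectors):
    db.index(i, v)
print(len(db.query(vectors[0])))   # candidate set containing item 0 and its near neighbours
```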
Mass movements and tree rings: A guide to dendrogeomorphic field sampling and dating
NASA Astrophysics Data System (ADS)
Stoffel, Markus; Butler, David R.; Corona, Christophe
2013-10-01
Trees affected by mass movements record the evidence of geomorphic disturbance in the growth-ring series, and thereby provide a precise geochronological tool for the reconstruction of past activity of mass movement. The identification of past activity of processes was typically based on the presence of growth anomalies in affected trees and focused on the presence of scars, tilted or buried trunks, as well as on apex decapitation. For the analyses and interpretation of disturbances in tree-ring records, in contrast, clear guidelines have not been established, with largely differing or no thresholds used to distinguish signal from noise. At the same time, processes with a large spatial footprint (e.g., snow avalanches, landslides, or floods) will likely leave growth anomalies in a large number of trees, whereas a falling rock would only cause scars in one or a few trees along its trajectory.
A parallel-pipelined architecture for a multi carrier demodulator
NASA Astrophysics Data System (ADS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-03-01
Analog devices have been used for processing the information on board the satellites. Presently, digital devices are being used because they are economical and flexible as compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high speed components. The design is optimized for reduced power and area requirements. This is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms that are used are verified by the use of a computer aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms mainly from a speed viewpoint. A hypercube based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
Segev, Danny; Levi, Retsef; Dunn, Peter F; Sandberg, Warren S
2012-06-01
Transportation of patients is a key hospital operational activity. During a large construction project, our patient admission and prep area will relocate from immediately adjacent to the operating room suite to another floor of a different building. Transportation will require extra distance and elevator trips to deliver patients and recycle transporters (specifically: personnel who transport patients). Management intuition suggested that starting all 52 first cases simultaneously would require many of the 18 available elevators. To test this, we developed a data-driven simulation tool to allow decision makers to simultaneously address planning and evaluation questions about patient transportation. We coded a stochastic simulation tool for a generalized model treating all factors contributing to the process as JAVA objects. The model includes elevator steps, explicitly accounting for transporter speed and distance to be covered. We used the model for sensitivity analyses of the number of dedicated elevators, dedicated transporters, transporter speed and the planned process start time on lateness of OR starts and the number of cases with serious delays (i.e., more than 15 min). Allocating two of the 18 elevators and 7 transporters reduced lateness and the number of cases with serious delays. Additional elevators and/or transporters yielded little additional benefit. If the admission process produced ready-for-transport patients 20 min earlier, almost all delays would be eliminated. Modeling results contradicted clinical managers' intuition that starting all first cases on time requires many dedicated elevators. This is explained by the principle of decreasing marginal returns for increasing capacity when there are other limiting constraints in the system.
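The authors' tool was written in JAVA and is not reproduced here; the following heavily simplified Python sketch shows how a capacity model of transporters and elevators can be simulated to count late first-case starts. All travel times, resource counts other than those quoted in the abstract, and the scheduled-start convention are illustrative assumptions.

```python
import heapq, random

def simulate_first_case_starts(n_patients=52, n_transporters=7, n_elevators=2,
                               walk=3.0, elevator_trip=3.5, ready_lead=0.0, seed=0):
    """Toy capacity model (not the authors' JAVA tool): every first-case patient
    needs a free transporter plus an elevator trip to reach the OR; all times
    are in minutes and the scheduled start is time 0."""
    random.seed(seed)
    transporters = [-ready_lead] * n_transporters   # times at which each transporter is free
    elevators = [-ready_lead] * n_elevators
    heapq.heapify(transporters)
    heapq.heapify(elevators)
    lateness = []
    for _ in range(n_patients):
        t = heapq.heappop(transporters)                # wait for the next free transporter
        t += random.uniform(0.8, 1.2) * walk           # fetch and prep the patient
        e = max(heapq.heappop(elevators), t)           # wait for an elevator
        e += random.uniform(0.8, 1.2) * elevator_trip  # ride to the OR floor
        heapq.heappush(elevators, e)
        heapq.heappush(transporters, e + walk)         # transporter recycles back
        lateness.append(max(0.0, e))
    return lateness

delays = simulate_first_case_starts()
print(sum(1 for d in delays if d > 15), "first cases more than 15 min late")
```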
Determining the Optimal Number of Clusters with the Clustergram
NASA Technical Reports Server (NTRS)
Fluegemann, Joseph K.; Davies, Misty D.; Aguirre, Nathan D.
2011-01-01
Cluster analysis aids research in many different fields, from business to biology to aerospace. It consists of using statistical techniques to group objects in large sets of data into meaningful classes. However, this process of ordering data points presents much uncertainty because it involves several steps, many of which are subject to researcher judgment as well as inconsistencies depending on the specific data type and research goals. These steps include the method used to cluster the data, the variables on which the cluster analysis will operate, the number of resulting clusters, and parts of the interpretation process. In most cases, the number of clusters must be guessed or estimated before employing the clustering method. Many remedies have been proposed, but none is unassailable, and certainly not for all data types. Thus, current research into better techniques for determining the number of clusters is generally confined to demonstrating that the new technique outperforms other methods for several disparate data types. Our research makes use of a new cluster-number-determination technique based on the clustergram: a graph that shows how the number of objects in the cluster and the cluster mean (the ordinate) change with the number of clusters (the abscissa). We use the features of the clustergram to make the best determination of the cluster number.
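A clustergram of this kind can be computed with a few lines of code; the sketch below assumes k-means clustering and summarizes each cluster by the plain mean of all features (published clustergrams often use a PCA-weighted mean instead), gathering the cluster means for each candidate number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustergram_data(X, max_k=8, seed=0):
    """Compute the data behind a clustergram: for each candidate k, record one
    summary value per cluster; plotting these against k shows how observations
    split and merge as the number of clusters grows."""
    means = {}
    for k in range(1, max_k + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        # simple summary: the mean over all features of each cluster's members
        means[k] = [X[labels == c].mean() for c in range(k)]
    return means

X = np.random.default_rng(0).normal(size=(200, 4))
for k, m in clustergram_data(X, max_k=5).items():
    print(k, np.round(m, 2))
```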
Development of visual 3D virtual environment for control software
NASA Technical Reports Server (NTRS)
Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence
1991-01-01
Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse the block diagram (useful for checking the relationships among a large number of processes or processors) and the time chart (useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives a capability for direct and intuitive planning and understanding of the complicated relationships among many concurrent processes. To realize the 3D representation, a technology that enables easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.
Measuring collective transport by defined numbers of processive and nonprocessive kinesin motors.
Furuta, Ken'ya; Furuta, Akane; Toyoshima, Yoko Y; Amino, Misako; Oiwa, Kazuhiro; Kojima, Hiroaki
2013-01-08
Intracellular transport is thought to be achieved by teams of motor proteins bound to a cargo. However, the coordination within a team remains poorly understood as a result of the experimental difficulty in controlling the number and composition of motors. Here, we developed an experimental system that links together defined numbers of motors with defined spacing on a DNA scaffold. By using this system, we linked multiple molecules of two different types of kinesin motors, processive kinesin-1 or nonprocessive Ncd (kinesin-14), in vitro. Both types of kinesins markedly increased their processivities with motor number. Remarkably, despite the poor processivity of individual Ncd motors, the coupling of two Ncd motors enables processive movement for more than 1 μm along microtubules (MTs). This improvement was further enhanced with decreasing spacing between motors. Force measurements revealed that the force generated by groups of Ncd is additive when two to four Ncd motors work together, which is much larger than that generated by single motors. By contrast, the force of multiple kinesin-1s depends only weakly on motor number. Numerical simulations and single-molecule unbinding measurements suggest that this additive nature of the force exerted by Ncd relies on fast MT binding kinetics and the large drag force of individual Ncd motors. These features would enable small groups of Ncd motors to crosslink MTs while rapidly modulating their force by forming clusters. Thus, our experimental system may provide a platform to study the collective behavior of motor proteins from the bottom up.
Research on characteristics of radiated noise of large cargo ship in shallow water
NASA Astrophysics Data System (ADS)
Liu, Yongdong; Zhang, Liang
2017-01-01
With the rapid development of the shipping industry, the number of ships worldwide is gradually increasing, and the characteristics of their radiated noise are of growing concern. Because of multichannel interference, surface waves, the sea temperature microstructure, and other factors, the received sound signal varies in the time-frequency domain. The radiated noise of the large cargo ship JOCHOH, recorded by a horizontal hydrophone array in a shallow-water area of China in the summer of 2015, is processed and analyzed. The results show that the ship has a number of noise sources distributed along its bow-stern line, such as the main engine, auxiliary machinery, and propellers. The sound waves radiated by these sources do not follow the spherical-wave law at lower frequencies in the ocean, and the radiated noise has an inherent spatial distribution. The variation characteristics of the radiated noise of the large cargo ship in the time and frequency domains are given, and the research method and results are of practical importance.
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from DNA microarrays could help researchers identify those genes which commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since in a DNA microarray the number of genes is far larger than the number of conditions, previously proposed algorithms which compute the maximum dimension sets (MDSs) for any two genes take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for any two genes, we construct only MDSs for any two conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in those subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets which have reasonably large support values in the condition-pair MDSs. From our simulation results, we show that the proposed algorithm needs shorter processing time than the previously proposed algorithms which need to construct gene-pair MDSs.
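The LISC algorithm itself is not reproduced here; as a hint of the "large itemset" step, the following plain Apriori-style sketch mines gene sets shared by at least a minimum number of condition-pair MDSs. The toy MDS data are hypothetical.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Plain Apriori-style large-itemset mining: each 'transaction' is the gene
    set of one condition-pair MDS, and a frequent itemset is a group of genes
    shared by at least `min_support` condition pairs."""
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = [frozenset([i]) for i in items]
    result, k = {}, 1
    while current:
        # count the support of each candidate itemset
        counts = {c: sum(c <= t for t in transactions) for c in current}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # generate (k+1)-item candidates from the frequent k-itemsets
        keys = list(frequent)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1})
        k += 1
    return result

mds = [{"g1", "g2", "g3"}, {"g1", "g2", "g4"}, {"g1", "g2", "g3", "g5"}]
print(frequent_itemsets(mds, min_support=2))
```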
Identification of forgeries in handwritten petitions for ballot propositions
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Ramakrishnan, Veshnu; Malgireddy, Manavender; Ball, Gregory R.
2009-01-01
Many governments have some form of "direct democracy" legislation procedure whereby individual citizens can propose various measures creating or altering laws. Generally, such a process is started with the gathering of a large number of signatures. There is interest in whether or not there are fraudulent signatures present in such a petition, and if so what percentage of the signatures are indeed fraudulent. However, due to the large number of signatures (tens of thousands), it is not feasible to have a document examiner verify the signatures directly. Instead, there is interest in creating a subset of signatures where there is a high probability of fraud that can be verified. We present a method by which a pairwise comparison of signatures can be performed and subsequent sorting can generate such subsets.
Diagnosis and management of carotid stenosis: a review.
Nussbaum, E S
2000-01-01
Since its introduction in the 1950s, carotid endarterectomy has become one of the most frequently performed operations in the United States. The tremendous appeal of a procedure that decreases the risk of stroke, coupled with the large number of individuals in the general population with carotid stenosis, has contributed to its popularity. To provide optimal patient care, the practicing physician must have a firm understanding of the proper evaluation and management of carotid stenosis. Nevertheless, because of the large number of clinical trials performed over the last decade addressing the treatment of stroke and carotid endarterectomy, the care of patients with carotid stenosis remains a frequently misunderstood topic. This review summarizes the current evaluation and treatment options for carotid stenosis and provides a rational management algorithm for this prevalent disease process.
Olk, Bettina; Tsankova, Elena; Petca, A Raisa; Wilhelm, Adalbert F X
2014-10-01
The Posner cueing paradigm is one of the most widely used paradigms in attention research. Importantly, when employing it, it is critical to understand which type of orienting a cue triggers. It has been suggested that large effects elicited by predictive arrow cues reflect an interaction of involuntary and voluntary orienting. This conclusion is based on comparisons of cueing effects of predictive arrows, nonpredictive arrows (involuntary orienting), and predictive numbers (voluntary orienting). Experiment 1 investigated whether this conclusion is restricted to comparisons with number cues and showed similar results to those of previous studies, but now for comparisons to predictive colour cues, indicating that the earlier conclusion can be generalized. Experiment 2 assessed whether the size of a cueing effect is related to the ease of deriving direction information from a cue, based on the rationale that effects for arrows may be larger, because it may be easier to process direction information given by symbols such as arrows than that given by other cues. Indeed, direction information is derived faster and more accurately from arrows than from colour and number cues in a direction judgement task, and cueing effects are larger for arrows than for the other cues. Importantly though, performance in the two tasks is not correlated. Hence, the large cueing effects of arrows are not a result of the ease of information processing, but of the types of orienting that the arrows elicit.
Cui, De-Mi; Yan, Weizhong; Wang, Xiao-Quan; Lu, Lie-Min
2017-10-25
Low strain pile integrity testing (LSPIT), due to its simplicity and low cost, is one of the most popular NDE methods used in pile foundation construction. While performing LSPIT in the field is generally quite simple and quick, determining the integrity of the test piles by analyzing and interpreting the test signals (reflectograms) is still a manual process performed by experienced experts only. For foundation construction sites where the number of piles to be tested is large, it may take days before the expert can complete interpreting all of the piles and delivering the integrity assessment report. Techniques that can automate test signal interpretation, thus shortening the LSPIT's turnaround time, are of great business value and are in great need. Motivated by this need, in this paper, we develop a computer-aided reflectogram interpretation (CARI) methodology that can interpret a large number of LSPIT signals quickly and consistently. The methodology, built on advanced signal processing and machine learning technologies, can be used to assist the experts in performing both qualitative and quantitative interpretation of LSPIT signals. Specifically, the methodology can ease experts' interpretation burden by screening all test piles quickly and identifying a small number of suspected piles for experts to perform manual, in-depth interpretation. We demonstrate the methodology's effectiveness using the LSPIT signals collected from a number of real-world pile construction sites. The proposed methodology can potentially enhance LSPIT and make it even more efficient and effective in quality control of deep foundation construction.
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.
2015-12-01
We report on Molecular Dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the need to input/output (I/O) atomic position data amounting to several terabytes when exploring a large number of random directions, and there is no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.
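Independently of LAMMPS, the convergence criterion described above (a steady running average over random PKA directions) can be checked with a short script such as the sketch below; the Poisson-distributed placeholder values stand in for actual MD cascade counts.

```python
import numpy as np

def running_mean_convergence(samples, tol=0.01):
    """Return the number of random PKA directions after which the running
    average of a cascade observable (e.g. number of Frenkel pairs) stays within
    `tol` (relative) of its final value."""
    means = np.cumsum(samples) / np.arange(1, len(samples) + 1)
    final = means[-1]
    settled = np.abs(means - final) <= tol * abs(final)
    for i in range(len(samples)):
        if settled[i:].all():          # running mean never leaves the band again
            return i + 1
    return len(samples)

rng = np.random.default_rng(1)
fake_frenkel_pairs = rng.poisson(40, size=100)   # placeholder data, not MD output
print(running_mean_convergence(fake_frenkel_pairs))
```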
The Long Duration Exposure Facility (LDEF). Mission 1 Experiments.
ERIC Educational Resources Information Center
Clark, Lenwood G., Ed.; And Others
The Long Duration Exposure Facility (LDEF) has been designed to take advantage of the two-way transportation capability of the space shuttle by providing a large number of economical opportunities for science and technology experiments that require modest electrical power and data processing while in space and which benefit from postflight…
ERIC Educational Resources Information Center
Jung, Jae Yup; McCormick, John
2010-01-01
This exploratory study investigated the occupational decision-related processes of senior high school students, in terms of the extent to which they may be amotivated in choosing a future occupation. Data were gathered using a newly developed questionnaire, which was largely adapted from a number of psychometrically proven instruments, and…
USDA-ARS?s Scientific Manuscript database
With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...
Mercury is the toxicant responsible for the largest number of fish advisories across the United States, with 1.25 million miles of rivers under advisory. The processes governing fate, transport, and transformation of mercury in lotic ecosystems are not well-understood, in large p...
ERIC Educational Resources Information Center
Boydell, T. H.
There is considerable evidence that a large number of recently appointed training specialists would welcome a straightforward account of job analysis. It is in the hope of providing such an account and of providing practical guidance that this booklet has been written. Major sections of this guide include: (1) Job Analysis--A Process, (2)…
Helping Young Children Understand Graphs: A Demonstration Study.
ERIC Educational Resources Information Center
Freeland, Kent; Madden, Wendy
1990-01-01
Outlines a demonstration lesson showing third graders how to make and interpret graphs. Includes descriptions of purpose, vocabulary, and learning activities in which students graph numbers of students with dogs at home and analyze the contents of M&M candy packages by color. Argues process helps students understand large amounts of abstract…
Semi-Automatic Grading of Students' Answers Written in Free Text
ERIC Educational Resources Information Center
Escudeiro, Nuno; Escudeiro, Paula; Cruz, Augusto
2011-01-01
The correct grading of free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). In consequence of these fluctuations, inherent to human nature, and largely determined by emotional…
Course Recommendation Based on Query Classification Approach
ERIC Educational Resources Information Center
Gulzar, Zameer; Leema, A. Anny
2018-01-01
This article describes how with a non-formal education, a scholar has to choose courses among various domains to meet the research aims. In spite of this, the availability of large number of courses, makes the process of selecting the appropriate course a tedious, time-consuming, and risky decision, and the course selection will directly affect…
ERIC Educational Resources Information Center
Cheek, Kim A.
2017-01-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude…
USDA-ARS?s Scientific Manuscript database
The low cost of next generation sequencing (NGS) technology and the availability of a large number of well annotated plant genomes has made sequencing technology useful to breeding programs. With the published high quality tomato reference genome of the processing cultivar Heinz 1706, we can now uti...
Hypervariable minisatellites: recombinators or innocent bystanders?
Jarman, A P; Wells, R A
1989-11-01
It has become apparent in recent years that unexpectedly large numbers of minisatellites exist within the eukaryotic genome. Their use in genetics is well known, but as with any new class of sequence, there is also much speculation about their involvement in a range of biological processes. How much is known of their biology?
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Rethinking and Restructuring an Assessment System via Effective Deployment of Technology
ERIC Educational Resources Information Center
Okonkwo, Charity
2010-01-01
Every instructional process involves a strategic assessment system for a complete teaching-learning circle. Assessment system which is seriously challenged calls for a change in the approach. The National Open University of Nigeria (NOUN) assessment system at present is challenged. The large number of students and numerous courses offered by NOUN…
Lesion Analysis of the Brain Areas Involved in Language Comprehension
ERIC Educational Resources Information Center
Dronkers, Nina F.; Wilkins, David P.; Van Valin, Robert D., Jr.; Redfern, Brenda B.; Jaeger, Jeri J.
2004-01-01
The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which…
ASPEN--A Web-Based Application for Managing Student Server Accounts
ERIC Educational Resources Information Center
Sandvig, J. Christopher
2004-01-01
The growth of the Internet has greatly increased the demand for server-side programming courses at colleges and universities. Students enrolled in such courses must be provided with server-based accounts that support the technologies that they are learning. The process of creating, managing and removing large numbers of student server accounts is…
Time-division multiplexer uses digital gates
NASA Technical Reports Server (NTRS)
Myers, C. E.; Vreeland, A. E.
1977-01-01
Device eliminates errors caused by analog gates in multiplexing a large number of channels at high frequency. System was designed for use in aerospace work to multiplex signals for monitoring such variables as fuel consumption, pressure, temperature, strain, and stress. Circuit may be useful in monitoring variables in process control and medicine as well.
Affective Experiences of International and Home Students during the Information Search Process
ERIC Educational Resources Information Center
Haley, Adele Nicole; Clough, Paul
2017-01-01
An increasing number of students are studying abroad requiring that they interact with information in languages other than their mother tongue. The UK in particular has seen a large growth in international students within Higher Education. These nonnative English speaking students present a distinct user group for university information services,…
One dimensional Linescan x-ray detection of pits in fresh cherries
USDA-ARS?s Scientific Manuscript database
The presence of pits in processed cherries is a concern for both processors and consumers, in many cases causing injury and potential lawsuits. While machines used for pitting cherries are extremely efficient, if one or more plungers in a pitting head become misaligned, a large number of pits may p...
Striatal Degeneration Impairs Language Learning: Evidence from Huntington's Disease
ERIC Educational Resources Information Center
De Diego-Balaguer, R.; Couette, M.; Dolbeau, G.; Durr, A.; Youssov, K.; Bachoud-Levi, A.-C.
2008-01-01
Although the role of the striatum in language processing is still largely unclear, a number of recent proposals have outlined its specific contribution. Different studies report evidence converging to a picture where the striatum may be involved in those aspects of rule-application requiring non-automatized behaviour. This is the main…
Mercury (Hg) is the toxicant responsible for the largest number of fish advisories across the United States, with 1.25 million river miles under advisory. The processes governing fate, transport, and transformation of Hg in lotic ecosystems are not well-understood, in large part...
Mobile Learning as Alternative to Assistive Technology Devices for Special Needs Students
ERIC Educational Resources Information Center
Ismaili, Jalal; Ibrahimi, El Houcine Ouazzani
2017-01-01
Assistive Technology (AT) revolutionized the process of learning for special needs students during the past three decades. Thanks to this technology, accessibility and educational inclusion became attainable more than any time in the history of special education. Meanwhile, assistive technology devices remain unreachable for a large number of…
Judicious Discipline: A Constitutional Approach for Public High Schools.
ERIC Educational Resources Information Center
Grandmont, Richard P.
2003-01-01
Examines the practices in a large public high school where constitutional language and democratic citizenship education--judicious discipline--are introduced into the decision-making processes of the classroom. Data analysis suggests that a considerable number of students felt they possessed a high level of respect and responsibility as a result.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis; /Fermilab; Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Evolutionary design optimization of traffic signals applied to Quito city.
Armas, Rolando; Aguirre, Hernán; Daolio, Fabio; Tanaka, Kiyoshi
2017-01-01
This work applies evolutionary computation and machine learning methods to study the transportation system of Quito from a design optimization perspective. It couples an evolutionary algorithm with a microscopic transport simulator and uses the outcome of the optimization process to deepen our understanding of the problem and gain knowledge about the system. The work focuses on the optimization of a large number of traffic lights deployed on a wide area of the city and studies their impact on travel time, emissions and fuel consumption. An evolutionary algorithm with specialized mutation operators is proposed to search effectively in large decision spaces, evolving small populations for a small number of generations. The effects of the operators combined with a varying mutation schedule are studied, and an analysis of the parameters of the algorithm is also included. In addition, hierarchical clustering is performed on the best solutions found in several runs of the algorithm. An analysis of signal clusters and their geolocation, estimation of fuel consumption, spatial analysis of emissions, and an analysis of signal coordination provide an overall picture of the systemic effects of the optimization process.
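The specialized mutation operators are specific to the paper and are not reproduced here; the sketch below only illustrates the overall setting of a mutation-only evolutionary loop with a small population and few generations, applied to a quadratic stand-in objective instead of the microscopic transport simulator.

```python
import random

def evolve(evaluate, init, mutate, pop_size=8, generations=30, seed=0):
    """Tiny mutation-only evolutionary loop with truncation survival, evolving a
    small population for a small number of generations; the operators are
    generic placeholders, not the specialised signal-timing mutations."""
    random.seed(seed)
    population = [init() for _ in range(pop_size)]
    scores = [evaluate(x) for x in population]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        scores += [evaluate(x) for x in offspring]
        population += offspring
        order = sorted(range(len(population)), key=lambda i: scores[i])[:pop_size]
        population = [population[i] for i in order]   # keep the best (minimisation)
        scores = [scores[i] for i in order]
    return population[0], scores[0]

# toy usage: optimise 100 signal offsets against a quadratic stand-in objective
best, cost = evolve(
    evaluate=lambda x: sum(v * v for v in x),
    init=lambda: [random.uniform(-60, 60) for _ in range(100)],
    mutate=lambda x: [v + random.gauss(0, 5) if random.random() < 0.05 else v for v in x],
)
print(round(cost, 2))
```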
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process basically produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage due to its fast computing capability and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause some color distortions, especially when large gray-value differences exist among the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause the spectral distortions by assigning weaker weights to them and avoiding a large number of redundancies on the fused image. The experimental database consists of IKONOS images, and the experimental results both visually and statistically prove the enhancement of the proposed algorithm when compared with several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
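For reference, the classical intensity-substitution step that SA-IHS refines looks roughly like the following sketch (a generalized IHS style detail-injection formulation); the adaptive per-pixel weighting proposed in the paper is not implemented here, and the images are random stand-ins.

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    """Classical IHS-style fusion by intensity substitution: inject the
    difference between the panchromatic band and the multispectral intensity
    equally into every band (no spatially adaptive weighting)."""
    ms = ms_rgb.astype(float)
    pan = pan.astype(float)
    intensity = ms.mean(axis=2)                 # I = (R + G + B) / 3
    detail = pan - intensity                    # spatial detail to inject
    fused = ms + detail[..., None]              # same detail added to each band
    return np.clip(fused, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
ms = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)    # stand-in MS image
pan = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # stand-in PAN image
print(ihs_fuse(ms, pan).shape)
```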
Metapopulation models for historical inference.
Wakeley, John
2004-04-01
The genealogical process for a sample from a metapopulation, in which local populations are connected by migration and can undergo extinction and subsequent recolonization, is shown to have a relatively simple structure in the limit as the number of populations in the metapopulation approaches infinity. The result, which is an approximation to the ancestral behaviour of samples from a metapopulation with a large number of populations, is the same as that previously described for other metapopulation models, namely that the genealogical process is closely related to Kingman's unstructured coalescent. The present work considers a more general class of models that includes two kinds of extinction and recolonization, and the possibility that gamete production precedes extinction. In addition, following other recent work, this result for a metapopulation divided into many populations is shown to hold both for finite population sizes and in the usual diffusion limit, which assumes that population sizes are large. Examples illustrate when the usual diffusion limit is appropriate and when it is not. Some shortcomings and extensions of the model are considered, and the relevance of such models to understanding human history is discussed.
Spherical Ornstein-Uhlenbeck Processes
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Pumir, Alain
2011-10-01
The paper considers random motion of a point on the surface of a sphere, in the case where the angular velocity is determined by an Ornstein-Uhlenbeck process. The solution is fully characterised by only one dimensionless number, the persistence angle, which is the typical angle of rotation during the correlation time of the angular velocity. We first show that the two-dimensional case is exactly solvable. When the persistence angle is large, a series for the correlation function has the surprising property that its sum varies much more slowly than any of its individual terms. In three dimensions we obtain asymptotic forms for the correlation function, in the limits where the persistence angle is very small and very large. The latter case exhibits a complicated transient, followed by a much slower exponential decay. The decay rate is determined by the solution of a radial Schrödinger equation in which the angular momentum quantum number takes an irrational value, namely j = (√17 - 1)/2. Possible applications of the model to objects tumbling in a turbulent environment are discussed.
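The two-dimensional case described as exactly solvable can also be explored numerically; the sketch below simulates a point on a circle whose angular velocity follows an Ornstein-Uhlenbeck process and estimates the angular correlation function by Monte Carlo. Parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def circle_ou_correlation(tau=1.0, sigma=1.0, dt=0.01, steps=20000, n_paths=500, seed=0):
    """Simulate the 2D case: the angle theta of a point on a circle evolves with
    an Ornstein-Uhlenbeck angular velocity omega, and we estimate the angular
    correlation function <cos(theta(t) - theta(0))> over many paths."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0, sigma * np.sqrt(tau / 2), size=n_paths)  # stationary start
    theta = np.zeros(n_paths)
    corr = np.empty(steps)
    for i in range(steps):
        corr[i] = np.cos(theta).mean()
        # Euler-Maruyama step for d(omega) = -(omega/tau) dt + sigma dW
        omega += (-omega / tau) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
        theta += omega * dt
    return corr

c = circle_ou_correlation()
print(c[:5])   # correlation decays from 1 as the point diffuses around the circle
```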
Proteomic Analysis of the Mediator Complex Interactome in Saccharomyces cerevisiae
Uthe, Henriette; Vanselow, Jens T.; Schlosser, Andreas
2017-01-01
Here we present the most comprehensive analysis of the yeast Mediator complex interactome to date. Particularly gentle cell lysis and co-immunopurification conditions allowed us to preserve even transient protein-protein interactions and to comprehensively probe the molecular environment of the Mediator complex in the cell. Metabolic 15N-labeling thereby enabled stringent discrimination between bona fide interaction partners and nonspecifically captured proteins. Our data indicates a functional role for Mediator beyond transcription initiation. We identified a large number of Mediator-interacting proteins and protein complexes, such as RNA polymerase II, general transcription factors, a large number of transcriptional activators, the SAGA complex, chromatin remodeling complexes, histone chaperones, highly acetylated histones, as well as proteins playing a role in co-transcriptional processes, such as splicing, mRNA decapping and mRNA decay. Moreover, our data provides clear evidence, that the Mediator complex interacts not only with RNA polymerase II, but also with RNA polymerases I and III, and indicates a functional role of the Mediator complex in rRNA processing and ribosome biogenesis. PMID:28240253
Floating basaltic lava balloons - constraints on the eruptive process based on morphologic parameters
NASA Astrophysics Data System (ADS)
Pacheco, J. M.; Zanon, V.; Kueppers, U.
2011-12-01
The 1998-2001 submarine Serreta eruption brought a new challenge to science. This eruption took place offshore of Terceira Island (Azores), on the so-called Serreta Submarine Ridge, corresponding to a basaltic fissure zone with alkaline volcanism, within a tectonic setting controlled by a hyper-slow spreading rift (the Terceira Rift). The inferred eruptive centers are aligned along a NE-SW direction over an area with depths ranging from 300 to more than 1000 meters. The most remarkable products of this eruption were large basaltic balloons observed floating at the sea surface. These balloons, designated as Lava Balloons, are spherical to ellipsoidal structures, ranging from 0.4 up to about 3 m in length, consisting of a thin lava shell enveloping a closed hollow interior, normally formed by a single large vesicle or a few large convoluted vesicles, which gives them an overall density below that of water. The cross section of the lava shell usually ranges between 3 and 8 cm and has a distinct layered structure, with different layers defined by different vesicularity, bubble number density and crystal content. The outermost layer is characterized by very small vesicles and high bubble number density, whereas the innermost layer has larger vesicles, lower bubble number density and higher crystal content. These observations indicate that the rapidly quenched outer layer preserved the original small vesicles present in the magma at the time of the balloon's formation, while the inner layer continued to evolve, producing higher crystal content and allowing time for the expansion of vesicles inward and their efficient coalescence. The outer surface of the balloons exhibits patches of very smooth glassy surface and areas with striations and grooves resulting from small-scale fluidal deformation. These surface textures are interpreted as the result of the extrusion process and were produced in a manner similar to the striations found on subaerial toothpaste lavas. Such characteristics indicate that the outer surface of the balloon quenched as it was being extruded and preserved the scars of a squeeze-up process. On this outer surface, several superficial expansion cracks reveal that, after its generation, the balloon endured some expansion before reaching the sea surface, most likely due to hydrostatic decompression during its rise. The entire shell of the balloons shows bends and folds resulting from large ductile deformations, also suggesting an origin as an effusive process of squeezing up a large vesicle through a fissure in a thin lava crust, similar to the extrusion of a gas-filled lava toe. In fact, the volume of the lava shell is not enough to produce all the gas in the balloons' interior. More likely, at an earlier stage, degassing of magma as an open system allowed gas to segregate and accumulate to form large vesicles. The development of very large vesicles would be favored by a ponding system such as a lava lake.
The statistical power to detect cross-scale interactions at macroscales
Wagner, Tyler; Fergus, C. Emi; Stow, Craig A.; Cheruvelil, Kendra S.; Soranno, Patricia A.
2016-01-01
Macroscale studies of ecological phenomena are increasingly common because stressors such as climate and land-use change operate at large spatial and temporal scales. Cross-scale interactions (CSIs), where ecological processes operating at one spatial or temporal scale interact with processes operating at another scale, have been documented in a variety of ecosystems and contribute to complex system dynamics. However, studies investigating CSIs are often dependent on compiling multiple data sets from different sources to create multithematic, multiscaled data sets, which results in structurally complex, and sometimes incomplete data sets. The statistical power to detect CSIs needs to be evaluated because of their importance and the challenge of quantifying CSIs using data sets with complex structures and missing observations. We studied this problem using a spatially hierarchical model that measures CSIs between regional agriculture and its effects on the relationship between lake nutrients and lake productivity. We used an existing large multithematic, multiscaled database, LAke multiscaled GeOSpatial, and temporal database (LAGOS), to parameterize the power analysis simulations. We found that the power to detect CSIs was more strongly related to the number of regions in the study rather than the number of lakes nested within each region. CSI power analyses will not only help ecologists design large-scale studies aimed at detecting CSIs, but will also focus attention on CSI effect sizes and the degree to which they are ecologically relevant and detectable with large data sets.
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce the slicing time, a parallel algorithm offers great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the number of threads and the number of layers are two significant factors affecting the speedup ratio. The trend of speedup versus the number of threads reveals a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus the number of layers also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model adopted in this paper achieves a much higher speedup ratio and efficiency.
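The paper's implementation is not reproduced here; the following toy Python sketch only conveys the pipeline idea, with one producer stage intersecting layer planes and several consumer threads linking contours, the two stages overlapped through a queue. Under CPython's GIL this mainly illustrates the structure; a real implementation would rely on native threads or processes.

```python
import threading, queue

def pipeline_slice(layers, intersect, link_contours, n_workers=4):
    """Toy two-stage slicing pipeline: stage 1 intersects triangles with each
    layer plane, stage 2 links the resulting segments into contours, and the
    stages run concurrently through a bounded queue."""
    segments_q, results = queue.Queue(maxsize=16), {}

    def stage1():
        for z in layers:
            segments_q.put((z, intersect(z)))     # producer: plane/triangle intersection
        for _ in range(n_workers):
            segments_q.put(None)                  # poison pills to stop the consumers

    def stage2():
        while True:
            item = segments_q.get()
            if item is None:
                break
            z, segs = item
            results[z] = link_contours(segs)      # consumer: contour construction

    producer = threading.Thread(target=stage1)
    consumers = [threading.Thread(target=stage2) for _ in range(n_workers)]
    producer.start()
    for t in consumers:
        t.start()
    producer.join()
    for t in consumers:
        t.join()
    return results

sliced = pipeline_slice(
    layers=[i * 0.2 for i in range(50)],
    intersect=lambda z: [(z, 0.0), (z, 1.0)],     # placeholder geometry
    link_contours=lambda segs: [segs],            # placeholder contour linker
)
print(len(sliced))
```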
NASA Astrophysics Data System (ADS)
Zhang, Guang-Ming; Harvey, David M.
2012-03-01
Various signal processing techniques have been used for the enhancement of defect detection and defect characterisation. Cross-correlation, filtering, autoregressive analysis, deconvolution, neural network, wavelet transform and sparse signal representations have all been applied in attempts to analyse ultrasonic signals. In ultrasonic nondestructive evaluation (NDE) applications, a large number of materials have multilayered structures. NDE of multilayered structures leads to some specific problems, such as penetration, echo overlap, high attenuation and low signal-to-noise ratio. The signals recorded from a multilayered structure are a class of very special signals comprised of limited echoes. Such signals can be assumed to have a sparse representation in a proper signal dictionary. Recently, a number of digital signal processing techniques have been developed by exploiting the sparse constraint. This paper presents a review of research to date, showing the up-to-date developments of signal processing techniques made in ultrasonic NDE. A few typical ultrasonic signal processing techniques used for NDE of multilayered structures are elaborated. The practical applications and limitations of different signal processing methods in ultrasonic NDE of multilayered structures are analysed.
Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh
2010-02-01
Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires the knowledge of various process reactions and corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three-phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in application of this model to full-scale landfill operation.
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
An evaluation of MPI message rate on hybrid-core processors
Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...
2014-11-01
Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
Future Gamma-Ray Observations of Pulsars and their Environments
NASA Technical Reports Server (NTRS)
Thompson, David J.
2006-01-01
Pulsars and pulsar wind nebulae seen at gamma-ray energies offer insight into particle acceleration to very high energies under extreme conditions. Pulsed emission provides information about the geometry and interaction processes in the magnetospheres of these rotating neutron stars, while the pulsar wind nebulae yield information about high-energy particles interacting with their surroundings. During the next decade, a number of new and expanded gamma-ray facilities will become available for pulsar studies, including Astro-rivelatore Gamma a Immagini LEggero (AGILE) and Gamma-ray Large Area Space Telescope (GLAST) in space and a number of higher-energy ground-based systems. This review describes the capabilities of such observatories to answer some of the open questions about the highest-energy processes involving neutron stars.
Verschuur, Carl
2009-03-01
Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
FDTD method for laser absorption in metals for large scale problems.
Deng, Chun; Ki, Hyungson
2013-10-21
The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
NASA Technical Reports Server (NTRS)
Fleming, J. R.; Holden, S. C.; Wolfson, R. G.
1979-01-01
The use of multiblade slurry sawing to produce silicon wafers from ingots was investigated. The commercially available state-of-the-art process was improved by 20% in terms of the area of silicon wafers produced from an ingot, and by 34% on an experimental basis. Economic analyses presented show that further improvements are necessary to approach the desired wafer costs, mostly reductions in expendable materials costs. Tests indicating that such reductions are possible are included, although demonstration of such reductions was not completed. A new, large-capacity saw was designed and tested. Performance comparable with current equipment (in terms of number of wafers/cm) was demonstrated.
Hynes, Denise M.; Perrin, Ruth A.; Rappaport, Steven; Stevens, Joanne M.; Demakis, John G.
2004-01-01
Information systems are increasingly important for measuring and improving health care quality. A number of integrated health care delivery systems use advanced information systems and integrated decision support to carry out quality assurance activities, but none as large as the Veterans Health Administration (VHA). The VHA's Quality Enhancement Research Initiative (QUERI) is a large-scale, multidisciplinary quality improvement initiative designed to ensure excellence in all areas where VHA provides health care services, including inpatient, outpatient, and long-term care settings. In this paper, we describe the role of information systems in the VHA QUERI process, highlight the major information systems critical to this quality improvement process, and discuss issues associated with the use of these systems. PMID:15187063
NASA Astrophysics Data System (ADS)
Weigel, Martin
2011-09-01
Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
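The data parallelism referred to above can be illustrated with the standard checkerboard (two-sublattice) Metropolis update for the two-dimensional Ising model: spins on one sublattice interact only with spins on the other, so a whole sublattice can be updated simultaneously. The sketch below is a CPU-side NumPy illustration of that scheme, not the author's GPU code; lattice size and temperature are arbitrary choices.

```python
# Minimal sketch of a checkerboard Metropolis sweep for the 2D Ising model.
# Sites of one sublattice have independent acceptance decisions, which is the
# data parallelism a GPU implementation exploits; this NumPy version only
# illustrates the idea on a CPU. Parameters are illustrative.
import numpy as np

L, beta = 64, 0.4406868        # lattice size and inverse temperature (near T_c)
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L))

# Checkerboard masks: (i + j) even vs odd.
ij = np.add.outer(np.arange(L), np.arange(L))
masks = [(ij % 2 == 0), (ij % 2 == 1)]

def sweep(spins):
    for mask in masks:
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                       # energy change if flipped
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        spins = np.where(mask & accept, -spins, spins)
    return spins

for _ in range(200):
    spins = sweep(spins)
print("magnetisation per spin:", spins.mean())
```

On a GPU each sublattice site would map to one thread; the NumPy version simply makes the independence of those per-site updates explicit.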
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1990-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.
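For orientation, the following sketch implements the standard two-sided (biorthogonal) Lanczos recurrence without look-ahead, which is the process the paper builds on; the raised error marks the "serious breakdown" (w^T v ≈ 0) that the look-ahead algorithm is designed to step over. This is a textbook-style illustration, not the authors' implementation.

```python
# Minimal sketch of the standard (no look-ahead) two-sided Lanczos process for a
# non-Hermitian matrix A. It builds biorthogonal bases V, W so that W^T V ~ I.
import numpy as np

def lanczos_biorth(A, v1, w1, m):
    """Two-sided Lanczos biorthogonalization without look-ahead."""
    n = A.shape[0]
    v1 = v1 / np.linalg.norm(v1)
    w1 = w1 / (w1 @ v1)                       # enforce w1^T v1 = 1
    V, W = [v1], [w1]
    alpha = np.zeros(m)
    beta, delta = np.zeros(m + 1), np.zeros(m + 1)
    v_prev = w_prev = np.zeros(n)
    for j in range(m):
        v, w = V[-1], W[-1]
        alpha[j] = w @ (A @ v)
        v_hat = A @ v   - alpha[j] * v - beta[j]  * v_prev
        w_hat = A.T @ w - alpha[j] * w - delta[j] * w_prev
        inner = w_hat @ v_hat
        if abs(inner) < 1e-14:
            raise RuntimeError("serious breakdown: a look-ahead step would be needed")
        delta[j + 1] = np.sqrt(abs(inner))
        beta[j + 1] = inner / delta[j + 1]
        v_prev, w_prev = v, w
        V.append(v_hat / delta[j + 1])
        W.append(w_hat / beta[j + 1])
    return np.column_stack(V[:m]), np.column_stack(W[:m]), alpha, beta, delta

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 60))
V, W, alpha, beta, delta = lanczos_biorth(A, rng.standard_normal(60),
                                          rng.standard_normal(60), m=8)
print("max |W^T V - I| :", np.max(np.abs(W.T @ V - np.eye(8))))
```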
An overview of the 1984 Battelle outside users payload model
NASA Astrophysics Data System (ADS)
Day, J. B.; Conlon, R. J.; Neale, D. B.; Fischer, N. H.
1984-10-01
The methodology and projections from a model of the market for non-NASA, non-DOD, reimbursable payloads from the non-Soviet bloc countries over the 1984-2000 AD time period are summarized. High and low forecast ranges were made based on demand forecasts by industrial users, NASA estimates, and other publications. The launches were assumed to be allotted to either the Shuttle or the Ariane. The greatest demand for launch services is expected to come from communications and materials processing payloads, the latter either becoming a large user or remaining a research item. The number of Shuttle payload equivalents over the reference time span is projected as 84-194, showing the large variance that depends on the progress of materials processing operations.
Impact phenomena as factors in the evolution of the Earth
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Parmentier, E. M.
1984-01-01
It is estimated that 30 to 200 large impact basins could have been formed on the early Earth. These large impacts may have resulted in extensive volcanism and enhanced endogenic geologic activity over large areas. Initial modelling of the thermal and subsidence history of large terrestrial basins indicates that they created geologic and thermal anomalies which lasted for geologically significant times. The role of large-scale impact in the biological evolution of the Earth has been highlighted by the discovery of siderophile anomalies at the Cretaceous-Tertiary boundary and associated with North American microtektites. Although in neither case has an associated crater been identified, the observations are consistent with the deposition of projectile-contaminated high-speed ejecta from major impact events. Consideration of impact processes reveals a number of mechanisms by which large-scale impact may induce extinctions.
Superlinear scaling of offspring at criticality in branching processes
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2014-01-01
For any branching process, we demonstrate that the typical total number r_mp(ντ) of events triggered over all generations within any sufficiently large time window τ exhibits, at criticality, a superlinear dependence r_mp(ντ) ~ (ντ)^γ (with γ > 1) on the total number ντ of the immigrants arriving at the Poisson rate ν. In branching processes in which immigrants (or sources) are characterized by fertilities distributed according to an asymptotic power-law tail with tail exponent 1 < γ ⩽ 2, the exponent of the superlinear law for r_mp(ντ) is identical to the exponent γ of the distribution of fertilities. For γ > 2 and for standard branching processes without a power-law distribution of fertilities, r_mp(ντ) ~ (ντ)^2. This scaling law replaces and tames the divergence ντ/(1-n) of the mean total number R̄_t(τ) of events as the branching ratio n (defined as the average number of triggered events of first generation per source) tends to 1. The derivation uses the formalism of generating probability functions. The corresponding prediction is confirmed by numerical calculations, and a heuristic derivation illuminates its underlying mechanism. We also show that R̄_t(τ) is always linear in ντ, even at criticality (n = 1). Our results thus illustrate the fundamental difference between the mean total number, which is controlled by a few extremely rare realizations, and the typical behavior represented by r_mp(ντ).
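The distinction drawn above between the mean and the typical number of events can be illustrated with a schematic Monte Carlo experiment: a critical Galton-Watson cascade fed by Poisson immigration, where each generation is taken to occupy one time unit so that a window of length τ truncates the cascades. This is only an illustrative toy, not the paper's generating-function derivation, and all numerical choices are assumptions.

```python
# Schematic Monte Carlo illustration (not the paper's derivation) of the
# difference between the mean and the typical total number of events in a
# critical branching process fed by Poisson immigration. Each immigrant seeds a
# Galton-Watson cascade with Poisson(1) offspring per individual; one generation
# is taken to last one time unit, so a window of length tau truncates cascades.
import numpy as np

rng = np.random.default_rng(0)

def total_events(nu, tau):
    """Total events (immigrants plus all offspring) in one window of length tau."""
    n_immigrants = rng.poisson(nu * tau)
    total = 0
    for _ in range(n_immigrants):
        remaining = rng.integers(1, tau + 1)   # generations left before window ends
        generation, count = 1, 1               # the immigrant itself
        for _ in range(remaining):
            generation = rng.poisson(1.0 * generation)  # critical branching, n = 1
            count += generation
            if generation == 0:
                break
        total += count
    return total

nu, tau, n_runs = 2.0, 50, 2000
samples = np.array([total_events(nu, tau) for _ in range(n_runs)])
print("nu*tau =", nu * tau)
print("mean total events   :", samples.mean())     # pulled up by rare huge cascades
print("median total events :", np.median(samples)) # the 'typical' behaviour
```

Because the cascade-size distribution is heavy-tailed at criticality, the mean is inflated by rare, very large cascades while the median remains much lower, which is the mean-versus-typical distinction the abstract emphasises.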
NASA Astrophysics Data System (ADS)
Wise, M.; Dowdeswell, J. A.; Larter, R. D.; Jakobsson, M.
2016-12-01
Seafloor ploughmarks provide evidence of past and present iceberg dimensions and drift direction. Today, Pine Island and Thwaites glaciers, which account for 35% of mass loss from the West Antarctic Ice Sheet (WAIS), calve mainly large, tabular icebergs, which, when grounded, produce 'toothcomb-like' multi-keeled ploughmarks. High-resolution multi-beam swath bathymetry of the mid-shelf Pine Island Trough and adjacent banks reveals many linear-curvilinear depressions interpreted as iceberg-keel ploughmarks, the majority of which are single-keeled in form. From measurements of ploughmark planform and cross-sections, we find iceberg calving from the palaeo-Pine Island-Thwaites Ice Stream was not characterised by small numbers of large, tabular icebergs, but instead by a large number of 'smaller' icebergs with v-shaped keels. Geological evidence of ploughmark form and water-depth distribution indicates calving-margin thicknesses (~950 m) and subaerial ice-cliff elevations (~100 m) equivalent to the theoretical threshold recently predicted to trigger ice-cliff structural collapse through Marine Ice Cliff Instability (MICI) processes. Significantly, our proposed period of iceberg ploughing predates the early Holocene climate optimum, and likely occurred in an absence of widespread surface melt. We therefore provide the first observational evidence of rapid retreat of the palaeo-Pine Island-Thwaites ice stream from the crest of a large, mid-shelf sedimentary depocentre or grounding-zone wedge, to a restabilising position ~112 km offshore of the December 2013 calving line, driven by MICI processes commencing ~12.3 cal. ka BP. We emphasise the effective operation of MICI processes without extensive surface melt and induced hydrofracture, and conclude that such processes are unlikely to be confined to the past, given the steep, retrograde bed-slope which the modern grounding lines of Pine Island and Thwaites Glaciers are approaching, and the absence of any discernible restabilising features upstream of the modern grounding-zone. We expect MICI to contribute significantly to future ice retreat and sea-level rise under a warming climate, and emphasise the importance of its inclusion in future modelling studies.
Spatial attention determines the nature of nonverbal number representation.
Hyde, Daniel C; Wood, Justin N
2011-09-01
Coordinated studies of adults, infants, and nonhuman animals provide evidence for two systems of nonverbal number representation: a "parallel individuation" system that represents individual items and a "numerical magnitude" system that represents the approximate cardinal value of a group. However, there is considerable debate about the nature and functions of these systems, due largely to the fact that some studies show a dissociation between small (1-3) and large (>3) number representation, whereas others do not. Using event-related potentials, we show that it is possible to determine which system will represent the numerical value of a small number set (1-3 items) by manipulating spatial attention. Specifically, when attention can select individual objects, an early brain response (N1) scales with the cardinal value of the display, the signature of parallel individuation. In contrast, when attention cannot select individual objects or is occupied by another task, a later brain response (P2p) scales with ratio, the signature of the approximate numerical magnitude system. These results provide neural evidence that small numbers can be represented as approximate numerical magnitudes. Further, they empirically demonstrate the importance of early attentional processes to number representation by showing that the way in which attention disperses across a scene determines which numerical system will deploy in a given context.
Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.
Rabosky, Daniel L
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.
Tsuji, Shintarou; Nishimoto, Naoki; Ogasawara, Katsuhiko
2008-07-20
Although large volumes of medical text are stored in electronic format, they are seldom reused because of the difficulty of processing narrative text by computer. Morphological analysis is a key technology for extracting medical terms correctly and automatically. This process parses a sentence into its smallest units, morphemes. Phrases consisting of two or more technical terms, however, cause morphological analysis software to fail in parsing the sentence and to output unprocessed terms as "unknown words." The purpose of this study was to reduce the number of unknown words in medical narrative text processing. Parsing results obtained with additional dictionaries were compared with the baseline analysis of the number of unknown words in the text of the national examination for radiologists. The ratio of unknown words was reduced from 1.0% to 0.36% by adding terminologies of radiological technology, MeSH, and ICD-10 labels. The terminology of radiological technology was the most effective resource, accounting for a reduction of 0.62%. This result clearly showed the necessity of careful selection of additional dictionaries and of examining trends in unknown words. The potential of this investigation is to make available a large body of clinical information that would otherwise be inaccessible for applications other than manual health care review by personnel.
Ghosh, Purabi R.; Fawcett, Derek; Sharma, Shashi B.; Poinern, Gerrard E. J.
2017-01-01
The quantities of organic waste produced globally by aquaculture and horticulture are extremely large and offer an attractive renewable source of biomolecules and bioactive compounds. The availability of such large and diverse sources of waste materials creates a unique opportunity to develop new recycling and food waste utilisation strategies. The aim of this review is to report the current status of research in the emerging field of producing high-value nanoparticles from food waste. Eco-friendly biogenic processes are quite rapid, and are usually carried out at normal room temperature and pressure. These alternative clean technologies do not rely on the use of the toxic chemicals and solvents commonly associated with traditional nanoparticle manufacturing processes. The relatively small number of research articles in the field has been surveyed and evaluated. Among the diversity of waste types, promising candidates and their ability to produce various high-value nanoparticles are discussed. Experimental parameters, nanoparticle characteristics and potential applications for nanoparticles in pharmaceuticals and biomedical applications are discussed. In spite of the advantages, there are a number of challenges, including nanoparticle reproducibility and understanding the formation mechanisms between different food waste products. Thus, there is considerable scope and opportunity for further research in this emerging field. PMID:28773212
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crater, Jason; Galleher, Connor; Lievense, Jeff
NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large-scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.
Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard
2013-04-01
The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
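The contrast described above between limits based only on within-subgroup variation and Laney's adjustment can be made concrete with a short calculation. The sketch below applies the standard p-chart and Laney p'-chart formulas to made-up monthly counts with large denominators; it illustrates the general constructions, not data or code from the paper.

```python
# Hedged sketch contrasting a conventional p-chart with Laney's p'-chart for
# attribute data with very large denominators. Formulas are the standard
# constructions; the monthly admission figures below are made up.
import numpy as np

events = np.array([706, 969, 802, 1061, 757, 919, 745, 980, 865, 1008, 788, 1067])
n      = np.array([9800, 10200, 9900, 10400, 9700, 10100, 10800, 9900, 10300, 9600, 10500, 11000])

p = events / n
p_bar = events.sum() / n.sum()
sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)        # within-subgroup (binomial) SD

# Conventional p-chart limits: only within-subgroup variation.
p_ucl, p_lcl = p_bar + 3 * sigma_p, p_bar - 3 * sigma_p

# Laney p'-chart: rescale to z-scores, estimate between-subgroup variation from
# the average moving range of z, and inflate the limits by that factor.
z = (p - p_bar) / sigma_p
sigma_z = np.mean(np.abs(np.diff(z))) / 1.128     # moving-range estimate of SD(z)
pp_ucl, pp_lcl = p_bar + 3 * sigma_p * sigma_z, p_bar - 3 * sigma_p * sigma_z

print("points outside p-chart limits :", int(np.sum((p > p_ucl) | (p < p_lcl))))
print("points outside p'-chart limits:", int(np.sum((p > pp_ucl) | (p < pp_lcl))))
```

With the made-up figures above, the conventional limits flag many of the months while the p'-chart limits flag none, which is the over-signalling pattern the paper describes for large sample sizes.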
Thermal activation of dislocations in large scale obstacle bypass
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; Martinez, Enrique
2017-08-01
Dislocation dynamics simulations have been used extensively to predict hardening caused by dislocation-obstacle interactions, including irradiation defect hardening in the athermal case. Incorporating the role of thermal energy in these interactions is possible within the framework of harmonic transition state theory (HTST), which gives direct access to thermally activated reaction rates through the Arrhenius equation, including the rates of dislocation-obstacle bypass processes. Moving beyond unit dislocation-defect reactions to a representative environment containing a large number of defects requires coarse-graining the activation energy barriers of a population of obstacles into an effective energy barrier that accurately represents the large-scale collective process. The work presented here investigates the relationship between unit dislocation-defect bypass processes and the distribution of activation energy barriers calculated for ensemble bypass processes. A significant difference between these cases is observed, which is attributed to the inherent cooperative nature of dislocation bypass processes. In addition to the dislocation-defect interaction, the morphology of the dislocation segments pinned to the defects plays an important role in the activation energies for bypass. A phenomenological model for the stress dependence of the activation energy is shown to describe well the effect of a distribution of activation energies, and a probabilistic activation energy model incorporating the stress distribution in a material is presented.
Inferring Aquifer Transmissivity from River Flow Data
NASA Astrophysics Data System (ADS)
Trichakis, Ioannis; Pistocchi, Alberto
2016-04-01
Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to the stream flow. Under the assumption that base-flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, and identified on the basis of the 1:1,500,000 scale Hydrogeological map of Europe. To this end, Master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. Firstly, the model analyses the stream flow time-series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments, where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity for an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post processing involves the calculation of geometrical parameters of the watershed through a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus it calculates many different values of recession index for each station. After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large scale (e.g. continental) groundwater modelling. The above procedure allowed calculating values of transmissivity for a large share of European aquifers, ranging from Tmin = 4.13E-04 m²/d to Tmax = 8.12E+03 m²/d, with an average value Taverage = 9.65E+01 m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European bi-dimensional shallow groundwater flow model.
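A minimal sketch of the core recession-analysis steps described above (segment detection followed by regression of log flow against time) is given below. It uses a synthetic daily flow series and is not the USGS RECESS code or the pan-European workflow itself; the storm frequency, recession constant and segment-length threshold are illustrative assumptions.

```python
# Hedged sketch (not the USGS RECESS code): find stretches of continuously
# declining daily flow longer than a threshold, fit log10(flow) against time for
# each stretch, and report the recession index K (days per log cycle of decline).
# The synthetic flow series below is made up for illustration.
import numpy as np

rng = np.random.default_rng(3)
flow = np.empty(365)
q = 40.0
for i in range(365):
    if rng.random() < 0.03:
        q += rng.uniform(20, 80)              # storm response
    q *= 0.97                                  # groundwater-dominated recession
    flow[i] = q

def recession_segments(q, min_len=10):
    """Index ranges where flow declines every day for at least min_len days."""
    declining = np.diff(q) < 0
    segments, start = [], None
    for i, d in enumerate(declining):
        if d and start is None:
            start = i
        elif not d and start is not None:
            if i - start >= min_len:
                segments.append((start, i + 1))
            start = None
    if start is not None and len(q) - start >= min_len:
        segments.append((start, len(q)))
    return segments

recession_indices = []
for a, b in recession_segments(flow):
    t = np.arange(b - a)
    slope, _ = np.polyfit(t, np.log10(flow[a:b]), 1)
    recession_indices.append(-1.0 / slope)     # days per log cycle of decline

print("number of usable segments:", len(recession_indices))
print("median recession index K [days/log cycle]: %.1f" % np.median(recession_indices))
```

With the assumed decay factor of 0.97 per day, the fitted index should come out near -1/log10(0.97), roughly 76 days per log cycle, which is a useful sanity check on the fitting step.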
Computer-aided software development process design
NASA Technical Reports Server (NTRS)
Lin, Chi Y.; Levary, Reuven R.
1989-01-01
The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
Future of antibody purification.
Low, Duncan; O'Leary, Rhona; Pujar, Narahari S
2007-03-15
Antibody purification seems to be safely ensconced in a platform, now well established by way of multiple commercialized antibody processes. However, natural evolution compels us to peer into the future. This is driven not only by a large projected increase in the number of antibody therapies, but also by dramatic improvements in upstream productivity and process economics. Although disruptive technologies have so far escaped downstream processes, evolution of the so-called platform is already evident in antibody processes in late-stage development. Here we perform a wide survey of technologies that are competing to be part of that platform, and provide our [inherently dangerous] assessment of those that have the most promise.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
2017 in review: FDA approvals of new molecular entities.
Kinch, Michael S; Griesenauer, Rebekah H
2018-05-08
An overview of drugs approved by the FDA in 2017 reflected a reversion to the mean after a low number of NME approvals in 2016. This reversal was largely driven by the largest number of biologics-based NMEs recorded to date, which offset an average number of small-molecule approvals. Oncology indications continued to dominate followed by novel treatments for infectious, immunologic and neurologic diseases. From a mechanistic standpoint, the industry has continued a trend of target diversification, reflecting advances in scientific understanding of disease processes. Finally, 2017 continued a period of relatively few mergers and acquisitions, which broke a more-than-a-decade-long decline in the number of organizations contributing to research and development.
The multifractal nature of plume structure in high-Rayleigh-number convection
NASA Astrophysics Data System (ADS)
Puthenveettil, Baburaj A.; Ananthakrishna, G.; Arakeri, Jaywant H.
2005-03-01
The geometrically different planforms of near-wall plume structure in turbulent natural convection, visualized by driving the convection using concentration differences across a membrane, are shown to have a common multifractal spectrum of singularities for Rayleigh numbers in the range 10^10-10^11 at a Schmidt number of 602. The scaling is seen for a length scale range of 25 and is independent of the Rayleigh number, the flux, the strength and nature of the large-scale flow, and the aspect ratio. Similar scaling is observed for the plume structures obtained in the presence of a weak flow across the membrane. This common non-trivial spatial scaling is proposed to be due to the same underlying generating process for the near-wall plume structures.
Birds have primate-like numbers of neurons in the forebrain
Olkowicz, Seweryn; Kocourek, Martin; Lučan, Radek K.; Porteš, Michal; Fitch, W. Tecumseh; Herculano-Houzel, Suzana; Němec, Pavel
2016-01-01
Some birds achieve primate-like levels of cognition, even though their brains tend to be much smaller in absolute size. This poses a fundamental problem in comparative and computational neuroscience, because small brains are expected to have a lower information-processing capacity. Using the isotropic fractionator to determine numbers of neurons in specific brain regions, here we show that the brains of parrots and songbirds contain on average twice as many neurons as primate brains of the same mass, indicating that avian brains have higher neuron packing densities than mammalian brains. Additionally, corvids and parrots have much higher proportions of brain neurons located in the pallial telencephalon compared with primates or other mammals and birds. Thus, large-brained parrots and corvids have forebrain neuron counts equal to or greater than primates with much larger brains. We suggest that the large numbers of neurons concentrated in high densities in the telencephalon substantially contribute to the neural basis of avian intelligence. PMID:27298365
Large scale tracking algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Episodic, generalized, and semantic memory tests: switching and strength effects.
Humphreys, Michael S; Murray, Krista L
2011-09-01
We continue the process of investigating the probabilistic paired associate paradigm in an effort to understand the memory access control processes involved and to determine whether the memory structure produced is in transition between episodic and semantic memory. In this paradigm two targets are probabilistically paired with a cue across a large number of short lists. Participants can recall the target paired with the cue in the most recent list (list specific test), produce the first of the two targets that have been paired with that cue to come to mind (generalised test), and produce a free association response (semantic test). Switching between a generalised test and a list specific test did not produce a switching cost, indicating a general similarity in the control processes involved. In addition, there was evidence for a dissociation between two different strength manipulations (amount of study time and number of cue-target pairings), such that the number of pairings influenced the list specific, generalised, and semantic tests, but the amount of study time influenced only the list specific and generalised tests.
Arithmetic processing in the brain shaped by cultures
Tang, Yiyuan; Zhang, Wutian; Chen, Kewei; Feng, Shigang; Ji, Ye; Shen, Junxian; Reiman, Eric M.; Liu, Yijun
2006-01-01
The universal use of Arabic numbers in mathematics raises the question of whether these digits are processed the same way by people speaking different languages, such as Chinese and English, which reflect differences in Eastern and Western cultures. Using functional MRI, we demonstrated a differential cortical representation of numbers between native Chinese and English speakers. In contrast to native English speakers, who largely employ a language process relying on the left perisylvian cortices for mental calculation such as a simple addition task, native Chinese speakers instead engage a visuo-premotor association network for the same task. Whereas in both groups the inferior parietal cortex was activated by a task for numerical quantity comparison, functional MRI connectivity analyses revealed a functional distinction between the Chinese and English groups among the brain networks involved in the task. Our results further indicate that the different biological encoding of numbers may be shaped by visual reading experience during language acquisition and by other cultural factors such as mathematics learning strategies and education systems, which cannot be explained completely by differences in the languages per se. PMID:16815966
Automation of Technology for Cancer Research.
van der Ent, Wietske; Veneman, Wouter J; Groenewoud, Arwin; Chen, Lanpeng; Tulotta, Claudia; Hogendoorn, Pancras C W; Spaink, Herman P; Snaar-Jagalska, B Ewa
2016-01-01
Zebrafish embryos can be obtained for research purposes in large numbers at low cost, and embryos develop externally in limited space, making them highly suitable for high-throughput cancer studies and drug screens. Non-invasive live imaging of various processes within the larvae is possible due to their transparency during development and a multitude of available fluorescent transgenic reporter lines. To perform high-throughput studies, handling large numbers of embryos and larvae is required. With such high numbers of individuals, even minute tasks may become time-consuming and arduous. In this chapter, an overview is given of developments in the automation of various steps of large-scale zebrafish cancer research for discovering important cancer pathways and drugs for the treatment of human disease. The focus lies on tools developed for cancer cell implantation, embryo handling and sorting, microfluidic systems for imaging and drug treatment, and image acquisition and analysis. Examples are given of the employment of these technologies within the fields of toxicology research and cancer research.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background: For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results: We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion: The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
Process service quality evaluation based on Dempster-Shafer theory and support vector machine.
Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei
2017-01-01
Human involvement influences traditional service quality evaluations, which leads to low accuracy, poor reliability and weak predictability. This paper proposes a method employing a support vector machine (SVM) and Dempster-Shafer evidence theory, called SVMs-DS, to evaluate the service quality of a production process while handling a high number of input features with a small sample set. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, basic probability assignments (BPAs) are constructed, which support both qualitative and quantitative evaluation. The process service quality evaluation results are validated using Dempster's rules; the decision threshold used to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
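The evidence-fusion step can be illustrated with the standard Dempster rule of combination over a two-hypothesis frame. In the paper the basic probability assignments come from three SVM models; in the sketch below they are simply made-up numbers, so this shows only the combination mechanics.

```python
# Hedged sketch of the evidence-fusion step: combine basic probability
# assignments (BPAs) from three classifiers with Dempster's rule over the frame
# {good, poor}. The numbers are made up; frozenset({'good', 'poor'}) carries the
# unassigned (uncertain) mass.
from itertools import product

GOOD, POOR = frozenset({'good'}), frozenset({'poor'})
THETA = frozenset({'good', 'poor'})

def dempster(m1, m2):
    """Dempster's rule of combination for two BPAs over the same frame."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Illustrative BPAs from three SVM-based evaluators of one production batch.
m_svm1 = {GOOD: 0.70, POOR: 0.10, THETA: 0.20}
m_svm2 = {GOOD: 0.60, POOR: 0.25, THETA: 0.15}
m_svm3 = {GOOD: 0.55, POOR: 0.20, THETA: 0.25}

fused = dempster(dempster(m_svm1, m_svm2), m_svm3)
for subset, mass in fused.items():
    print(sorted(subset), round(mass, 3))
```

Dempster's rule is commutative and associative for independent sources, so the three BPAs can be fused pairwise in any order.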
Automation of a N-S S and C Database Generation for the Harrier in Ground Effect
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Chaderjian, Neal M.; Pandya, Shishir; Kwak, Dochan (Technical Monitor)
2001-01-01
A method of automating the generation of a time-dependent, Navier-Stokes static stability and control database for the Harrier aircraft in ground effect is outlined. Reusable, lightweight components are described which allow different facets of the computational fluid dynamics simulation process to utilize a consistent interface to a remote database. These components also allow changes and customizations to be easily incorporated into the solution process to enhance performance, without relying upon third-party support. An analysis of the multi-level parallel solver OVERFLOW-MLP is presented, and the results indicate that it is feasible to utilize large numbers of processors (≈100) even with a grid system with a relatively small number of cells (≈10^6). A more detailed discussion of the simulation process, as well as refined data for the scaling of the OVERFLOW-MLP flow solver, will be included in the full paper.
A theoretical study of hydrodynamic cavitation.
Arrojo, S; Benito, Y
2008-03-01
The optimization of hydrodynamic cavitation as an advanced oxidation process (AOP) requires identifying the key parameters and studying their effects on the process. Specific simulations of hydrodynamic bubbles reveal that time scales play a major role in the process. Rarefaction/compression periods generate a number of opposing effects which have been demonstrated to be quantitatively different from those found in ultrasonic cavitation. Hydrodynamic cavitation can be upscaled and offers an energy-efficient way of generating cavitation. On the other hand, the large characteristic time scales hinder bubble collapse and generate a low number of cavitation cycles per unit time. By controlling the pressure pulse through a flexible cavitation chamber design, these limitations can be partially compensated for. The chemical processes promoted by this technique are also different from those found in ultrasonic cavitation. Properties such as volatility or hydrophobicity determine the potential applicability of HC and therefore have to be taken into account.
Heerman, Lisa; DeAngelis, Donald L.; Borcherding, Jost
2017-01-01
Usually, the origin of a within-cohort bimodal size distribution is assumed to be caused by initial size differences or by one discrete period of accelerated growth for one part of the population. The aim of this study was to determine if more continuous pathways exist allowing shifts from the small to the large fraction within a bimodal age-cohort. Therefore, a Eurasian perch population, which had already developed a bimodal size-distribution and had differential resource use of the two size-cohorts, was examined. Results revealed that formation of a bimodal size-distribution can be a continuous process. Perch from the small size-cohort were able to grow into the large size-cohort by feeding on macroinvertebrates not used by their conspecifics. The diet shifts were accompanied by morphological shape changes. Intra-specific competition seemed to trigger the development towards an increasing number of large individuals. A stage-structured matrix model confirmed these assumptions. The fact that bimodality can be a continuous process is important to consider for the understanding of ecological processes and links within ecosystems.
Process and information integration via hypermedia
NASA Technical Reports Server (NTRS)
Hammen, David G.; Labasse, Daniel L.; Myers, Robert M.
1990-01-01
Success stories for advanced automation prototypes abound in the literature but the deployments of practical large systems are few in number. There are several factors that militate against the maturation of such prototypes into products. Here, the integration of advanced automation software into large systems is discussed. Advanced automation systems tend to be specific applications that need to be integrated and aggregated into larger systems. Systems integration can be achieved by providing expert user-developers with verified tools to efficiently create small systems that interface to large systems through standard interfaces. The use of hypermedia as such a tool in the context of the ground control centers that support Shuttle and space station operations is explored. Hypermedia can be an integrating platform for data, conventional software, and advanced automation software, enabling data integration through the display of diverse types of information and through the creation of associative links between chunks of information. Further, hypermedia enables process integration through graphical invoking of system functions. Through analysis and examples, researchers illustrate how diverse information and processing paradigms can be integrated into a single software platform.
Collective dynamics during cell division
NASA Astrophysics Data System (ADS)
Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina A. M.
In order to divide correctly, cells have to move all their chromosomes to the center, a process known as congression. This task is performed by the combined action of molecular motors and randomly growing and shrinking microtubules. Chromosomes are captured by growing microtubules and transported by motors using the same microtubules as tracks. Coherent motion occurs as a result of a large collection of random and deterministic dynamical events. Understanding this process is important since a failure in chromosome segregation can lead to chromosomal instability, one of the hallmarks of cancer. We describe this complex process with a three-dimensional computational model involving thousands of microtubules. The results show that coherent and robust chromosome congression can only happen if the total number of microtubules is neither too small nor too large. Our results allow for a coherent interpretation of a variety of biological factors already associated in the past with chromosomal instability and related pathological conditions.
NASA Astrophysics Data System (ADS)
Parsons, Todd L.; Rogers, Tim
2017-10-01
Systems composed of large numbers of interacting agents often admit an effective coarse-grained description in terms of a multidimensional stochastic dynamical system, driven by small-amplitude intrinsic noise. In applications to biological, ecological, chemical and social dynamics it is common for these models to possess quantities that are approximately conserved on short timescales, in which case system trajectories are observed to remain close to some lower-dimensional subspace. Here, we derive explicit and general formulae for a reduced-dimension description of such processes that is exact in the limit of small noise and well-separated slow and fast dynamics. The Michaelis-Menten law of enzyme-catalysed reactions and the link between the Lotka-Volterra and Wright-Fisher processes are explored as simple worked examples. Extensions of the method are presented for infinite-dimensional systems and processes coupled to non-Gaussian noise sources.
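For context on the Michaelis-Menten example mentioned above, the familiar deterministic quasi-steady-state reduction of the enzyme mechanism E + S ⇌ ES → E + P reads as follows; the paper itself treats the stochastic, noise-driven counterpart of this kind of slow-fast reduction, so this block is background only.

```latex
% Standard deterministic quasi-steady-state reduction of the enzyme mechanism
% E + S <-> ES -> E + P, given only as context for the Michaelis-Menten example
% mentioned above (the paper treats the stochastic version of such reductions).
\begin{align*}
  \frac{d[ES]}{dt} &= k_1 [E][S] - (k_{-1} + k_2)[ES] \approx 0
  && \text{(fast complex dynamics)}\\
  [ES] &\approx \frac{[E]_{\mathrm{tot}}[S]}{K_M + [S]},
  \qquad K_M = \frac{k_{-1} + k_2}{k_1}\\
  \frac{d[P]}{dt} &= k_2 [ES] \approx \frac{V_{\max}[S]}{K_M + [S]},
  \qquad V_{\max} = k_2 [E]_{\mathrm{tot}}
\end{align*}
```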
A fast low-power optical memory based on coupled micro-ring lasers
NASA Astrophysics Data System (ADS)
Hill, Martin T.; Dorren, Harmen J. S.; de Vries, Tjibbe; Leijtens, Xaveer J. M.; den Besten, Jan Hendrik; Smalbrugge, Barry; Oei, Yok-Siang; Binsma, Hans; Khoe, Giok-Djan; Smit, Meint K.
2004-11-01
The increasing speed of fibre-optic-based telecommunications has focused attention on high-speed optical processing of digital information. Complex optical processing requires a high-density, high-speed, low-power optical memory that can be integrated with planar semiconductor technology for buffering of decisions and telecommunication data. Recently, ring lasers with extremely small size and low operating power have been made, and we demonstrate here a memory element constructed by interconnecting these microscopic lasers. Our device occupies an area of 18 × 40 µm² on an InP/InGaAsP photonic integrated circuit, and switches within 20 ps with 5.5 fJ optical switching energy. Simulations show that the element has the potential for much smaller dimensions and switching times. Large numbers of such memory elements can be densely integrated and interconnected on a photonic integrated circuit: fast digital optical information processing systems employing large-scale integration should now be viable.
Launch processing system transition from development to operation
NASA Technical Reports Server (NTRS)
Paul, H. C.
1977-01-01
The Launch Processing System has been under development at Kennedy Space Center since 1973. A prototype system was developed and delivered to Marshall Space Flight Center for Solid Rocket Booster checkout in July 1976. The first production hardware arrived in late 1976. The System uses a distributed computer network for command and monitoring and is supported by a dual large scale computer system for 'off line' processing. A high level of automation is anticipated for Shuttle and Payload testing and launch operations to gain the advantages of short turnaround capability, repeatability of operations, and minimization of operations and maintenance (O&M) manpower. Learning how to efficiently apply the system is our current problem. We are searching for more effective ways to convey LPS system performance characteristics from the designer to a large number of users. Once we have done this, we can realize the advantages of LPS system design.
Molecular Dynamics Studies of Structure and Functions of Water-Membrane Interfaces
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Wilson, Michael A.; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A large number of essential cellular processes occur at the interfaces between water and membranes. The selectivity and dynamics of these processes are largely determined by the structural and electrical properties of the water-membrane interface. We investigate these properties by the molecular dynamics method. Over the time scales of the simulations, the membrane undergoes fluctuations described by the capillary wave model. These fluctuations produce occasional thinning defects in the membrane which provide effective pathways for passive transport of ions and small molecules across the membrane. Ions moving through the membrane markedly disrupt its structure and allow for significant water penetration into the membrane interior. Selectivity of transport, with respect to ionic charge, is determined by the interfacial electrostatic potential. Many small molecules, of potential significance in catalysis, bioenergetics and pharmacology, are shown to bind to the interface. The energetics and dynamics of this process will be discussed.
Update on conjunctival pathology
Mudhar, Hardeep Singh
2017-01-01
Conjunctival biopsies constitute a fairly large number of cases in a typical busy ophthalmic pathology practice. They range from single biopsies to multiple mapping biopsies performed to assess the extent of a particular pathological process. Like most anatomical sites, the conjunctiva is subject to a very wide range of pathological processes. This article will cover key, commonly encountered nonneoplastic and neoplastic entities. Where relevant, sections will include recommendations on how best to submit specimens to the ophthalmic pathology laboratory and the relevance of up-to-date molecular techniques. PMID:28905821
NASA Astrophysics Data System (ADS)
Aghasibeig, M.; Mousavi, M.; Ben Ettouill, F.; Moreau, C.; Wuthrich, R.; Dolatabadi, A.
2014-01-01
Ni-based electrode coatings with enhanced surface areas, for hydrogen production, were developed using atmospheric plasma spray (APS) and suspension plasma spray (SPS) processes. The results revealed a larger electrochemically active surface area for the coatings produced by SPS compared to those produced by the APS process. SEM micrographs showed that the surface microstructure of the sample with the largest surface area was composed of a large number of small cauliflower-like aggregates with an average diameter of 10 μm.
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
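As a rough illustration of the scaling behind such error estimates (not the paper's specific analytical expression), the following sketch checks the familiar 1/√N relative error of a rate estimated from a Poisson-distributed event count; the hop rate, simulated time, and number of runs are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: ion hops in a kinetic-Monte-Carlo-like run follow a
# Poisson process with true rate `rate` (events per unit time).
rate = 5.0          # hypothetical hop rate
t_sim = 20.0        # simulated time per run
n_runs = 1000       # independent runs

counts = rng.poisson(rate * t_sim, size=n_runs)   # events observed per run
rate_estimates = counts / t_sim

# For a Poisson count N, the relative statistical error of the estimated rate
# (and of any conductivity proportional to it) scales as 1/sqrt(N).
predicted_rel_err = 1.0 / np.sqrt(rate * t_sim)
observed_rel_err = rate_estimates.std() / rate_estimates.mean()

print(f"predicted relative error ~ {predicted_rel_err:.3f}")
print(f"observed  relative error ~ {observed_rel_err:.3f}")
```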
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
Computer measurement of particle sizes in electron microscope images
NASA Technical Reports Server (NTRS)
Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.
1976-01-01
Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting within a reasonable error and computer processing time is feasible. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.
1984-09-01
C-033-82 (1982). "Development of the Narrow Gap Submerged Arc Welding Process - NSA Process," Hirai, Y. et al., Kawasaki Steel Technical Report, 5, 81...upsurge in the resources committed to research in the neurosciences in general, and to membrane phenomena specifically. Because of this large...reader a review of most of the current research being conducted in Japan in the neuroscience and membrane physiology areas. The presentation of the
Laser modified processes: bremsstrahlung and inelastic photon atom scattering
NASA Astrophysics Data System (ADS)
Budriga, Olimpia; Dondera, Mihai; Florescu, Viorica
2007-08-01
We consider the influence of a low-frequency monochromatic external electromagnetic field (the laser) on two basic atomic processes: electron Coulomb bremsstrahlung and inelastic photon scattering on an electron bound in the ground state of a hydrogenic atom. We briefly describe the approximations adopted and illustrate in figures how the laser parameters modify the shape of the differential cross-sections and extend the energy domain for emitted electrons, due to simultaneous absorption or emission of a large number (hundreds) of laser photons.
NASA Astrophysics Data System (ADS)
Vilar, Jose M. G.; Saiz, Leonor
2006-06-01
DNA looping plays a fundamental role in a wide variety of biological processes, providing the backbone for long range interactions on DNA. Here we develop the first model for DNA looping by an arbitrarily large number of proteins and solve it analytically in the case of identical binding. We uncover a switchlike transition between looped and unlooped phases and identify the key parameters that control this transition. Our results establish the basis for the quantitative understanding of fundamental cellular processes like DNA recombination, gene silencing, and telomere maintenance.
Emergency planning and preparedness for the deliberate release of toxic industrial chemicals.
Russell, David; Simpson, John
2010-03-01
Society in developed and developing countries is hugely dependent upon chemicals for health, wealth, and economic prosperity, with the chemical industry contributing significantly to the global economy. Many chemicals are synthesized, stored, and transported in vast quantities and classified as high production volume chemicals; some are recognized as being toxic industrial chemicals (TICs). Chemical accidents involving chemical installations and transportation are well recognized. Such chemical accidents occur with relative frequency and may result in large numbers of casualties with acute and chronic health effects as well as fatalities. The large-scale production of TICs, the potential for widespread exposure and significant public health impact, together with their relative ease of acquisition, makes deliberate release an area of potential concern. The large number of chemicals, together with the large number of potential release scenarios, means that the number of possible forms of chemical incident is almost infinite. Therefore, prior to undertaking emergency planning and preparedness, it is necessary to prioritize and subsequently mitigate risk. This is a multi-faceted process, including implementation of industrial protection layers, substitution of hazardous chemicals, and relocation away from communities. The residual risk provides the basis for subsequent planning. Risk-prioritized emergency planning is a tool for identifying gaps, enhancing communication and collaboration, and developing policy. It also serves to enhance preparedness, a necessary prelude to preventing or mitigating the public health risk of deliberate release. Planning is an iterative and ongoing process that requires multi-disciplinary agency input, culminating in the formation of a chemical incident plan complementary to major incident planning. Preparedness is closely related and reflects a state of readiness; it comprises several components, including training and exercising. Toxicologists have a role to play in developing syndromic surveillance, recognizing the clinical presentations of chemical incidents, developing toxicological datasheets, and the requisition and stockpiling of medical countermeasures. The chemical industry is global and many chemicals are synthesized and transported in vast quantities. Many of these chemicals are toxic and readily available, necessitating the identification and assessment of hazards and risks and subsequent planning and preparation for the deliberate release of TICs.
State of the art for the biosorption process--a review.
Michalak, Izabela; Chojnacka, Katarzyna; Witek-Krowiak, Anna
2013-07-01
In recent years, the biosorption process has become an economic and eco-friendly alternative treatment technology in the water and wastewater industry. In this light, a number of biosorbents have been developed and are successfully employed for treating various pollutants, including metals, dyes, phenols, fluoride, and pharmaceuticals in solutions (aqueous/oil). However, a few technical barriers in the biosorption process still impede its commercialization, and to overcome these problems there has been steadily growing interest in this research field, resulting in large numbers of publications and patents each year. This review reports the state of the art in biosorption research. We provide a compendium of know-how in laboratory methodology, mathematical modeling of equilibrium and kinetics, and identification of the biosorption mechanism. Various mathematical models of biosorption are discussed, covering the process in a packed-bed column arrangement as well as with suspended biomass. Particular attention is paid to patents in biosorption and to pilot-scale systems. In addition, we outline future directions in biosorption research.
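As a hedged illustration of the equilibrium modelling mentioned above, the sketch below fits the Langmuir isotherm, one of the models routinely applied to biosorption equilibrium data; the data points and starting guesses are invented for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Sorbed amount q (mg/g) as a function of equilibrium concentration C_eq (mg/L)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical equilibrium data for a single sorbate/biosorbent pair.
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])     # mg/L
q_obs = np.array([8.1, 13.9, 24.5, 33.0, 40.2, 44.8])      # mg/g

(q_max_fit, k_l_fit), _ = curve_fit(langmuir, c_eq, q_obs, p0=(50.0, 0.05))
print(f"q_max ~ {q_max_fit:.1f} mg/g, K_L ~ {k_l_fit:.3f} L/mg")
```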
Studies in nonlinear problems of energy. Progress report, October 1, 1993--September 30, 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matkowsky, B.J.
1994-09-01
The authors concentrate on modeling, analysis and large-scale scientific computation of combustion and flame propagation phenomena, with emphasis on the transition from laminar to turbulent combustion. In the transition process a flame passes through stages exhibiting increasingly complex spatial and temporal patterns, which serve as signatures identifying each stage. Often the transitions arise via bifurcation. The authors investigate nonlinear dynamics, bifurcation and pattern formation in the successive stages of transition. They describe the stability of combustion waves, and transitions to combustion waves exhibiting progressively higher degrees of spatio-temporal complexity. One aspect of this research program is the systematic derivation of appropriate, approximate models from the original models governing combustion. The approximate models are then analyzed. The authors are particularly interested in understanding the basic mechanisms affecting combustion, which is a prerequisite to effective control of the process. They are interested in determining the effects of varying control parameters, such as the Nusselt number, Lewis number, heat release, activation energy, Damkohler number, Reynolds number, Prandtl number, and Peclet number. The authors have also considered a number of problems in self-propagating high-temperature synthesis (SHS), in which combustion waves are employed to synthesize advanced materials. Efforts are directed toward understanding fundamental mechanisms. 167 refs.
Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine
2018-01-01
Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children’s access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults (n = 24), but also in 7- to 11-year-old elementary school children (n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed. Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes. PMID:29541048
Contribution to Terminology Internationalization by Word Alignment in Parallel Corpora
Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre
2006-01-01
Background and objectives Creating a complete translation of a large vocabulary is a time-consuming task, which requires skilled and knowledgeable medical translators. Our goal is to examine to which extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. Methods Build a large corpus of parallel, English-French documents, and automatically align it at the document, sentence and word levels using state-of-the-art alignment methods and tools. Then project English terms from existing controlled vocabularies to the aligned word pairs, and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: the MeSH, SNOMED CT, and the MedlinePlus Health Topics. Results We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Conclusion Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and thus contribute to help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include their application to a corpus 20 times larger than that used here, together with more focused methods for term extraction. PMID:17238328
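The projection step described above can be pictured with a minimal sketch (not the authors' pipeline): hypothetical word-alignment links are used to carry an English controlled-vocabulary term over to a candidate French translation. The sentence pair, alignment links, and vocabulary term below are made up for illustration.

```python
from collections import defaultdict

# One aligned sentence pair: (english_tokens, french_tokens, word-level links).
aligned = [
    (["chronic", "renal", "failure"],
     ["insuffisance", "rénale", "chronique"],
     [(0, 2), (1, 1), (2, 0)]),            # (english_index, french_index) links
]

vocabulary = {"chronic renal failure"}      # hypothetical English term to project

candidates = defaultdict(set)
for en, fr, links in aligned:
    sentence = " ".join(en)
    for term in vocabulary:
        if term in sentence:
            # Collect the French positions aligned to the term's English words.
            fr_idx = sorted(j for i, j in links if en[i] in term.split())
            if fr_idx:
                candidates[term].add(" ".join(fr[j] for j in fr_idx))

print(dict(candidates))   # {'chronic renal failure': {'insuffisance rénale chronique'}}
```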
Recent progress in 3-D imaging of sea freight containers
NASA Astrophysics Data System (ADS)
Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf
2015-03-01
The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress was made in the reconstruction of projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power becomes steadily cheaper, practical applications of these complex algorithms can be expected in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
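To make the idea of few-view iterative reconstruction concrete, here is a minimal sketch of a Landweber/SIRT-style update on a toy under-determined problem; the random matrix merely stands in for the real scan geometry, and this is not the specific algorithm evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a CT system: A maps a flattened image to projection data.
# In a real few-view scanner A would encode the acquisition geometry.
n_pixels, n_rays = 256, 64            # deliberately under-determined (few views)
A = rng.random((n_rays, n_pixels))
x_true = rng.random(n_pixels)
b = A @ x_true                        # measured projections (noise-free toy data)

# Iterative reconstruction: repeated back-projection of the data residual.
x = np.zeros(n_pixels)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for convergence
for _ in range(500):
    x += step * A.T @ (b - A @ x)
    np.clip(x, 0.0, None, out=x)         # non-negativity, a typical CT prior

print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```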
How number line estimation skills relate to neural activations in single digit subtraction problems
Berteletti, I.; Man, G.; Booth, J.R.
2014-01-01
The Number Line (NL) task requires judging the relative numerical magnitude of a number and estimating its value spatially on a continuous line. Children's skill on this task has been shown to correlate with and predict future mathematical competence. Neurofunctionally, this task has been shown to rely on brain regions involved in numerical processing. However, there is no direct evidence that performance on the NL task is related to brain areas recruited during arithmetical processing and that these areas are domain-specific to numerical processing. In this study, we test whether 8- to 14-year-olds' behavioral performance on the NL task is related to fMRI activation during small and large single-digit subtraction problems. Domain-specific areas for numerical processing were independently localized through a numerosity judgment task. Results show a direct relation between NL estimation performance and the amount of activation in key areas for arithmetical processing. Better NL estimators showed a larger problem size effect than poorer NL estimators in numerical magnitude (i.e., intraparietal sulcus) and visuospatial areas (i.e., posterior superior parietal lobules), marked by less activation for small problems. In addition, the direction of the activation with problem size within the IPS was associated with differences in accuracies for small subtraction problems. This study is the first to show that performance on the NL task, i.e. estimating the spatial position of a number on an interval, correlates with brain activity observed during single-digit subtraction problems in regions thought to be involved in numerical magnitude and spatial processing. PMID:25497398
Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio
2017-01-01
The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have been addressed to investigate the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801
Attentional bias induced by solving simple and complex addition and subtraction problems.
Masson, Nicolas; Pesenti, Mauro
2014-01-01
The processing of numbers has been shown to induce shifts of spatial attention in simple probe detection tasks, with small numbers orienting attention to the left and large numbers to the right side of space. Recently, the investigation of this spatial-numerical association has been extended to mental arithmetic with the hypothesis that solving addition or subtraction problems may induce attentional displacements (to the right and to the left, respectively) along a mental number line onto which the magnitude of the numbers would range from left to right, from small to large numbers. Here we investigated such attentional shifts using a target detection task primed by arithmetic problems in healthy participants. The constituents of the addition and subtraction problems (first operand; operator; second operand) were flashed sequentially in the centre of a screen, then followed by a target on the left or the right side of the screen, which the participants had to detect. This paradigm was employed with arithmetic facts (Experiment 1) and with more complex arithmetic problems (Experiment 2) in order to assess the effects of the operation, the magnitude of the operands, the magnitude of the results, and the presence or absence of a requirement for the participants to carry or borrow numbers. The results showed that arithmetic operations induce some spatial shifts of attention, possibly through a semantic link between the operation and space.
Discussion on the Development of Green Chemistry and Chemical Engineering
NASA Astrophysics Data System (ADS)
Zhang, Yunshen
2017-11-01
The chemical industry plays a vital role in the development of the national economy. However, given the special nature of the chemical industry, a large number of poisonous and harmful substances pose a great threat to the ecological environment and human health throughout the process of raw material acquisition, production, transportation, product manufacturing, and final practical application. Therefore, it is a general trend to promote the development of chemistry and chemical engineering in a greener direction. This article focuses on some basic problems that arise in the development of green chemistry and chemical engineering.
The growth receptors and their role in wound healing.
Rolfe, Kerstin J; Grobbelaar, Adriaan O
2010-11-01
Abnormal wound healing is a major problem in healthcare today, with both scarring and chronic wounds affecting large numbers of individuals worldwide. Wound healing is a complex process involving several variables, including growth factors and their receptors. Chronic wounds fail to complete the wound healing process, while scarring is considered to be an overzealous wound healing process. Growth factor receptors and their ligands are being investigated to assess their potential in the development of therapeutic strategies to improve wound healing. This review discusses potential therapeutics for manipulating growth factors and their corresponding receptors for the treatment of abnormal wound healing.
Verdú-López, Francisco; Beisse, Rudolf
2014-01-01
Thoracoscopic surgery or video-assisted thoracic surgery (VATS) of the thoracic and lumbar spine has evolved greatly since it appeared less than 20 years ago. It is currently used for a large number of pathological processes and injuries. The aim of this article, in its two parts, is to review the current status of VATS of the thoracic and lumbar spine across its entire spectrum. After reviewing the current literature, we address each of the large groups of indications for VATS, one by one. This second part reviews and discusses the management, treatment and specific thoracoscopic technique in thoracic disc herniation, spinal deformities, tumour pathology, infections of the spine and other possible indications for VATS. Thoracoscopic surgery is in many cases an alternative to conventional open surgery. The transdiaphragmatic approach has made endoscopic treatment of many thoracolumbar junction processes possible, thus widening the spectrum of therapeutic indications. These include the treatment of spinal deformities, spinal tumours, infections and other pathological processes, as well as the reconstruction of injured spinal segments and decompression of the spinal canal if the lesion location favours an antero-lateral approach. Good clinical results of thoracoscopic surgery are supported by growing experience reflected in a large number of articles. The rate of complications in thoracoscopic surgery is comparable to that of open surgery, with benefits regarding the morbidity of the approach and subsequent patient recovery. Copyright © 2012 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Sarabjeet; Schneider, David J.; Myers, Christopher R.
2014-03-01
Branching processes have served as a model for chemical reactions, biological growth processes, and contagion (of disease, information, or fads). Through this connection, these seemingly different physical processes share some common universalities that can be elucidated by analyzing the underlying branching process. In this work we focus on coupled branching processes as a model of infectious diseases spreading from one population to another. Exceedingly important examples of such coupled outbreaks are zoonotic infections that spill over from animal populations to humans. We derive several statistical quantities characterizing the first spillover event from animals to humans, including the probability of spillover, the first passage time distribution for human infection, and disease prevalence in the animal population at spillover. Large stochastic fluctuations in those quantities can make inference of the state of the system at the time of spillover difficult. Focusing on outbreaks in the human population, we then characterize the critical threshold for a large outbreak, the distribution of outbreak sizes, and associated scaling laws. These all show a strong dependence on the basic reproduction number in the animal population and indicate the existence of a novel multicritical point with altered scaling behavior. The coupling of animal and human infection dynamics has crucial implications, most importantly allowing for the possibility of large human outbreaks even when human-to-human transmission is subcritical.
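A minimal simulation sketch (with made-up parameters, not the authors' exact model) illustrates how a coupled branching process yields a spillover probability and human outbreak sizes, including human outbreaks seeded despite subcritical human-to-human transmission.

```python
import numpy as np

rng = np.random.default_rng(2)

R0_animal = 1.5       # mean new animal infections per animal case (hypothetical)
R0_human = 0.8        # subcritical human-to-human transmission (hypothetical)
p_spill = 0.01        # chance each animal case seeds one human case (hypothetical)
max_cases = 10_000    # truncate supercritical animal outbreaks

def outbreak_size(r0):
    """Total size of a Galton-Watson outbreak with Poisson offspring."""
    total, active = 1, 1
    while active and total < max_cases:
        offspring = rng.poisson(r0, size=active).sum()
        total += offspring
        active = offspring
    return total

spillovers, human_sizes = 0, []
for _ in range(2000):
    animal_cases = outbreak_size(R0_animal)
    seeds = rng.binomial(animal_cases, p_spill)    # spillover events into humans
    if seeds:
        spillovers += 1
        human_sizes.append(sum(outbreak_size(R0_human) for _ in range(seeds)))

print("P(spillover) ~", spillovers / 2000)
if human_sizes:
    print("mean human outbreak size ~", np.mean(human_sizes))
```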
1991-03-31
nodes, directional arrows show the parent and child relationship of processes, and the graphics driver runs on the CP, i.e., the ... Although there is a ... about ODB plus some number of transitory primitives, whether or not its child primitives are resident. Transitory primitives are discarded as needed ... true if this Hnode's child primitives are not resident ... approached. This method of ODB decomposition has the ability to distribute a very large number of
Popigai Impact Structure Modeling: Morphology and Worldwide Ejecta
NASA Technical Reports Server (NTRS)
Ivanov, B. A.; Artemieva, N. A.; Pierazzo, E.
2004-01-01
The approx. 100 km in diameter, 35.7 ± 0.2 Ma old Popigai structure [1], northern Siberia (Russia), is the best-preserved of the large terrestrial complex crater structures containing a central-peak ring [2-4]. Although remotely located, the excellent outcrops, large number of drill cores, and wealth of geochemical data make Popigai ideal for the general study of the cratering processes. It is most famous for its impact diamonds [2,5]. Popigai is the best candidate for the source crater of the worldwide late Eocene ejecta [6,7].
Automated Absorber Attachment for X-ray Microcalorimeter Arrays
NASA Technical Reports Server (NTRS)
Moseley, S.; Allen, Christine; Kilbourne, Caroline; Miller, Timothy M.; Costen, Nick; Schulte, Eric; Moseley, Samuel J.
2007-01-01
Our goal is to develop a method for the automated attachment of large numbers of absorber tiles to large format detector arrays. This development includes the fabrication of high quality, closely spaced HgTe absorber tiles that are properly positioned for pick-and-place by our FC150 flip chip bonder. The FC150 also transfers the appropriate minute amount of epoxy to the detectors for permanent attachment of the absorbers. The success of this development will replace an arduous, risky and highly manual task with a reliable, high-precision automated process.
NASA Astrophysics Data System (ADS)
Cortesi, A. B.; Smith, B. L.; Yadigaroglu, G.; Banerjee, S.
1999-01-01
The direct numerical simulation (DNS) of a temporally-growing mixing layer has been carried out, for a variety of initial conditions at various Richardson and Prandtl numbers, by means of a pseudo-spectral technique; the main objective being to elucidate how the entrainment and mixing processes in mixing-layer turbulence are altered under the combined influence of stable stratification and thermal conductivity. Stratification is seen to significantly modify the way by which entrainment and mixing occur by introducing highly-localized, convective instabilities, which in turn cause a substantially different three-dimensionalization of the flow compared to the unstratified situation. Fluid which was able to cross the braid region mainly undisturbed (unmixed) in the unstratified case, pumped by the action of rib pairs and giving rise to well-formed mushroom structures, is not available with stratified flow. This is because of the large number of ribs which efficiently mix the fluid crossing the braid region. More efficient entrainment and mixing has been noticed for high Prandtl number computations, where vorticity is significantly reinforced by the baroclinic torque. In liquid sodium, however, for which the Prandtl number is very low, the generation of vorticity is very effectively suppressed by the large thermal conduction, since only small temperature gradients, and thus negligible baroclinic vorticity reinforcement, are then available to counterbalance the effects of buoyancy. This is then reflected in less efficient entrainment and mixing. The influence of the stratification and the thermal conductivity can also be clearly identified from the calculated entrainment coefficients and turbulent Prandtl numbers, which were seen to accurately match experimental data. The turbulent Prandtl number increases rapidly with increasing stratification in liquid sodium, whereas for air and water the stratification effect is less significant. A general law for the entrainment coefficient as a function of the Richardson and Prandtl numbers is proposed, and critically assessed against experimental data.
An array processing system for lunar geochemical and geophysical data
NASA Technical Reports Server (NTRS)
Eliason, E. M.; Soderblom, L. A.
1977-01-01
A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.
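As a rough illustration of why floating-point precision matters for such smoothing (synthetic data only, not the consortium's actual pipeline), the sketch below compares the same Gaussian smoothing run entirely in floating point and through an 8-bit integer intermediate, showing the quantization error the latter introduces.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
sparse_map = rng.random((64, 64))                 # synthetic stand-in for a data map

# Floating-point pipeline: values keep full precision through the filter.
smooth_float = gaussian_filter(sparse_map, sigma=2.0)

# Integer pipeline: values are quantized to 8 bits before filtering, so the
# filter output is also truncated to integers, losing fine gradations.
smooth_int = gaussian_filter((sparse_map * 255).astype(np.uint8),
                             sigma=2.0).astype(np.float64) / 255.0

print("max difference due to integer quantization:",
      np.abs(smooth_float - smooth_int).max())
```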
An Ancestral Recombination Graph for Diploid Populations with Skewed Offspring Distribution
Birkner, Matthias; Blath, Jochen; Eldon, Bjarki
2013-01-01
A large offspring-number diploid biparental multilocus population model of Moran type is our object of study. At each time step, a pair of diploid individuals drawn uniformly at random contributes offspring to the population. The number of offspring can be large relative to the total population size. Similar “heavily skewed” reproduction mechanisms have been recently considered by various authors (cf. e.g., Eldon and Wakeley 2006, 2008) and reviewed by Hedgecock and Pudovkin (2011). Each diploid parental individual contributes exactly one chromosome to each diploid offspring, and hence ancestral lineages can coalesce only when in distinct individuals. A separation-of-timescales phenomenon is thus observed. A result of Möhle (1998) is extended to obtain convergence of the ancestral process to an ancestral recombination graph necessarily admitting simultaneous multiple mergers of ancestral lineages. The usual ancestral recombination graph is obtained as a special case of our model when the parents contribute only one offspring to the population each time. Due to diploidy and large offspring numbers, novel effects appear. For example, the marginal genealogy at each locus admits simultaneous multiple mergers in up to four groups, and different loci remain substantially correlated even as the recombination rate grows large. Thus, genealogies for loci far apart on the same chromosome remain correlated. Correlation in coalescence times for two loci is derived and shown to be a function of the coalescence parameters of our model. Extending the observations by Eldon and Wakeley (2008), predictions of linkage disequilibrium are shown to be functions of the reproduction parameters of our model, in addition to the recombination rate. Correlations in ratios of coalescence times between loci can be high, even when the recombination rate is high and sample size is large, in large offspring-number populations, as suggested by simulations, hinting at how to distinguish between different population models. PMID:23150600
ERIC Educational Resources Information Center
Szekely, Eszter; Herba, Catherine M.; Arp, Pascal P.; Uitterlinden, Andre G.; Jaddoe, Vincent W. V.; Hofman, Albert; Verhulst, Frank C.; Hudziak, James J.; Tiemeier, Henning
2011-01-01
Background: Previous research highlights the significance of a functional polymorphism located in the promoter region (5-HTTLPR) of the serotonin transporter gene in emotional behaviour. This study examined the effect of the 5-HTTLPR polymorphism on emotion processing in a large number of healthy preschoolers. Methods: The 5-HTTLPR genotype was…
Decision-Making Rationales among Quebec VET Student Aged 25 and Older
ERIC Educational Resources Information Center
Cournoyer, Louis; Deschenaux, Frédéric
2017-01-01
Each year, a large number of students aged 25 years and over take part in vocational and education training (VET) programs in the Province of Quebec, Canada. The life experiences of many of these adults are marked by complex psychosocial and professional events, which may have influenced their career decision-making processes. This paper aimed to…
Improving the Validity and Reliability of Large Scale Writing Assessment.
ERIC Educational Resources Information Center
Fenton, Ray; Straugh, Tom; Stofflet, Fred; Garrison, Steve
This paper examines the efforts of the Anchorage School District, Alaska, to improve the validity of its writing assessment as a useful tool for the training of teachers and the characterization of the quality of student writing. The paper examines how a number of changes in the process and scoring of the Anchorage Writing Assessment affected the…
USDA-ARS?s Scientific Manuscript database
The ability to rapidly screen a large number of individuals is the key to any successful plant breeding program. One of the primary bottlenecks in high throughput screening is the preparation of DNA samples, particularly the quantification and normalization of samples for downstream processing. A ...
ERIC Educational Resources Information Center
Hyde, Daniel C.; Spelke, Elizabeth S.
2011-01-01
Behavioral research suggests that two cognitive systems are at the foundations of numerical thinking: one for representing 1-3 objects in parallel and one for representing and comparing large, approximate numerical magnitudes. We tested for dissociable neural signatures of these systems in preverbal infants by recording event-related potentials…
A design aid for determining width of filter strips
M.G. Dosskey; M.J. Helmers; D.E. Eisenhauer
2008-01-01
Watershed planners need a tool for determining the width of filter strips that is accurate enough for developing cost-effective site designs and easy enough to use for making quick determinations on a large number and variety of sites. This study employed the process-based Vegetative Filter Strip Model to evaluate the relationship between filter strip width and trapping...
Edwards, Jacky
Scarring has major psychological and physical repercussions--for example, scarring on the face and visible regions of the body can be very distressing for the patient, whether it is simple acne scarring or large, raised surgical or traumatic scars. This article discusses the process of scar formation and the differences between scar types, and proposes a number of ways in which the nurse can manage scars.
Code of Federal Regulations, 2010 CFR
2010-04-01
... cigarettes, or small cigars to be shipped; (e) The number and total sale price of large cigars having a sale...) TOBACCO IMPORTATION OF TOBACCO PRODUCTS, CIGARETTE PAPERS AND TUBES, AND PROCESSED TOBACCO Puerto Rican Tobacco Products and Cigarette Papers and Tubes, Brought Into the United States Deferred Payment of Tax in...
Promoting College and Career Success: Portfolio Assessment for Student Veterans
ERIC Educational Resources Information Center
Council for Adult and Experiential Learning, 2014
2014-01-01
Active service members and veterans are pursuing postsecondary degrees in record numbers today, due in large part to the GI Bill education benefits that can cover much or all of the cost. An important tool for helping service members and veterans succeed in postsecondary education is prior learning assessment (PLA). PLA is a process that includes…
ERIC Educational Resources Information Center
White, Bruce
2017-01-01
Studies of data-driven deselection overwhelmingly emphasise the importance of circulation counts and date-of-last-use in the weeding process. When applied to research collections, however, this approach fails to take account of highly influential and significant titles that have not been of interest to large numbers of borrowers but that have been…
Self-Esteem and Hopelessness, and Resiliency: An Exploratory Study of Adolescents in Turkey
ERIC Educational Resources Information Center
Karatas, Zeynep; Cakar, Firdevs Savi
2011-01-01
Adolescence is a time of rapid development and change. In this developmental period, adolescents have to struggle with a large number of stress factors. In this process resilience is important to have as an adaptive, stress-resistant personal quality. The recent research considers that numerous factors contribute to resilience in adolescents; the…
Important parameters for smoke plume rise simulation with Daysmoke
L. Liu; G.L. Achtemeier; S.L. Goodrick; W. Jackson
2010-01-01
Daysmoke is a local smoke transport model and has been used to provide smoke plume rise information. It includes a large number of parameters describing the dynamic and stochastic processes of particle upward movement, fallout, fluctuation, and burn emissions. This study identifies the important parameters for Daysmoke simulations of plume rise and seeks to understand...
Code of Federal Regulations, 2012 CFR
2012-04-01
... cigarettes, or small cigars to be shipped; (e) The number and total sale price of large cigars having a sale...) TOBACCO IMPORTATION OF TOBACCO PRODUCTS, CIGARETTE PAPERS AND TUBES, AND PROCESSED TOBACCO Puerto Rican Tobacco Products and Cigarette Papers and Tubes, Brought Into the United States Deferred Payment of Tax in...
Code of Federal Regulations, 2014 CFR
2014-04-01
... cigarettes, or small cigars to be shipped; (e) The number and total sale price of large cigars having a sale...) TOBACCO IMPORTATION OF TOBACCO PRODUCTS, CIGARETTE PAPERS AND TUBES, AND PROCESSED TOBACCO Puerto Rican Tobacco Products and Cigarette Papers and Tubes, Brought Into the United States Deferred Payment of Tax in...
Code of Federal Regulations, 2011 CFR
2011-04-01
... cigarettes, or small cigars to be shipped; (e) The number and total sale price of large cigars having a sale...) TOBACCO IMPORTATION OF TOBACCO PRODUCTS, CIGARETTE PAPERS AND TUBES, AND PROCESSED TOBACCO Puerto Rican Tobacco Products and Cigarette Papers and Tubes, Brought Into the United States Deferred Payment of Tax in...
Code of Federal Regulations, 2013 CFR
2013-04-01
... cigarettes, or small cigars to be shipped; (e) The number and total sale price of large cigars having a sale...) TOBACCO IMPORTATION OF TOBACCO PRODUCTS, CIGARETTE PAPERS AND TUBES, AND PROCESSED TOBACCO Puerto Rican Tobacco Products and Cigarette Papers and Tubes, Brought Into the United States Deferred Payment of Tax in...
ERIC Educational Resources Information Center
Rienties, Bart; Tempelaar, Dirk; Giesbers, Bas; Segers, Mien; Gijselaers, Wim
2014-01-01
A large number of studies in CMC have assessed how social interaction, processes and learning outcomes are intertwined. The present research explores how the degree of self-determination of learners, that is the motivational orientation of a learner, influences the communication and interaction patterns in an online Problem Based Learning…
ERIC Educational Resources Information Center
Misischia, Cynthia M.
2010-01-01
A large number of undergraduate students have naive understandings about the processes of Diffusion and Osmosis. Some students overcome these misconceptions, but others do not. The study involved nineteen undergraduate movement science students at a Midwest University. Participants were asked to complete a short answer (fill-in-the-blank) test,…
Supporting Teachers' Use of Research-Based Instructional Sequences
ERIC Educational Resources Information Center
Cobb, Paul; Jackson, Kara
2015-01-01
In this paper, we frame the dissemination of the products of classroom design studies as a process of supporting the learning of large numbers of teachers. We argue that high-quality pull-out professional development is essential but not sufficient, and go on to consider teacher collaboration and one-on-one coaching in the classroom as additional…
ERIC Educational Resources Information Center
Glantz, Richard S.
Until recently, the emphasis in information storage and retrieval systems has been towards batch-processing of large files. In contrast, SHOEBOX is designed for the unformatted, personal file collection of the computer-naive individual. Operating through display terminals in a time-sharing, interactive environment on the IBM 360, the user can…
Individualizing the Teaching of Reading through Test Management Systems.
ERIC Educational Resources Information Center
Fry, Edward
Test management systems are suggested for individualizing the teaching of reading in the elementary classroom. Test management systems start with a list of objectives or specific goals which cover all or some major areas of the learning to read process. They then develop a large number of criterion referenced tests which match the skill areas at…
Text Messaging for Student Communication and Voting
ERIC Educational Resources Information Center
McClean, Stephen; Hagan, Paul; Morgan, Jason
2010-01-01
Text messaging has gained widespread popularity in higher education as a communication tool and as a means of engaging students in the learning process. In this study we report on the use of text messaging in a large, year-one introductory chemistry module where students were encouraged to send questions and queries to a dedicated text number both…
Music Education and the Brain: What Does It Take to Make a Change?
ERIC Educational Resources Information Center
Collins, Anita
2014-01-01
Neuroscientists have worked for over two decades to understand how the brain processes music, affects emotions, and changes brain development. Much of this research has been based on a model that compares the brain function of participants classified as musicians and nonmusicians. This body of knowledge reveals a large number of benefits from…
USDA-ARS?s Scientific Manuscript database
Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...
ERIC Educational Resources Information Center
Llambi, Laura; Esteves, Elba; Martinez, Elisa; Forster, Thais; Garcia, Sofia; Miranda, Natalia; Arredondo, Antonio Lopez; Margolis, Alvaro
2011-01-01
Introduction: Since 2004, with the ratification of the Framework Convention on Tobacco Control, Uruguay has implemented a wide range of legal restrictions designed to reduce the devastating impacts of tobacco. This legal process generated an increase in demand for tobacco cessation treatment, which led to the need to train a large number of…
An Integrated On-Line Transfer Credit Evaluation System-Admissions through Graduation Audit.
ERIC Educational Resources Information Center
Schuman, Chester D.
This document discusses a computerized transfer evaluation system designed by Pennsylvania College of Technology, a comprehensive two-year institution with an enrollment of over 4,800 students. It is noted that the Admissions Office processes approximately 500 transfer applications for a fall semester, as well as a large number of evaluations for…
NASA Technical Reports Server (NTRS)
Sewell, James S.; Bozada, Christopher A.
1994-01-01
Advanced radar and communication systems rely heavily on state-of-the-art microelectronics. Systems such as the phased-array radar require many transmit/receive (T/R) modules which are made up of many millimeter-wave/microwave integrated circuits (MMICs). The heart of an MMIC chip is the Gallium Arsenide (GaAs) field-effect transistor (FET). The transistor gate length is the critical feature that determines the operating frequency of the radar system. A smaller gate length will typically result in a higher frequency. In order to make a phased array radar system economically feasible, manufacturers must be capable of producing very large quantities of small-gate-length MMIC chips at a relatively low cost per chip. This requires the processing of a large number of wafers with a large number of chips per wafer, minimum processing time, and a very high chip yield. One of the bottlenecks in the fabrication of MMIC chips is the transistor gate definition. The definition of sub-half-micron gates for GaAs-based field-effect transistors is generally performed by direct-write electron beam lithography (EBL). Because of the throughput limitations of EBL, the gate-layer fabrication is conventionally divided into two lithographic processes where EBL is used to generate the gate fingers and optical lithography is used to generate the large-area gate pads and interconnects. As a result, two complete sequences of resist application, exposure, development, metallization and lift-off are required for the entire gate structure. We have baselined a hybrid process, referred to as EBOL (electron beam/optical lithography), in which a single application of a multi-level resist is used for both exposures. The entire gate structure (gate fingers, interconnects, and pads) is then formed with a single metallization and lift-off process. The EBOL process thus retains the advantages of the high-resolution E-beam lithography and the high throughput of optical lithography while essentially eliminating an entire lithography/metallization/lift-off process sequence. This technique has been proven to be reliable for both trapezoidal and mushroom gates and has been successfully applied to metal-semiconductor and high-electron-mobility field-effect transistor (MESFET and HEMT) wafers containing devices with gate lengths down to 0.10 micron and 75 x 75 micron gate pads. The yields and throughput of these wafers have been very high with no loss in device performance. We will discuss the entire EBOL process technology including the multilayer resist structure, exposure conditions, process sensitivities, metal edge definition, device results, comparison to the standard gate-layer process, and its suitability for manufacturing.
From Discovery to Production: Biotechnology of Marine Fungi for the Production of New Antibiotics.
Silber, Johanna; Kramer, Annemarie; Labes, Antje; Tasdemir, Deniz
2016-07-21
Filamentous fungi are well known for their capability of producing antibiotic natural products. Recent studies have demonstrated the potential of antimicrobials with vast chemodiversity from marine fungi. Development of such natural products into lead compounds requires a sustainable supply. Marine biotechnology can significantly contribute to the production of new antibiotics at various levels of the process chain, including discovery, production, downstream processing, and lead development. However, the number of biotechnological processes described for large-scale production from marine fungi falls far short of the number of newly discovered natural antibiotics. Methods and technologies applied in marine fungal biotechnology largely derive from analogous terrestrial processes and rarely reflect the specific demands of marine fungi. The current developments in metabolic engineering and marine microbiology have not yet been transferred into processes, but offer numerous options for improving production processes and establishing new process chains. This review summarises the current state of biotechnological production of marine fungal antibiotics and points out the enormous potential of biotechnology in all stages of the discovery-to-development pipeline. At the same time, the literature survey reveals that more biotechnology transfer and method development are needed for a sustainable and innovative production of marine fungal antibiotics.
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
Coarse-grained models of key self-assembly processes in HIV-1
NASA Astrophysics Data System (ADS)
Grime, John
Computational molecular simulations can elucidate microscopic information that is inaccessible to conventional experimental techniques. However, many processes occur over time and length scales that are beyond the current capabilities of atomic-resolution molecular dynamics (MD). One such process is the self-assembly of the HIV-1 viral capsid, a biological structure that is crucial to viral infectivity. The nucleation and growth of capsid structures requires the interaction of large numbers of capsid proteins within a complicated molecular environment. Coarse-grained (CG) models, where degrees of freedom are removed to produce more computationally efficient models, can in principle access large-scale phenomena such as the nucleation and growth of HIV-1 capsid lattice. We report here studies of the self-assembly behaviors of a CG model of HIV-1 capsid protein, including the influence of the local molecular environment on nucleation and growth processes. Our results suggest a multi-stage process, involving several characteristic structures, eventually producing metastable capsid lattice morphologies that are amenable to subsequent capsid dissociation in order to transmit the viral infection.
NEAT1 Scaffolds RNA Binding Proteins and the Microprocessor to Globally Enhance Pri-miRNA Processing
Jiang, Li; Shao, Changwei; Wu, Qi-Jia; Chen, Geng; Zhou, Jie; Yang, Bo; Li, Hairi; Gou, Lan-Tao; Zhang, Yi; Wang, Yangming; Yeo, Gene W.; Zhou, Yu; Fu, Xiang-Dong
2018-01-01
MicroRNA biogenesis is known to be modulated by a variety of RNA binding proteins (RBPs), but in most cases, individual RBPs appear to influence the processing of a small subset of target miRNAs. We herein report that the RNA binding NONO/PSF heterodimer binds a large number of expressed pri-miRNAs in HeLa cells to globally enhance pri-miRNA processing by the Drosha/DGCR8 Microprocessor. Because NONO/PSF are key components of paraspeckles organized by the lncRNA NEAT1, we further demonstrate that NEAT1 also has a profound effect on global pri-miRNA processing. Mechanistic dissection reveals that NEAT1 broadly interacts with NONO/PSF as well as many other RBPs, and that multiple RNA segments in NEAT1, including a “pseudo pri-miRNA” near its 3′ end, help attract the Microprocessor. These findings suggest a bird nest model for a large non-coding RNA to orchestrate efficient processing of almost an entire class of small non-coding RNAs in the nucleus.
Validation of Rapid Radiochemical Method for Californium ...
Technical Brief: In the event of radiological or nuclear contamination, the response community would need tools and methodologies to rapidly assess the nature and extent of contamination. To characterize a radiologically contaminated outdoor area and to inform risk assessment, large numbers of environmental samples would be collected and analyzed over a short period of time. To address the challenge of quickly providing analytical results to the field, the U.S. EPA developed a robust analytical method. This method allows response officials to characterize contaminated areas and to assess the effectiveness of remediation efforts, both rapidly and accurately, in the intermediate and late phases of environmental cleanup. Improvements in sample processing and analysis increase laboratory capacity to handle the analysis of a large number of samples following the intentional or unintentional release of a radiological/nuclear contaminant.
A high-throughput microRNA expression profiling system.
Guo, Yanwen; Mastriano, Stephen; Lu, Jun
2014-01-01
As small noncoding RNAs, microRNAs (miRNAs) regulate diverse biological functions, including physiological and pathological processes. The expression and deregulation of miRNA levels contain rich information of diagnostic and prognostic relevance and can reflect pharmacological responses. The increasing interest in miRNA-related research demands global miRNA expression profiling on large numbers of samples. We describe here a robust protocol that supports high-throughput sample labeling and detection on hundreds of samples simultaneously. This method employs 96-well-based miRNA capture from total RNA samples and on-site biochemical reactions, coupled with bead-based detection in 96-well format for hundreds of miRNAs per sample. With low cost, high throughput, high detection specificity, and the flexibility to profile both small and large numbers of samples, this protocol can be adapted to a wide range of laboratory settings.
GW Calculations of Materials on the Intel Xeon-Phi Architecture
NASA Astrophysics Data System (ADS)
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.
Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to make use of the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures because of the large amount of parallelism that can be exploited over plane waves, band pairs, and frequencies. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences, Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).
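As a rough illustration of why such parallelism maps well onto many-core hardware, the sketch below evaluates a toy polarizability-like sum over valence/conduction band pairs, plane-wave components, and frequencies as a single dense tensor contraction; the array names, shapes, and random stand-in data are invented for this example and do not correspond to BerkeleyGW or any other production code.

    import numpy as np

    # Toy sizes (illustrative only): valence bands, conduction bands,
    # plane-wave components, and frequency points.
    n_v, n_c, n_G, n_w = 4, 8, 64, 16

    rng = np.random.default_rng(0)
    # M[v, c, G]: plane-wave matrix elements for each band pair (random stand-ins).
    M = rng.standard_normal((n_v, n_c, n_G)) + 1j * rng.standard_normal((n_v, n_c, n_G))
    # Transition energies for each band pair, a frequency grid, and a small broadening.
    dE = rng.uniform(1.0, 10.0, size=(n_v, n_c))
    omega = np.linspace(0.0, 5.0, n_w)
    eta = 0.1

    # Energy denominators for every (band pair, frequency): shape (n_v, n_c, n_w).
    denom = 1.0 / (omega[None, None, :] - dE[:, :, None] + 1j * eta)

    # chi[G, G', w]: sum over band pairs of M(G) * conj(M(G')) weighted by denom.
    # The contraction exposes independent work over band pairs, plane waves,
    # and frequencies -- the parallelism the abstract refers to.
    chi = np.einsum('vcg,vch,vcw->ghw', M, np.conj(M), denom)
    print(chi.shape)  # (64, 64, 16)

The point of casting the sum this way is that the inner work becomes large, regular batches of complex arithmetic, which is exactly what wide vector units and many cores handle efficiently.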
Distributed magnetic field positioning system using code division multiple access
NASA Technical Reports Server (NTRS)
Prigge, Eric A. (Inventor)
2003-01-01
An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple-access method known as Code Division Multiple Access (CDMA). This signal architecture, when combined with appropriate processing methods, leads to advantages over existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures the sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large, 'building-sized' coverage area. The system allows numerous beacons to be distributed at many different locations throughout an area. A method to estimate position and attitude, with no prior knowledge, uses the dipole fields produced by these beacons at their different locations.
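To make the decomposition step concrete, the sketch below shows, under simplifying assumptions (synchronized beacons, mutually orthogonal ±1 spreading codes, and a purely linear superposition of fields along one sensor axis), how the summed measurement could be separated into per-beacon contributions by correlating against the known codes; the variable names and numbers are illustrative and this is not the patented apparatus itself.

    import numpy as np
    from scipy.linalg import hadamard

    # Illustrative setup: 8 beacons, each modulating its field with a distinct
    # +/-1 spreading code (rows of a Hadamard matrix are mutually orthogonal).
    n_beacons = chip_len = 8
    codes = hadamard(chip_len)[:n_beacons]        # shape (8, 8)

    # Unknown per-beacon field contributions at the sensor along one axis.
    true_fields = np.array([0.9, -0.3, 0.0, 1.2, 0.5, -0.7, 0.2, 0.05])

    # The sensor observes only the coded sum at each chip interval, plus noise.
    rng = np.random.default_rng(1)
    measurement = codes.T @ true_fields + 0.01 * rng.standard_normal(chip_len)

    # Decomposition: correlate the summed signal with each beacon's code.
    # Orthogonality cancels the cross-terms, leaving each beacon's field.
    recovered = codes @ measurement / chip_len
    print(np.round(recovered, 2))                 # close to true_fields

In a full positioning system, each recovered component field would then be matched against the dipole model of its beacon to estimate the sensor's position and orientation, as the abstract describes.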