Code of Federal Regulations, 2013 CFR
2013-07-01
... persons who import, manufacture, process, distribute in commerce, or use chemicals containing... records by persons who import, manufacture, process, distribute in commerce, or use chemicals containing inadvertently generated PCBs. (a) Persons who import, manufacture, process, distribute in commerce, or use...
Code of Federal Regulations, 2011 CFR
2011-07-01
... persons who import, manufacture, process, distribute in commerce, or use chemicals containing... records by persons who import, manufacture, process, distribute in commerce, or use chemicals containing inadvertently generated PCBs. (a) Persons who import, manufacture, process, distribute in commerce, or use...
Code of Federal Regulations, 2012 CFR
2012-07-01
... persons who import, manufacture, process, distribute in commerce, or use chemicals containing... records by persons who import, manufacture, process, distribute in commerce, or use chemicals containing inadvertently generated PCBs. (a) Persons who import, manufacture, process, distribute in commerce, or use...
Code of Federal Regulations, 2014 CFR
2014-07-01
... persons who import, manufacture, process, distribute in commerce, or use chemicals containing... records by persons who import, manufacture, process, distribute in commerce, or use chemicals containing inadvertently generated PCBs. (a) Persons who import, manufacture, process, distribute in commerce, or use...
NASA Astrophysics Data System (ADS)
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The paper considers the important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extractive wells. The article presents a method for defining approximation links that describe the dynamic characteristics of hydrolithosphere processes, and the structure of the distributed regulators used in the corresponding management systems. The paper analyses the results of synthesizing the distributed management system and of modelling the closed-loop control of the parameters of the hydrolithosphere process.
Code of Federal Regulations, 2010 CFR
2010-07-01
... persons who import, manufacture, process, distribute in commerce, or use chemicals containing..., DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS General Records and Reports § 761.193 Maintenance of monitoring records by persons who import, manufacture, process, distribute in commerce, or use chemicals containing...
Post-processing of metal matrix composites by friction stir processing
NASA Astrophysics Data System (ADS)
Sharma, Vipin; Singla, Yogesh; Gupta, Yashpal; Raghuwanshi, Jitendra
2018-05-01
In metal matrix composites, a non-uniform distribution of reinforcement particles has an adverse effect on the mechanical properties. It is therefore of great interest to explore post-processing techniques that can eliminate heterogeneity in particle distribution. Friction stir processing is a relatively new technique used for post-processing of metal matrix composites to improve the homogeneity of particle distribution. In friction stir processing, the synergistic effect of stirring, extrusion and forging results in refinement of grains, reduction of reinforcement particle size, uniformity of particle distribution, reduction in microstructural heterogeneity and elimination of defects.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
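The overdispersion property described in this abstract is easy to check numerically. The sketch below is a minimal illustration with arbitrary parameter values (not taken from the paper): it simulates counts from a renewal process whose durations follow a two-component exponential mixture and reports the dispersion index, which exceeds the value of 1 expected for a Poisson process.

```python
import numpy as np

rng = np.random.default_rng(42)

def mixture_exponential_renewal_count(t, p, rate1, rate2, rng):
    """Count renewals in [0, t] when each duration is exponential with
    rate1 (probability p) or rate2 (probability 1 - p)."""
    elapsed, count = 0.0, 0
    while True:
        rate = rate1 if rng.random() < p else rate2
        elapsed += rng.exponential(1.0 / rate)
        if elapsed > t:
            return count
        count += 1

# Illustrative parameters only (not from the paper).
t, p, rate1, rate2 = 10.0, 0.3, 0.5, 5.0
counts = np.array([mixture_exponential_renewal_count(t, p, rate1, rate2, rng)
                   for _ in range(5000)])
print("mean:", counts.mean(), " variance:", counts.var())
print("dispersion index (var/mean):", counts.var() / counts.mean())
# A Poisson process would give a dispersion index of 1; the exponential
# mixture typically yields a value above 1, i.e. overdispersion.
```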
Standard services for the capture, processing, and distribution of packetized telemetry data
NASA Technical Reports Server (NTRS)
Stallings, William H.
1989-01-01
Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.
Hierarchical analysis of species distributions and abundance across environmental gradients
Jeffery Diez; Ronald H. Pulliam
2007-01-01
Abiotic and biotic processes operate at multiple spatial and temporal scales to shape many ecological processes, including species distributions and demography. Current debate about the relative roles of niche-based and stochastic processes in shaping species distributions and community composition reflects, in part, the challenge of understanding how these processes...
The class of L ∩ D and its application to renewal reward process
NASA Astrophysics Data System (ADS)
Kamışlık, Aslı Bektaş; Kesemen, Tülay; Khaniyev, Tahir
2018-01-01
The class L ∩ D is generated by the intersection of two important subclasses of heavy-tailed distributions: the long-tailed distributions and the dominated varying distributions. This class is itself an important member of the heavy-tailed distributions and has principal application areas in renewal, renewal reward and random walk processes. The aim of this study is to review some well-known and less-known results on renewal functions generated by the class L ∩ D and to apply them to a special renewal reward process, known in the literature as a semi-Markovian inventory model of type (s, S). We focus in particular on the Pareto distribution, which belongs to the L ∩ D subclass of heavy-tailed distributions. As a first step, we obtain asymptotic results for the renewal function generated by a Pareto distribution from the class L ∩ D, using well-known results of Embrechts and Omey [1]. We then apply the results obtained for the Pareto distribution to renewal reward processes. As an application, we investigate an inventory model of type (s, S) in which demands have a Pareto distribution from the class L ∩ D. We obtain an asymptotic expansion for the ergodic distribution function and, finally, an asymptotic expansion for the nth-order moments of the distribution of this process.
Hierarchical species distribution models
Hefley, Trevor J.; Hooten, Mevin B.
2016-01-01
Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary methods used in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
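Only the abstract is reproduced here; as a rough, hedged illustration of the simulation-plus-estimation idea it describes, the sketch below assumes an AR(1) autocorrelation structure and an arbitrary threshold, simulates a stationary Gaussian process, and tallies the empirical distribution of overshoot (upcrossing) counts.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, phi, rng):
    """Stationary Gaussian AR(1) sample with lag-one autocorrelation phi."""
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2))  # stationary start
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

def count_overshoots(x, level):
    """Number of upcrossings of `level`, i.e. entries into the exceedance region."""
    above = x > level
    return int(np.sum(~above[:-1] & above[1:]))

# Illustrative settings only (not from the report).
n, phi, level, trials = 2000, 0.9, 1.5, 500
counts = np.array([count_overshoots(simulate_ar1(n, phi, rng), level)
                   for _ in range(trials)])
print("overshoot counts: mean=%.2f variance=%.2f" % (counts.mean(), counts.var()))
```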
Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.
Finley, Benjamin J; Kilkki, Kalevi
2014-01-01
The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
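As a hedged illustration of the kind of cascade the authors describe (their exact process differs in detail), the sketch below repeatedly splits a total into random proportions and inspects the resulting rank-frequency pairs, which on a log-log scale show the concave shape mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_cascade(total, depth, rng):
    """Split `total` through `depth` binary random splits and return leaf sizes
    (a simple multiplicative cascade)."""
    sizes = [float(total)]
    for _ in range(depth):
        next_sizes = []
        for s in sizes:
            w = rng.uniform(0.1, 0.9)  # random split proportion
            next_sizes.extend([s * w, s * (1.0 - w)])
        sizes = next_sizes
    return np.array(sizes)

# Illustrative parameters only (not from the paper).
leaves = stochastic_cascade(total=1_000_000, depth=12, rng=rng)
freqs = np.sort(leaves)[::-1]                  # frequencies, largest first
ranks = np.arange(1, len(freqs) + 1)
log_rank_freq = np.column_stack([np.log10(ranks), np.log10(freqs)])
print(log_rank_freq[:5])                       # plot these pairs to see the concave curve
```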
Managed traffic evacuation using distributed sensor processing
NASA Astrophysics Data System (ADS)
Ramuhalli, Pradeep; Biswas, Subir
2005-05-01
This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.
Data Transparency | Distributed Generation Interconnection Collaborative
Web-page excerpt: data quality and availability are increasingly vital for reducing the costs of distributed generation … in certain areas, increasing accountability for utility application processing. References the NREL, HECO and TSRG report "Improving Data Transparency for the Distributed PV Interconnection Process: Emergent …".
2006-04-01
Report excerpt: session topics included (1) … and Scalability, (2) Sensors and Platforms, (3) Distributed Computing and Processing, (4) Information Management, and (5) Fusion and Resource Management. The Distributed Computing and Processing Session consisted of three …
Distributed Data Processing in a United States Naval Shipyard.
1979-12-01
Table-of-contents fragments: Evolution; Motivations for Distributed Processing; Extensibility; Concepts; Form and Structure. The thesis discusses the motivations for, and the characteristics of, distributed processing as they apply to management information systems, beginning with their evolution prior to the advent of …
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
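The core construction (a Poisson count whose gamma-distributed rate is marginalized out, yielding a negative binomial) can be checked numerically. The sketch below is only a finite-dimensional verification of the gamma-Poisson mixture identity with arbitrary parameters; it is not an implementation of the full NB process.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# If lambda ~ Gamma(shape=r, scale=p/(1-p)) and N | lambda ~ Poisson(lambda),
# then marginally N ~ NegativeBinomial(r, 1-p) in SciPy's parameterization.
r, p, n_samples = 3.0, 0.4, 200_000
lam = rng.gamma(shape=r, scale=p / (1.0 - p), size=n_samples)
counts = rng.poisson(lam)

for k in range(5):
    simulated = np.mean(counts == k)
    exact = stats.nbinom.pmf(k, r, 1.0 - p)
    print(f"P(N={k}): simulated {simulated:.4f}   NB pmf {exact:.4f}")
```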
ERIC Educational Resources Information Center
Leech, Robert; Saygin, Ayse Pinar
2011-01-01
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…
Laadan, Oren; Nieh, Jason; Phung, Dan
2012-10-02
Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
A stochastic diffusion process for Lochner's generalized Dirichlet distribution
Bakosi, J.; Ristorcelli, J. R.
2013-10-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled, with a more general covariance matrix.
Land transportation model for supply chain manufacturing industries
NASA Astrophysics Data System (ADS)
Kurniawan, Fajar
2017-12-01
Supply chain is a system that integrates production, inventory, distribution and information processes in order to increase productivity and minimize costs. Transportation is an important part of the supply chain system, especially for supporting the distribution of materials, work-in-process products and final products. Jakarta serves as the distribution center of manufacturing industries for the industrial area, and the transportation system therefore has a large influence on the efficiency of the supply chain process. The main problem faced in Jakarta is traffic congestion, which affects distribution times. Based on a system dynamics model, several scenarios can provide solutions that minimize distribution time, and thereby cost, such as the construction of ports approaching the industrial areas other than Tanjung Priok, widening of road facilities, development of the railway system, and development of distribution centers.
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, geographically distributed data centers are increasingly used to store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation and communication, and the key issues to be dealt with are time efficiency, cost minimization and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving quality of service and on reducing computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.
Kalvelage, T.; Willems, Jennifer
2003-01-01
The design of the EOS Data and Information System (EOSDIS) to acquire, archive, manage and distribute Earth observation data to the broadest possible user community was discussed. Several integrated retrieval, processing and distribution capabilities were explained, the value of these functions to users was described, and potential future improvements were laid out. Users are interested in having the retrieval, processing and archiving systems integrated so that they can get the data they want in the format and through the delivery mechanism of their choice.
Information Acquisition, Analysis and Integration
2016-08-03
Report fragments: keywords include sensing and processing, theory, applications, signal processing, image and video processing, machine learning, and technology transfer. Excerpt: "Solved elegantly old problems like image and video deblurring, introducing new revolutionary approaches." Referenced publication: G. Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, "Deep learning with hierarchical convolution factor analysis," IEEE.
Local Anesthetic Microencapsulation.
1983-11-04
Report fragments: … following I.M. injection of microencapsulated lidocaine and etidocaine than following solution injections; local toxicity of these microcapsule injections … Table listings cover the processing summary and microcapsule size distribution for lidocaine (base) microencapsulation and for etidocaine-HCl microencapsulation.
NASA Astrophysics Data System (ADS)
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The article considers the important issue of designing distributed systems for the management of hydrolithosphere processes. Control actions on the hydrolithosphere processes are implemented by a set of extractive wells. The article shows how to determine the optimal number of extractive wells that provide a distributed control impact on the managed object.
Experiments to Distribute Map Generalization Processes
NASA Astrophysics Data System (ADS)
Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas
2018-05-01
Automatic map generalization requires computationally intensive processes that are often unable to deal with large datasets. Distributing the generalization process is the only way to make it scalable and usable in practice. But map generalization is a highly contextual process: the surroundings of a map feature need to be known in order to generalize it, which is a problem because distribution may partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past propositions for distributing map generalization and to identify the main remaining issues. The past propositions are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective at taking context into account. The geographical partitioning, though less effective for now, is quite promising regarding the quality of the results, as it better integrates the geographical context.
Inheritance on processes, exemplified on distributed termination detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomsen, K.S.
1987-02-01
A multiple inheritance mechanism on processes is designed and presented within the framework of a small object-oriented language. Processes are described in classes, and the different action parts of a process inherited from different classes are executed in a coroutine-like style called alternation. The inheritance mechanism is a useful tool for factorizing the description of common aspects of processes. This is demonstrated within the domain of distributed programming by using the inheritance mechanism to factorize the description of distributed termination detection algorithms from the description of the distributed main computations whose termination is to be detected. A clear separation of concerns is obtained, and arbitrary combinations of termination detection algorithms and main computations can be formed. The same termination detection classes can also be used for more general purposes within distributed programming, such as detecting termination of each phase in a multi-phase main computation.
A dynamic re-partitioning strategy based on the distribution of key in Spark
NASA Astrophysics Data System (ADS)
Zhang, Tianyu; Lian, Xin
2018-05-01
Spark is a memory-based distributed data processing framework that can process massive data sets and has become a focus in Big Data. The performance of Spark Shuffle, however, depends on the distribution of the data: the naive hash partition function of Spark cannot guarantee load balancing when the data are skewed, and job time is then dominated by the node with the most data to process. In order to handle this problem, dynamic sampling is used. During task execution, a histogram is used to count the key frequency distribution of each node and then to generate the global key frequency distribution. After analyzing the distribution of keys, load balance across data partitions is achieved. Results show that the Dynamic Re-Partitioning function performs better than the default hash partition, Fine Partition and the Balanced-Schedule strategy; it reduces task execution time and improves the efficiency of the whole cluster.
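The abstract describes the method at a high level; below is a framework-independent toy sketch (not the paper's Spark implementation) of the underlying step: build a key-frequency histogram from a sample and assign heavy keys greedily to the least-loaded partition. All names are illustrative, and a single dominant key would still need to be split further, which this sketch does not attempt.

```python
from collections import Counter
import heapq

def balanced_key_partition(sampled_keys, num_partitions):
    """Map keys to partitions so that estimated record counts are balanced.

    `sampled_keys` stands in for the per-node key-frequency histograms the
    paper aggregates; here a single global sample is used for simplicity."""
    freq = Counter(sampled_keys)                        # key-frequency histogram
    heap = [(0, pid) for pid in range(num_partitions)]  # (estimated load, partition id)
    heapq.heapify(heap)
    assignment = {}
    for key, count in freq.most_common():               # heaviest keys first
        load, pid = heapq.heappop(heap)
        assignment[key] = pid
        heapq.heappush(heap, (load + count, pid))
    return assignment

# Toy skewed sample: key "a" dominates.
sample = ["a"] * 80 + ["b"] * 10 + ["c"] * 6 + ["d"] * 4
print(balanced_key_partition(sample, num_partitions=3))
# e.g. {'a': 0, 'b': 1, 'c': 2, 'd': 2}
```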
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
Distributed query plan generation using multiobjective genetic algorithm.
Panicker, Shina; Kumar, T V Vijay
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a bi-objective optimization problem, the two objectives being to minimize total LPC and to minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
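To make the two objectives concrete, the sketch below evaluates toy query plans (assignments of relations to sites) under illustrative local-processing and communication cost tables and checks Pareto dominance. The paper's cost model is more detailed, and NSGA-II would search over such plans rather than enumerate two of them; all numbers here are made up.

```python
def plan_costs(plan, local_cost, comm_cost):
    """Return (total LPC, total CC) for a plan mapping each relation to a site.

    local_cost[r][s]: local processing cost of relation r at site s.
    comm_cost[s1][s2]: communication cost between sites s1 and s2."""
    sites = list(plan.values())
    lpc = sum(local_cost[r][s] for r, s in plan.items())
    cc = sum(comm_cost[s1][s2]
             for i, s1 in enumerate(sites) for s2 in sites[i + 1:] if s1 != s2)
    return lpc, cc

def dominates(a, b):
    """Pareto dominance when both objectives are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Toy instance: 3 relations, 2 sites (illustrative numbers only).
local_cost = {"R1": [4, 7], "R2": [6, 3], "R3": [5, 5]}
comm_cost = [[0, 2], [2, 0]]
plan_a = {"R1": 0, "R2": 1, "R3": 0}
plan_b = {"R1": 0, "R2": 0, "R3": 0}
costs_a = plan_costs(plan_a, local_cost, comm_cost)
costs_b = plan_costs(plan_b, local_cost, comm_cost)
print("plan_a (LPC, CC):", costs_a, " plan_b (LPC, CC):", costs_b)
print("plan_b dominates plan_a:", dominates(costs_b, costs_a))  # False: they trade off
```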
Space vehicle electrical power processing distribution and control study. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Krausz, A.
1972-01-01
A concept for the processing, distribution, and control of electric power for manned space vehicles and future aircraft is presented. Emphasis is placed on the requirements of the space station and space shuttle configurations. The systems involved are referred to as the processing distribution and control system (PDCS), electrical power system (EPS), and electric power generation system (EPGS).
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.
2014-01-01
With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers' understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight that the critical thresholds at which total perimeter, area and volume would be evenly distributed across lake size-classes correspond to Pareto slopes of 0.63, 1 and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching "lake population" level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increase in carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
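The threshold behaviour mentioned for total area (a Pareto slope near 1) can be illustrated numerically. The sketch below samples lake areas from Pareto distributions with different tail exponents and reports the share of total area contributed by each decadal size class; the minimum area, the number of decades and the exponents are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def area_share_per_decade(alpha, a_min=1e-3, n=1_000_000, n_decades=4, rng=rng):
    """Share of total lake area contributed by each decadal size class when
    individual lake areas follow a Pareto law P(A > a) = (a_min / a) ** alpha."""
    u = rng.random(n)
    areas = a_min * u ** (-1.0 / alpha)            # inverse-CDF sampling
    edges = a_min * 10.0 ** np.arange(n_decades + 1)
    totals, _ = np.histogram(areas, bins=edges, weights=areas)
    return totals / totals.sum()

for alpha in (0.8, 1.0, 1.2):
    print(f"alpha={alpha}: area share per decade =",
          np.round(area_share_per_decade(alpha), 3))
# Near alpha = 1 the shares are roughly equal across size classes, consistent
# with the threshold the abstract reports for total area.
```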
The Brain as a Distributed Intelligent Processing System: An EEG Study
da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo
2011-01-01
Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
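A quick numerical illustration of the claim, using log-normal summands purely as an example of a positive, skewed class (the paper characterizes the relevant class precisely): the sum of many such variables remains strongly skewed, while its logarithm is much closer to symmetric, i.e. the sum resembles a log-normal rather than a Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Illustrative choice of positive, skewed summands (not taken from the paper).
n_summands, n_trials = 50, 100_000
summands = rng.lognormal(mean=0.0, sigma=1.5, size=(n_trials, n_summands))
sums = summands.sum(axis=1)

print("skewness of the sum :", round(stats.skew(sums), 3))          # clearly positive
print("skewness of log(sum):", round(stats.skew(np.log(sums)), 3))  # near zero
```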
NASA Astrophysics Data System (ADS)
Olasz, A.; Nguyen Thai, B.; Kristóf, D.
2016-06-01
In recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world still faces a lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and extendability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations of algorithmic and implementation details are planned for the near future.
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
The big data concept has already had an impact in the geospatial sector. Several studies apply computer science techniques to GIS processing of huge amounts of geospatial data, while in other studies geospatial data is considered to have always been big data (Lee and Kang, 2015). In any case, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, at least by 20% every year (Dasgupta, 2013). The ever-increasing volume of raw data is produced in different formats and representations and for different purposes, while the wealth of information derived from these data sets represents the valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Lately, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing, all the more, requires appropriate processing algorithms to be distributed and to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capability to process non-GIS big data, but it is sometimes inconvenient or inefficient to rewrite existing algorithms to the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data, by line or by bytes. Hence, we look for an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution and processing, and the first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on application areas, therefore we may consider data partitioning as a preprocessing step before applying processing services to the data. As a proof of concept we have implemented a simple tile-based partitioning method that splits an image into smaller grids (NxM tiles), and we compare the processing time to existing methods by means of an NDVI calculation. The concept is demonstrated using our own open source processing framework.
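The tile-based partitioning used in the proof of concept can be sketched as follows; this is a simplified, single-machine illustration (the actual framework ships tiles to processing nodes and also handles vector and point cloud data). NDVI is used because the abstract names it as the benchmark operation and because it is purely per-pixel, so tiles need no overlap; all array shapes and band conventions here are assumptions.

```python
import numpy as np

def tile_slices(shape, n_rows, n_cols):
    """Yield row/column slices that cover a raster with an n_rows x n_cols grid."""
    h, w = shape[:2]
    row_edges = np.linspace(0, h, n_rows + 1, dtype=int)
    col_edges = np.linspace(0, w, n_cols + 1, dtype=int)
    for i in range(n_rows):
        for j in range(n_cols):
            yield (slice(row_edges[i], row_edges[i + 1]),
                   slice(col_edges[j], col_edges[j + 1]))

def ndvi(tile):
    """Per-pixel NDVI for a tile with band 0 = red and band 1 = near-infrared."""
    red, nir = tile[..., 0].astype(float), tile[..., 1].astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Toy 2-band image; in a distributed run each tile would go to a worker node
# and the per-tile results would be stitched back into the output mosaic.
image = np.random.default_rng(5).integers(1, 255, size=(1000, 1200, 2))
output = np.empty(image.shape[:2])
for rows, cols in tile_slices(image.shape, n_rows=4, n_cols=3):
    output[rows, cols] = ndvi(image[rows, cols])
print(output.shape, float(output.min()), float(output.max()))
```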
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Nowadays, the development of methods for distributed computing receives much attention. One approach to distributed computing is the use of multi-agent systems. Distributed computing organized on conventional networked computers can be exposed to security threats posed by computational processes. The authors have developed a unified agent algorithm for the control system governing the operation of computing network nodes; networked PCs are used as computing nodes. The proposed multi-agent control system makes it possible to organize, in a short time, the processing power of the computers of any existing network into a distributed computing system for solving large tasks. Agents running on the computers of a network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting computers to the new computing system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of distributed computing. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.
When the mean is not enough: Calculating fixation time distributions in birth-death processes.
Ashcroft, Peter; Traulsen, Arne; Galla, Tobias
2015-10-01
Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
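The paper computes fixation-time distributions from the spectrum of the process; the brute-force sampler below (a neutral one-step birth-death chain with illustrative parameters, not the authors' method) merely illustrates why the mean alone can mislead: the sampled absorption-time distribution is broad and right-skewed, so mean, median and upper quantiles differ markedly.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_absorption_time(n_total, i0, rng):
    """One-step birth-death chain on {0, ..., n_total} with absorbing ends.

    Neutral rates are assumed: from state i the chain moves to i + 1 or i - 1
    with equal probability, and each step takes an Exp(1) waiting time."""
    i, t = i0, 0.0
    while 0 < i < n_total:
        t += rng.exponential(1.0)
        i += 1 if rng.random() < 0.5 else -1
    return t

times = np.array([sample_absorption_time(20, 1, rng) for _ in range(20_000)])
print("mean  :", round(times.mean(), 2))
print("median:", round(np.median(times), 2))
print("90th percentile:", round(np.percentile(times, 90), 2))
# The distribution is strongly right-skewed, so the mean alone is a poor summary.
```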
Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)
2007-04-01
Report fragments: weight will be reduced by replacing heavy harness assemblies and Full Authority Digital Electronic Controls (FADECs) with distributed processing elements interconnected through a serial bus. Intelligence is embedded in components while overall control is maintained in the FADEC. The paper reviews the need for distributed control systems in …
Development of Electro-Optical Standard Processes for Application
2011-11-01
Abstract excerpt: "Defines the process of …"
NASA Astrophysics Data System (ADS)
Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter
Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Indeed, most research on innovation processes has focused on multinational companies with an intra-organisational perspective; the phenomenon of innovation processes in networks - with an inter-organisational perspective - has been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors highlight Virtual Organisations in particular because of their dynamic behaviour. Research activities supporting distributed innovation processes in Virtual Organisations are rather new, so little knowledge about the management of such processes is available. This gap is addressed with the presentation of the collaborative network relationship analysis. It is shown that a qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.
Distributed Estimation using Bayesian Consensus Filtering
2014-06-06
Reference excerpts: "Convergence rate analysis of distributed gossip (linear parameter) estimation: Fundamental limits and tradeoffs," IEEE J. Sel. Topics Signal Process.; Dimakis, S. Kar, J. Moura, M. Rabbat, and A. Scaglione, "Gossip algorithms for distributed signal processing," Proc. of the IEEE, vol. 98, no. 11.
Determination of material distribution in heading process of small bimetallic bar
NASA Astrophysics Data System (ADS)
Presz, Wojciech; Cacko, Robert
2018-05-01
Electrical connectors mostly have silver contacts joined by riveting. In order to reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. This creates new conditions in the riveting process, because a bimetallic object is being riveted. In the analyzed example it is a small object, which places the process at the border of microforming. Based on FEM modeling of the loading of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice was justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. A parameter, the Coefficient of Mutual Interactions of Plastic Deformations, and the method of its determination are proposed. The parameter is determined on the basis of two-parameter stress-strain curves and is a function of these parameters and of the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the selection of a pair of materials that achieves the desired distribution.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
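A minimal numerical sketch of the modelling point, using simulated data and moment-based fitting only (the paper uses maximum likelihood and also considers an inverse Gaussian mixture): gamma-mixed Poisson counts are overdispersed, and a negative binomial fit attains a higher log-likelihood than a Poisson fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated overdispersed counts from a gamma-mixed Poisson (illustrative only).
lam = rng.gamma(shape=2.0, scale=1.5, size=500)
counts = rng.poisson(lam)

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean={mean:.2f} variance={var:.2f}  (variance > mean => overdispersion)")

# Method-of-moments negative binomial fit: var = mean + mean**2 / r
r = mean**2 / (var - mean)
p = r / (r + mean)
ll_poisson = stats.poisson.logpmf(counts, mean).sum()
ll_negbin = stats.nbinom.logpmf(counts, r, p).sum()
print(f"log-likelihood: Poisson {ll_poisson:.1f}   negative binomial {ll_negbin:.1f}")
```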
Features of development process displacement of earth’s surface when dredging coal in Eastern Donbas
NASA Astrophysics Data System (ADS)
Posylniy, Yu V.; Versilov, S. O.; Shurygin, D. N.; Kalinchenko, V. M.
2017-10-01
The results of studies of the process of earth's-surface displacement under the influence of adjacent longwalls are presented. It is established that the actual distributions of soil subsidence along the dip and the rise of the seam, with the same boundary settlement processes, differ both from each other and from the subsidence distribution recommended by the rules of structure protection. The application of a new boundary criterion - a relative subsidence of 0.03 - allows one to go from two distributions to a single distribution, which also differs from the subsidence distribution of the protection rules. The use of a new geometrical element - a virtual point of the subsidence trough - allows one to transform the actual distribution of subsidence into the model distribution of the rules of structure protection. When the subsidence curves are transformed, the boundary points vary and, consequently, so do the boundary angles.
Historical Time-Domain: Data Archives, Processing, and Distribution
NASA Astrophysics Data System (ADS)
Grindlay, Jonathan E.; Griffin, R. Elizabeth
2012-04-01
The workshop on Historical Time-Domain Astronomy (TDA) was attended by a near-capacity gathering of ~30 people. From information provided in turn by those present, an up-to-date overview was created of available plate archives, progress in their digitization, the extent of actual processing of those data, and plans for data distribution. Several recommendations were made for prioritising the processing and distribution of historical TDA data.
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from the solution of a problem to the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory, the task-uncertainty of organizational-information-processing relationships from information processing theory, and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.
Stabilization process of human population: a descriptive approach.
Kayani, A K; Krotki, K J
1981-01-01
An attempt is made to inquire into the process of stabilization of a human population. The same age distribution, distorted by past variations in fertility, is subjected to several fixed schedules of fertility. The schedules differ from each other monotonically over a narrow range. The primary concern is with the process, almost year by year, through which the populations become stable. There is particular interest in the differential impact of the narrowly different fixed fertility schedules on the same original age distribution. The exercise is prepared in 3 stages: general background of the process of stabilization; methodology and data used; and analysis and discussion of the stabilization process. Among the several approaches through which the analysis of stable population is possible, 2 are popular: the integral equation and the projection matrix. In this presentation the interest is in evaluating the effects of fertility on the stabilization process of a population. Therefore, only 1 initial age distribution and only 1 life table, but a variety of narrowly different schedules of fertility, have been used. Specifically, the U.S. 1963 female population is treated as the initial population. The process of stabilization is viewed in the light of the changes in the slopes between 2 successive age groups of an age distribution. A high fertility schedule, with the given initial age distribution and mortality level, overcomes the oscillations more quickly than the low fertility schedule. Simulation confirms the intuitively expected positive relationship between the mean of the slope and the level of fertility. The variance of the slope distribution is an indicator of the aging of the distribution.
NASA Astrophysics Data System (ADS)
Kodama, Yu; Hamagami, Tomoki
A distributed processing system for restoration of an electric power distribution network using a two-layered CNP (Contract Net Protocol) is proposed. The goal of this study is to develop a restoration system that adjusts to the future power network with distributed generators. The contribution of this study is that the two-layered CNP is applied to a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. In order to avoid task conflicts, an operating agent controls the privilege of managers to send task announcement messages in the CNP. This technique realizes coordination between agents that work asynchronously, in parallel with others. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment framework). Simulation experiments of power distribution network restoration are conducted, and the proposed system is compared with the previous system. The results confirm the effectiveness of the proposed system.
Limiting Distributions of Functionals of Markov Chains.
1984-08-01
Report fragments: keywords include Markov chains, limiting distributions, and periodic nonhomogeneous Poisson processes. Abstract excerpts: "… homogeneous Poisson processes is of interest in itself. The problem considered in this paper is of interest in the theory of partially observable … where we obtain the limiting distribution of the interevent times."
The impact of distributed computing on education
NASA Technical Reports Server (NTRS)
Utku, S.; Lestingi, J.; Salama, M.
1982-01-01
In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.
Mistaking geography for biology: inferring processes from species distributions.
Warren, Dan L; Cardillo, Marcel; Rosauer, Dan F; Bolnick, Daniel I
2014-10-01
Over the past few decades, there has been a rapid proliferation of statistical methods that infer evolutionary and ecological processes from data on species distributions. These methods have led to considerable new insights, but they often fail to account for the effects of historical biogeography on present-day species distributions. Because the geography of speciation can lead to patterns of spatial and temporal autocorrelation in the distributions of species within a clade, this can result in misleading inferences about the importance of deterministic processes in generating spatial patterns of biodiversity. In this opinion article, we discuss ways in which patterns of species distributions driven by historical biogeography are often interpreted as evidence of particular evolutionary or ecological processes. We focus on three areas that are especially prone to such misinterpretations: community phylogenetics, environmental niche modelling, and analyses of beta diversity (compositional turnover of biodiversity). Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
A distributed computing approach to mission operations support. [for spacecraft
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1975-01-01
Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
Library Book Circulation and the Beta-Binomial Distribution.
ERIC Educational Resources Information Center
Gelman, E.; Sichel, H. S.
1987-01-01
Argues that library book circulation is a binomial rather than a Poisson process, and that individual book popularities are continuous beta distributions. Three examples demonstrate the superiority of beta over negative binomial distribution, and it is suggested that a bivariate-binomial process would be helpful in predicting future book…
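A minimal simulation of the beta-binomial model argued for above, with illustrative parameters only (not fitted to the article's circulation data): each book's popularity is a beta-distributed probability, and its loan count is binomial given that popularity.

```python
import numpy as np
rng = np.random.default_rng(0)

# Illustrative beta-binomial circulation model (parameters are made up).
a, b, n_opportunities, n_books = 0.6, 4.0, 50, 10000

p = rng.beta(a, b, size=n_books)                 # book-specific popularities
circulation = rng.binomial(n_opportunities, p)   # observed loans per book

# Overdispersion relative to a single-rate Poisson model is the signature:
# the variance sits well above the mean.
print(circulation.mean(), circulation.var())
```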
Resource depletion promotes automatic processing: implications for distribution of practice.
Scheel, Matthew H
2010-12-01
Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution of practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution of practice effects is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution of practice effect.
Cross-coherent vector sensor processing for spatially distributed glider networks.
Nichols, Brendan; Sabra, Karim G
2015-09-01
Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.
Newton's second law and the multiplication of distributions
NASA Astrophysics Data System (ADS)
Sarrico, C. O. R.; Paiva, A.
2018-01-01
Newton's second law is applied to study the motion of a particle subjected to a time dependent impulsive force containing a Dirac delta distribution. Within this setting, we prove that this problem can be rigorously solved neither by limit processes nor by using the theory of distributions (limited to the classical Schwartz products). However, using a distributional multiplication, not defined by a limit process, a rigorous solution emerges.
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw; Urban, Pawel; Kwiatek, Grzegorz; Martinez-Garzón, Patricia
2017-04-01
Injection induced seismicity (IIS) is an undesired dynamic rockmass response to massive fluid injections. This includes reactions, among others, to hydro-fracturing for shale gas exploitation. The complexity and changeability of the technological factors that induce IIS may result in significant deviations of the observed distributions of seismic process parameters from the models that perform well for natural, tectonic seismic processes. Classic formulations of probabilistic seismic hazard analysis in natural seismicity assume the seismic marked point process to be a stationary Poisson process, whose marks - magnitudes - are governed by a Gutenberg-Richter born exponential distribution. It is well known that the use of an inappropriate earthquake occurrence model and/or an inappropriate magnitude distribution model leads to significant systematic errors of hazard estimates. It is therefore of paramount importance to check whether these assumptions, commonly used for natural seismicity, can be safely applied to IIS hazard problems. Seismicity accompanying shale gas operations is widely studied in the framework of the project "Shale Gas Exploration and Exploitation Induced Risks" (SHEER). Here we present results of SHEER project investigations of such seismicity from Oklahoma and of a proxy of such seismicity - IIS data from The Geysers geothermal field. We attempt to answer the following questions: • Do IIS earthquakes follow the Gutenberg-Richter distribution law, so that the magnitude distribution can be modelled by an exponential distribution? • Is the occurrence process of IIS earthquakes Poissonian? Is it segmentally Poissonian? If yes, how are these segments linked to cycles of technological operations? Statistical tests indicate that the Gutenberg-Richter relation born exponential distribution model for magnitude is, in general, inappropriate. The magnitude distribution can be complex, multimodal, with no ready-to-use functional model. In this connection, we recommend using non-parametric, kernel estimators of the magnitude distribution in hazard analyses. The earthquake occurrence process of IIS is not a Poisson process. When earthquake occurrences are influenced by a multitude of inducing factors, the interevent time distribution can be modelled by the Weibull distribution, supporting a negative ageing property of the process. When earthquake occurrences are due to a specific injection activity, the earthquake rate depends directly on the injection rate and responds immediately to changes of the injection rate. Furthermore, this response is not limited to correlated variations of the seismic activity but also concerns significant changes of the shape of the interevent time distribution. Unlike the event rate, the shape of the magnitude distribution does not exhibit correlation with the injection rate. This work was supported within SHEER: "Shale Gas Exploration and Exploitation Induced Risks" project funded from the Horizon 2020 - R&I Framework Programme, call H2020-LCE 16-2014-1, and within statutory activities No3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.
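The two modelling choices recommended above can be sketched with synthetic numbers: a kernel density estimate of a (possibly multimodal) magnitude distribution against a single exponential Gutenberg-Richter fit, and a Weibull fit of interevent times whose shape parameter below one indicates the negative-ageing, clustered occurrence described. All data and parameters below are invented for illustration and are not the SHEER catalogs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic bimodal "magnitudes": kernel estimate vs. a single exponential fit.
mags = np.concatenate([rng.exponential(0.4, 800),          # background mode
                       1.5 + rng.exponential(0.2, 200)])   # second mode
kde = stats.gaussian_kde(mags)                              # non-parametric estimator
b_exp = 1.0 / mags.mean()                                   # exponential (G-R) rate

grid = np.linspace(0, 3, 7)
print("KDE density:        ", np.round(kde(grid), 3))
print("Exponential density:", np.round(b_exp * np.exp(-b_exp * grid), 3))

# Synthetic interevent times with a negative-ageing (shape < 1) Weibull model.
dt = rng.weibull(0.7, 1000) * 3600.0
shape, loc, scale = stats.weibull_min.fit(dt, floc=0)
print("fitted Weibull shape:", round(shape, 2))   # < 1 indicates clustering
```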
Transforming for Distribution Based Logistics
2005-05-26
distribution process, and extracts elements of distribution and distribution management. Finally, characteristics of an effective Army distribution ... eventually evolve into a Distribution Management Element. Each organization is examined based on its ability to provide centralized command, with an ... distribution and distribution management that together form the distribution system. Clearly all of the physical distribution activities including
Application Processing | Distributed Generation Interconnection
... delivering swift customer service. The rapid rise of distributed generation (DG) PV interconnection ... speed processing, reduce paperwork, and improve customer service. Webinars and publications are ...
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes and also on non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
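A rough Monte Carlo sketch of the approach described above: estimate the coincidence-count distribution for pairs of independent spike trains under different ISI models. Here a gamma-distributed ISI stands in for a non-Poisson autostructure (the paper itself uses history-dependent, non-renewal models); the rates, durations, and coincidence window are invented for the example.

```python
import numpy as np
rng = np.random.default_rng(2)

def spike_train(rate, duration, shape):
    # Renewal train with gamma-distributed ISIs (shape=1 recovers Poisson).
    isis = rng.gamma(shape, 1.0 / (rate * shape), size=int(rate * duration * 3))
    times = np.cumsum(isis)
    return times[times < duration]

def coincidences(t1, t2, window=0.005):
    # Count spikes in train 1 that have a partner in train 2 within +/- window.
    idx = np.searchsorted(t2, t1)
    left = np.abs(t1 - t2[np.clip(idx - 1, 0, len(t2) - 1)])
    right = np.abs(t1 - t2[np.clip(idx, 0, len(t2) - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

def mc_distribution(shape, trials=2000):
    return np.array([coincidences(spike_train(20, 10, shape),
                                  spike_train(20, 10, shape))
                     for _ in range(trials)])

poisson_like = mc_distribution(shape=1.0)   # exponential ISI
regular_like = mc_distribution(shape=4.0)   # more regular, non-Poisson ISI
print("coincidence-count widths (std):", poisson_like.std(), regular_like.std())
```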
Survey: National Environmental Satellite Service
NASA Technical Reports Server (NTRS)
1977-01-01
The national Environmental Satellite Service (NESS) receives data at periodic intervals from satellites of the Synchronous Meteorological Satellite/Geostationary Operational Environmental Satellite series and from the Improved TIROS (Television Infrared Observational Satellite) Operational Satellite. Within the conterminous United States, direct readout and processed products are distributed to users over facsimile networks from a central processing and data distribution facility. In addition, the NESS Satellite Field Stations analyze, interpret, and distribute processed geostationary satellite products to regional weather service activities.
Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang
2017-01-01
Although wafer-level camera lenses are a very promising technology, problems such as warpage with time and non-uniform thickness of products still exist. In this study, finite element simulation was performed to simulate the compression molding process, acquiring the pressure distribution on the product at the completion of the process and predicting the deformation with respect to that pressure distribution. Results show that the single-gate compression molding process significantly increases the pressure at the center of the product, whereas the multi-gate compression molding process can effectively distribute the pressure. This study evaluated the non-uniform thickness of the product and changes in the process parameters through computer simulations, which could help to improve the compression molding process. PMID:28617315
Economic design of control charts considering process shift distributions
NASA Astrophysics Data System (ADS)
Vommi, Vijayababu; Kasarapu, Rukmini V.
2014-09-01
Process shift is an important input parameter in the economic design of control charts. Earlier control chart designs considered constant shifts to occur in the mean of the process for a given assignable cause. This assumption has been criticized by many researchers since it may not be realistic to produce a constant shift whenever an assignable cause occurs. To overcome this difficulty, in the present work, a distribution for the shift parameter has been considered instead of a single value for a given assignable cause. Duncan's economic design model for chart has been extended to incorporate the distribution for the process shift parameter. It is proposed to minimize total expected loss-cost to obtain the control chart parameters. Further, three types of process shifts namely, positively skewed, uniform and negatively skewed distributions are considered and the situations where it is appropriate to use the suggested methodology are recommended.
Alternative Fuels Data Center: Propane Production and Distribution
... produced from liquid components recovered during natural gas processing. These components include ethane ... Propane is a by-product of natural gas processing ... distribution showing propane originating from three sources: 1) gas well and gas plant, 2) oil well and ...
First-Passage-Time Distribution for Variable-Diffusion Processes
NASA Astrophysics Data System (ADS)
Barney, Liberty; Gunaratne, Gemunu H.
2017-05-01
First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. First-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al. in, Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
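A Monte Carlo sketch of the quantity discussed above: the first-passage time at which a simulated path first reaches a pre-specified barrier, for a constant-diffusion Wiener process and for a process whose diffusion coefficient varies over the "day". The barrier, diffusion profiles, and step sizes are invented and are not the calibrated model of Hua et al.

```python
import numpy as np
rng = np.random.default_rng(3)

def first_passage_times(sigma_of_t, barrier=1.0, dt=1e-3, t_max=1.0, n_paths=20000):
    """Simulate paths and record the first time each one crosses the barrier."""
    n_steps = int(t_max / dt)
    x = np.zeros(n_paths)
    fpt = np.full(n_paths, np.nan)
    for k in range(n_steps):
        t = k * dt
        x += sigma_of_t(t) * np.sqrt(dt) * rng.standard_normal(n_paths)
        newly_hit = np.isnan(fpt) & (x >= barrier)
        fpt[newly_hit] = t + dt
    return fpt[~np.isnan(fpt)]

wiener = first_passage_times(lambda t: 1.5)                          # constant diffusion
variable = first_passage_times(lambda t: 1.0 + 2.0 * abs(t - 0.5))   # intraday-varying diffusion
# Histograms of these samples approximate the two first-passage-time distributions.
print(len(wiener), len(variable))
```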
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
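A toy simulation in the spirit of the model above: cluster sizes evolve under additive noise whose amplitude depends on the current size. With a noise amplitude proportional to size, the growth is effectively multiplicative and the size distribution across clusters tends toward a log-normal shape; this illustrates the idea only and is not the authors' master equation.

```python
import numpy as np
rng = np.random.default_rng(4)

n_clusters, n_steps = 50000, 200
x = np.ones(n_clusters)            # initial condition: all clusters of size 1

for _ in range(n_steps):
    noise = rng.normal(0.0, 0.02, n_clusters)
    # Size-dependent additive noise plus a small drift; floor keeps sizes positive.
    x = np.maximum(x + x * noise + 0.001, 1e-9)

log_sizes = np.log(x)
skew = ((log_sizes - log_sizes.mean()) ** 3).mean() / log_sizes.std() ** 3
print("skewness of log-sizes (near 0 suggests log-normal):", round(float(skew), 3))
```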
Electric power processing, distribution and control for advanced aerospace vehicles.
NASA Technical Reports Server (NTRS)
Krausz, A.; Felch, J. L.
1972-01-01
The results of a current study program to develop a rational basis for selection of power processing, distribution, and control configurations for future aerospace vehicles including the Space Station, Space Shuttle, and high-performance aircraft are presented. Within the constraints imposed by the characteristics of power generation subsystems and the load utilization equipment requirements, the power processing, distribution and control subsystem can be optimized by selection of the proper distribution voltage, frequency, and overload/fault protection method. It is shown that, for large space vehicles which rely on static energy conversion to provide electric power, high-voltage dc distribution (above 100 V dc) is preferable to conventional 28 V dc and 115 V ac distribution per MIL-STD-704A. High-voltage dc also has advantages over conventional constant frequency ac systems in many aircraft applications due to the elimination of speed control, wave shaping, and synchronization equipment.
The influence of emotion on lexical processing: insights from RT distributional analysis.
Yap, Melvin J; Seow, Cui Shan
2014-04-01
In two lexical decision experiments, the present study was designed to examine emotional valence effects on visual lexical decision (standard and go/no-go) performance, using traditional analyses of means and distributional analyses of response times. Consistent with an earlier study by Kousta, Vinson, and Vigliocco (Cognition 112:473-481, 2009), we found that emotional words (both negative and positive) were responded to faster than neutral words. Finer-grained distributional analyses further revealed that the facilitation afforded by valence was reflected by a combination of distributional shifting and an increase in the slow tail of the distribution. This suggests that emotional valence effects in lexical decision are unlikely to be entirely mediated by early, preconscious processes, which are associated with pure distributional shifting. Instead, our results suggest a dissociation between early preconscious processes and a later, more task-specific effect that is driven by feedback from semantically rich representations.
The application of artificial intelligence techniques to large distributed networks
NASA Technical Reports Server (NTRS)
Dubyah, R.; Smith, T. R.; Star, J. L.
1985-01-01
Data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts include reducing the difficulty of finding and using data, reducing processing costs, and minimizing incompatibility between data sources. Artificial Intelligence (AI) techniques were suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.
The Role of Graphlets in Viral Processes on Networks
NASA Astrophysics Data System (ADS)
Khorshidi, Samira; Al Hasan, Mohammad; Mohler, George; Short, Martin B.
2018-05-01
Predicting the evolution of viral processes on networks is an important problem with applications arising in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used for the prediction of viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks and recent attempts have been made to use assortativity to address this shortcoming. In this paper, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution in combination with assortativity to explain variations in the evolution of viral processes across networks with identical degree distribution. Using a data-driven approach by coupling predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results not only highlight the importance of graphlets but also identify a small collection of graphlets which may have the highest influence over the viral processes on a network.
Analysis of nonuniformity in intron phase distribution.
Fedorov, A; Suboch, G; Bujakov, M; Fedorova, L
1992-01-01
The distribution of different intron groups with respect to phases has been analyzed. It has been established that group II introns and nuclear introns have a minimum frequency of phase 2 introns. Since the phase of introns is an extremely conservative measure, the observed minimum reflects evolutionary processes. A sample of all known group I introns was too small to provide a valid characteristic of their phase distribution. The observed unequal distribution of phases cannot be explained solely on the basis of the mobile properties of introns. One of the most likely explanations for this nonuniformity in the intron phase distribution is the process of exon shuffling. It is proposed that group II introns originated at the early stages of evolution and were involved in the process of exon shuffling. PMID:1598214
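A minimal sketch of how such nonuniformity can be quantified: a chi-square test of phase counts against a uniform expectation. The counts below are hypothetical and are not the 1992 data set.

```python
from scipy import stats

# Hypothetical numbers of introns observed in phase 0, 1 and 2.
observed = [420, 360, 220]
chi2, p = stats.chisquare(observed)   # default expectation: uniform across phases
print(chi2, p)                        # small p-value -> phases not uniformly distributed
```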
Shope, William G.; ,
1987-01-01
The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.
LWT Based Sensor Node Signal Processing in Vehicle Surveillance Distributed Sensor Network
NASA Astrophysics Data System (ADS)
Cha, Daehyun; Hwang, Chansik
Previous vehicle surveillance research on distributed sensor networks focused on overcoming power limitations and communication bandwidth constraints in the sensor node. In spite of these constraints, a vehicle surveillance sensor node must perform signal compression, feature extraction, target localization, noise cancellation and collaborative signal processing with low computation and communication energy dissipation. In this paper, we introduce an algorithm for light-weight wireless sensor node signal processing based on lifting-scheme wavelet analysis feature extraction in a distributed sensor network.
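The lifting scheme mentioned above can be illustrated with a single Haar lifting step (predict and update); the filter and the two transmitted features below are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def haar_lifting(signal):
    # One Haar lifting step: split into even/odd, predict, then update.
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even                 # predict step
    approx = even + 0.5 * detail        # update step (preserves the mean)
    return approx, detail

x = np.array([3, 5, 4, 8, 9, 7, 2, 1])
approx, detail = haar_lifting(x)
# Low-cost features a node might transmit instead of the raw samples.
features = [float(approx.mean()), float(np.abs(detail).sum())]
print(approx, detail, features)
```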
1983-06-01
OCR-damaged report front matter; recoverable details: prepared by General Dynamics Corporation, Data Systems Division, P.O. Box 748, Fort Worth, TX 76101; table-of-contents entries include "Processing System for the Operation/Direction Center(s)" and "Distribution of Processing Control for the Operation/Direction Center(s)."
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Measurement Comparisons Towards Improving the Understanding of Aerosol-Cloud Processing
NASA Astrophysics Data System (ADS)
Noble, Stephen R.
Cloud processing of aerosol is an aerosol-cloud interaction that is not heavily researched but could have implications for climate. The three types of cloud processing are chemical processing, collision and coalescence processing, and Brownian capture of interstitial particles. All types increase cloud condensation nuclei (CCN) size or hygroscopicity (kappa). These improved CCN affect subsequent clouds. This dissertation focuses on measurement comparisons to improve our observations and understanding of aerosol-cloud processing. Particle size distributions measured at the continental Southern Great Plains (SGP) site were compared with ground-based measurements of cloud fraction (CF) and cloud base altitude (CBA). Particle size distributions were described by a new objective shape parameter to define bimodality rather than an old subjective one. Cloudy conditions at SGP were found to be correlated with the lagged shape parameter. Horizontal wind speed and regional CF explained 42%+ of this lag time. Many of these surface particle size distributions were influenced by aerosol-cloud processing. Thus, cloud processing may be more widespread with more implications than previously thought. Particle size distributions measured during two aircraft field campaigns (MArine Stratus/stratocumulus Experiment, MASE, and Ice in Cloud Experiment-Tropical, ICE-T) were compared to CCN distributions. Tuning particle size to critical supersaturation revealed hygroscopicity, expressed as kappa, when the distributions were overlain. Distributions near cumulus clouds (ICE-T) had a higher frequency of the same kappa values (48% in ICE-T vs. 42% in MASE) between the accumulation (processed) and Aitken (unprocessed) modes. This suggested that physical processing dominated in ICE-T. More MASE (stratus cloud) kappa differences between modes pointed to chemical cloud processing. Chemistry measurements made in MASE showed increases in sulfates and nitrates with distributions that were more processed. This supported chemical cloud processing in MASE. This new method to determine kappa provides the needed information without interrupting ambient measurements. MODIS-derived cloud optical thickness (COT), cloud liquid water path (LWP), and cloud effective radius (re) were compared to the same in situ derived variables from cloud probe measurements in two stratus/stratocumulus cloud campaigns (MASE and Physics Of Stratocumulus Tops, POST). In situ data were from complete vertical cloud penetrations, while MODIS data were from pixels along the aircraft penetration path. Comparisons were well correlated, except that MODIS LWP (14-36%) and re (20-30%) were biased high. The LWP bias came from the re bias and was not improved by using the vertically stratified assumption. The MODIS re bias was almost removed when compared to the cloud-top maximum in situ re, but that does not describe re for the full depth of the cloud. COT is validated by in situ COT. High correlations suggest that MODIS variables are useful in self-comparisons such as gradient changes in stratus cloud re during aerosol-cloud processing.
NASA Astrophysics Data System (ADS)
Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi
2018-02-01
The distribution network is the part of the power grid closest to the customer, operated by electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. The technique of data warehousing with online analytical processing (OLAP) has been used to manage and analyse this great volume of data. The specific methods for the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The information in the form of chart reporting consists of the load distribution chart by time period, the distribution chart by area, the substation region chart and the electric load usage chart. The results of the OLAP process show the development of electric load distribution, as well as the analysis of information on electric power consumption load, and become an alternative in presenting information related to peak load.
South Carolina | Midmarket Solar Policies in the United States | Solar
... voluntary renewable energy goal of 2% distributed energy in 2021. Carve-out: 0.25% of total generation from ... energy portfolio standard, but a goal for distributed generation by 2021. The Distributed Energy Resource ... Fast Track Process ... Study Process ... System size limit: Not specified; South Carolina Public Service ...
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
Wen J. Wang; Hong S. He; Frank R. Thompson; Martin A. Spetich; Jacob S. Fraser
2018-01-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological...
2001-10-31
Attorney Docket No. 83042, "Business Development Process." Distribution Statement A: approved for public release; distribution unlimited. Statement of Government Interest: the invention described herein may be manufactured and used by or for the Government ... Field of the Invention: this invention generally relates to a business development process for assessing new business ideas.
Red mud flocculation process in alumina production
NASA Astrophysics Data System (ADS)
Fedorova, E. R.; Firsov, A. Yu
2018-05-01
The process of thickening and washing red mud is a bottleneck of alumina production. The existing automated systems for thickening process control involve stabilizing the parameters of the primary technological circuits of the thickener. A topical direction of scientific research is the creation and improvement of models and systems for model-based control of the thickening process. However, the known models do not fully consider the presence of perturbing effects, in particular the particle size distribution in the process feed and the distribution of floccules by size after aggregation in the feed barrel. The article is devoted to the basic concepts and terms used in writing the population balance algorithm. The population balance model is implemented in the MatLab environment. The result of the simulation is the particle size distribution after the flocculation process. This model allows one to foresee the distribution range of floccules after the aggregation of red mud in the feed barrel. Mud from Jamaican bauxite served as an industrial sample of red mud; a Cytec Industries HX-3000 series flocculant with a concentration of 0.5% was used. In the simulation, model constants obtained in a tubular tank in the laboratories of CSIRO (Australia) were used.
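The population-balance idea can be sketched with a discrete Smoluchowski aggregation scheme: number densities in size classes lose particles to collisions and gain them from the birth of larger flocs. The constant kernel, class count, time step, and initial numbers below are invented for illustration and are not the CSIRO-calibrated constants used in the article.

```python
import numpy as np

n_classes, beta, dt, n_steps = 30, 1e-3, 0.1, 200
N = np.zeros(n_classes)
N[0] = 1000.0                     # start with primary particles only

for _ in range(n_steps):
    dN = np.zeros(n_classes)
    for i in range(n_classes):
        for j in range(n_classes):
            rate = beta * N[i] * N[j]          # constant aggregation kernel
            dN[i] -= rate                      # class i is consumed
            if i + j + 1 < n_classes:
                dN[i + j + 1] += 0.5 * rate    # floc of combined size is born
    N = np.maximum(N + dt * dN, 0.0)           # explicit Euler step, kept non-negative

sizes = np.arange(1, n_classes + 1)
print("mean floc size after flocculation:", float((sizes * N).sum() / N.sum()))
```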
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
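A sketch of the kind of distribution comparison described above, on synthetic data rather than the 120 measured SAD areas: fit a log-normal and compare it against normal and exponential alternatives with Kolmogorov-Smirnov statistics (the paper also tests power-law forms, omitted here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic positive "areas" standing in for the measured sample.
areas = rng.lognormal(mean=1.0, sigma=0.6, size=120)

shape, loc, scale = stats.lognorm.fit(areas, floc=0)
ks_logn = stats.kstest(areas, 'lognorm', args=(shape, loc, scale)).statistic
ks_norm = stats.kstest(areas, 'norm', args=(areas.mean(), areas.std())).statistic
ks_expo = stats.kstest(areas, 'expon', args=(0, areas.mean())).statistic

# Smaller KS statistic means a closer fit.
print({'lognormal': round(ks_logn, 3),
       'normal': round(ks_norm, 3),
       'exponential': round(ks_expo, 3)})
```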
NASA Astrophysics Data System (ADS)
Lacasa, Lucas
2014-09-01
Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
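A small sketch of the horizontal visibility algorithm, one member of the visibility family discussed above, applied to an uncorrelated random series. For such series the degree distribution has the known exact form P(k) = (1/3)(2/3)^(k-2) for k >= 2, which the empirical frequencies should approximate; the series length below is arbitrary.

```python
import numpy as np

def horizontal_visibility_degrees(series):
    # Points i < j are connected if every intermediate value lies below both.
    n = len(series)
    degree = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                degree[i] += 1
                degree[j] += 1
    return degree

rng = np.random.default_rng(6)
x = rng.random(500)                       # uncorrelated random series
deg = horizontal_visibility_degrees(x)
values, counts = np.unique(deg, return_counts=True)
print(dict(zip(values.tolist(), (counts / counts.sum()).round(3).tolist())))
```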
Hierarchical charge distribution controls self-assembly process of silk in vitro
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhang, Cencen; Liu, Lijie; Kaplan, David L.; Zhu, Hesun; Lu, Qiang
2015-12-01
Silk materials with different nanostructures have been developed without an understanding of the inherent transformation mechanism. Here we attempt to reveal the conversion route between the various nanostructures and determine the critical regulating factors. The regulating conversion processes influenced by a hierarchical charge distribution were investigated, showing different transformations between molecules, nanoparticles and nanofibers. Various repulsive and compressive forces existed among silk fibroin molecules and aggregates due to the exterior and interior distribution of charge, which further controlled their aggregating and deaggregating behaviors and finally formed nanofibers with different sizes. Synergistic action derived from molecular mobility and concentration could also tune the assembly process and the final nanostructures. It is suggested that the complicated silk fibroin assembly processes comply with a single rule based on charge distribution, offering a promising way to develop silk-based materials with designed nanostructures.
NASA Technical Reports Server (NTRS)
Mcclain, Charles R.; Ishizaka, Joji; Hofmann, Eileen E.
1990-01-01
Five coastal-zone-color-scanner images from the southeastern U.S. continental shelf are combined with concurrent moored current meter measurements to assess the processes controlling the variability in chlorophyll concentration and distribution in this region. An equation governing the space and time distribution of a nonconservative quantity such as chlorophyll is used in the calculations. The terms of the equation, estimated from observations, show that advective, diffusive, and local processes contribute to the plankton distributions and vary with time and location. The results from this calculation are compared with similar results obtained using a numerical physical-biological model with circulation fields derived from an optimal interpolation of the current meter observations and it is concluded that the two approaches produce different estimates of the processes controlling phytoplankton variability.
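The governing relation referred to above, for a nonconservative tracer such as chlorophyll, has the general advection-diffusion form sketched below; the exact source and sink terms used in the study are not specified in the abstract, so the biological term here is only a placeholder.

```latex
% Local change of chlorophyll concentration C as a balance of advection,
% turbulent diffusion and local (biological) sources and sinks:
\frac{\partial C}{\partial t}
  = -\,\mathbf{u}\cdot\nabla C
  + \nabla\cdot\left(K\,\nabla C\right)
  + S_{\mathrm{bio}}
```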
NASA Astrophysics Data System (ADS)
Aguilar, Juan C.; Berriel-Valdos, L. R.; Aguilar, J. Felix; Mejia-Romero, S.
An optical system formed by four point-diffraction interferometers is used to measure the refractive index distribution of a phase object. The phase of the object is assumed to be sufficiently smooth to be computed in terms of the Radon transform, and it is processed with an iterative tomographic algorithm. The associated refractive index distribution is then calculated. To recover the phase from the interferograms we use the Kreis method, which is useful for interferograms having only a few fringes. As an application of our technique, the temperature distribution of a candle flame is retrieved with the aid of the Gladstone-Dale equation. We also describe the process of manufacturing the point-diffraction interferometer (PDI) plates, which were made by means of the thermocavitation process. The obtained three-dimensional distribution of temperature is presented.
Three-dimensional distribution of cortical synapses: a replicated point pattern-based analysis
Anton-Sanchez, Laura; Bielza, Concha; Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has showed that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total volume of tissue of approximately 4500μm3 and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers. PMID:25206325
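A toy version of the RSA (random sequential adsorption) point process used above: candidate points are placed uniformly at random in a box and accepted only if they keep a minimum distance to all previously accepted points. The box size, exclusion radius, and target count are illustrative, not the FIB/SEM tissue values.

```python
import numpy as np

rng = np.random.default_rng(7)

def rsa_3d(n_target, box=10.0, radius=0.25, max_attempts=200000):
    # Accept a candidate only if it does not overlap (come closer than 2r to)
    # any previously accepted point.
    points = []
    for _ in range(max_attempts):
        candidate = rng.uniform(0, box, 3)
        if all(np.linalg.norm(candidate - p) >= 2 * radius for p in points):
            points.append(candidate)
            if len(points) == n_target:
                break
    return np.array(points)

synapses = rsa_3d(n_target=500)
density = len(synapses) / 10.0 ** 3
print(len(synapses), "accepted points, density", density, "per unit volume")
```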
Ceramic matrix composite article and process of fabricating a ceramic matrix composite article
Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert
2016-01-12
A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.
A renewal jump-diffusion process with threshold dividend strategy
NASA Astrophysics Data System (ADS)
Li, Bo; Wu, Rong; Song, Min
2009-06-01
In this paper, we consider a jump-diffusion risk process with the threshold dividend strategy. Both the distributions of the inter-arrival times and the claims are assumed to be in the class of phase-type distributions. The expected discounted dividend function and the Laplace transform of the ruin time are discussed. Motivated by Asmussen [S. Asmussen, Stationary distributions for fluid flow models with or without Brownian noise, Stochastic Models 11 (1) (1995) 21-49], instead of studying the original process, we study the constructed fluid flow process and their closed-form formulas are obtained in terms of matrix expression. Finally, numerical results are provided to illustrate the computation.
Thermal analysis of void cavity for heat pipe receiver under microgravity
NASA Astrophysics Data System (ADS)
Gui, Xiaohong; Song, Xiange; Nie, Baisheng
2017-04-01
Based on a theoretical analysis of the PCM (Phase Change Material) solidification process, a model of improved void cavity distribution tending toward the high temperature region is established. Numerical results are compared with NASA (National Aeronautics and Space Administration) results. The analysis shows that the outer wall temperature, the melting ratio of the PCM and the temperature gradient of the PCM canister differ greatly for different void cavity distributions. The form of the void distribution has a great effect on the phase change process. Based on simulation results with the improved void cavity distribution model, the phase change heat transfer process in the thermal storage container is analyzed. The main goal of the improved PCM canister design is to reduce the concentration of the void cavity by adding foam metal into the phase change material.
Colen, Hadewig B; Neef, Cees; Schuring, Roel W
2003-06-01
Worldwide patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. In the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish what drug distribution system, in respect of quality and cost-effectiveness, represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.
Judging The Effectiveness Of Wool Combing By The Entropy Of The Images Of Wool Slivers
NASA Astrophysics Data System (ADS)
Rodrigues, F. Carvalho; Carvalho, Fernando D.; Peixoto, J. Pinto; Silva, M. Santos
1989-04-01
In general it can be said that the textile industry endeavours to render a bunch of fibers chaotically distributed in space into an ordered spatial distribution. This is independent of the nature of the fibers, i.e., the aim of reaching higher order states in the spatial distribution of the fibers dictates different industrial processes depending on whether the fibers are wool, cotton or man-made, but the whole effort is centred on obtaining, at every step of any of the processes, a more ordered state in the spatial distribution of the fibers. Thinking about textile processes as a method of getting order out of chaos, the concept of entropy appears as the most appropriate parameter for judging the effectiveness of a step in the chain of an industrial process to produce a regular textile. In fact, entropy is the hidden parameter not only for the textile industry but also for the non-woven and paper industrial processes. It happens that in these industries the state of order is linked with the spatial distribution of fibers, and to obtain an image of a spatial distribution is an easy matter. To compute the image entropy from the grey level distribution requires only the use of the Shannon formula. In this paper, to illustrate the usefulness of applying the entropy-of-an-image concept to textiles, the evolution of the entropy of wool slivers along the combing process is matched against the state of parallelization of the fibers along the seven steps as measured by the existing method. The advantage of the entropy method over the previous method based on diffraction is also demonstrated.
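The measure referred to above is the Shannon entropy of the image's grey-level histogram. The sketch below computes it for two synthetic arrays standing in for sliver images; under the paper's argument, a better-combed (more parallel) sliver should yield a narrower grey-level distribution and thus a lower entropy. Array sizes and grey ranges are arbitrary.

```python
import numpy as np

def image_entropy(image, levels=256):
    # Shannon entropy (in bits) of the grey-level histogram.
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(8)
disordered = rng.integers(0, 256, size=(64, 64))            # wide grey-level spread
ordered = rng.normal(128, 8, size=(64, 64)).clip(0, 255)    # narrow spread
print(image_entropy(disordered), image_entropy(ordered))
```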
Electric power processing, distribution, management and energy storage
NASA Astrophysics Data System (ADS)
Giudici, R. J.
1980-07-01
Power distribution subsystems are required for three elements of the SPS program: (1) the orbiting satellite, (2) the ground rectenna, and (3) the Electric Orbiting Transfer Vehicle (EOTV). Power distribution subsystems receive electrical power from the energy conversion subsystem and provide the power busses, rotary power transfer devices, switchgear, power processing, energy storage, and power management required for power delivery and control. High-voltage plasma interactions, electric thruster interactions, and spacecraft charging of the SPS and the EOTV are also included as part of the power distribution subsystem design.
Electric power processing, distribution, management and energy storage
NASA Technical Reports Server (NTRS)
Giudici, R. J.
1980-01-01
Power distribution subsystems are required for three elements of the SPS program: (1) the orbiting satellite, (2) the ground rectenna, and (3) the Electric Orbiting Transfer Vehicle (EOTV). Power distribution subsystems receive electrical power from the energy conversion subsystem and provide the power busses, rotary power transfer devices, switchgear, power processing, energy storage, and power management required for power delivery and control. High-voltage plasma interactions, electric thruster interactions, and spacecraft charging of the SPS and the EOTV are also included as part of the power distribution subsystem design.
Warner, D; Sale, J; Viirre, E
1996-01-01
Recent trends in healthcare informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. Distributed Medical Intelligence promotes the development of an integrative medical communication system which addresses the process of providing expert medical knowledge to the point of need.
Frequency-Wavenumber (F-K) Processing for Infrasound Distributed Arrays
2012-10-01
... have conventionally been used to detect infrasound. Pipe arrays, used in conjunction with microbarometers, provide noise reduction by averaging wind ... signals. This is especially true for infrasound and low-frequency acoustic sources of tactical interest in the 1 to 100 Hz range. The work described ...
Holographic monitoring of spatial distributions of singlet oxygen in water
NASA Astrophysics Data System (ADS)
Belashov, A. V.; Bel'tyukova, D. M.; Vasyutinskii, O. S.; Petrov, N. V.; Semenova, I. V.; Chupov, A. S.
2014-12-01
A method for monitoring spatial distributions of singlet oxygen in biological media has been developed. Singlet oxygen was generated using Radachlorin® photosensitizer, while thermal disturbances caused by nonradiative deactivation of singlet oxygen were detected by the holographic interferometry technique. Processing of interferograms yields temperature maps that characterize the deactivation process and show the distribution of singlet oxygen species.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos... the manufacture, import, processing, or distribution in commerce of asbestos-containing products in...
Multiplicative processes in visual cognition
NASA Astrophysics Data System (ADS)
Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.
2014-03-01
The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve, elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
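The multiplicative counterpart of the CLT described above can be demonstrated in a few lines: the product of many independent positive factors is approximately log-normal because its logarithm is a sum of independent terms. The factor ranges and sample sizes below are arbitrary.

```python
import numpy as np
rng = np.random.default_rng(9)

n_samples, n_factors = 100000, 50
factors = rng.uniform(0.8, 1.25, size=(n_samples, n_factors))
products = factors.prod(axis=1)          # each sample is a product of many factors

logs = np.log(products)
print("mean, std of log(product):", logs.mean().round(3), logs.std().round(3))
# A histogram of `logs` is close to Gaussian, i.e. `products` is close to log-normal.
```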
NASA Astrophysics Data System (ADS)
Muslih, M. Refai; Sumirat, I.; Sairun; Purwanta
2008-03-01
The distribution of residual stress in SUS304 samples that underwent TIG welding with four different electric currents has been measured. The welding was done in the middle part of the samples, which had previously been grooved by a milling machine. Before welding, the samples were annealed at 650 degrees Celsius for one hour. The annealing was done to eliminate the residual stress generated by the grooving process, so that the residual stress within the samples was produced solely by welding. The calculation of the distribution of residual stress was carried out by measuring the strains within the Fe(220) crystal planes of SUS304. The strain, Young's modulus, and Poisson ratio of Fe(220) SUS304 were measured using the DN1-M neutron diffractometer. The Young's modulus and Poisson ratio of the Fe(220) SUS304 sample were measured in situ. The calculations showed that the distribution of residual stress in SUS304 in the vicinity of the welded area is influenced both by the treatments applied during sample preparation and by the electric current used during welding.
Ancient fluvial processes in the equatorial highlands of Mars
NASA Technical Reports Server (NTRS)
Craddock, Robert A.; Maxwell, Ted A.
1991-01-01
Martian highland craters typically lack ejecta deposits, have no noticeable rim, and are flat floored. In addition, crater size frequency distribution curves show that highland craters have depleted populations less than 20 km in diameter. A variety of processes have been suggested to explain these observations, including deposition of aeolian or volcanic materials up to the crater rim crests, thermal creep, terrain softening, and mass wasting. However, none of these processes adequately explains both the crater morphology and population distribution. In order to explain both the Martian highland crater morphology and population distribution, a fluvial process is proposed which is capable of removing the loose crater rim material. The resulting effect is to decrease the crater diameter, thereby causing the population curves to bend over. The eroded material is redistributed, burying or partially burying smaller diameter craters before complete erosion. This material may also be deposited into local topographic lows, creating the depositional basins observed. A fluvial process explains both sets of observations: crater morphology and crater population distribution curves.
Empirical comparison of heuristic load distribution in point-to-point multicomputer networks
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.
1990-01-01
The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The New Brunswick government intends to award a franchise to establish natural gas distribution in the province. To this end, the province wishes to invite bids from qualified entities to establish gas distribution facilities. The province will select the preferred bidder(s) through a two-stage competitive bidding process. This document details the province's policy objectives, questions and issues to be addressed in stage 1 of the process, and the schedule for the process. Appendices include copies of relevant provincial statutes and regulations.
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems in the world. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and therefore requires substantial computing resources that may not be steadily available to researchers or may only be available at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize the computation in the space and time dimensions: they calculate natural features in order, based on the distributed hydrological model, by grid (unit, basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of computing and storage resources under conditions of limited computing resources, and computing efficiency improves linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
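As an illustrative sketch of the distinction drawn above (assumed parameter values; this is not the paper's growth model), one can exponentiate a Gaussian variate to obtain a log-normal sample and exponentiate a heavy-tailed Lévy-stable variate to obtain a log-Lévy sample:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
n = 10_000

gaussian_sum = rng.normal(0.0, 1.0, size=n)                  # CLT limit of additive noise
stable_sum = levy_stable.rvs(alpha=1.5, beta=0.0, size=n,
                             random_state=2)                 # generalized-CLT (Levy) limit

log_normal_sample = np.exp(gaussian_sum)
log_levy_sample = np.exp(stable_sum)

# The log-Levy sample has a far heavier upper tail than the log-normal one.
print("log-normal 99.9th percentile:", np.percentile(log_normal_sample, 99.9))
print("log-Levy   99.9th percentile:", np.percentile(log_levy_sample, 99.9))
```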
The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew
Der, Ricky; Plotkin, Joshua B.
2014-01-01
We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
Lognormal field size distributions as a consequence of economic truncation
Attanasi, E.D.; Drew, L.J.
1985-01-01
The assumption of lognormal (parent) field size distributions has long been applied to resource appraisal and to the evaluation of exploration strategy by the petroleum industry. However, the frequency distributions estimated from observed data and used to justify this hypothesis are conditional. Examination of various observed field size distributions across basins and over time shows that such distributions should be regarded as the end result of an economic filtering process. Commercial discoveries depend on oil and gas prices and field development costs. Some new fields are eliminated due to location, depths, or water depths. This filtering process is called economic truncation. Economic truncation may occur when predictions of a discovery process are passed through an economic appraisal model. We demonstrate that (1) economic resource appraisals, (2) forecasts of levels of petroleum industry activity, and (3) expected benefits of developing and implementing cost-reducing technology are sensitive to assumptions made about the nature of that portion of the (parent) field size distribution subject to economic truncation. © 1985 Plenum Publishing Corporation.
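A minimal numerical sketch of economic truncation (the threshold and distribution parameters are hypothetical, not taken from the study) is to draw field sizes from a parent log-normal distribution and then discard fields below a minimum commercial size:

```python
import numpy as np

rng = np.random.default_rng(3)

# Parent (untruncated) field size distribution, in arbitrary resource units.
parent_sizes = rng.lognormal(mean=2.0, sigma=1.5, size=50_000)

# Hypothetical economic filter: only fields above this size are commercial
# at the assumed prices and development costs.
economic_minimum = 5.0
observed_sizes = parent_sizes[parent_sizes >= economic_minimum]

print("parent fields:", parent_sizes.size, "| commercial fields:", observed_sizes.size)
print("parent median:", round(np.median(parent_sizes), 2),
      "| observed median:", round(np.median(observed_sizes), 2))
```

The observed (conditional) distribution has a much larger median than the parent distribution, which is the bias the authors warn against when size distributions are fitted to discovery data alone.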
Statistical distributions of earthquake numbers: consequence of branching process
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.
2010-03-01
We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
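The Poisson-gamma mixture representation mentioned above is easy to verify numerically. The sketch below (illustrative parameter values only) draws Poisson counts whose rate is gamma-distributed and compares them with a direct negative binomial draw of matching parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
shape, scale, n = 2.0, 5.0, 200_000        # gamma parameters for the mixing rate

rates = rng.gamma(shape, scale, size=n)    # variable seismicity rate per window
counts_mixture = rng.poisson(rates)        # Poisson counts with gamma-mixed rate

# Equivalent direct NBD draw: r = shape, p = 1 / (1 + scale)
counts_nbd = rng.negative_binomial(shape, 1.0 / (1.0 + scale), size=n)

print("mixture mean / var:", counts_mixture.mean(), counts_mixture.var())
print("NBD     mean / var:", counts_nbd.mean(), counts_nbd.var())
```

In both cases the variance exceeds the mean, which is the overdispersion (clustering) that the one-parameter Poisson distribution cannot capture.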
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention of networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategy. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm was proposed for large-scale chemical processes. The key controlled variables are first partitioned by affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively block-wise renewing the samples. The proposed algorithm was applied in the Tennessee Eastman process and the validity was verified.
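A rough sketch of the two-step decomposition described above, using scikit-learn's affinity propagation and canonical correlation analysis on synthetic placeholder data (the variable names, data, and ranking rule are illustrative and not taken from the paper):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
n_samples, n_outputs, n_inputs = 500, 6, 10
Y = rng.normal(size=(n_samples, n_outputs))    # key controlled variables
X = rng.normal(size=(n_samples, n_inputs))     # candidate input variables

# Step 1: partition controlled variables into subsystems by clustering
# their pairwise correlation profiles.
labels = AffinityPropagation(random_state=0).fit_predict(np.corrcoef(Y.T))

# Step 2: for each subsystem, rank candidate inputs by their canonical
# correlation weights with the subsystem's controlled variables.
for k in np.unique(labels):
    Y_sub = Y[:, labels == k]
    cca = CCA(n_components=1).fit(X, Y_sub)
    weights = np.abs(cca.x_weights_[:, 0])
    print(f"subsystem {k}: top-ranked inputs {np.argsort(weights)[::-1][:3]}")
```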
schwimmbad: A uniform interface to parallel processing pools in Python
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Foreman-Mackey, Daniel
2017-09-01
Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
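A minimal usage sketch of the pool interface described above (the worker function and task list are placeholders; the pool classes follow schwimmbad's documented interface as I understand it, so treat the exact signatures as an assumption):

```python
from schwimmbad import MultiPool

def worker(task):
    # A perfectly parallel calculation on one element of the data set.
    return task ** 2

if __name__ == "__main__":
    tasks = range(10_000)
    # Swapping MultiPool for SerialPool or MPIPool leaves the rest of the
    # code unchanged, which is the point of the uniform interface.
    with MultiPool(processes=4) as pool:
        results = list(pool.map(worker, tasks))
    print(sum(results))
```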
An approach for heterogeneous and loosely coupled geospatial data distributed computing
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui
2010-07-01
Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
ERIC Educational Resources Information Center
Brookes, Bertram C.; Griffiths, Jose M.
1978-01-01
Frequency, rank, and frequency rank distributions are defined. Extensive discussion on several aspects of frequency rank distributions includes the Poisson process as a means of exploring the stability of ranks; the correlation of frequency rank distributions; and the transfer coefficient, a new measure in frequency rank distribution. (MBR)
Code of Federal Regulations, 2013 CFR
2013-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
The Land Processes Distributed Active Archive Center (LP DAAC)
Golon, Danielle K.
2016-10-03
The Land Processes Distributed Active Archive Center (LP DAAC) operates as a partnership with the U.S. Geological Survey and is 1 of 12 DAACs within the National Aeronautics and Space Administration (NASA) Earth Observing System Data and Information System (EOSDIS). The LP DAAC ingests, archives, processes, and distributes NASA Earth science remote sensing data. These data are provided to the public at no charge. Data distributed by the LP DAAC provide information about Earth’s surface from daily to yearly intervals and at 15 to 5,600 meter spatial resolution. Data provided by the LP DAAC can be used to study changes in agriculture, vegetation, ecosystems, elevation, and much more. The LP DAAC provides several ways to access, process, and interact with these data. In addition, the LP DAAC is actively archiving new datasets to provide users with a variety of data to study the Earth.
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
Forced guidance and distribution of practice in sequential information processing.
NASA Technical Reports Server (NTRS)
Decker, L. R.; Rogers, C. A., Jr.
1973-01-01
Distribution of practice and forced guidance were used in a sequential information-processing task in an attempt to increase the capacity of human information-processing mechanisms. A reaction time index of the psychological refractory period was used as the response measure. Massing of practice lengthened response times, while forced guidance shortened them. Interpretation was in terms of load reduction upon the response-selection stage of the information-processing system.
Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston
2016-01-01
Context. Species distribution models (SDM) establish statistical relationships between the current distribution of species and key attributes whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses DISTRIB SDM, and Linkages and LANDIS PRO, process...
Jeffrey E. Schneiderman; Hong S. He; Frank R. Thompson; William D. Dijak; Jacob S. Fraser
2015-01-01
Tree species distribution and abundance are affected by forces operating across a hierarchy of ecological scales. Process and species distribution models have been developed emphasizing forces at different scales. Understanding model agreement across hierarchical scales provides perspective on prediction uncertainty and ultimately enables policy makers and managers to...
Wagner, Peter J
2012-02-23
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables but that the conditional probability distribution functions in the tail obey the Weibull distribution.
Architecture for distributed design and fabrication
NASA Astrophysics Data System (ADS)
McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.
1997-01-01
We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.
NASA Astrophysics Data System (ADS)
Duarte Queirós, Sílvio M.
2012-07-01
We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable upon analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
Transitioning from Distributed and Traditional to Distributed and Agile: An Experience Report
NASA Astrophysics Data System (ADS)
Wildt, Daniel; Prikladnicki, Rafael
Global companies that experienced extensive waterfall phased plans are trying to improve their existing processes to expedite team engagement. Agile methodologies have become an acceptable path to follow because it comprises project management as part of its practices. Agile practices have been used with the objective of simplifying project control through simple processes, easy to update documentation and higher team iteration over exhaustive documentation, focusing rather on team continuous improvement and aiming to add value to business processes. The purpose of this chapter is to describe the experience of a global multinational company on transitioning from distributed and traditional to distributed and agile. This company has development centers across North America, South America and Asia. This chapter covers challenges faced by the project teams of two pilot projects, including strengths of using agile practices in a globally distributed environment and practical recommendations for similar endeavors.
ERIC Educational Resources Information Center
Siakaluk, Paul D.; Pexman, Penny M.; Sears, Christopher R.; Owen, William J.
2007-01-01
The ambiguity disadvantage (slower processing of ambiguous words relative to unambiguous words) has been taken as evidence for a distributed semantic representational system like that embodied in parallel distributed processing (PDP) models. In the present study, we investigated whether semantic ambiguity slows meaning activation, as PDP models…
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
Electronic holography using binary phase modulation
NASA Astrophysics Data System (ADS)
Matoba, Osamu
2014-06-01
A 3D display system using a phase-only distribution is presented. In particular, a binary phase distribution is used to reconstruct a 3D object with a wide viewing zone angle. To obtain the phase distribution to be displayed on a phase-mode spatial light modulator, both experimental and numerical processes are available. In this paper, we present a numerical process using computer graphics data. A random phase distribution is attached to all polygons of an input 3D object so that the 3D object is reconstructed well from the binary phase distribution. Numerical and experimental results are presented to show the effectiveness of the proposed system.
A Framework for Distributed Problem Solving
NASA Astrophysics Data System (ADS)
Leone, Joseph; Shin, Don G.
1989-03-01
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail. An implementation of the amplification process is presented. The process is illustrated by an example.
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2000-01-01
The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.
Distribution of chirality in the quantum walk: Markov process and entanglement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanelli, Alejandro
The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a longtime limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
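The back-transformation idea can be sketched as follows (a minimal illustration, not the paper's parameterization): simulate a parent Gaussian AR(1) process and map it through the Gaussian CDF and a target quantile function, here a gamma marginal chosen as an arbitrary example of a skewed hydroclimatic variable:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, rho = 10_000, 0.7                          # length and lag-1 parent correlation

# Parent Gaussian AR(1) process.
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()

# Marginal back-transformation: Gaussian -> uniform -> target marginal.
u = stats.norm.cdf(z)
x = stats.gamma.ppf(u, a=0.8, scale=2.0)      # target process with gamma marginal

print("target mean / skewness:", x.mean(), stats.skew(x))
print("lag-1 correlation of target process:", np.corrcoef(x[:-1], x[1:])[0, 1])
```

Note that the lag-1 correlation of the transformed process differs from rho; the parametric correlation transformation functions mentioned above exist precisely to choose the parent correlation that yields a prescribed target correlation.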
Xyce release and distribution management : version 1.2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, Scott Alan; Williamson, Charles Michael
2003-10-01
This document presents a high-level description of the Xyce™ Parallel Electronic Simulator Release and Distribution Management Process. The purpose of this process is to standardize the manner in which all Xyce software products progress toward release and how releases are made available to customers. Rigorous Release Management will assure that Xyce releases are created in such a way that the elements comprising the release are traceable and the release itself is reproducible. Distribution Management describes what is to be done with a Xyce release that is eligible for distribution.
Production and Distribution of NASA MODIS Remote Sensing Products
NASA Technical Reports Server (NTRS)
Wolfe, Robert
2007-01-01
The two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on-board NASA's Earth Observing System (EOS) Terra and Aqua satellites make key measurements for understanding the Earth's terrestrial ecosystems. Global time-series of terrestrial geophysical parameters have been produced from MODIS/Terra for over 7 years and from MODIS/Aqua for more than 4 1/2 years. These well-calibrated instruments, a team of scientists, and large data production, archive, and distribution systems have allowed for the development of a new suite of high quality product variables at spatial resolutions as fine as 250 m in support of global change research and natural resource applications. This talk describes the MODIS Science team's products, with a focus on the terrestrial (land) products, the data processing approach, and the process for monitoring and improving the product quality. The original MODIS science team was formed in 1989. The team's primary role is the development and implementation of the geophysical algorithms. In addition, the team provided feedback on the design and pre-launch testing of the instrument and helped guide the development of the data processing system. The key challenges the science team dealt with before launch were developing algorithms for a new instrument and providing guidance for the large and complex multi-discipline processing system. Land, Ocean and Atmosphere discipline teams drove the processing system requirements, particularly in the area of the processing loads and volumes needed to produce daily geophysical maps of the Earth at resolutions as fine as 250 m. The processing system had to handle a large number of data products, large data volumes and processing loads, and complex processing requirements. Prior to MODIS, daily global maps from heritage instruments, such as the Advanced Very High Resolution Radiometer (AVHRR), were not produced at resolutions finer than 5 km. The processing solution evolved into a combination of processing the lower level (Level 1) products and the higher level discipline-specific Land and Atmosphere products in the MODIS Science Investigator Lead Processing System (SIPS), the MODIS Adaptive Processing System (MODAPS), and archive and distribution of the Land products to the user community by two of NASA's EOS Distributed Active Archive Centers (DAACs). Recently, a part of MODAPS, the Level 1 and Atmosphere Archive and Distribution System (LAADS), took over the role of archiving and distributing the Level 1 and Atmosphere products to the user community.
Logistics Response Time for the Direct Vendor Delivery Process, Defense Supply Center, Columbus
1999-03-04
SECRETARY OF DEFENSE (MATERIEL AND DISTRIBUTION MANAGEMENT) DIRECTOR, DEFENSE LOGISTICS AGENCY SUBJECT: Audit Report on the Logistics Response Time for... Under Secretary of Defense (Materiel and Distribution Management) about whether the direct vendor delivery process is unfavorably affecting the logistics... was requested by the Office of the Assistant Deputy Under Secretary of Defense (Materiel and Distribution Management). DoD corporate goals in response
The origin of the criticality in meme popularity distribution on complex networks.
Kim, Yup; Park, Seokjong; Yook, Soon-Hyung
2016-03-24
Previous studies showed that the meme popularity distribution is described by a heavy-tailed distribution or a power-law, which is a characteristic feature of the criticality. Here, we study the origin of the criticality on non-growing and growing networks based on the competition induced criticality model. From the direct Monte Carlo simulations and the exact mapping into the position dependent biased random walk (PDBRW), we find that the meme popularity distribution satisfies a very robust power-law with exponent α = 3/2 if there is an innovation process. On the other hand, if there is no innovation, then we find that the meme popularity distribution is bounded and highly skewed for early transient time periods, while it satisfies a power-law with exponent α ≠ 3/2 for intermediate time periods. The exact mapping into PDBRW clearly shows that the balance between the creation of new memes by the innovation process and the extinction of old memes is the key factor for the criticality. We confirm that the balance for the criticality sustains for relatively small innovation rate. Therefore, the innovation processes with significantly influential memes should be the simple and fundamental processes which cause the critical distribution of the meme popularity in real social networks.
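As a simplified stand-in for the copy-versus-innovate competition described above (this is a generic preferential-copying simulation, not the paper's exact model or its PDBRW mapping), the following sketch shows how an innovation process produces a heavy-tailed popularity distribution:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(mu, steps=20_000, initial_memes=50):
    """Each step: innovate a new meme with probability mu, otherwise
    re-post an existing meme chosen proportionally to its popularity."""
    popularity = [1] * initial_memes
    for _ in range(steps):
        if rng.random() < mu:
            popularity.append(1)                       # innovation: brand-new meme
        else:
            probs = np.asarray(popularity, dtype=float)
            probs /= probs.sum()
            popularity[rng.choice(len(popularity), p=probs)] += 1
    return np.asarray(popularity)

for mu in (0.0, 0.1):
    pops = simulate(mu)
    print(f"mu={mu}: memes={pops.size}, max popularity={pops.max()}, "
          f"fraction with popularity > 100: {np.mean(pops > 100):.4f}")
```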
The origin of the criticality in meme popularity distribution on complex networks
Kim, Yup; Park, Seokjong; Yook, Soon-Hyung
2016-01-01
Previous studies showed that the meme popularity distribution is described by a heavy-tailed distribution or a power-law, which is a characteristic feature of the criticality. Here, we study the origin of the criticality on non-growing and growing networks based on the competition induced criticality model. From the direct Monte Carlo simulations and the exact mapping into the position dependent biased random walk (PDBRW), we find that the meme popularity distribution satisfies a very robust power-law with exponent α = 3/2 if there is an innovation process. On the other hand, if there is no innovation, then we find that the meme popularity distribution is bounded and highly skewed for early transient time periods, while it satisfies a power-law with exponent α ≠ 3/2 for intermediate time periods. The exact mapping into PDBRW clearly shows that the balance between the creation of new memes by the innovation process and the extinction of old memes is the key factor for the criticality. We confirm that the balance for the criticality sustains for relatively small innovation rate. Therefore, the innovation processes with significantly influential memes should be the simple and fundamental processes which cause the critical distribution of the meme popularity in real social networks. PMID:27009399
The origin of the criticality in meme popularity distribution on complex networks
NASA Astrophysics Data System (ADS)
Kim, Yup; Park, Seokjong; Yook, Soon-Hyung
2016-03-01
Previous studies showed that the meme popularity distribution is described by a heavy-tailed distribution or a power-law, which is a characteristic feature of the criticality. Here, we study the origin of the criticality on non-growing and growing networks based on the competition induced criticality model. From the direct Monte Carlo simulations and the exact mapping into the position dependent biased random walk (PDBRW), we find that the meme popularity distribution satisfies a very robust power-law with exponent α = 3/2 if there is an innovation process. On the other hand, if there is no innovation, then we find that the meme popularity distribution is bounded and highly skewed for early transient time periods, while it satisfies a power-law with exponent α ≠ 3/2 for intermediate time periods. The exact mapping into PDBRW clearly shows that the balance between the creation of new memes by the innovation process and the extinction of old memes is the key factor for the criticality. We confirm that the balance for the criticality sustains for relatively small innovation rate. Therefore, the innovation processes with significantly influential memes should be the simple and fundamental processes which cause the critical distribution of the meme popularity in real social networks.
Jeffrey A. Falke; Jason B. Dunham; Christopher E. Jordan; Kristina M. McNyset; Gordon H. Reeves
2013-01-01
Processes that influence habitat selection in landscapes involve the interaction of habitat composition and configuration and are particularly important for species with complex life cycles. We assessed the relative influence of landscape spatial processes and local habitat characteristics on patterns in the distribution and abundance of spawning steelhead (...
NASA Astrophysics Data System (ADS)
Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.
2010-03-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid non-destructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
Toward a process-level view of distributed healthcare tasks: Medication management as a case study.
Werner, Nicole E; Malkana, Seema; Gurses, Ayse P; Leff, Bruce; Arbaje, Alicia I
2017-11-01
We aim to highlight the importance of using a process-level view in analyzing distributed healthcare tasks through a case study analysis of medication management (MM). MM during older adults' hospital-to-skilled-home-healthcare (SHHC) transitions is a healthcare process with tasks distributed across people, organizations, and time. MM has typically been studied at the task level, but a process-level is needed to fully understand and improve MM during transitions. A process-level view allows for a broader investigation of how tasks are distributed throughout the work system through an investigation of interactions and the resultant emergent properties. We studied MM during older adults' hospital-to-SHHC transitions through interviews and observations with 60 older adults, their 33 caregivers, and 79 SHHC providers at 5 sites associated with 3 SHHC agencies. Study findings identified key cross-system characteristics not observable at the task-level: (1) identification of emergent properties (e.g., role ambiguity, loosely-coupled teams performing MM) and associated barriers; and (2) examination of barrier propagation across system boundaries. Findings highlight the importance of a process-level view of healthcare delivery occurring across system boundaries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamic measurement of fluorescent proteins spectral distribution on virus infected cells
NASA Astrophysics Data System (ADS)
Lee, Ja-Yun; Wu, Ming-Xiu; Kao, Chia-Yun; Wu, Tzong-Yuan; Hsu, I.-Jen
2006-09-01
We constructed a dynamic spectroscopy system that can simultaneously measure the intensity and spectral distributions of samples with multiple fluorophores in a single scan. The system was used to monitor the fluorescence distribution of cells infected by a virus constructed from a recombinant baculovirus, vAcD-Rhir-E, containing the red and green fluorescent protein genes, which can simultaneously produce dual fluorescence in recombinant virus-infected Spodoptera frugiperda 21 (Sf21) cells under the control of a polyhedrin promoter. The system was composed of an excitation light source, a scanning system, and a spectrometer. We also developed an algorithm and fitting process to analyze the pattern of fluorescence distribution of the dual fluorescence produced in the recombinant virus-infected cells. All algorithms and calculations are processed automatically in a visualized scanning program, which can monitor a specific region of the sample by calculating its intensity distribution. The spectral measurement of each pixel was performed in the millisecond range, and the two-dimensional distribution of the full spectrum was recorded within several seconds. We have constructed a dynamic spectroscopy system to monitor the process of virus infection of cells. The distributions of the dual fluorescence were simultaneously measured at micrometer resolution.
Log-amplitude statistics for Beck-Cohen superstatistics
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Konno, Hidetoshi
2013-05-01
As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrated that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.
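A short sketch of a χ2 superstatistical process (parameter values are illustrative): the local inverse variance is drawn from a χ2 distribution and held fixed over short windows, and the aggregated samples are visibly heavier-tailed than a Gaussian, as expected for a q-Gaussian-like distribution:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(8)
n_windows, window_len, degree = 5_000, 50, 2

# Local inverse variance beta ~ chi-squared with `degree` degrees of freedom (rescaled).
beta = rng.chisquare(degree, size=n_windows) / degree

# Within each window the process is locally Gaussian with that variance.
x = rng.normal(0.0, 1.0 / np.sqrt(beta)[:, None], size=(n_windows, window_len)).ravel()

print("excess kurtosis (0 for a Gaussian):", kurtosis(x))
```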
Wagner, Peter J.
2012-01-01
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266
Study on thickness distribution of thermoformed medical PVC blister
NASA Astrophysics Data System (ADS)
Li, Yiping
2017-08-01
Vacuum forming has many advantages over other plastic forming processes due to its cost effectiveness, time efficiency, higher product precision, and greater design flexibility. Nevertheless, when pressures greater than the atmospheric value are required to force the thermoplastic into more intimate contact with the mold surface, pressure forming is a better choice. This paper studies the process of air-pressure thermoforming of plastic sheet, and focuses on medical blister PVC products. The ANSYS POLYFLOW tool is used to simulate the process and analyze the wall thickness distribution of the blister. The influence of mold parameters on the wall thickness distribution of the thermoformed part is thus obtained through simulation. Increasing the radius between the mold and the side wall at the bottom of the blister, together with the draft, proves to improve the wall thickness distribution.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso
2016-01-01
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
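A compact numerical sketch of the modelling idea (all parameter values here are hypothetical and for illustration only): initial cell numbers per well follow a Poisson distribution, each cell independently survives storage with a Weibull-type probability, and the resulting survivor counts remain Poisson-distributed with a reduced mean:

```python
import numpy as np

rng = np.random.default_rng(9)
n_wells, lam = 96, 2.0                       # 96-well plate, mean of 2 cells per well
delta, shape, t = 5.0, 0.8, 7.0              # Weibull scale/shape (days) and storage time

initial = rng.poisson(lam, size=n_wells)     # initial cell numbers per well
p_survive = np.exp(-(t / delta) ** shape)    # per-cell survival probability at time t
survivors = rng.binomial(initial, p_survive) # independent thinning of each well

print("wells with at least one survivor:", int(np.count_nonzero(survivors)))
print("mean survivors per well:", survivors.mean(),
      "| thinned Poisson mean:", lam * p_survive)
```

Independent thinning of a Poisson count is again Poisson, which is why the surviving numbers can still be described by a Poisson distribution with a time-dependent parameter.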
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
7 CFR 1717.859 - Application process and timeframes.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Lien Accommodations and Subordinations for 100 Percent Private Financing § 1717.859 Application process... supporting documents, such as a CWP. (b) Advance approval—100 percent private financing of distribution... of a lien accommodation or subordination for 100 percent private financing of distribution...
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
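The abstract does not give implementation details, but privacy amplification is commonly realised with a random binary Toeplitz matrix acting as a 2-universal hash. The sketch below is a generic illustration of that step, not the authors' procedure; the key length, output length, and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def toeplitz_hash(key_bits: np.ndarray, out_len: int, seed_bits: np.ndarray) -> np.ndarray:
    """Compress a reconciled key with a random binary Toeplitz matrix (mod 2)."""
    n = key_bits.size
    # A Toeplitz matrix is fixed by its first column (out_len bits) and first
    # row (n bits), sharing one corner bit: seed length = out_len + n - 1.
    first_col = seed_bits[:out_len]
    first_row = seed_bits[out_len - 1:]
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        for j in range(n):
            T[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return (T.astype(int) @ key_bits.astype(int)) % 2

n, m = 64, 24                                        # reconciled key length, final key length
key = rng.integers(0, 2, n, dtype=np.uint8)          # error-corrected (reconciled) key
seed = rng.integers(0, 2, m + n - 1, dtype=np.uint8)
print(toeplitz_hash(key, m, seed))
```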
A data distributed parallel algorithm for ray-traced volume rendering
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.
1993-01-01
This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
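As a small illustration of the compositing stage described above (not the paper's CM-5 implementation), subimages rendered from separate subvolumes can be combined with the alpha-blending "over" operator in the a priori depth order; the tile contents here are random placeholders.

```python
import numpy as np

def over(front: np.ndarray, back: np.ndarray) -> np.ndarray:
    """Composite two RGBA subimages (premultiplied alpha) with the 'over' operator."""
    alpha_front = front[..., 3:4]
    return front + (1.0 - alpha_front) * back

rng = np.random.default_rng(0)
# Three subimages produced independently by three processing nodes,
# already ordered front to back along the viewing direction.
subimages = [rng.random((4, 4, 4)) * 0.3 for _ in range(3)]

final = subimages[0]
for img in subimages[1:]:
    final = over(final, img)      # accumulate in depth order
print(final.shape)                # the composited full image
```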
Methods and apparatuses for information analysis on shared and distributed computing systems
Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA
2011-02-22
Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
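A toy version of the described pattern, local term statistics computed per document partition and then merged into a global set, can be sketched with Python multiprocessing; the documents and their partitioning are made up for illustration.

```python
from collections import Counter
from multiprocessing import Pool

# Hypothetical distinct document sets, one per process
partitions = [
    ["the cat sat", "the dog ran"],
    ["a cat and a dog", "the end"],
]

def local_term_stats(docs):
    """Compute term frequencies for one distinct set of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

if __name__ == "__main__":
    with Pool(processes=len(partitions)) as pool:
        local_sets = pool.map(local_term_stats, partitions)   # parallel local statistics

    # Contribute each local set to a global set of term statistics
    global_stats = Counter()
    for local in local_sets:
        global_stats.update(local)

    # A "major term set" could then be chosen, e.g. the most frequent terms
    print(global_stats.most_common(3))
```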
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the model does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound nucleus central point into two parts, the left central and right central points. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is repeated by changing σL and σR randomly.
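A minimal sketch of one possible reading of this toy model is given below; the parameter values, the particle number, and the trapping rule (assigning each particle to whichever central point gives it the larger Gaussian weight) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Particle positions sampled around the compound nucleus centre
mu_cn, sigma_cn = 0.0, 1.0
n_particles = 236
particles = rng.normal(mu_cn, sigma_cn, n_particles)

mu_l, sigma_l = -0.5, 0.8      # left central point (illustrative)
mu_r, sigma_r = 0.6, 0.9       # right central point (illustrative)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

n_prev = (-1, -1)
for _ in range(100):
    # A particle is "trapped" by whichever central point gives it the larger
    # Gaussian weight when the three distributions are overlaid.
    left = gauss(particles, mu_l, sigma_l) >= gauss(particles, mu_r, sigma_r)
    n_l, n_r = int(left.sum()), int((~left).sum())
    if (n_l, n_r) == n_prev:          # iterate until (N_L, N_R) stop changing
        break
    n_prev = (n_l, n_r)
    if n_l > 0:
        mu_l = particles[left].mean()    # re-centre the left fragment
    if n_r > 0:
        mu_r = particles[~left].mean()   # re-centre the right fragment

print("fragment sizes (N_L, N_R):", n_prev)
```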
a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query, and process such big data due to data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be fetched directly from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox, and MapReduce, these remote sensing images can be processed directly in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
Barista: A Framework for Concurrent Speech Processing by USC-SAIL
Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.
2016-01-01
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista allows demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
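A compact sketch of this idea, one agent per manufacturing process reacting to the three discrete event types named in the claim, might look as follows; the agent names and behaviours are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class ProcessAgent:
    """One agent per manufacturing process, driven by discrete event messages."""
    name: str
    inventory: int = 0
    produced: int = 0

    def handle(self, message: str):
        if message == "clock_tick":
            pass                                   # advance internal simulation time
        elif message == "resources_received":
            self.inventory += 1                    # raw material arrives
        elif message == "request_for_output_production":
            if self.inventory > 0:
                self.inventory -= 1
                self.produced += 1                 # consume material, emit a part

agents = [ProcessAgent("cutting"), ProcessAgent("assembly")]
events = ["resources_received", "clock_tick", "request_for_output_production"]

# Single-processor message loop: every discrete event is sent to every agent.
for event in events:
    for agent in agents:
        agent.handle(event)

for agent in agents:
    print(agent.name, agent.produced)
```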
Barista: A Framework for Concurrent Speech Processing by USC-SAIL.
Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G; Narayanan, Shrikanth S
2014-05-01
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista allows demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.
A gossip based information fusion protocol for distributed frequent itemset mining
NASA Astrophysics Data System (ADS)
Sohrabi, Mohammad Karim
2018-07-01
The computational complexity, huge memory requirement, and time-consuming nature of the frequent pattern mining process are the most important motivations for distributing and parallelizing this mining process. On the other hand, the emergence of distributed computational and operational environments, which leads to data being produced and maintained on different distributed data sources, makes the parallelization and distribution of the knowledge discovery process inevitable. In this paper, a gossip based distributed itemset mining (GDIM) algorithm is proposed to extract frequent itemsets, which are special types of frequent patterns, in a wireless sensor network environment. In this algorithm, local frequent itemsets of each sensor are extracted using a bit-wise horizontal approach (LHPM) from nodes that are clustered using a LEACH-based protocol. Cluster heads use a gossip based protocol to communicate with each other and find the patterns whose global support is equal to or greater than the specified support threshold. Experimental results show that the proposed algorithm outperforms the best existing gossip based algorithm in terms of execution time.
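The GDIM details are not reproduced here, but the gossip step can be illustrated generically: each cluster head holds a local support count for a candidate itemset, and repeated pairwise averaging lets every head estimate the global support. The counts and the threshold below are invented for illustration.

```python
import random

random.seed(3)

# Hypothetical local support counts of one candidate itemset at 6 cluster heads
local_support = [14, 3, 9, 0, 7, 11]
n = len(local_support)
values = [float(v) for v in local_support]

# Pairwise gossip averaging: repeatedly pick two heads and replace both values
# by their mean.  All values converge to the global average, so every head can
# estimate the global support as (average * number of heads).
for _ in range(500):
    i, j = random.sample(range(n), 2)
    m = (values[i] + values[j]) / 2.0
    values[i] = values[j] = m

min_support = 40
estimated_global = values[0] * n
print("estimated global support:", round(estimated_global, 2))
print("frequent?", estimated_global >= min_support)
```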
Where is the Battle-Line for Supply Contractors?
1999-04-01
military supply distribution system initiates, at the Theater Distribution Management Center (TMC). 3 Chapter 2 Current peacetime supply process I don’t know...terms of distribution success on the battlefield. There are three components which comprise the idea of distribution and distribution management . They...throughout the distribution pipeline. Visibility is the most essential component of distribution management . History is full of examples that prove
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution gains importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived, and maximum likelihood estimation procedures for the parameters are studied. The fit of the models to an earthquake data set from Turkey is then examined.
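For reference (the abstract does not state them), the standard one-parameter Lindley density and the corresponding survival function are:

```latex
f(x;\theta)=\frac{\theta^{2}}{\theta+1}\,(1+x)\,e^{-\theta x},\qquad
S(x;\theta)=\frac{\theta+1+\theta x}{\theta+1}\,e^{-\theta x},\qquad x>0,\ \theta>0 .
```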
NASA Astrophysics Data System (ADS)
Ehrlich, C.; Noll, G.; Kalkoff, W.-D.; Baumbach, G.; Dreiseidler, A.
Emission measurement programmes were carried out at industrial plants in several regions of Germany to determine the fine dust in the waste gases; the PM 10, PM 2.5 and PM 1.0 fractions were sampled using a cascade impactor technique. The installations tested included plants used for: combustion (brown coal, heavy fuel oil, wood), cement production, glass production, asphalt mixing, and processing plants for natural stones and sand, ceramics, metallurgy, chemical production, spray painting, wood processing/chip drying, poultry farming and waste treatment. In addition waste gas samples were taken from small-scale combustion units, like domestic stoves, firing lignite briquettes or wood. In total 303 individual measurement results were obtained during 106 different measurement campaigns. In the study it was found that in more than 70% of the individual emission measurement results from industrial plants and domestic stoves the PM 10 portion amounted to more than 90% and the PM 2.5 portion between 50% and 90% of the total PM (particulate matter) emission. For thermal industrial processes the PM 1.0 portion constituted between 20% and 60% of the total PM emission. Typical particle size distributions for different processes were presented as cumulative frequency distributions and as frequency distributions. The particle size distributions determined for the different plant types show interesting similarities and differences depending on whether the processes are thermal, mechanical, chemical or mixed. Consequently, for the groups of plant investigated, a major finding of this study has been that the particle size distribution is a characteristic of the industrial process. Attempts to correlate particle size distributions of different plants to different gas cleaning technologies did not lead to usable results.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.
2015-01-01
Our recently developed multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process has been upgraded with respect to its predictive capabilities regarding the process optimization for the attainment of maximum ballistic limit within the weld. The original model consists of six modules, each dedicated to handling a specific aspect of the GMAW process, i.e., (a) electro-dynamics of the welding gun; (b) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler metal consumable electrode to the weld; (c) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (d) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; (e) spatial distribution of the as-welded material mechanical properties; and (f) spatial distribution of the material ballistic limit. In the present work, the model is upgraded through the introduction of the seventh module in recognition of the fact that identification of the optimum GMAW process parameters relative to the attainment of the maximum ballistic limit within the weld region entails the use of advanced optimization and statistical sensitivity analysis methods and tools. The upgraded GMAW process model is next applied to the case of butt welding of MIL A46100 (a prototypical high-hardness armor-grade martensitic steel) workpieces using filler metal electrodes made of the same material. The predictions of the upgraded GMAW process model pertaining to the spatial distribution of the material microstructure and ballistic limit-controlling mechanical properties within the MIL A46100 butt weld are found to be consistent with general expectations and prior observations.
Power Aware Signal Processing Environment (PASPE) for PAC/C
2003-02-01
vs. FFT Size For our implementation , the Annapolis FFT core was radix-256, and therefore the smallest PN code length that could be processed was the...PN-64. A C- code version of correlate was compared to the FPGA 61 implementation . The results in Figure 68 show that for a PN-1024, the...12a. DISTRIBUTION / AVAILABILITY STATEMENT APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED. 12b. DISTRIBUTION CODE 13. ABSTRACT (Maximum
Work Distribution in a Fully Distributed Processing System.
1982-01-01
Institute of Technology Atlanta, Georgia 30332 THE VIEW, OPINIONS, AND/OR FINDINGS CONTAINED IN THIS REPORT ARE THOSE OF THE AUTHOR AND SHOULD NOT BE...opinions, and/or findings contained in this report are those of the author and should not be construed as an official Department of the Navy position...SECTIONO1 Distributed data processing systems are currently being studied by researchers and prospective users because of their potential for improvements
Understanding scaling through history-dependent processes with collapsing sample space.
Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan
2015-04-28
History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or aging processes such as in fragmentation processes. SSR processes provide a new alternative to understand the origin of scaling in complex systems without the recourse to multiplicative, preferential, or self-organized critical processes.
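The core SSR mechanism is easy to reproduce numerically: starting from a sample space of size N, each step jumps uniformly to a strictly smaller state, and the visiting frequencies of the states follow Zipf's law, p(x) ~ 1/x. A minimal noise-free sketch, with N and the number of runs chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000            # initial sample-space size
runs = 20_000
visits = np.zeros(N + 1)

# Sample-space-reducing process: from state x jump uniformly to one of the
# states 1..x-1, so the set of possible outcomes shrinks as the process ages.
for _ in range(runs):
    x = N
    while x > 1:
        x = rng.integers(1, x)      # uniform on {1, ..., x-1}
        visits[x] += 1

# Visit frequencies approximately follow Zipf's law, visits[x] ~ runs / x
ranks = np.arange(1, 11)
print("state           :", ranks)
print("observed visits :", visits[1:11].astype(int))
print("1/x prediction  :", (runs / ranks).astype(int))
```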
NASA Astrophysics Data System (ADS)
Scheu, B.; Fowler, A. C.
2015-12-01
Fragmentation is a ubiquitous phenomenon in many natural and engineering systems. It is the process by which an initially competent medium, solid or liquid, is broken up into a population of constituents. Examples occur in collisions and impacts of asteroids/meteorites, explosion-driven fragmentation of munitions on a battlefield, fragmentation of magma in a volcanic conduit causing explosive volcanic eruptions, and break-up of liquid drops. Besides the mechanism of fragmentation, the resulting frequency-size distribution of the generated constituents is of central interest. Initially, such distributions were fitted empirically using lognormal, Rosin-Rammler and Weibull distributions (e.g. Brown & Wohletz 1995). The sequential fragmentation theory (Brown 1989, Wohletz et al. 1989, Wohletz & Brown 1995) and the application of fractal theory to fragmentation products (Turcotte 1986, Perfect 1997, Perugini & Kueppers 2012) attempt to overcome this shortcoming by providing a more physical basis for the applied distribution. Both rely on an at least partially scale-invariant and thus self-similar random fragmentation process. Here we provide a stochastic model for the evolution of grain size distribution during the explosion process. Our model is based on laboratory experiments in which volcanic rock samples explode naturally when rapidly depressurized from initial pressures of several MPa to ambient conditions. The physics governing this fragmentation process has been successfully modelled and the observed fragmentation pattern could be numerically reproduced (Fowler et al. 2010). The fragmentation of these natural rocks leads to grain size distributions which vary depending on the experimental starting conditions. Our model provides a theoretical description of these different grain size distributions. It combines a sequential model of the type outlined by Turcotte (1986), generalized to cater for the explosive process considered here, with a recipe for the production of fines, as observed in the experiments, incorporated into the description of the fracturing events in which the rock fragments. To our knowledge, this implementation of a deterministic fracturing process within a stochastic (sequential) model is unique; furthermore, it provides the model with some forecasting power.
Studies of the General Parton Distributions.
NASA Astrophysics Data System (ADS)
Goloskokov, Sergey
2017-12-01
We discuss the possibility of studying processes induced by Generalized Parton Distributions (GPDs) using polarized beams at NICA. We show that important information on GPD structure can be obtained at NICA in exclusive meson production and in the Drell-Yan (D-Y) process, which is determined by the double GPD contribution.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
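The same batch-processing pattern, independent stochastic realizations farmed out to a pool of workers, can be sketched in a few lines of Python; the realization function here is a stand-in for an actual MODFLOW run, and the field parameters are hypothetical.

```python
import numpy as np
from multiprocessing import Pool

def run_realization(seed: int) -> float:
    """Stand-in for one stochastic groundwater model run (e.g. a MODFLOW call).
    It draws a random log-conductivity field and returns a summary statistic,
    so only the distribution-of-work pattern is illustrated."""
    rng = np.random.default_rng(seed)
    logK = rng.normal(loc=-4.0, scale=1.0, size=(50, 50))   # hypothetical field
    return float(np.exp(logK).mean())

if __name__ == "__main__":
    n_realizations = 500
    with Pool(processes=10) as pool:                 # distribute realizations over workers
        results = pool.map(run_realization, range(n_realizations))
    print("ensemble mean of mean K:", np.mean(results))
```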
Spatial analysis of extension fracture systems: A process modeling approach
Ferguson, C.C.
1985-01-01
Little consensus exists on how best to analyze natural fracture spacings and their sequences. Field measurements and analyses published in geotechnical literature imply fracture processes radically different from those assumed by theoretical structural geologists. The approach adopted in this paper recognizes that disruption of rock layers by layer-parallel extension results in two spacing distributions, one representing layer-fragment lengths and another separation distances between fragments. These two distributions and their sequences reflect mechanics and history of fracture and separation. Such distributions and sequences, represented by a 2 × n matrix of lengths L, can be analyzed using a method that is history sensitive and which also yields a scalar estimate of bulk extension, e(L). The method is illustrated by a series of Monte Carlo experiments representing a variety of fracture-and-separation processes, each with distinct implications for extension history. Resulting distributions of e(L) are process-specific, suggesting that the inverse problem of deducing fracture-and-separation history from final structure may be tractable. © 1985 Plenum Publishing Corporation.
Zivoli, Rosanna; Gambacorta, Lucia; Perrone, Giancarlo; Solfrizzo, Michele
2014-06-18
The fate of aflatoxins during processing of contaminated almonds into nougat, pastries, and almond syrup was evaluated by testing the effect of each processing step (blanching, peeling, roasting, caramelization, cooking, and water infusion) on the distribution and levels of aflatoxins. Blanching and peeling did not reduce total aflatoxins that were distributed between peeled almonds (90-93%) and skins (7-10%). Roasting of peeled almonds reduced up to 50% of aflatoxins. Up to 70% reduction of aflatoxins was observed during preparation and cooking of almond nougat in caramelized sugar. Aflatoxins were substantially stable during preparation and cooking of almond pastries. The whole process of almond syrup preparation produced a marked increase of total aflatoxins (up to 270%) that were distributed between syrup (18-25%) and spent almonds (75-82%). The increase of total aflatoxins was probably due to the activation of almond enzymes during the infusion step that released free aflatoxins from masked aflatoxins.
Brightness field distributions of microlens arrays using micro molding.
Cheng, Hsin-Chung; Huang, Chiung-Fang; Lin, Yi; Shen, Yung-Kang
2010-12-20
This study describes the brightness field distributions of microlens arrays fabricated by micro injection molding (μIM) and micro injection-compression molding (μICM). The process for fabricating microlens arrays used room-temperature imprint lithography, photoresist reflow, electroforming, μIM, μICM, and optical properties measurement. Analytical results indicate that the brightness field distribution of the molded microlens arrays generated by μICM is better than those made using μIM. Our results further demonstrate that mold temperature is the most important processing parameter for brightness field distribution of molded microlens arrays made by μIM or μICM.
A Spatiotemporal Clustering Approach to Maritime Domain Awareness
2013-09-01
1997. [25] M. E. Celebi, “Effective initialization of k-means for color quantization,” 16th IEEE International Conference on Image Processing (ICIP...release; distribution is unlimited 12b. DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) Spatiotemporal clustering is the process of grouping...Department of Electrical and Computer Engineering iv THIS PAGE INTENTIONALLY LEFT BLANK v ABSTRACT Spatiotemporal clustering is the process of
ERIC Educational Resources Information Center
Feickert, Joan Davis; And Others
A study evaluated the use of the request for proposal (RFP) process as a method of distributing federal vocational education and job training funds in Minnesota. Thirty-seven employees of Minnesota technical colleges, community-based organizations, service delivery areas, and state agencies who had actually prepared proposals requesting Job…
The model of drugs distribution dynamics in biological tissue
NASA Astrophysics Data System (ADS)
Ginevskij, D. A.; Izhevskij, P. V.; Sheino, I. N.
2017-09-01
The dose distribution in Neutron Capture Therapy follows the distribution of 10B in the tissue. Modern pharmacokinetic models of drugs describe the processes occurring in notional "chambers" (blood-organ-tumor) but fail to describe the spatial distribution of the drug in the tumor and in normal tissue. A mathematical model of the spatial distribution dynamics of drugs in the tissue, depending on the concentration of the drug in the blood, was developed. The modeling method represents the biological structure as a randomly inhomogeneous medium in which the 10B distribution occurs. The parameters of the model that cannot be determined rigorously by experiment are treated as quantities governed by the laws of independent random processes. Estimates of the distribution of 10B preparations in the tumor and healthy tissue, inside and outside the cells, are obtained.
NASA Astrophysics Data System (ADS)
Palm, Juliane; Klaus, Julian; van Schaik, Loes; Zehe, Erwin; Schröder, Boris
2010-05-01
Soils provide central ecosystem functions in recycling nutrients, detoxifying harmful chemicals as well as regulating microclimate and local hydrological processes. The internal regulation of these functions and therefore the development of healthy and fertile soils mainly depend on the functional diversity of plants and animals. Soil organisms drive essential processes such as litter decomposition, nutrient cycling, water dynamics, and soil structure formation. Disturbances by different soil management practices (e.g., soil tillage, fertilization, pesticide application) affect the distribution and abundance of soil organisms and hence influence regulating processes. The strong relationship between environmental conditions and soil organisms gives us the opportunity to link spatiotemporal distribution patterns of indicator species with the potential provision of essential soil processes on different scales. Earthworms are key organisms for soil function and affect, among other things, water dynamics and solute transport in soils. Through their burrowing activity, earthworms increase the number of macropores by building semi-permanent burrow systems. In the unsaturated zone, earthworm burrows act as preferential flow pathways and affect water infiltration, surface-, subsurface- and matrix flow as well as the transport of water and solutes into deeper soil layers. Thereby different ecological earthworm types have different importance. Deep burrowing anecic earthworm species (e.g., Lumbricus terrestris) affect the vertical flow and thus increase the risk of potential contamination of ground water with agrochemicals. In contrast, horizontal burrowing endogeic (e.g., Aporrectodea caliginosa) and epigeic species (e.g., Lumbricus rubellus) increase water conductivity and the diffuse distribution of water and solutes in the upper soil layers. The question which processes are more relevant is pivotal for soil management and risk assessment. Thus, finding relevant environmental predictors which explain the distribution and dynamics of different ecological earthworm types can help us to understand where or when these processes are relevant in the landscape. Therefore, we develop species distribution models which are a useful tool to predict spatiotemporal distributions of earthworm occurrence and abundance under changing environmental conditions. On field scale, geostatistical distribution maps have shown that the spatial distribution of earthworms depends on soil parameters such as food supply, soil moisture, bulk density but with different patterns for earthworm stages (adult, juvenile) and ecological types (anecic, endogeic, epigeic). On landscape scales, earthworm distribution seems to be strongly controlled by management/disturbance-related factors. Our study shows different modelling approaches for predicting distribution patterns of earthworms in the Weiherbach area, an agricultural site in Kraichtal (Baden-Württemberg, Germany). We carried out field studies on arable fields differing in soil management practices (conventional, conservational), soil properties (organic matter content, texture, soil moisture), and topography (slope, elevation) in order to identify predictors for earthworm occurrence, abundance and biomass. Our earthworm distribution models consider all ecological groups as well as different life stages, accounting for the fact that the activity of juveniles is sometimes different from those of adults. 
Within our BIOPORE-project it is our final goal to couple our distribution models with population dynamic models and a preferential flow model to an integrated ecohydrological model to analyse feedbacks between earthworm engineering and transport characteristics affecting the functioning of (agro-) ecosystems.
2008-03-01
WVD Wigner - Ville Distribution xiv THIS PAGE INTENTIONALLY LEFT BLANK xv ACKNOWLEDGMENTS Many thanks to David Caliga of SRC Computer for his...11 2. Wigner - Ville Distribution .................................................................11 3. Choi-Williams... Ville Distribution ...................................12 Table 3. C Code Output for Wigner - Ville Distribution
Ligament Mediated Fragmentation of Viscoelastic Liquids
NASA Astrophysics Data System (ADS)
Keshavarz, Bavand; Houze, Eric C.; Moore, John R.; Koerner, Michael R.; McKinley, Gareth H.
2016-10-01
The breakup and atomization of complex fluids can be markedly different than the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_{min}=4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which can not be exceeded in ligament-mediated fragmentation phenomena.
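For reference, the Gamma form commonly used in the ligament-mediated fragmentation literature for drop sizes normalized by the mean, x = d/<d>, is shown below; the abstract's n_min = 4 sets the geometrical limit on how broad the distribution can become.

```latex
P_{n}(x)=\frac{n^{n}}{\Gamma(n)}\,x^{\,n-1}e^{-n x},\qquad x=\frac{d}{\langle d\rangle},\quad n\ge n_{\min}=4 .
```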
Ligament Mediated Fragmentation of Viscoelastic Liquids.
Keshavarz, Bavand; Houze, Eric C; Moore, John R; Koerner, Michael R; McKinley, Gareth H
2016-10-07
The breakup and atomization of complex fluids can be markedly different than the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_{min}=4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which can not be exceeded in ligament-mediated fragmentation phenomena.
Beowulf Distributed Processing and the United States Geological Survey
Maddox, Brian G.
2002-01-01
Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing technology. It will describe the benefits of the technology. Real data about a distributed application will be presented as an example of the benefits that this technology can bring to USGS scientific programs. Finally, some of the issues with distributed processing that relate to USGS work will be discussed.
Light-induced electronic non-equilibrium in plasmonic particles.
Kornbluth, Mordechai; Nitzan, Abraham; Seideman, Tamar
2013-05-07
We consider the transient non-equilibrium electronic distribution that is created in a metal nanoparticle upon plasmon excitation. Following light absorption, the created plasmons decohere within a few femtoseconds, producing uncorrelated electron-hole pairs. The corresponding non-thermal electronic distribution evolves in response to the photo-exciting pulse and to subsequent relaxation processes. First, on the femtosecond timescale, the electronic subsystem relaxes to a Fermi-Dirac distribution characterized by an electronic temperature. Next, within picoseconds, thermalization with the underlying lattice phonons leads to a hot particle in internal equilibrium that subsequently equilibrates with the environment. Here we focus on the early stage of this multistep relaxation process, and on the properties of the ensuing non-equilibrium electronic distribution. We consider the form of this distribution as derived from the balance between the optical absorption and the subsequent relaxation processes, and discuss its implication for (a) heating of illuminated plasmonic particles, (b) the possibility to optically induce current in junctions, and (c) the prospect for experimental observation of such light-driven transport phenomena.
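For reference, the intermediate quasi-equilibrium reached after electron-electron scattering is the Fermi-Dirac distribution with an elevated electronic temperature T_e (chemical potential μ):

```latex
f(E)=\frac{1}{\exp\!\left[(E-\mu)/k_{\mathrm B}T_{e}\right]+1} .
```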
Distributed processing method for arbitrary view generation in camera sensor network
NASA Astrophysics Data System (ADS)
Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki
2003-05-01
A camera sensor network is a recently emerged type of network in which each sensor node can capture video signals, process them, and communicate them with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we have distributed the processing tasks among the nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image, with local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods are proposed for distributing the processing tasks and sharing image data locally: Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP). Comparison of the proposed methods with the Centralized Processing (CP) method shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.
Design alternatives for process group membership and multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.
USDA-ARS?s Scientific Manuscript database
Soil surface roughness significantly impacts runoff and erosion under rainfall. Few previous studies on runoff generation focused on the effects of soil surface roughness on the sediment particle size distribution (PSD), which greatly affects interrill erosion and sedimentation processes. To address...
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-19
..., including warehousing and distribution; research and development; technology manufacturing; food processing... warehousing and distribution; research and development; technology manufacturing; food processing and... defense manufacturing, sensor manufacturing, or medical devices; (iv) Food/Agriculture--such as wine, food...
Manufacturing, Marketing and Distribution, Business and Office Occupations: Grade 8. Cluster III.
ERIC Educational Resources Information Center
Calhoun, Olivia H.
A curriculum guide for grade 8, the document is divided into eleven units: marketing and distribution; food manufacturing; data processing and automation; administration, management, and labor; secretarial and clerical services; office machines; equipment; metal manufacturing and processing; prefabrication and prepackaging; textile and clothing…
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (eg least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
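As a concrete illustration of the idea (not taken from the primer), conditional quantiles can be estimated by minimizing the asymmetric "pinball" loss; the simulated data below mimic a heterogeneous ecological response whose upper quantiles depend on the predictor more strongly than the mean does.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical data: a limiting factor x and a response y with heterogeneous errors
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.exponential(scale=0.2 + 0.3 * x)

def pinball_loss(beta, x, y, tau):
    """Asymmetric absolute loss whose minimiser is the tau-th conditional quantile."""
    resid = y - (beta[0] + beta[1] * x)
    return np.mean(np.where(resid >= 0, tau * resid, (tau - 1) * resid))

for tau in (0.5, 0.9):
    fit = minimize(pinball_loss, x0=np.zeros(2), args=(x, y, tau), method="Nelder-Mead")
    print(f"tau={tau}: intercept={fit.x[0]:.2f}, slope={fit.x[1]:.2f}")
```

With heterogeneous errors the fitted slope at tau = 0.9 exceeds the median slope, which is exactly the kind of information a regression on the mean alone would miss.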
Where Might We Be Headed? Signposts from Other States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiter, Emerson
2017-04-07
Presentation on the state of distributed energy resources interconnection in Wisconsin from the Wisconsin Distributed Resources Collaborative (WIDRC) Interconnection Forum for Distributed Generation. It addresses concerns over application submission and processing, lack of visibility into the distribution system, and uncertainty in upgrade costs.
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, covering the model, its implementation challenges, and the types of applications to which ISIS has been applied.
Variety of Sedimentary Process and Distribution of Tsunami Deposits in Laboratory Experiments
NASA Astrophysics Data System (ADS)
Yamaguchi, N.; Sekiguchi, T.
2017-12-01
As an indicator of the history and magnitude of paleotsunami events, tsunami deposits have received considerable attention. To improve the identification and interpretation of paleotsunami deposits, an understanding of sedimentary process and distribution of tsunami deposits is crucial. Recent detailed surveys of onshore tsunami deposits including the 2004 Indian Ocean tsunami and the 2011 Tohoku-oki tsunami have revealed that terrestrial topography causes a variety of their features and distributions. Therefore, a better understanding of possible sedimentary process and distribution on such influential topographies is required. Flume experiments, in which sedimentary conditions can be easily controlled, can provide insights into the effects of terrestrial topography as well as tsunami magnitude on the feature of tsunami deposits. In this presentation, we report laboratory experiments that focused on terrestrial topography including a water body (e.g. coastal lake) on a coastal lowland and a cliff. In both cases, the results suggested relationship between the distribution of tsunami deposits and the hydraulic condition of the tsunami flow associated with the terrestrial topography. These experiments suggest that influential topography would enhance the variability in thickness of tsunami deposits, and thus, in reconstructions of paleotsunami events using sedimentary records, we should take into account such anomalous distribution of tsunami deposits. Further examination of the temporal sequence of sedimentary process in laboratory tsunamis may improve interpretation and estimation of paleotsunami events.
Time-evolution of grain size distributions in random nucleation and growth crystallization processes
NASA Astrophysics Data System (ADS)
Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.
2010-02-01
We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
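For reference, the lognormal form referred to above is, up to normalization (the parameters μ and σ are not specified in the abstract):

```latex
N(r)\;\propto\;\frac{1}{r\,\sigma\sqrt{2\pi}}\,
\exp\!\left[-\frac{\left(\ln r-\mu\right)^{2}}{2\sigma^{2}}\right].
```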
NASA Astrophysics Data System (ADS)
Queirós, S. M. D.; Tsallis, C.
2005-11-01
The GARCH algorithm is the most renowned generalisation of Engle's original proposal for modelling returns, the ARCH process. Both cases are characterised by a time-dependent and correlated variance or volatility. Besides a memory parameter, b (present in ARCH), and an independent and identically distributed noise, ω, GARCH involves another parameter, c, such that, for c = 0, the standard ARCH process is reproduced. In this manuscript we use a generalised noise following a distribution characterised by an index q_n, such that q_n = 1 recovers the Gaussian distribution. Matching low statistical moments of the GARCH distribution for returns with a q-Gaussian distribution obtained by maximising the entropy S_q = (1 - Σ_i p_i^q)/(q - 1), the basis of nonextensive statistical mechanics, we obtain a single analytical connection between q and (b, c, q_n) which turns out to be remarkably good when compared with computational simulations. With this result we also derive an analytical approximation for the stationary distribution of the (squared) volatility. Using a generalised Kullback-Leibler relative entropy form based on S_q, we also analyse the degree of dependence between successive returns, z_t and z_{t+1}, of GARCH(1,1) processes. This degree of dependence is quantified by an entropic index, q_op. Our analysis points to the existence of a unique relation between the three entropic indexes q_op, q and q_n of the problem, independent of the value of (b, c).
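A minimal GARCH(1,1) simulation with Gaussian noise (the q_n = 1 case) illustrates the time-dependent, correlated volatility and the resulting fat-tailed return distribution; the parameter values are arbitrary and the notation (a, b, c) only loosely follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# sigma2[t] = a + b * z[t-1]**2 + c * sigma2[t-1]; c = 0 recovers ARCH(1)
a, b, c = 0.05, 0.1, 0.85
T = 10_000
z = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = a / (1.0 - b - c)          # unconditional variance as a start value

for t in range(1, T):
    sigma2[t] = a + b * z[t - 1] ** 2 + c * sigma2[t - 1]
    z[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Heavy tails of the return distribution relative to a Gaussian
kurtosis = np.mean(z ** 4) / np.mean(z ** 2) ** 2
print("excess kurtosis:", kurtosis - 3.0)
```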
NASA Astrophysics Data System (ADS)
Yang, Tianzu; Xiao, Hui; Chen, Lin; Chen, Wei; Liu, Weifeng; Zhang, Duchao
2018-03-01
Oxygen-rich side-blow bath smelting (OSBS) technology offers an efficient method for processing complex bismuth-lead concentrates; however, the element distributions in the process remain unclear. This work determined the distributions of elements, i.e., bismuth, lead, silver, copper, arsenic and antimony, in an industrial-scale OSBS process. The feed, oxidized slag and final products were collected from the respective sampling points and analyzed. For the oxidative smelting process, 65% of bismuth and 76% of silver in the concentrate report to the metal alloy, whereas less lead reports to the metal (~31%) than the oxidized slag (~44%). Approximately 50% of copper enters the matte, while more than 63% of arsenic and antimony report to the slag. For the reductive smelting process, less than 4.5% of bismuth, lead, silver and copper in the oxidized slag enter the reduced slag, indicating high recoveries of these metal values.
Baldovin-Stella stochastic volatility process and Wiener process mixtures
NASA Astrophysics Data System (ADS)
Peirano, P. P.; Challet, D.
2012-08-01
Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power law-truncated Lévy distributions and show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
Wave scheduling - Decentralized scheduling of task forces in multicomputers
NASA Technical Reports Server (NTRS)
Van Tilborg, A. M.; Wittie, L. D.
1984-01-01
Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
NASA Astrophysics Data System (ADS)
Yang, Tianzu; Xiao, Hui; Chen, Lin; Chen, Wei; Liu, Weifeng; Zhang, Duchao
2018-06-01
Oxygen-rich side-blow bath smelting (OSBS) technology offers an efficient method for processing complex bismuth-lead concentrates; however, the element distributions in the process remain unclear. This work determined the distributions of elements, i.e., bismuth, lead, silver, copper, arsenic and antimony, in an industrial-scale OSBS process. The feed, oxidized slag and final products were collected from the respective sampling points and analyzed. For the oxidative smelting process, 65% of bismuth and 76% of silver in the concentrate report to the metal alloy, whereas less lead reports to the metal (~31%) than the oxidized slag (~44%). Approximately 50% of copper enters the matte, while more than 63% of arsenic and antimony report to the slag. For the reductive smelting process, less than 4.5% of bismuth, lead, silver and copper in the oxidized slag enter the reduced slag, indicating high recoveries of these metal values.
Species, functional groups, and thresholds in ecological resilience
Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris
2012-01-01
The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.
Hommel, Johannes; Lauchnor, Ellen; Gerlach, Robin; ...
2015-12-16
Attachment of bacteria in porous media is a complex mixture of processes resulting in the transfer and immobilization of suspended cells onto a solid surface within the porous medium. However, quantifying the rate of attachment is difficult due to the many simultaneous processes possibly involved in attachment, including straining, sorption, and sedimentation, and the difficulties in measuring metabolically active cells attached to porous media. Preliminary experiments confirmed the difficulty associated with measuring active Sporosarcina pasteurii cells attached to porous media. However, attachment is a key process in applications of biofilm-mediated reactions in the subsurface such as microbially induced calcite precipitation. Independent of the exact processes involved, attachment determines both the distribution and the initial amount of attached biomass and as such the initial reaction rate. As direct experimental investigations are difficult, this study is limited to a numerical investigation of the effect of various initial biomass distributions and initial amounts of attached biomass. This is performed for various injection strategies, changing the injection rate as well as alternating between continuous and pulsed injections. The results of this study indicate that, for the selected scenarios, both the initial amount and the distribution of attached biomass have minor influence on the Ca2+ precipitation efficiency as well as the distribution of the precipitates compared to the influence of the injection strategy. The influence of the initial biomass distribution on the resulting final distribution of the precipitated calcite is limited, except for the continuous injection at intermediate injection rate. But even for this injection strategy, the Ca2+ precipitation efficiency shows no significant dependence on the initial biomass distribution.
Building Big Flares: Constraining Generating Processes of Solar Flare Distributions
NASA Astrophysics Data System (ADS)
Wyse Jackson, T.; Kashyap, V.; McKillop, S.
2015-12-01
We address mechanisms which seek to explain the observed solar flare distribution, dN/dE ~ E^-1.8. We have compiled a comprehensive database, from GOES, NOAA, XRT, and AIA data, of solar flares and their characteristics, covering the year 2013. These datasets allow us to probe how stored magnetic energy is released over the course of an active region's evolution. We fit power-laws to flare distributions over various attribute groupings. For instance, we compare flares that occur before and after an active region reaches its maximum area, and show that the corresponding flare distributions are indistinguishable; thus, the processes that lead to magnetic reconnection are similar in both cases. A turnover in the distribution is not detectable at the energies accessible to our study, suggesting that a self-organized critical (SOC) process is a valid mechanism. However, we find changes in the distributions that suggest that the simple picture of an SOC where flares draw energy from an inexhaustible reservoir of stored magnetic energy is incomplete. Following the evolution of the flare distribution over the lifetimes of active regions, we find that the distribution flattens with time, and for larger active regions, and that a single power-law model is insufficient. This implies that flares that occur later in the lifetime of the active region tend towards higher energies. We conclude that the SOC process must have an upper bound. Increasing the scope of the study to include data from other years and more instruments will increase the robustness of these results. This work was supported by the NSF-REU Solar Physics Program at SAO, grant number AGS 1263241, NASA Contract NAS8-03060 to the Chandra X-ray Center and by NASA Hinode/XRT contract NNM07AB07C to SAO.
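For instance, power-law fits of this kind can be obtained with the standard continuous maximum-likelihood estimator (Clauset et al. 2009). The sketch below is illustrative only: the energies are synthetic draws, and the lower cutoff e_min is an assumed value rather than anything taken from the GOES/XRT/AIA database.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true, e_min = 1.8, 1e27            # assumed slope and lower energy cutoff (erg)

# Draw synthetic flare energies from dN/dE ~ E^-alpha for E >= e_min (inverse-CDF sampling).
u = rng.random(5000)
energies = e_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Continuous maximum-likelihood estimate of the power-law slope and its standard error.
alpha_hat = 1.0 + len(energies) / np.sum(np.log(energies / e_min))
alpha_err = (alpha_hat - 1.0) / np.sqrt(len(energies))
print(f"fitted slope: {alpha_hat:.3f} +/- {alpha_err:.3f}")
```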
NASA Technical Reports Server (NTRS)
Hung, Ching-Chen (Inventor)
1999-01-01
A process for providing elemental metals or metal oxides distributed on a carbon substrate or self-supported utilizing graphite oxide as a precursor. The graphite oxide is exposed to one or more metal chlorides to form an intermediary product comprising carbon, metal, chloride, and oxygen. This intermediary product can be further processed by direct exposure to carbonate solutions to form a second intermediary product comprising carbon, metal carbonate, and oxygen. Either intermediary product may be further processed: a) in air to produce metal oxide; b) in an inert environment to produce metal oxide on carbon substrate; c) in a reducing environment to produce elemental metal distributed on carbon substrate. The product generally takes the shape of the carbon precursor.
NASA Technical Reports Server (NTRS)
Hung, Ching-Cheh (Inventor)
1999-01-01
A process for providing elemental metals or metal oxides distributed on a carbon substrate or self-supported utilizing graphite oxide as a precursor. The graphite oxide is exposed to one or more metal chlorides to form an intermediary product comprising carbon, metal, chloride, and oxygen. This intermediary product can be further processed by direct exposure to carbonate solutions to form a second intermediary product comprising carbon, metal carbonate, and oxygen. Either intermediary product may be further processed: a) in air to produce metal oxide; b) in an inert environment to produce metal oxide on carbon substrate; c) in a reducing environment to produce elemental metal distributed on carbon substrate. The product generally takes the shape of the carbon precursor.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
The Effect of Considering Environmental Aspect to Distribution Planning: A Case in Logistics SME
NASA Astrophysics Data System (ADS)
Prambudia, Yudha; Andrian Nur, Andri
2016-01-01
The environmental aspect is often neglected in the traditional distribution planning process for a product, especially in small and medium enterprises (SMEs) in developing countries, where cost efficiency is the predominant factor. Bearing in mind that a large number of SMEs perform logistics activities, considering the environmental aspect in their distribution planning process would benefit climate change mitigation efforts. The purpose of this paper is to show the impact of the environmental aspect should it be considered as a contributing factor in distribution planning. In this research, the adoption of a CO2-emission factor in an SME's distribution planning in Indonesia was simulated. The outputs of distribution planning with and without this factor are then compared. The result shows that adopting a CO2-emission factor would change the priority of delivery routes.
Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S
2018-09-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamic model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m year^-1 under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation time, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Cluster analysis for determining distribution center location
NASA Astrophysics Data System (ADS)
Lestari Widaningrum, Dyah; Andika, Aditya; Murphiyanto, Richard Dimas Julian
2017-12-01
Determination of distribution facilities is highly important to survive in the high level of competition in today's business world. Companies can operate multiple distribution centers to mitigate supply chain risk. Thus, new problems arise, namely how many facilities should be provided and where. This study examines a fast-food restaurant brand located in Greater Jakarta. This brand is included in the category of the top 5 fast-food restaurant chains based on retail sales. There were three stages in this study: compiling spatial data, cluster analysis, and network analysis. Cluster analysis results are used to consider the location of the additional distribution center. Network analysis results show a more efficient process, referring to a shorter distance in the distribution process.
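A minimal sketch of the cluster-analysis stage, assuming synthetic outlet coordinates and plain Euclidean distances (the actual study would use the real outlet locations and road-network distances from the network-analysis stage): each cluster centre is taken as a candidate distribution-centre site.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
outlets = rng.uniform(low=[106.6, -6.4], high=[107.0, -6.1], size=(60, 2))   # toy (lon, lat) outlets

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(outlets)
candidate_dcs = km.cluster_centers_                    # candidate distribution-centre sites

# Rough efficiency measure: mean straight-line distance from outlets to their assigned centre.
mean_dist = np.linalg.norm(outlets - candidate_dcs[km.labels_], axis=1).mean()
print(candidate_dcs)
print(mean_dist)
```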
Evidence of Chemical Cloud Processing from In Situ Measurements in the Polluted Marine Environment
NASA Astrophysics Data System (ADS)
Hudson, J. G.; Noble, S. R., Jr.
2017-12-01
Chemical cloud processing alters activated cloud condensation nuclei (CCN). Aqueous oxidation of trace gases dissolved within cloud droplets adds soluble material. As most cloud droplets evaporate, the residual material produces CCN that are larger and with a different hygroscopicity (κ). This improves the CCN, lowering the critical supersaturation (Sc) and making them more easily activated. This process separates the processed (accumulation) and unprocessed (Aitken) modes, creating bimodal CCN distributions (Hudson et al., 2015). Various measurements made during the MArine Stratus/stratocumulus Experiment (MASE), including CCN, exhibited aqueous processing signals. Particle size distributions, measured by a differential mobility analyzer, were compared with CCN distributions, measured by the Desert Research Institute CCN spectrometer, by converting size to Sc using κ to overlay concurrent distributions. By tuning each mode to the best agreement, κ for each mode is determined: processed κ (κp) and unprocessed κ (κu). In MASE, 59% of bimodal distributions had different κ for the two modes, indicating dominance of chemical processing via aqueous oxidation. This is consistent with Hudson et al. (2015). Figure 1A also indicates chemical processing, with larger κp between 0.35 and 0.75. Processed CCN had an influx of soluble material from aqueous oxidation, which increased κp versus κu. Above 0.75, κp is lower than κu (Fig. 1A). When κu is high and sulfate material is added, κp tends towards κ of the added material. Thus, κp is reduced by additional material that is less soluble than the original material. Chemistry measurements in MASE also indicate in-cloud aqueous oxidation (Fig. 1B and 1C). Higher fractions of CCN concentrations in the processed mode are also associated with larger amounts of sulfates (Fig. 1B, red) and nitrates (Fig. 1C, orange), while SO2 (Fig. 1B, black) and O3 (Fig. 1C, blue) have lower amounts. This larger amount of sulfate is at the expense of SO2, indicating that aqueous oxidation within cloud is associated with larger concentrations in the processed mode. Thus, in situ measurements indicate that chemical cloud processing alters the size, Sc and κ of activated CCN. Hudson et al. (2015), JGRA, 120, 3436-3452.
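The size-to-Sc conversion used to overlay the two kinds of spectra follows κ-Köhler theory (Petters and Kreidenweis, 2007). The sketch below is a generic illustration, not the instrument processing code; the temperature, water constants, and example diameters and κ values are assumptions.

```python
import numpy as np

def critical_supersaturation(d_dry_m, kappa, temp_k=293.15):
    """Critical supersaturation Sc (%) for a dry diameter (m) and hygroscopicity kappa."""
    sigma_w, m_w, rho_w, r_gas = 0.072, 0.018015, 997.0, 8.314   # approximate SI water constants
    a = 4.0 * sigma_w * m_w / (r_gas * temp_k * rho_w)           # Kelvin term, roughly 2 nm
    s_crit = np.exp(np.sqrt(4.0 * a**3 / (27.0 * kappa * d_dry_m**3)))
    return (s_crit - 1.0) * 100.0

diameters = np.array([50e-9, 100e-9, 200e-9])                    # example dry diameters (m)
for kappa in (0.3, 0.6):                                         # illustrative unprocessed/processed kappa
    print(kappa, critical_supersaturation(diameters, kappa))
```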
Planning Systems for Distributed Operations
NASA Technical Reports Server (NTRS)
Maxwell, Theresa G.
2002-01-01
This viewgraph representation presents an overview of the mission planning process involving distributed operations (such as the International Space Station (ISS)) and the computer hardware and software systems needed to support such an effort. Topics considered include: evolution of distributed planning systems, ISS distributed planning, the Payload Planning System (PPS), future developments in distributed planning systems, Request Oriented Scheduling Engine (ROSE) and Next Generation distributed planning systems.
Slow diffusion by Markov random flights
NASA Astrophysics Data System (ADS)
Kolesnik, Alexander D.
2018-06-01
We present a conception of slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some Euclidean spaces of low dimensions are presented.
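A toy simulation, not the paper's analytical treatment, of a planar random flight: the particle moves with a small constant speed and picks a fresh direction at the events of a homogeneous Poisson process with a small rate, so that the long-horizon displacement distribution can be inspected empirically. The speed, rate, and horizon below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
speed, rate, t_end, n_paths = 0.05, 0.1, 2000.0, 500   # small speed and small Poisson rate (assumed)

positions = np.zeros((n_paths, 2))
for k in range(n_paths):
    t, theta = 0.0, rng.uniform(0.0, 2.0 * np.pi)
    while t < t_end:
        dt = min(rng.exponential(1.0 / rate), t_end - t)          # time to next Poisson event
        positions[k] += speed * dt * np.array([np.cos(theta), np.sin(theta)])
        t += dt
        theta = rng.uniform(0.0, 2.0 * np.pi)                     # new direction at the event

# Empirical long-horizon displacement distribution of the random flight.
print(positions.mean(axis=0), positions.std(axis=0))
```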
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and... proposes to revise its standards to reflect changes and updates for Express Mail® Open and Distribute and Priority Mail® Open and Distribute to improve efficiencies in processing and to control...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-24
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and... to reflect changes and updates for Express Mail® Open and Distribute and Priority Mail® Open and Distribute to improve efficiencies in processing and to control costs. DATES: Effective Date...
Hadoop neural network for parallel and distributed feature selection.
Hodge, Victoria J; O'Keefe, Simon; Austin, Jim
2016-06-01
In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector and the actual features to select to be identified for large and high dimensional data sets through exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
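The subtask idea, in which a feature selector is split into independent pieces that are scored in parallel, can be illustrated outside Hadoop with ordinary Python multiprocessing. This is only a toy stand-in for the associative-memory/YARN framework: the data are synthetic and mutual information plays the role of a generic per-feature score.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.feature_selection import mutual_info_classif

def score_block(args):
    """Score one block of features; each call is an independent subtask."""
    x_block, y, offset = args
    scores = mutual_info_classif(x_block, y, random_state=0)
    return [(offset + j, s) for j, s in enumerate(scores)]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.normal(size=(500, 40))
    y = (x[:, 3] + x[:, 17] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    blocks = [(x[:, i:i + 10], y, i) for i in range(0, 40, 10)]   # four subtasks of ten features each
    with ProcessPoolExecutor(max_workers=4) as pool:
        scores = [item for part in pool.map(score_block, blocks) for item in part]

    print(sorted(scores, key=lambda t: t[1], reverse=True)[:5])   # features 3 and 17 should rank highly
```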
Application of ideal pressure distribution in development process of automobile seats.
Kilincsoy, U; Wagner, A; Vink, P; Bubb, H
2016-07-19
In designing a car seat, the ideal pressure distribution is important, as the seat is the largest contact surface between the human and the car. Because obstacles hinder a more general application of the ideal pressure distribution in seating design, multidimensional measuring techniques combined with extensive user tests are necessary. The objective of this study is to apply and integrate the knowledge about the ideal pressure distribution in the seat design process for a car manufacturer in an efficient way. The ideal pressure distribution was combined with pressure measurement, in this case pressure mats. In order to integrate this theoretical knowledge of seating comfort in the seat development process for a car manufacturer, a special user interface was defined and developed. The mapping of the measured pressure distribution in real time, accurately scaled to actual seats during test setups, directly led to design implications for seat design even during the test situation. Detailed analysis of the subject's feedback was correlated with objective measurements of the subject's pressure distribution in real time. Therefore, existing seating characteristics were taken into account as well. A user interface can incorporate theoretical and validated 'state of the art' models of comfort. Consequently, this information can reduce extensive testing and lead to more detailed results in a shorter time period.
USDA-ARS?s Scientific Manuscript database
Twenty-seven environmental contaminants and pharmaceuticals encompassing a wide range of physicochemical properties were utilized to determine the effects of milk processing on xenobiotic distribution among milk fractions. Target compounds included radiolabeled antibiotics [ciprofloxacin (CIPR), cl...
Spatio-temporal distribution of stored-product insects around food processing and storage facilities
USDA-ARS?s Scientific Manuscript database
Grain storage and processing facilities consist of a landscape of indoor and outdoor habitats that can potentially support stored-product insect pests, and understanding patterns of species diversity and spatial distribution in the landscape surrounding structures can provide insight into how the ou...
75 FR 48955 - Arbitration Panel Decision Under the Randolph-Sheppard Act
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-12
... vending machine facility operated by a blind vendor at the USPS's Chicago Processing and Distribution... cafeteria operations are exempt from the Act and whether the vending machines operated by a private vendor at the Chicago Processing and Distribution Center are in direct competition with the vending machines...
NASA Astrophysics Data System (ADS)
Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.
2015-08-01
We consider a method of constructing a suboptimal filter based on approximating the a posteriori probability density of a multidimensional Markov process by Pearson distributions. The proposed method can be used efficiently for approximating asymmetric, excessive, and finite densities.
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
NASA Technical Reports Server (NTRS)
Mah, G. R.; Myers, J.
1993-01-01
The U.S. Government has initiated the Global Change Research Program, a systematic study of the Earth as a complete system. NASA's contribution to the Global Change Research Program is the Earth Observing System (EOS), a series of orbital sensor platforms and an associated data processing and distribution system. The EOS Data and Information System (EOSDIS) is the archiving, production, and distribution system for data collected by the EOS space segment and uses a multilayer architecture for processing, archiving, and distributing EOS data. The first layer consists of the spacecraft ground stations and processing facilities that receive the raw data from the orbiting platforms and then separate the data by individual sensors. The second layer consists of Distributed Active Archive Centers (DAAC) that process, distribute, and archive the sensor data. The third layer consists of a user science processing network. The EOSDIS is being developed in a phased implementation. The initial phase, Version 0, is a prototype of the operational system. Version 0 activities are based upon existing systems and are designed to provide an EOSDIS-like capability for information management and distribution. An important science support task is the creation of simulated data sets for EOS instruments from precursor aircraft or satellite data. The Land Processes DAAC, at the EROS Data Center (EDC), is responsible for archiving and processing EOS precursor data from airborne instruments such as the Thermal Infrared Multispectral Scanner (TIMS), the Thematic Mapper Simulator (TMS), and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS). AVIRIS, TIMS, and TMS are flown by the NASA Ames Research Center (ARC) on an ER-2. The ER-2 flies at 65,000 feet and can carry up to three sensors simultaneously. Most jointly collected data sets are somewhat boresighted and roughly registered. The instrument data are being used to construct data sets that simulate the spectral and spatial characteristics of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument scheduled to be flown on the first EOS-AM spacecraft. The ASTER is designed to acquire 14 channels of land science data in the visible and near-IR (VNIR), shortwave-IR (SWIR), and thermal-IR (TIR) regions from 0.52 micron to 11.65 micron at high spatial resolutions of 15 m to 90 m. Stereo data will also be acquired in the VNIR region in a single band. The AVIRIS and TMS cover the ASTER VNIR and SWIR bands, and the TIMS covers the TIR bands. Simulated ASTER data sets have been generated over Death Valley, California, Cuprite, Nevada, and the Drum Mountains, Utah, using a combination of AVIRIS, TIMS, and TMS data, and existing digital elevation models (DEM) for the topographic information.
Enforcement of entailment constraints in distributed service-based business processes.
Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram
2013-11-01
A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
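The two constraint types can be illustrated with a small runtime check, independent of the paper's DSL, model transformations, or WS-BPEL output; the task names and constraint sets below are hypothetical.

```python
# Mutual exclusion: the two tasks must be performed by different subjects.
MUTUAL_EXCLUSION = {("approve_payment", "request_payment")}
# Binding: the two tasks must be performed by the same subject.
BINDING = {("prepare_contract", "sign_contract")}

class ProcessInstance:
    def __init__(self):
        self.performed = {}                                    # task -> subject

    def can_perform(self, subject, task):
        for a, b in MUTUAL_EXCLUSION:
            other = b if task == a else a if task == b else None
            if other in self.performed and self.performed[other] == subject:
                return False                                   # same subject on mutually exclusive tasks
        for a, b in BINDING:
            other = b if task == a else a if task == b else None
            if other in self.performed and self.performed[other] != subject:
                return False                                   # bound task requires the same subject
        return True

    def perform(self, subject, task):
        if not self.can_perform(subject, task):
            raise PermissionError(f"{subject} may not perform {task}")
        self.performed[task] = subject

p = ProcessInstance()
p.perform("alice", "request_payment")
print(p.can_perform("alice", "approve_payment"))   # False: mutual exclusion violated
print(p.can_perform("bob", "approve_payment"))     # True
```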
NASA Astrophysics Data System (ADS)
Zabala, M. E.; Manzano, M.; Vives, L.
2016-10-01
Groundwater in the upper 50 m of the Pampeano Aquifer in the Del Azul Creek basin (Argentina) has F and As contents above the WHO safe drinking levels. This basin is situated to the SE of the Chaco-Pampean plain, in Buenos Aires Province. The Pampeano Aquifer is a major water source for all uses. The aim of the study is to assess the primary processes controlling the regional distribution of F and As in the most exploited part of the aquifer. The study involved sampling for chemical and isotopic analyses, interpretation of data with different methods (diagrams, bivariate analyses, mineral saturation states, Principal Component Analysis) and deduction of leading processes. Information about aquifer mineralogy and hydrogeochemical processes involved in F and As solubilization in the aquifer has been taken from previous works of the same and other authors. Groundwater salinity increases to the NE, in the direction of the regional groundwater flow. Chemical types evolve from Ca/Mg-HCO3 in the upper part of the basin, to Na-HCO3 in the middle part and to Na-Cl-SO4 and Na-Cl in the lower part. The regional distribution of F is controlled by hydrogeochemical processes. The distribution of As is controlled by two types of processes dominating in different areas: hydrogeochemical controls prevail in the low to moderately mineralized groundwater of the middle and lower parts of the basin; hydrogeological controls are dominant to the NE of the lower basin and beyond it. In the last zone there are abundant lagoons and seasonal flooding is frequent, making evapoconcentration an important process for groundwater mineralization. The main hydrogeochemical processes involved in both F and As distribution are cation exchange, with Na release and Ca uptake, carbonate dissolution and pH increase. Arsenic release induced by redox processes may play a role to the NE, but its results would be masked by the effect of evaporation.
Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric
2014-01-01
Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. PMID:25426044
Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.
The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. With the help of message-based communication, the control processes cooperate with related processes to perform an astronomical observation under the supervisory control of a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.
NASA Astrophysics Data System (ADS)
Savorskiy, V.; Lupyan, E.; Balashov, I.; Burtsev, M.; Proshin, A.; Tolpin, V.; Ermakov, D.; Chernushich, A.; Panova, O.; Kuznetsov, O.; Vasilyev, V.
2014-04-01
Both the development and the application of remote sensing involve a considerable expenditure of material and intellectual resources. It is therefore important to use high-tech means of distributing remote sensing data and processing results in order to give as many researchers as possible access to them. This should be accompanied by capabilities for more thorough and comprehensive, i.e. ultimately deeper, acquisition and complex analysis of information about the state of Earth's natural resources. An objective need for a higher degree of Earth observation (EO) data assimilation also follows from the conditions of satellite observations, in which the observed objects are in an uncontrolled state. Progress in addressing this problem is determined to a large extent by how the distributed EO information system (IS) functions; in particular, it depends largely on reducing the cost of communication (data transfer) between spatially distributed IS nodes and data users. One of the most effective ways to improve the efficiency of data exchange is the creation of an integrated EO IS optimized for running distributed data processing procedures. An effective EO IS implementation should be based on a suitable software architecture.
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
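A toy numerical sketch of the kind of comparison the paper makes, under stated assumptions rather than the paper's derivation: outcomes are built as products of autocorrelated positive increments (an AR(1) Gaussian copula with exponential marginals), and Weibull and lognormal fits are then compared by log-likelihood. The correlation strength and number of increments are arbitrary choices, so which family wins here says nothing definitive about the theory.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_outcomes, n_steps, rho = 4000, 12, 0.6               # arbitrary size, increments, AR(1) correlation

outcomes = np.empty(n_outcomes)
for i in range(n_outcomes):
    z = np.empty(n_steps)
    z[0] = rng.normal()
    for t in range(1, n_steps):                        # autocorrelated latent rates
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    increments = stats.expon.ppf(stats.norm.cdf(z))    # positive increments with exponential marginals
    outcomes[i] = np.prod(increments)                  # multiplicative compounding of causes

# Compare Weibull and lognormal fits of the outcome magnitudes by log-likelihood.
for dist in (stats.weibull_min, stats.lognorm):
    params = dist.fit(outcomes, floc=0)
    print(dist.name, np.sum(dist.logpdf(outcomes, *params)))
```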
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
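A minimal GPT-style sketch for one binary processing tree, with made-up parameter values rather than the published feature-comparison model: a 'detection' state reached with probability p produces fast Gaussian response times, the complementary 'guessing' state produces slower and more variable ones, and the observed response times follow the resulting two-component mixture, whose parameters are recovered by maximum likelihood.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def gpt_density(rt, p_detect, mu_d, sd_d, mu_g, sd_g):
    """Finite-mixture density of response times implied by the two-state tree."""
    return (p_detect * stats.norm.pdf(rt, mu_d, sd_d)
            + (1.0 - p_detect) * stats.norm.pdf(rt, mu_g, sd_g))

def neg_log_lik(theta, rt):
    p, mu_d, sd_d, mu_g, sd_g = theta
    if not (0.0 < p < 1.0 and sd_d > 0.0 and sd_g > 0.0):
        return np.inf                                   # keep the optimiser in the valid region
    return -np.sum(np.log(gpt_density(rt, p, mu_d, sd_d, mu_g, sd_g)))

rng = np.random.default_rng(6)
detected = rng.random(1000) < 0.7                       # latent branch taken on each trial
rts = np.where(detected, rng.normal(0.55, 0.08, 1000), rng.normal(0.85, 0.20, 1000))

fit = minimize(neg_log_lik, x0=[0.5, 0.5, 0.1, 0.9, 0.3], args=(rts,), method="Nelder-Mead")
print(fit.x)                                            # estimates of (p_detect, mu_d, sd_d, mu_g, sd_g)
```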
76 FR 43990 - Procurement List; Proposed Additions and Deletions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... Polished Nickel Finish NSN: AF430--Nameplate, Class B, USAF, Cloth, Dark Navy Blue with Silver/Gray Thread... Distribution. In 2010, the Defense Distribution Center (DDC) was renamed DLA Distribution. A process was also...
Task allocation in a distributed computing system
NASA Technical Reports Server (NTRS)
Seward, Walter D.
1987-01-01
A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
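The load-balancing goal can be illustrated with a simple greedy heuristic (longest task to the currently least-loaded processor); this is an illustration of the objective, not the allocation methodologies evaluated in the paper, and the task demands are made up.

```python
import heapq

def balance(task_demands, n_processors):
    """Greedy longest-task-first assignment to the currently least-loaded processor."""
    loads = [(0.0, p, []) for p in range(n_processors)]
    heapq.heapify(loads)
    for demand in sorted(task_demands, reverse=True):
        load, proc, assigned = heapq.heappop(loads)      # least-loaded processor so far
        assigned.append(demand)
        heapq.heappush(loads, (load + demand, proc, assigned))
    return sorted(loads, key=lambda entry: entry[1])

for load, proc, tasks in balance([8, 3, 5, 7, 2, 2, 6, 4], n_processors=3):
    print(f"processor {proc}: load {load}, tasks {tasks}")
```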
The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process
NASA Astrophysics Data System (ADS)
Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko
2012-06-01
A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
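The two estimators can be compared directly on a synthetic non-homogeneous Gaussian regression. The sketch below assumes a linear model for the mean and a log-linear model for the standard deviation, and uses the closed-form CRPS of a Gaussian forecast; the data and coefficients are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.normal(size=2000)                                # stand-in predictor (e.g. an ensemble mean)
y = 1.0 + 0.8 * x + np.exp(-0.2 + 0.3 * x) * rng.normal(size=2000)

def unpack(theta):
    a, b, c, d = theta
    return a + b * x, np.exp(c + d * x)                  # mu and sigma of the predictive Gaussian

def neg_log_lik(theta):
    mu, sigma = unpack(theta)
    return -np.sum(stats.norm.logpdf(y, mu, sigma))

def mean_crps(theta):
    mu, sigma = unpack(theta)                            # closed-form CRPS of a Gaussian forecast
    z = (y - mu) / sigma
    crps = sigma * (z * (2.0 * stats.norm.cdf(z) - 1.0)
                    + 2.0 * stats.norm.pdf(z) - 1.0 / np.sqrt(np.pi))
    return np.mean(crps)

start = np.zeros(4)
print("ML  :", minimize(neg_log_lik, start, method="Nelder-Mead").x)
print("CRPS:", minimize(mean_crps, start, method="Nelder-Mead").x)
```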
Smolensky, Paul; Goldrick, Matthew; Mathis, Donald
2014-08-01
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.
On the impact of neutron star binaries' natal-kick distribution on the Galactic r-process enrichment
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Côté, Benoit
2017-11-01
We study the impact of the neutron star binaries' (NSBs) natal-kick distribution on the galactic r-process enrichment. We model the growth of a Milky Way type halo based on N-body simulation results and its star formation history based on multi-epoch abundance matching techniques. We consider that the NSBs that merge well beyond the galaxy's effective radius (>2 × Reff) do not contribute to the galactic r-process enrichment. Assuming a power-law delay-time distribution (DTD) function (∝ t^-1) with t_min = 30 Myr for binaries' coalescence time-scales and an exponential profile for their natal-kick distribution with an average value of 180 km s^-1, we show that up to ~40 per cent of all formed NSBs do not contribute to the r-process enrichment by z = 0, either because they merge far from the galaxy at a given redshift (up to ~25 per cent) or have not yet merged by today (~15 per cent). Our result is largely insensitive to the details of the DTD function. Assuming a constant coalescence time-scale of 100 Myr well approximates the adopted DTD although with 30 per cent of the NSBs ending up not contributing to the r-process enrichment. Our results, although rather dependent on the adopted natal-kick distribution, represent the first step towards estimating the impact of natal kicks and DTD functions on the r-process enrichment of galaxies that would need to be incorporated in the hydrodynamical simulations.
The Web Measurement Environment (WebME): A Tool for Combining and Modeling Distributed Data
NASA Technical Reports Server (NTRS)
Tesoriero, Roseanne; Zelkowitz, Marvin
1997-01-01
Many organizations have incorporated data collection into their software processes for the purpose of process improvement. However, in order to improve, interpreting the data is just as important as the collection of data. With the increased presence of the Internet and the ubiquity of the World Wide Web, the potential for software processes being distributed among several physically separated locations has also grown. Because project data may be stored in multiple locations and in differing formats, obtaining and interpreting data from this type of environment becomes even more complicated. The Web Measurement Environment (WebME), a Web-based data visualization tool, is being developed to facilitate the understanding of collected data in a distributed environment. The WebME system will permit the analysis of development data in distributed, heterogeneous environments. This paper provides an overview of the system and its capabilities.
Analysis of fault-tolerant neurocontrol architectures
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1992-01-01
The fault-tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault-tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault-tolerance could only be obtained by retraining the degraded VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight or neuron failures was demonstrated through an automated retraining procedure of the neurocontroller based on a pre-programmed choice and sequence of the training parameters.
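The fault-injection idea, zeroing out randomly chosen weights and measuring how far the outputs drift from the unfaulted network, can be sketched on a small random feed-forward network. This is a toy stand-in, not the ETANN-based neurocontroller or its retraining procedure; the network shape and failure fractions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
w1, w2 = rng.normal(size=(6, 16)), rng.normal(size=(16, 2))     # stand-in "trained" weights
inputs = rng.normal(size=(200, 6))

def forward(w_hidden, w_out):
    """Two-layer tanh network: inputs -> hidden -> outputs."""
    return np.tanh(np.tanh(inputs @ w_hidden) @ w_out)

baseline = forward(w1, w2)
for fail_frac in (0.01, 0.05, 0.20):
    w1_failed = np.where(rng.random(w1.shape) < fail_frac, 0.0, w1)  # failed synapses set to zero
    degradation = np.mean(np.abs(forward(w1_failed, w2) - baseline))
    print(f"{fail_frac:.0%} weight failures -> mean output deviation {degradation:.4f}")
```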
On Maximal Hard-Core Thinnings of Stationary Particle Processes
NASA Astrophysics Data System (ADS)
Hirsch, Christian; Last, Günter
2018-02-01
The present paper studies existence and distributional uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other one being a local optimality criterion on the level of realizations. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently distributionally unique. More precisely, distributional uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role for establishing distributional uniqueness at high intensities. Finally, we provide a family of algorithmically constructible approximations whose volume fractions are arbitrarily close to the maximum.
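One admissible (not maximal) hard-core thinning can be constructed greedily: points of a planar Poisson process are visited in the order of independent priority marks and kept only if they lie at least a hard-core distance r from every point already kept. The sketch below is purely illustrative of such a thinning and of the volume-fraction bookkeeping; the intensity, window, and r are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
intensity, window, r = 200.0, 1.0, 0.05                 # assumed intensity, unit square, hard-core distance

n = rng.poisson(intensity * window**2)
points = rng.uniform(0.0, window, size=(n, 2))
marks = rng.random(n)                                   # independent priority marks

keep = []
for i in np.argsort(marks):                             # visit points in priority order
    if all(np.linalg.norm(points[i] - points[j]) >= r for j in keep):
        keep.append(i)

# Volume fraction of non-overlapping grains of radius r/2 centred at the retained points.
retained_fraction = len(keep) * np.pi * (r / 2.0) ** 2 / window**2
print(f"{n} Poisson points -> {len(keep)} retained, grain volume fraction {retained_fraction:.3f}")
```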
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) Architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, and event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (COSEC framework) that is being extended to create VSICS.
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
The distribution of individual cabinet positions in coalition governments: A sequential approach
Meyer, Thomas M.; Müller, Wolfgang C.
2015-01-01
Multiparty government in parliamentary democracies entails bargaining over the payoffs of government participation, in particular the allocation of cabinet positions. While most of the literature deals with the numerical distribution of cabinet seats among government parties, this article explores the distribution of individual portfolios. It argues that coalition negotiations are sequential choice processes that begin with the allocation of those portfolios most important to the bargaining parties. This induces conditionality in the bargaining process as choices of individual cabinet positions are not independent of each other. Linking this sequential logic with party preferences for individual cabinet positions, the authors of the article study the allocation of individual portfolios for 146 coalition governments in Western and Central Eastern Europe. The results suggest that a sequential logic in the bargaining process results in better predictions than assuming mutual independence in the distribution of individual portfolios. PMID:27546952
Measurement of the Drell-Yan angular distribution in the dimuon channel using 2011 CMS data
NASA Astrophysics Data System (ADS)
Silvers, David I.
The angular distributions of muons produced by the Drell-Yan process are measured as a function of dimuon transverse momentum in two ranges of rapidity. Events from pp collisions at sqrt(s) = 7 TeV were collected with the CMS detector using dimuon triggers and selected from data samples corresponding to 4.9 fb^-1 of integrated luminosity. The two-dimensional angular distribution dN/dΩ of the negative muon in the Collins-Soper frame is fitted to determine the coefficients in a parametric form of the angular distribution. The measured coefficients are compared to next-to-leading order calculations. We observe that qq and leading order qg production dominate the Drell-Yan process at pT(μμ) < 55 GeV/c, while higher-order qg production dominates the Drell-Yan process for 55 < pT(μμ) < 120 GeV/c.
Bidirectional light-scattering image processing method for high-concentration jet sprays
NASA Astrophysics Data System (ADS)
Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.
1985-01-01
In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed, which combines the forward and backward light scattering method and an image processing method. A pulsed ruby laser is used as the light source. The Mie scattering theory is applied to the results obtained from image processing on the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated by diesel fuel sprays under various injecting conditions. The validity of the technique is verified by a good agreement in the injected fuel volume distributions obtained by the present method and by injection rate measurements.
Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R
2001-06-01
To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process will include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end-goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that will include function and interface design decisions from community participation on the website forums.
Hierarchical Process Composition: Dynamic Maintenance of Structure in a Distributed Environment
1988-01-01
One prominent line of research stresses the independence of address space and thread of control, and the resulting efficiencies due to shared memory...cooperating processes. StarOS focuses on ease of use and a general capability mechanism, while Medusa stresses the effect of distributed hardware on system...process structure and the asynchrony among agents and between agents and sources of failure. By stressing dynamic structure, we are led to adopt an
Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks
2006-09-01
time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical...random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by
SPS Energy Conversion Power Management Workshop
NASA Technical Reports Server (NTRS)
1980-01-01
Energy technology concerning photovoltaic conversion, solar thermal conversion systems, and electrical power distribution processing is discussed. The manufacturing processes involving solar cells and solar array production are summarized. Resource issues concerning gallium arsenides and silicon alternatives are reported. Collector structures for solar construction are described and estimates in their service life, failure rates, and capabilities are presented. Theories of advanced thermal power cycles are summarized. Power distribution system configurations and processing components are presented.
Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.
1988-01-01
If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions changes with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.
Distribution of binder in granules produced by means of twin screw granulation.
Fonteyne, Margot; Fussell, Andrew Luke; Vercruysse, Jurgen; Vervaet, Chris; Remon, Jean Paul; Strachan, Clare; Rades, Thomas; De Beer, Thomas
2014-02-28
According to the quality-by-design principle, processes may not remain black boxes and full process understanding is required. The granule size distribution of granules produced via twin screw granulation is often found to be bimodal. The aim of this study was to gain a better understanding of binder distribution within granules produced via twin screw granulation in order to investigate whether an inhomogeneous spread of binder is causing this bimodal size distribution. Theophylline-lactose-polyvinylpyrrolidone K30 (PVP) (30-67.5-2.5%, w/w) was used as a model formulation. The intra-granular distribution of PVP was evaluated by means of hyperspectral coherent anti-Stokes Raman scattering (CARS) microscopy. For the evaluated formulation, no PVP-rich zones were detected when applying a lateral spatial resolution of 0.5 μm, indicating that PVP is homogeneously distributed within the granules. Copyright © 2013 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Laszlo, Sarah; Plaut, David C.
2012-01-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…
Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition
ERIC Educational Resources Information Center
Rogers, Timothy T.; McClelland, James L.
2014-01-01
This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…
Distributed Group Design Process: Lessons Learned.
ERIC Educational Resources Information Center
Eseryel, Deniz; Ganesan, Radha
A typical Web-based training development team consists of a project manager, an instructional designer, a subject-matter expert, a graphic artist, and a Web programmer. The typical scenario involves team members working together in the same setting during the entire design and development process. What happens when the team is distributed, that is…
The distribution of minor constituents in the stratosphere and lower mesosphere
NASA Technical Reports Server (NTRS)
Martell, E. A.
1973-01-01
The complex circulation processes within the stratosphere and mesosphere have been clarified by recent studies. The distribution of minor constituents in the middle atmosphere is significantly influenced by these transport processes. Rocket sampling results are discussed, giving attention to the sampling method, noble gases, methane, water vapor, molecular hydrogen, and carbon dioxide.
WaveJava: Wavelet-based network computing
NASA Astrophysics Data System (ADS)
Ma, Kun; Jiao, Licheng; Shi, Zhuoer
1997-04-01
Wavelet analysis is a powerful theory, but its successful application still needs suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. The data are transmitted as multi-resolution packets. At distributed sites around the net, these data packets undergo matching or recognition processing in parallel, and the results are fed back to determine the next operation, so more robust results can be obtained quickly. WaveJava is easy to use and to extend for special applications. This paper gives a solution for a distributed fingerprint information processing system. It is also suited to other network-based multimedia information processing, such as network libraries, remote teaching, and filmless picture archiving and communications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
Effects of Spatial Gradients on Electron Runaway Acceleration
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Ljepojevic, N. N.
1996-01-01
The runaway process is known to accelerate electrons in many laboratory plasmas and has been suggested as an acceleration mechanism in some astrophysical plasmas, including solar flares. Current calculations of the electron velocity distributions resulting from the runaway process are greatly restricted because they impose spatial homogeneity on the distribution. We have computed runaway distributions which include consistent development of spatial gradients in the energetic tail. Our solution for the electron velocity distribution is presented as a function of distance along a finite-length acceleration region, and is compared with the equivalent distribution for the infinitely long homogeneous system (i.e., no spatial gradients), as considered in the existing literature. All these results are for the weak-field regime. We also discuss the severe restrictiveness of this weak-field assumption.
Exploiting virtual synchrony in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, Thomas A.
1987-01-01
Applications of a virtually synchronous environment are described for distributed programming, which underlies a collection of distributed programming tools in the ISIS2 system. A virtually synchronous environment allows processes to be structured into process groups, and makes events like broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously, in other words, synchronously. A major advantage to this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. It is argued that this approach to building distributed and fault tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches.
NASA Astrophysics Data System (ADS)
Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan
2012-09-01
The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and their falling costs are contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high-precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning of terabyte-sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate and provide an elastic computing model without the requirement for large centralized high performance computing data centers. This paper demonstrates how an order of magnitude (base 10) improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
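As a rough illustration of the segment-and-process-in-parallel idea described above, the following minimal Python sketch splits a synthetic frame into independent segments and reduces them in a local process pool (the toy cleaning step and the use of a single-machine pool in place of the multi-institute ACN cluster are illustrative assumptions):

    # Sketch: parallel reduction of independent data segments.
    from multiprocessing import Pool
    import numpy as np

    def reduce_segment(segment):
        """Toy cleaning step: subtract a bias level and divide by a flat field."""
        bias = np.median(segment)
        flat = np.ones_like(segment)          # placeholder flat frame
        return (segment - bias) / flat

    def split_into_segments(frame, n):
        """Split a 2-D frame into n roughly equal row blocks."""
        return np.array_split(frame, n, axis=0)

    if __name__ == "__main__":
        frame = np.random.poisson(100.0, size=(2048, 2048)).astype(float)
        segments = split_into_segments(frame, 16)
        with Pool() as pool:                  # pool size tracks available cores
            reduced = pool.map(reduce_segment, segments)
        cleaned = np.vstack(reduced)
        print(cleaned.shape)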
The analysis of the process in the cooling tower with the low efficiency
NASA Astrophysics Data System (ADS)
Badriev, A. I.; Sharifullin, V. N.
2017-11-01
Maintaining a temperature drop of 11-12 degrees in the cooling towers of thermal power plants, to ensure the required depth of vacuum cooling in the condenser, is quite a difficult task. When the efficiency of the apparatus is low, this requirement can only be met by reducing the hydraulic load. The aim of this work was to analyse the process in such a unit and to identify the causes of its poor performance. One possible cause is the heterogeneity of the process over the volume of the apparatus. We therefore investigated experimentally the distribution of irrigation water and air flow over the cross section of industrial cooling towers. We found a significantly uneven distribution of water and air flows within the volume of the apparatus, and we showed theoretically that an uneven irrigation distribution leads to a significant decrease in the evaporation efficiency of the cooling tower. The velocity distributions of the air across the tower sections, and within individual sections, are of particular interest. The experimental data made it possible to establish the internal relationships between the distribution of irrigation density over the sections of the apparatus and the resulting distributions of temperature and air velocity. These results allowed us to formulate a methodology for diagnosing process problems and to develop measures to increase the efficiency of the cooling tower.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that during disturbances the distributions differ from the Gaussian law. At sufficiently small probability levels, there are arbitrarily large deviations from the normal-process model. Therefore, an attempt is made to describe the statistical samples {δfoF2} on the basis of a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities that the a posteriori distributions coincide with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description of the variations {δfoF2} and for probabilistic estimates during heliogeophysical disturbances.
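A minimal sketch of the verification step described above, using synthetic data and a generic Student-law fit in place of the model density derived from the exponential characteristic function (which is not reproduced here):

    # Sketch: sample invariants and a Kolmogorov-type comparison for dfoF2 samples.
    import numpy as np
    from scipy import stats

    delta_foF2 = 0.3 * np.random.standard_t(df=4, size=2000)   # synthetic sample

    # invariants up to fourth order (mean, variance, skewness, excess kurtosis)
    print(delta_foF2.mean(), delta_foF2.var(),
          stats.skew(delta_foF2), stats.kurtosis(delta_foF2))

    # fit a candidate law and test agreement; the paper instead derives its
    # model density from an exponential characteristic function
    params = stats.t.fit(delta_foF2)
    ks_stat, p_value = stats.kstest(delta_foF2, "t", args=params)
    print(f"KS statistic = {ks_stat:.3f}, P = {p_value:.2f}")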
A "total parameter estimation" method in the varification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at the catchment outlet. With the advancement in hydrological sciences and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As distributed hydrological models can simulate physical processes within a catchment, we can obtain a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River basin in China, runoff accounts for only 17% of the rainfall and is highly concentrated during the rainy season from June to August each year. During other months, many of the perennial rivers within the river basin dry up. Thus, relying on runoff simulation alone does not fully exploit distributed hydrological models in arid and semi-arid regions. This paper proposes a "total parameter estimation" method that verifies distributed hydrological models against various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and that it provides a new way of thinking in the hydrological sciences.
NASA Astrophysics Data System (ADS)
Huang, D.; Liu, Y.
2014-12-01
The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined using long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as the observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in the autoconversion rate, whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with a mean relative bias of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest, and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window size, suggesting the increasing importance of subgrid variability with increasing grid size.
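A minimal sketch of the upscaling step, using moment-matched lognormal and Gamma PDFs and a generic power-law process rate as a stand-in for an autoconversion parameterization (the exponent and the numbers are illustrative assumptions, not those used in the study):

    # Sketch: grid-average of a nonlinear process rate under subgrid variability.
    import numpy as np

    def lognormal_params(mean, std):
        """Moment-matched lognormal (mu, sigma)."""
        sigma2 = np.log(1.0 + (std / mean) ** 2)
        return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

    def gamma_params(mean, std):
        """Moment-matched Gamma (shape k, scale theta)."""
        return (mean / std) ** 2, std ** 2 / mean

    mean_lwc, std_lwc, p = 0.3, 0.2, 2.47      # exponent p is illustrative
    rng = np.random.default_rng(0)

    mu, s = lognormal_params(mean_lwc, std_lwc)
    k, theta = gamma_params(mean_lwc, std_lwc)
    L_ln = rng.lognormal(mu, s, 100_000)
    L_ga = rng.gamma(k, theta, 100_000)

    print("lognormal average rate:", np.mean(L_ln ** p))
    print("Gamma average rate    :", np.mean(L_ga ** p))
    print("mean-value plug-in    :", mean_lwc ** p)   # ignores subgrid variability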
Does climate have heavy tails?
NASA Astrophysics Data System (ADS)
Bermejo, Miguel; Mudelsee, Manfred
2013-04-01
When we speak about a distribution with heavy tails, we mean that the probability of extreme values is relatively large. Several heavy-tail models are constructed from Poisson processes, which are the most tractable models. Among such processes, one of the most important classes is the Lévy processes, which are processes with independent, stationary increments and stochastic continuity. If the random component of the climate process that generates the data exhibits a heavy-tail distribution, and if that fact is ignored by assuming a finite-variance distribution, then there would be serious consequences (in the form, e.g., of bias) for the analysis of extreme values. Yet it appears to be an open question to what extent and degree climate data exhibit heavy-tail phenomena. We present a study of statistical inference in the presence of heavy-tail distributions. In particular, we explore (1) the estimation of the tail index of the marginal distribution using several estimation techniques (e.g., the Hill estimator, the Pickands estimator) and (2) the power of hypothesis tests. The performance of the different methods is compared on artificial time series by means of Monte Carlo experiments. We systematically apply the heavy-tail inference to observed climate data; in particular, we focus on time series data. We study several proxy and directly observed climate variables from the instrumental period, the Holocene and the Pleistocene. This work receives financial support from the European Commission (Marie Curie Initial Training Network LINC, No. 289447, within the 7th Framework Programme).
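As an illustration of tail-index estimation, a minimal Hill-estimator sketch on a synthetic Pareto sample (the choice of the number of order statistics k is the usual practical difficulty and is only scanned here):

    # Sketch: Hill estimator of the tail index from the k largest order statistics.
    import numpy as np

    def hill_estimator(sample, k):
        """Hill estimate of the tail index alpha from the top k order statistics."""
        x = np.sort(np.asarray(sample))[::-1]              # descending order
        gamma = np.mean(np.log(x[:k]) - np.log(x[k]))      # mean log-excess
        return 1.0 / gamma

    rng = np.random.default_rng(1)
    data = rng.pareto(1.5, size=5000) + 1.0                # Pareto with alpha = 1.5
    for k in (50, 200, 500):
        print(k, hill_estimator(data, k))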
Pre-compound emission in low-energy heavy-ion interactions
NASA Astrophysics Data System (ADS)
Sharma, Manoj Kumar; Shuaib, Mohd.; Sharma, Vijay R.; Yadav, Abhishek; Singh, Pushpendra P.; Singh, Devendra P.; Unnati; Singh, B. P.; Prasad, R.
2017-11-01
Recent experimental studies have shown the presence of a pre-compound emission component in heavy-ion reactions at low projectile energies, ranging from 4 to 7 MeV/nucleon. In earlier measurements, the strength of the pre-compound component was estimated from the difference between the forward and backward distributions of emitted particles. The present measurement is part of an ongoing program on the reaction dynamics of heavy-ion interactions at low energies, aimed at investigating the effect of momentum transfer in compound, pre-compound, complete, and incomplete fusion processes in heavy-ion reactions. In the present work, measurements of the recoil range distributions of heavy residues, based on momentum transfer, have been used to decipher the compound and pre-compound emission components in the fusion of a 16O projectile with 159Tb and 169Tm targets. The analysis of the recoil range distribution measurements shows that two distinct linear momentum transfer components, corresponding to pre-compound and compound nucleus processes, are involved. In order to obtain the mean input angular momentum associated with the compound and pre-compound emission processes, an online measurement of the spin distributions of the residues has been performed. The analysis of the spin distributions indicates that the mean input angular momentum associated with pre-compound products is relatively lower than that associated with the compound nucleus process. The pre-compound components obtained from the present analysis are consistent with those obtained from the analysis of excitation functions.
Bober, David B.; Kumar, Mukal; Rupert, Timothy J.; ...
2015-12-28
Nanocrystalline materials are defined by their fine grain size, but details of the grain boundary character distribution should also be important. Grain boundary character distributions are reported for ball-milled, sputter-deposited, and electrodeposited Ni and Ni-based alloys, all with average grain sizes of ~20 nm, to study the influence of processing route. The two deposited materials had nearly identical grain boundary character distributions, both marked by a Σ3 length percentage of 23 to 25 pct. In contrast, the ball-milled material had only 3 pct Σ3-type grain boundaries and a large fraction of low-angle boundaries (16 pct), with the remainder being predominantly random high angle (73 pct). Furthermore, these grain boundary character measurements are connected to the physical events that control their respective processing routes. Consequences for material properties are also discussed with a focus on nanocrystalline corrosion. As a whole, the results presented here show that grain boundary character distribution, which has often been overlooked in nanocrystalline metals, can vary significantly and influence material properties in profound ways.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
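For reference, the traditional SCS-CN runoff equation that the distributed CN-VSA approach redistributes over the landscape is sketched below (SI units in mm; the topographic-index weighting used to map saturated areas is not reproduced here):

    # Sketch: traditional SCS-CN event runoff depth.
    def scs_cn_runoff(rainfall_mm, curve_number, ia_ratio=0.2):
        """Runoff depth Q (mm) from event rainfall P (mm) and a curve number CN."""
        s = 25400.0 / curve_number - 254.0      # potential maximum retention (mm)
        ia = ia_ratio * s                       # initial abstraction
        if rainfall_mm <= ia:
            return 0.0
        return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

    print(scs_cn_runoff(rainfall_mm=50.0, curve_number=75))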
DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)
NASA Technical Reports Server (NTRS)
Keith, B.
1994-01-01
Typical network monitors measure status of host computers and data traffic among hosts. A monitor to collect statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, Distributed Application Monitor Tool, is a distributed application program that will collect network statistics and make them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool. Neither does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer. All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating systems information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes.
It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
Lévy-Student distributions for halos in accelerator beams.
Cufaro Petroni, Nicola; De Martino, Salvatore; De Siena, Silvio; Illuminati, Fabrizio
2005-12-01
We describe the transverse beam distribution in particle accelerators within the controlled, stochastic dynamical scheme of stochastic mechanics (SM) which produces time reversal invariant diffusion processes. This leads to a linearized theory summarized in a Schrödinger-like (SL) equation. The space charge effects have been introduced in recent papers by coupling this S-L equation with the Maxwell equations. We analyze the space-charge effects to understand how the dynamics produces the actual beam distributions, and in particular we show how the stationary, self-consistent solutions are related to the (external and space-charge) potentials both when we suppose that the external field is harmonic (constant focusing), and when we a priori prescribe the shape of the stationary solution. We then proceed to discuss a few other ideas by introducing generalized Student distributions, namely, non-Gaussian, Lévy infinitely divisible (but not stable) distributions. We will discuss this idea from two different standpoints: (a) first by supposing that the stationary distribution of our (Wiener powered) SM model is a Student distribution; (b) by supposing that our model is based on a (non-Gaussian) Lévy process whose increments are Student distributed. We show that in the case (a) the longer tails of the power decay of the Student laws and in the case (b) the discontinuities of the Lévy-Student process can well account for the rare escape of particles from the beam core, and hence for the formation of a halo in intense beams.
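A minimal illustration of standpoint (b), comparing random walks whose increments are Gaussian versus Student distributed (parameters are illustrative; this is not the stochastic-mechanics model itself):

    # Sketch: heavier Student-t increments produce occasional large excursions,
    # the qualitative mechanism invoked for rare escapes from the beam core.
    import numpy as np

    rng = np.random.default_rng(2)
    n_steps, dt = 10_000, 1e-3
    gauss_walk = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
    student_walk = np.cumsum(np.sqrt(dt) * rng.standard_t(3, size=n_steps))

    print("max |excursion|, Gaussian :", np.max(np.abs(gauss_walk)))
    print("max |excursion|, Student-t:", np.max(np.abs(student_walk)))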
Levy-Student distributions for halos in accelerator beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cufaro Petroni, Nicola; De Martino, Salvatore; De Siena, Silvio
2005-12-15
We describe the transverse beam distribution in particle accelerators within the controlled, stochastic dynamical scheme of stochastic mechanics (SM) which produces time reversal invariant diffusion processes. This leads to a linearized theory summarized in a Schroedinger-like (SL) equation. The space charge effects have been introduced in recent papers by coupling this S-L equation with the Maxwell equations. We analyze the space-charge effects to understand how the dynamics produces the actual beam distributions, and in particular we show how the stationary, self-consistent solutions are related to the (external and space-charge) potentials both when we suppose that the external field is harmonic (constant focusing), and when we a priori prescribe the shape of the stationary solution. We then proceed to discuss a few other ideas by introducing generalized Student distributions, namely, non-Gaussian, Levy infinitely divisible (but not stable) distributions. We will discuss this idea from two different standpoints: (a) first by supposing that the stationary distribution of our (Wiener powered) SM model is a Student distribution; (b) by supposing that our model is based on a (non-Gaussian) Levy process whose increments are Student distributed. We show that in the case (a) the longer tails of the power decay of the Student laws and in the case (b) the discontinuities of the Levy-Student process can well account for the rare escape of particles from the beam core, and hence for the formation of a halo in intense beams.
Distribution and interplay of geologic processes on Titan from Cassini radar data
Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.
2010-01-01
The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ~350 m to ~2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30°), with no dunes being present above 60°. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30° and 60° north. We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the most recent, while tectonic processes that led to the formation of mountains and Xanadu are likely the most ancient. © 2009 Elsevier Inc.
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Rapid Processing of Radio Interferometer Data for Transient Surveys
NASA Astrophysics Data System (ADS)
Bourke, S.; Mooley, K.; Hallinan, G.
2014-05-01
We report on a software infrastructure and pipeline developed to process large radio interferometer datasets. The pipeline is implemented using a radical redesign of the AIPS processing model. An infrastructure we have named AIPSlite is used to spawn, at runtime, minimal AIPS environments across a cluster. The pipeline then distributes and processes its data in parallel. The system is entirely free of the traditional AIPS distribution and is self-configuring at runtime. This software has so far been used to process an EVLA Stripe 82 transient survey and the data for the JVLA-COSMOS project, and has been used to process most of the EVLA L-band data archive, imaging each integration to search for short-duration transients.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions
NASA Astrophysics Data System (ADS)
Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia
2018-03-01
Results of various investigations show a relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and analysis of this distribution is useful for a mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, the adhesion of the particle to the bubble surface, and the detachment process. These factors are characterized by randomness. Because of that, it is only possible to speak of the probability of occurrence of each of these events, which directly affects the speed of the process and thus the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, the degree of pulp aeration, energy dissipation, and the average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion help to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion, via the size distribution of air bubbles, in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods, Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.; Montgomery, J. S.
2013-10-01
A multiphysics computational model has been developed for the conventional Gas Metal Arc Welding (GMAW) joining process and used to analyze butt-welding of MIL A46100, a prototypical high-hardness armor martensitic steel. The model consists of five distinct modules, each covering a specific aspect of the GMAW process, i.e., (a) dynamics of welding-gun behavior; (b) heat transfer from the electric arc and mass transfer from the electrode to the weld; (c) development of thermal and mechanical fields during the GMAW process; (d) the associated evolution and spatial distribution of the material microstructure throughout the weld region; and (e) the final spatial distribution of the as-welded material properties. To make the newly developed GMAW process model applicable to MIL A46100, the basic physical-metallurgy concepts and principles for this material have to be investigated and properly accounted for/modeled. The newly developed GMAW process model enables establishment of the relationship between the GMAW process parameters (e.g., open circuit voltage, welding current, electrode diameter, electrode-tip/weld distance, filler-metal feed speed, and gun travel speed), workpiece material chemistry, and the spatial distribution of as-welded material microstructure and properties. The predictions of the present GMAW model pertaining to the spatial distribution of the material microstructure and properties within the MIL A46100 weld region are found to be consistent with general expectations and prior observations.
NASA Astrophysics Data System (ADS)
Dudek, Mirosław R.; Mleczko, Józef
Surprisingly, still very little is known about the mathematical modeling of peaks in the binding-affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This result takes place when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity of the cell surface repletion. In this case, the affinity of cells to bind the antibodies should be described by a more complex process than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, in which the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentrations of antibodies and epitopes. All this means that it is possible to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
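A minimal simulation of the clustered (doubly stochastic) counting idea, with a Poisson number of accessible patches per cell and a binomial number of bound antibodies per patch (all parameter values are illustrative assumptions):

    # Sketch: Neyman-type counts from a doubly stochastic Poisson process.
    # Unlike a plain Poisson law with the same mean, the histogram of these
    # counts can show several modes.
    import numpy as np

    rng = np.random.default_rng(3)
    n_cells = 20_000
    n_patches = rng.poisson(lam=2.0, size=n_cells)           # accessible patches per cell
    counts = np.array([rng.binomial(20, 0.6, size=k).sum()   # bound antibodies per cell
                       for k in n_patches])
    print(np.bincount(counts))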
Analysis Using Bi-Spectral Related Technique
1993-11-17
filtering is employed as the data is processed (equation 1). Earlier results have shown that in contrast to the Wigner-Ville Distribution (WVD) no spectral...Technique, by Ralph Hippenstiel, November 17, 1993. Approved for public release; distribution unlimited. Prepared for: Naval Command Control...
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim
2017-06-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A.; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S.; Subramanian, Hariharan; Dravid, Vinayak P.; Backman, Vadim
2018-01-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass–density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass–density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass–density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass–density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass–density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes. PMID:28416035
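A generic FFT-based (Wiener-Khinchin) autocorrelation of a single projection image is sketched below; the published method goes further, combining such a projection with an AFM thickness map under an isotropy assumption to recover the ACF of the 3D mass-density distribution, which is not reproduced here:

    # Sketch: normalized 2-D autocorrelation of a projection image via FFT.
    import numpy as np

    def autocorrelation_2d(image):
        """Autocorrelation of the mean-subtracted image, normalized to 1 at zero lag."""
        f = np.fft.fft2(image - image.mean())
        acf = np.fft.ifft2(np.abs(f) ** 2).real
        return np.fft.fftshift(acf) / acf.flat[0]   # zero lag sits at index (0, 0)

    image = np.random.default_rng(5).random((256, 256))   # stand-in for a STEM projection
    acf = autocorrelation_2d(image)
    print(acf.shape, float(acf.max()))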
Understanding the distributed cognitive processes of intensive care patient discharge.
Lin, Frances; Chaboyer, Wendy; Wallis, Marianne
2014-03-01
To better understand and identify vulnerabilities and risks in the ICU patient discharge process, which provides evidence for service improvement. Previous studies have identified that 'after hours' discharge and 'premature' discharge from ICU are associated with increased mortality. However, some of these studies have largely been retrospective reviews of various administrative databases, while others have focused on specific aspects of the process, which may miss crucial components of the discharge process. This is an ethnographic exploratory study. Distributed cognition and activity theory were used as theoretical frameworks. Ethnographic data collection techniques including informal interviews, direct observations and collecting existing documents were used. A total of 56 one-to-one interviews were conducted with 46 participants; 28 discharges were observed; and numerous documents were collected during a five-month period. A triangulated technique was used in both data collection and data analysis to ensure the research rigour. Under the guidance of activity theory and distributed cognition theoretical frameworks, five themes emerged: hierarchical power and authority, competing priorities, ineffective communication, failing to enact the organisational processes and working collaboratively to optimise the discharge process. Issues with teamwork, cognitive processes and team members' interaction with cognitive artefacts influenced the discharge process. Strategies to improve shared situational awareness are needed to improve teamwork, patient flow and resource efficiency. Tools need to be evaluated regularly to ensure their continuous usefulness. Health care professionals need to be aware of the impact of their competing priorities and ensure discharges occur in a timely manner. Activity theory and distributed cognition are useful theoretical frameworks to support healthcare organisational research. © 2013 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan
Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including the examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan
2017-10-21
Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including the examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
Performance issues for domain-oriented time-driven distributed simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
It has long been recognized that simulations form an interesting and important class of computations that may benefit from distributed or parallel processing. Since the point of parallel processing is improved performance, the recent proliferation of multiprocessors requires that we consider the performance issues that naturally arise when attempting to implement a distributed simulation. Three such issues are: (1) the problem of mapping the simulation onto the architecture, (2) the possibilities for performing redundant computation in order to reduce communication, and (3) the avoidance of deadlock due to distributed contention for message-buffer space. These issues are discussed in the context of a battlefield simulation implemented on a medium-scale multiprocessor message-passing architecture.
A simple model for factory distribution: Historical effect in an industry city
NASA Astrophysics Data System (ADS)
Uehara, Takashi; Sato, Kazunori; Morita, Satoru; Maeda, Yasunobu; Yoshimura, Jin; Tainaka, Kei-ichi
2016-02-01
The construction and discontinuance processes of factories are complicated problems in sociology. We focus on the spatial and temporal changes of factories in Hamamatsu city, Japan. Real data indicate that the clumping degree of factories decreases as the density of factories increases. To represent the spatial and temporal changes of factories, we apply the "contact process", a type of cellular automaton. This model roughly explains the dynamics of factory distribution. We also find a "historical effect" in the spatial distribution. Namely, recent factories have been dispersed owing to the past distribution during the period of the economic bubble. This effect may be related to the heavy shock in the Japanese stock market.
Differential memory in the earth's magnetotail
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Chen, J.
1991-01-01
The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.
Distributed semantic networks and CLIPS
NASA Technical Reports Server (NTRS)
Snyder, James; Rodriguez, Tony
1991-01-01
Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh does not match the predictions of this simple model very well.
NASA Astrophysics Data System (ADS)
Wang, Chengpeng; Li, Fuguo; Liu, Juncheng
2018-04-01
The objectives of this work are to study the deformational features, textures, microstructures, and dislocation configurations of ultrafine-grained copper processed by elliptical cross-section spiral equal-channel extrusion (ECSEE). The deformation patterns of simple shear and pure shear in the ECSEE process were evaluated with the analytical method of geometric strain. The influence of the main technical parameters of the ECSEE die on the effective strain distribution on the surface of ECSEE-fabricated samples was examined by finite element simulation. A high friction factor could improve the effective strain accumulation of material deformation. Moreover, the pure copper sample fabricated by ECSEE shows a strong rotated cube shear texture. The refining mechanism of the dislocation deformation is dominant in copper processed by a single pass of ECSEE. The inhomogeneity of the micro-hardness distribution on the longitudinal section of the ECSEE-fabricated sample is consistent with the strain and microstructure distribution features.
Needs assessment under the Maternal and Child Health Services Block Grant: Massachusetts.
Guyer, B; Schor, L; Messenger, K P; Prenney, B; Evans, F
1984-09-01
The Massachusetts maternal and child health (MCH) agency has developed a needs assessment process which includes four components: a statistical measure of need based on indirect, proxy health and social indicators; clinical standards for services to be provided; an advisory process which guides decision making and involves constituency groups; and a management system for implementing funds distribution, namely open competitive bidding in response to a Request for Proposals. In Fiscal Years 1982 and 1983, the process was applied statewide in the distribution of primary prenatal (MIC) and pediatric (C&Y) care services and lead poisoning prevention projects. Both processes resulted in clearer definitions of services to be provided under contract to the state as well as redistribution of funds to serve localities that had previously received no resources. Although the needs assessment process does not provide a direct measure of unmet need in a complex system of private and public services, it can be used to advocate for increased MCH funding and guide the distribution of new MCH service dollars.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
NASA Astrophysics Data System (ADS)
Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping
2017-02-01
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
A distributed computing model for telemetry data processing
NASA Astrophysics Data System (ADS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
A distributed computing model for telemetry data processing
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-01-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Liyin; Wang, Zhen-guo, E-mail: wangzhenguo-wzg@163.com; Li, Qinglian
2015-09-07
Phase Doppler anemometry was applied to investigate the atomization processes of a kerosene jet injected into a Ma = 1.86 crossflow. Physical behaviors, such as breakup and coalescence, are reproduced through the analysis of the spatial distribution of kerosene droplet size. It is concluded that the Sauter mean diameter distribution shape transforms from a "C" type into an "I" type as the atomization develops. Simultaneously, the breakup of large droplets and the coalescence of small droplets can be observed throughout the whole atomization process.
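For reference (not stated in the abstract), the Sauter mean diameter characterizing such sprays is the diameter of a droplet having the same volume-to-surface ratio as the whole ensemble; from the measured droplet diameters d_i it is commonly computed as

\[
D_{32} = \frac{\sum_i n_i d_i^{3}}{\sum_i n_i d_i^{2}},
\]

where n_i is the number of droplets in size class i.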
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Tao; Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang, Sichuan 621900; Zheng, Jian, E-mail: jzheng@ustc.edu.cn
2016-06-15
A 2D cylindrically symmetric model with inclusion of both diffraction and self-focus effects is developed to deal with the stimulated scattering processes of a single hotspot. The calculated results show that the transverse distribution of the scattered light is sensitive to the longitudinal profiles of the plasma parameters. The analysis of the evolution of the scattered light indicates that it is the frequency mismatch of coupling due to the inhomogeneity of plasmas that determines the transverse distribution of the scattered light.
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.
Distribution, movement, and evolution of the volatile elements in the lunar regolith
NASA Technical Reports Server (NTRS)
Gibson, E. K., Jr.
1975-01-01
The abundances and distributions of carbon, nitrogen, and sulfur in lunar soils are reviewed. Carbon and nitrogen have a predominantly extra-lunar origin in lunar soils and breccias, while sulfur is mostly indigeneous to the moon. The lunar processes which effect the movement, distribution, and evolution of carbon, nitrogen, and sulfur, along with the volatile alkali elements sodium, potassium, and rubidium during regolith processes are discussed. Possible mechanisms which may result in the addition to or loss from the moon of these volatile elements are considered.
Self-referenced processing, neurodevelopment and joint attention in autism.
Mundy, Peter; Gwaltney, Mary; Henderson, Heather
2010-09-01
This article describes a parallel and distributed processing model (PDPM) of joint attention, self-referenced processing and autism. According to this model, autism involves early impairments in the capacity for rapid, integrated processing of self-referenced (proprioceptive and interoceptive) and other-referenced (exteroceptive) information. Measures of joint attention have proven useful in research on autism because they are sensitive to the early development of the 'parallel' and integrated processing of self- and other-referenced stimuli. Moreover, joint attention behaviors are a consequence, but also an organizer of the functional development of a distal distributed cortical system involving anterior networks including the prefrontal and insula cortices, as well as posterior neural networks including the temporal and parietal cortices. Measures of joint attention provide early behavioral indicators of atypical development in this parallel and distributed processing system in autism. In addition it is proposed that an early, chronic disturbance in the capacity for integrating self- and other-referenced information may have cascading effects on the development of self awareness in autism. The assumptions, empirical support and future research implications of this model are discussed.
How can model comparison help improving species distribution models?
Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle
2013-01-01
Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.
How Can Model Comparison Help Improving Species Distribution Models?
Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle
2013-01-01
Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779
USDA-ARS?s Scientific Manuscript database
During the grinding and packaging processes, it is important to understand how Shiga toxin-producing Escherichia coli (STEC) would be distributed and how well it could be detected in beef trim. This study is important because it shows what would happen if contaminated meat is allowed into a commerc...
Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)
Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline
2008-01-01
The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...
ERIC Educational Resources Information Center
Grace, Christine Cooper
2017-01-01
This paper explores the potential of incorporating constructs of distributive justice and procedural justice into summative assessment of student learning in higher education. I systematically compare the process used by managers to evaluate employee performance in organizations--performance appraisal (PA)--with processes used by professors to…
Local Anesthetic Microcapsules.
1981-04-15
Front matter table listing: chemical structure of local anesthetics; processing summary of lidocaine microencapsulation; lidocaine microcapsule size distribution; processing summary of etidocaine microencapsulation; etidocaine microcapsule size distribution. Annual report covering 1 July 1980 - 30 March 1981.
NASA Astrophysics Data System (ADS)
El Labban, A.; Mousseau, P.; Bailleul, J. L.; Deterre, R.
2007-04-01
Although numerical simulation has proved to be a useful tool to predict the rubber vulcanization process, few applications to process control have been reported. Because the end-use rubber properties depend on the state of cure distribution in the part thickness, the prediction of the optimal distribution remains a challenge for the rubber industry. The analysis of the vulcanization process requires the determination of the thermal behavior of the material and the cure kinetics. A nonisothermal vulcanization model with nonisothermal induction time is used in this numerical study. Numerical results are obtained for natural rubber (NR) thick-section part curing. A controlled gradient of the state of cure in the part thickness is obtained by a curing process that consists not only of a mold heating phase but also of a forced-convection mold cooling phase in order to stop the vulcanization process and to control the vulcanization distribution. The mold design that allows this control is described. In the heating phase, the state of cure is mainly controlled by the chemical kinetics (the induction time), but in the cooling phase, it is the heat diffusion that controls the state of cure distribution. A comparison among different cooling conditions is shown and a good state of cure gradient control is obtained.
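To make the heating/cooling control idea concrete, here is a minimal sketch (an illustration, not the authors' nonisothermal model with induction time): a generic nth-order cure-kinetics ODE with an Arrhenius rate constant is integrated over an assumed heating-then-cooling mold temperature history, showing how cooling freezes the state of cure. All parameter values are arbitrary assumptions.

```python
# Illustrative sketch only (not the cited model): generic nth-order cure kinetics
# with an Arrhenius rate, integrated over an assumed mold temperature history of
# heating followed by forced-convection cooling. All values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, R, n = 1.0e7, 8.0e4, 8.314, 1.5        # assumed 1/s, J/mol, J/mol/K, reaction order

def mold_temperature(t):
    """Assumed history: hold at 433 K for 600 s, then cool at 0.05 K/s."""
    return 433.0 if t < 600.0 else 433.0 - 0.05 * (t - 600.0)

def cure_rate(t, alpha):
    T = mold_temperature(t)
    k = A * np.exp(-Ea / (R * T))
    return [k * max(1.0 - alpha[0], 0.0) ** n]

sol = solve_ivp(cure_rate, (0.0, 1800.0), [0.0], max_step=1.0)
print(f"state of cure after heating: {np.interp(600.0, sol.t, sol.y[0]):.3f}")
print(f"state of cure after cooling: {sol.y[0, -1]:.3f}")
```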
Generalized fractional diffusion equations for accelerating subdiffusion and truncated Lévy flights
NASA Astrophysics Data System (ADS)
Chechkin, A. V.; Gonchar, V. Yu.; Gorenflo, R.; Korabel, N.; Sokolov, I. M.
2008-08-01
Fractional diffusion equations are widely used to describe anomalous diffusion processes where the characteristic displacement scales as a power of time. For processes lacking such scaling the corresponding description may be given by diffusion equations with fractional derivatives of distributed order. Such equations were introduced in A. V. Chechkin, R. Gorenflo, and I. Sokolov [Phys. Rev. E 66, 046129 (2002)] for the description of the processes getting more anomalous in the course of time (decelerating subdiffusion and accelerating superdiffusion). Here we discuss the properties of diffusion equations with fractional derivatives of the distributed order for the description of anomalous relaxation and diffusion phenomena getting less anomalous in the course of time, which we call, respectively, accelerating subdiffusion and decelerating superdiffusion. For the former process, by taking a relatively simple particular example with two fixed anomalous diffusion exponents we show that the proposed equation effectively describes the subdiffusion phenomenon with diffusion exponent varying in time. For the latter process we demonstrate by a particular example how the power-law truncated Lévy stable distribution evolves in time to the distribution with power-law asymptotics and Gaussian shape in the central part. The special case of two different orders is characteristic for the general situation in which the extreme orders dominate the asymptotics.
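As a schematic reminder (a paraphrase, not a quotation from the paper), a distributed-order time-fractional diffusion equation in the "natural" (Caputo) form can be written as

\[
\int_0^1 p(\beta)\, \frac{\partial^{\beta} u(x,t)}{\partial t^{\beta}}\, d\beta = K\, \frac{\partial^{2} u(x,t)}{\partial x^{2}},
\]

where p(\beta) is a normalized weight over the fractional orders; concentrating p(\beta) on two orders \beta_1 < \beta_2 recovers the two-exponent example discussed in the abstract.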
DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data.
Putri, Fadhilah Kurnia; Song, Giltae; Kwon, Joonho; Rao, Praveen
2017-09-25
One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query ( DISPAQ ) which efficiently identifies profitable areas by exploiting the Apache Software Foundation's Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data.
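The Z-Skyline refinement mentioned above rests on a Z-order (Morton) space-filling curve. The snippet below is a minimal, hypothetical sketch of how a latitude/longitude pair can be quantized and bit-interleaved into a Z-order key; it is not code from the DISPAQ system, and the grid resolution is an arbitrary assumption.

```python
# Minimal sketch of a Z-order (Morton) key, the space-filling curve underlying
# the Z-Skyline step described above. Quantization ranges and bit width are
# illustrative assumptions; this is not code from DISPAQ.
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y into one Morton code."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def zorder_key(lat: float, lon: float, bits: int = 16) -> int:
    """Quantize a lat/lon pair to a grid and return its Z-order key."""
    qx = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    qy = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    return interleave_bits(qx, qy, bits)

print(hex(zorder_key(37.57, 126.98)))  # example: a hypothetical pickup-area cell
```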
DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data †
Putri, Fadhilah Kurnia; Song, Giltae; Rao, Praveen
2017-01-01
One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query (DISPAQ) which efficiently identifies profitable areas by exploiting the Apache Software Foundation’s Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data. PMID:28946679
Compiling software for a hierarchical distributed processing system
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2013-12-31
Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
Competitive cluster growth in complex networks.
Moreira, André A; Paula, Demétrius R; Costa Filho, Raimundo N; Andrade, José S
2006-06-01
In this work we propose an idealized model for competitive cluster growth in complex networks. Each cluster can be thought of as a fraction of a community that shares some common opinion. Our results show that the cluster size distribution depends on the particular choice for the topology of the network of contacts among the agents. As an application, we show that the cluster size distributions obtained when the growth process is performed on hierarchical networks, e.g., the Apollonian network, have a scaling form similar to what has been observed for the distribution of the number of votes in an electoral process. We suggest that this similarity may be due to the fact that social networks involved in the electoral process may also possess an underlying hierarchical structure.
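As a rough illustration of the growth rule (a toy sketch under simplifying assumptions, using an Erdős-Rényi graph rather than the hierarchical/Apollonian networks studied in the paper): several seed clusters repeatedly claim unclaimed neighboring nodes until no cluster can grow, and the resulting cluster sizes are tallied.

```python
# Toy illustration (not the paper's exact model): competitive growth of several
# seed clusters on an Erdos-Renyi random graph. Each cluster repeatedly claims
# unclaimed neighbours of its current frontier; graph model, size and seed
# count are arbitrary assumptions.
import random
from collections import Counter
import networkx as nx

def competitive_growth(G, n_seeds=10, rng=None):
    rng = rng or random.Random(1)
    label, frontier = {}, {}
    for s, node in enumerate(rng.sample(list(G.nodes), n_seeds)):
        label[node] = s
        frontier[s] = [node]
    while any(frontier.values()):
        for s in frontier:
            grown = []
            for u in frontier[s]:
                for v in G.neighbors(u):
                    if v not in label:
                        label[v] = s
                        grown.append(v)
            frontier[s] = grown
    return Counter(label.values())

G = nx.erdos_renyi_graph(2000, 0.003, seed=42)
sizes = competitive_growth(G)
print(sorted(sizes.values(), reverse=True))    # cluster size distribution
```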
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
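The exponential/Poisson relationship derived in the paper can be checked numerically in a few lines (a sketch with an arbitrary rate and window length, not the nuclear-counting demonstration used in the course): exponential inter-event times of a constant-rate Markov process imply Poisson-distributed counts in fixed windows.

```python
# Quick numerical check of the exponential <-> Poisson relationship for a
# constant-rate Markov process. Rate and window length are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rate, T, n_windows = 4.0, 1.0, 20000

waits = rng.exponential(1.0 / rate, size=int(2 * rate * T * n_windows))
times = np.cumsum(waits)                               # event times
edges = np.arange(0.0, (n_windows + 1) * T, T)         # window boundaries
counts, _ = np.histogram(times, bins=edges)

k = np.arange(12)
empirical = np.array([(counts == i).mean() for i in k])
theory = stats.poisson.pmf(k, rate * T)
print(np.round(empirical, 3))
print(np.round(theory, 3))
```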
NASA Astrophysics Data System (ADS)
Boer, Marie
2017-09-01
Generalized Parton Distributions (GPDs) contain the correlation between the partons' longitudinal momentum and their transverse distribution. They are accessed through hard exclusive processes, such as Deeply Virtual Compton Scattering (DVCS). DVCS has already been measured in several experiments and several models allow for extracting GPDs from these measurements. Timelike Compton Scattering (TCS) is, at leading order, the time-reversal equivalent process to DVCS and accesses GPDs at the same kinematics. Comparing GPDs extracted from DVCS and TCS is a unique way to prove GPD universality. Combining fits from the two processes will also allow for better constraining the GPDs. We will present our method for extracting GPDs from DVCS and TCS pseudo-data. We will compare fit results from the two processes in similar conditions and present what can be expected in terms of constraints on GPDs from combined fits.
NASA Astrophysics Data System (ADS)
Rydalevskaya, Maria A.; Voroshilova, Yulia N.
2018-05-01
Vibrationally non-equilibrium flows of chemically homogeneous diatomic gases are considered under the conditions that the distribution of the molecules over vibrational levels differs significantly from the Boltzmann distribution. In such flows, molecular collisions can be divided into two groups: the first group corresponds to "rapid" microscopic processes whereas the second one corresponds to "slow" microscopic processes (their rate is comparable to or larger than that of gasdynamic parameters variation). The collisions of the first group form quasi-stationary vibrationally non-equilibrium distribution functions. The model kinetic equations are used to study the transport processes under these conditions. In these equations, the BGK-type approximation is used to model only the collision operators of the first group. It allows us to simplify derivation of the transport fluxes and calculation of the kinetic coefficients. Special attention is given to the connection between the formulae for the bulk viscosity coefficient and the sound velocity square.
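Schematically (a sketch of the standard BGK-type relaxation closure, with the rapid/slow splitting assumed as described above, not the authors' exact equations), the model kinetic equation takes the form

\[
\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{r}} f = \frac{f^{(0)} - f}{\tau} + J_{\mathrm{slow}}[f],
\]

where f^{(0)} is the quasi-stationary vibrationally non-equilibrium distribution maintained by the rapid processes, \tau is a relaxation time, and J_{\mathrm{slow}} collects the slow collision terms.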
Stationary Size Distributions of Growing Cells with Binary and Multiple Cell Division
NASA Astrophysics Data System (ADS)
Rading, M. M.; Engel, T. A.; Lipowsky, R.; Valleriani, A.
2011-10-01
Populations of unicellular organisms that grow under constant environmental conditions are considered theoretically. The size distribution of these cells is calculated analytically, both for the usual process of binary division, in which one mother cell always produces two daughter cells, and for the more complex process of multiple division, in which one mother cell can produce 2^n daughter cells with n = 1, 2, 3, … . The latter mode of division is inspired by the unicellular alga Chlamydomonas reinhardtii. The uniform response of the whole population to different environmental conditions is encoded in the individual rates of growth and division of the cells. The analytical treatment of the problem is based on size-dependent rules for cell growth and stochastic transition processes for cell division. The comparison between binary and multiple division shows that these different division processes lead to qualitatively different results for the size distribution and the population growth rates.
Proton spin structure from measurable parton distributions.
Ji, Xiangdong; Xiong, Xiaonu; Yuan, Feng
2012-10-12
We present a systematic study of the proton spin structure in terms of measurable parton distributions. For a transversely polarized proton, we derive a polarization sum rule from the leading generalized parton distributions appearing in hard exclusive processes. For a longitudinally polarized proton, we obtain a helicity decomposition from well-known quark and gluon helicity distributions and orbital angular-momentum contributions. The latter are shown to be related to measurable subleading generalized parton distributions and quantum-phase space Wigner distributions.
Reward skewness coding in the insula independent of probability and loss
Tobler, Philippe N.
2011-01-01
Rewards in the natural environment are rarely predicted with complete certainty. Uncertainty relating to future rewards has typically been defined as the variance of the potential outcomes. However, the asymmetry of predicted reward distributions, known as skewness, constitutes a distinct but neuroscientifically underexplored risk term that may also have an impact on preference. By changing only reward magnitudes, we study skewness processing in equiprobable ternary lotteries involving only gains and constant probabilities, thus excluding probability distortion or loss aversion as mechanisms for skewness preference formation. We show that individual preferences are sensitive to not only the mean and variance but also to the skewness of predicted reward distributions. Using neuroimaging, we show that the insula, a structure previously implicated in the processing of reward-related uncertainty, responds to the skewness of predicted reward distributions. Some insula responses increased in a monotonic fashion with skewness (irrespective of individual skewness preferences), whereas others were similarly elevated to both negative and positive as opposed to no reward skew. These data support the notion that the asymmetry of reward distributions is processed in the brain and, taken together with replicated findings of mean coding in the striatum and variance coding in the cingulate, suggest that the brain codes distinct aspects of reward distributions in a distributed fashion. PMID:21849610
Transverse-momentum-dependent gluon distributions from JIMWLK evolution
NASA Astrophysics Data System (ADS)
Marquet, C.; Petreska, E.; Roiesnel, C.
2016-10-01
Transverse-momentum-dependent (TMD) gluon distributions have different operator definitions, depending on the process under consideration. We study that aspect of TMD factorization in the small-x limit, for the various unpolarized TMD gluon distributions encountered in the literature. To do this, we consider di-jet production in hadronic collisions, since this process allows us to be exhaustive with respect to the possible operator definitions, and is suitable to be investigated at small x. Indeed, for forward and nearly back-to-back jets, one can apply both the TMD factorization and Color Glass Condensate (CGC) approaches to compute the di-jet cross-section, and compare the results. Doing so, we show that both descriptions coincide, and we show how to express the various TMD gluon distributions in terms of CGC correlators of Wilson lines, while keeping N_c finite. We then proceed to evaluate them by solving the JIMWLK equation numerically. We find that at large transverse momentum, the process dependence essentially disappears, while at small transverse momentum, non-linear saturation effects impact the various TMD gluon distributions in very different ways. We notice the presence of a geometric scaling regime for all the TMD gluon distributions studied: the "dipole" one, the Weizsäcker-Williams one, and the six others involved in forward di-jet production.
van Thienen, P; Vreeburg, J H G; Blokker, E J M
2011-02-01
Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.
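For orientation (a textbook estimate, not taken from the paper), the gravitational settling referred to above can be gauged from the Stokes settling velocity

\[
v_s = \frac{(\rho_p - \rho_f)\, g\, d^2}{18\, \mu};
\]

for an assumed sediment particle of d = 50 μm with \rho_p = 2650 kg/m^3 in water (\rho_f = 1000 kg/m^3, \mu = 10^{-3} Pa·s), this gives v_s ≈ 2.2 mm/s, illustrating how strongly settling grows with particle size and why the larger particles are the ones for which additional radial transport mechanisms are weighed against settling.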
76 FR 22366 - Certain Polyester Staple Fiber From Taiwan: Preliminary Results of Antidumping Duty...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
... marketing stages involved in making its reported home-market and U.S. sales for each channel of distribution. FENC reported one channel of distribution (i.e., direct sales to distributers) and a single level of... sales, we examine stages in the marketing process and selling functions along the chain of distribution...
NASA Astrophysics Data System (ADS)
Zens, A.; Gnedel, M.; Zaeh, M. F.; Haider, F.
2018-06-01
Friction Stir Processing (FSP) can be used to locally modify properties in materials such as aluminium. This may be used, for example, to produce a fine microstructure or to integrate secondary elements into the base material. The purpose of this work is to examine the effect of the properties of the metal additives on the resulting material distribution in the processed region. For this, commercially pure iron and copper were integrated into an EN AW-1050 aluminium base material using FSP. Iron in the form of powder, wire and foil as well as copper in powder form were assessed. The various additive forms represent materials with differing surface-to-volume ratios as well as varying dispersion characteristics in the processing zone. The processing parameters for each additive form remained constant; however, two- and four-pass FSP processes were conducted. The results of CT analysis proved especially insightful regarding the spatial distribution of the various additive forms within the workpiece. As expected, the powder additive was most widely distributed within the welding zone. Micro-hardness mappings showed that the powder additive increased the hardness within the weld nugget in comparison to the processed material without secondary elements.
Flexible and fast: linguistic shortcut affects both shallow and deep conceptual processing.
Connell, Louise; Lynott, Dermot
2013-06-01
Previous research has shown that people use linguistic distributional information during conceptual processing, and that it is especially useful for shallow tasks and rapid responding. Using two conceptual combination tasks, we showed that this linguistic shortcut extends to the processing of novel stimuli, is used in both successful and unsuccessful conceptual processing, and is evident in both shallow and deep conceptual tasks. Specifically, as predicted by the ECCo theory of conceptual combination, people use the linguistic shortcut as a "quick-and-dirty" guide to whether the concepts are likely to combine into a coherent conceptual representation, in both shallow sensibility judgment and deep interpretation generation tasks. Linguistic distributional frequency predicts both the likelihood and the time course of rejecting a novel word compound as nonsensical or uninterpretable. However, it predicts the time course of successful processing only in shallow sensibility judgment, because the deeper conceptual process of interpretation generation does not allow the linguistic shortcut to suffice. Furthermore, the effects of linguistic distributional frequency are independent of any effects of conventional word frequency. We discuss the utility of the linguistic shortcut as a cognitive triage mechanism that can optimize processing in a limited-resource conceptual system.
Bao, Yan; Yang, Taoxi; Lin, Xiaoxiong; Pöppel, Ernst
2016-09-01
Differences of reaction times to specific stimulus configurations are used as indicators of cognitive processing stages. In this classical experimental paradigm, continuous temporal processing is implicitly assumed. Multimodal response distributions indicate, however, discrete time sampling, which is often masked by experimental conditions. Differences in reaction times reflect discrete temporal mechanisms that are pre-semantically implemented and suggested to be based on entrained neural oscillations. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Encounter Detection Using Visual Analytics to Improve Maritime Domain Awareness
2015-06-01
Records are assigned for processing to record sets consisting of all records within a box of one degree of latitude by one degree of longitude; a prototype further reduces processing by using boxes of one degree of latitude by a tenth of a degree of longitude. Approved for public release; distribution is unlimited. The report describes a visual analytics process.
Using process groups to implement failure detection in asynchronous environments
NASA Technical Reports Server (NTRS)
Ricciardi, Aleta M.; Birman, Kenneth P.
1991-01-01
Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.
A model for the distributed storage and processing of large arrays
NASA Technical Reports Server (NTRS)
Mehrota, P.; Pratt, T. W.
1983-01-01
A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, by the method of moments, the distribution parameters were estimated. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
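A minimal Monte Carlo sketch of the same kind of experiment (with assumed values; the method-of-moments estimation of the report is not reproduced): simulate a stationary Gaussian first-order autoregressive process with a chosen autocorrelation parameter and estimate how long excursions above a crossing level last.

```python
# Illustrative sketch: stationary Gaussian AR(1) process and the durations of
# its excursions above a crossing level. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(3)
rho, level, n = 0.95, 1.0, 200_000

x = np.empty(n)
x[0] = rng.standard_normal()
noise = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)   # keeps unit variance
for i in range(1, n):
    x[i] = rho * x[i - 1] + noise[i]

def run_lengths(mask):
    """Lengths of consecutive True runs in a boolean sequence."""
    lengths, count = [], 0
    for m in mask:
        if m:
            count += 1
        elif count:
            lengths.append(count)
            count = 0
    if count:
        lengths.append(count)
    return np.array(lengths)

runs = run_lengths(x > level)
tau = 20
print(f"mean excursion length above level {level}: {runs.mean():.2f} steps")
print(f"fraction of excursions lasting >= {tau} steps: {(runs >= tau).mean():.3f}")
```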
NASA Astrophysics Data System (ADS)
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.
2018-04-01
Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed based on species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to species' fundamental niches. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), both considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that the species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances alone.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) images based on an Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition are used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
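The practical point of replacing the Gaussian model can be illustrated with a short sketch (with assumed rather than fitted parameters, and not the paper's full detection pipeline): for a target false-alarm probability, the threshold implied by a Gaussian fit to heavy-tailed clutter typically misrepresents the tail, while the quantile of the heavy-tailed model that generated the samples does not.

```python
# Illustrative sketch of why the clutter model matters for CFAR thresholding.
# The alpha-stable parameters are arbitrary assumptions, not SAR sea-clutter fits.
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(7)
alpha, beta = 1.6, 0.0                 # assumed symmetric alpha-stable clutter model
pfa = 1e-4                             # target false-alarm probability

clutter = levy_stable.rvs(alpha, beta, size=50_000, random_state=rng)

t_gauss = norm.ppf(1.0 - pfa, loc=clutter.mean(), scale=clutter.std())
t_stable = levy_stable.ppf(1.0 - pfa, alpha, beta)

print(f"Gaussian-based threshold {t_gauss:8.2f}, empirical Pfa {(clutter > t_gauss).mean():.2e}")
print(f"alpha-stable threshold   {t_stable:8.2f}, empirical Pfa {(clutter > t_stable).mean():.2e}")
```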
Montana Curriculum Guidelines for Distributive Education. Revised.
ERIC Educational Resources Information Center
Harris, Ron, Ed.
These distributive education curriculum guidelines are intended to provide Montana teachers with teaching information for 11 units. Units cover introduction to marketing and distributive education, human relations and communications, operations and control, processes involved in buying for resale, merchandise handling, sales promotion, sales and…
NASA Astrophysics Data System (ADS)
Lan, Hengxing; Derek Martin, C.; Lim, C. H.
2007-02-01
Geographic information system (GIS) modeling is used in combination with three-dimensional (3D) rockfall process modeling to assess rockfall hazards. A GIS extension, RockFall Analyst (RA), which is capable of effectively handling large amounts of geospatial information relative to rockfall behaviors, has been developed in ArcGIS using ArcObjects and C#. The 3D rockfall model considers dynamic processes on a cell plane basis. It uses inputs of distributed parameters in terms of raster and polygon features created in GIS. Two major components are included in RA: particle-based rockfall process modeling and geostatistics-based rockfall raster modeling. Rockfall process simulation results, 3D rockfall trajectories and their velocity features either for point seeders or polyline seeders are stored in 3D shape files. Distributed raster modeling, based on 3D rockfall trajectories and a spatial geostatistical technique, represents the distribution of spatial frequency, the flying and/or bouncing height, and the kinetic energy of falling rocks. A distribution of rockfall hazard can be created by taking these rockfall characteristics into account. A barrier analysis tool is also provided in RA to aid barrier design. An application of these modeling techniques to a case study is provided. The RA has been tested in ArcGIS 8.2, 8.3, 9.0 and 9.1.
Porosity characterization of biodegradable porous poly (L-lactic acid) electrospun nanofibers
NASA Astrophysics Data System (ADS)
Valipouri, Afsaneh; Gharehaghaji, Ali Akbar; Alirezazadeh, Azam; Ravandi, Seyed Abdolkarim Hosseini
2017-12-01
Poly-L lactic acid (PLLA) is one of the most widely used fibers in biomedical applications as a biodegradable and biocompatible material. Porosity and fiber diameter distribution are governing factors that determine the performance of nanofibers. The present work investigates the process parameters that affect the porosity and diameter distribution of PLLA nanofibers. PLLA nanofibers were fabricated by electrospinning a solution of PLLA polymer in dichloromethane (DCM). Nanofibers with various fiber diameter distributions and porosities were made by changing process parameters such as spinning distance (5, 10 and 15 cm), voltage (11 and 15 kV), solution concentration (10, 11 and 12 wt%) and feeding rate (0.3, 0.4 and 0.7 ml h-1). Image processing techniques (with Matlab R2017), surface analysis (with Mountainsmap7) and diameter distribution analysis (with Measurement software) were used to examine the surface morphology of the samples. The results showed that the fiber diameter distribution becomes wider with increasing applied voltage and decreasing spinning distance. On the other hand, coarse fibers possessed larger but more irregular and fewer pores in comparison to fine fibers. The most uniform nano-web with highly porous nanofibers was attained with the process parameters set at a voltage of 11 kV, spinning distance of 15 cm, feeding rate of 0.4 ml h-1 and solution concentration of 10 wt%.
Study of temperature distributions in wafer exposure process
NASA Astrophysics Data System (ADS)
Lin, Zone-Ching; Wu, Wen-Jang
During the exposure process of photolithography, the wafer absorbs the exposure energy, which results in a rising temperature and thermal expansion. This phenomenon was often neglected due to its limited effect in previous process generations; however, in the new process generation it may well become a factor to be considered. In this paper, a finite element model for analyzing the transient behavior of the wafer temperature distribution during exposure was established under the assumption that the wafer is clamped by a vacuum chuck without warpage. The model is capable of simulating the distribution of the wafer temperature under different exposure conditions. The analysis begins with the simulation of the transient behavior in a single exposure region and then investigates the wafer temperature distribution under continuous exposure for varying exposure energy, interval between exposure locations, and interval of exposure time. The simulation results indicate that widening the interval between exposure locations has a greater impact on improving the wafer temperature distribution than extending the interval of exposure time between neighboring image fields. Moreover, as long as the distance between the field centers of two neighboring exposure regions exceeds a straight-line distance equal to three image-field widths, the interacting thermal effect during wafer exposure can be ignored. The analysis flow proposed in this paper can serve as a supporting reference for engineers in planning exposure paths.
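To give a feel for the magnitudes involved, here is a toy one-dimensional finite-difference sketch (not the paper's finite element model) of transient conduction through the wafer thickness, with an assumed absorbed exposure flux at the top surface and the chuck held at a fixed temperature underneath. Every material value and the flux are rough assumptions.

```python
# Toy 1D finite-difference sketch (not the cited FE model): transient heating of
# a silicon wafer through its thickness with an assumed absorbed exposure flux on
# the top surface and a fixed chuck temperature at the bottom.
import numpy as np

k, rho, cp = 150.0, 2330.0, 700.0        # W/m/K, kg/m3, J/kg/K (approx. silicon)
thickness, nz = 775e-6, 50               # 775 um wafer, 50 cells
dz = thickness / nz
diff = k / (rho * cp)                    # thermal diffusivity
dt = 0.4 * dz**2 / diff                  # explicit stability limit
flux, t_expose = 2.0e4, 0.05             # assumed absorbed flux (W/m2) and exposure time (s)

T = np.full(nz, 295.0)                   # start at chuck temperature (K)
t = 0.0
while t < t_expose:
    Tn = T.copy()
    Tn[1:-1] += diff * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + flux * dz / k        # crude flux boundary at the exposed surface
    Tn[-1] = 295.0                       # chuck held at constant temperature
    T, t = Tn, t + dt
print(f"surface temperature rise after one field exposure: {T[0] - 295.0:.3f} K")
```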
Volcanic Ash Data Assimilation System for Atmospheric Transport Model
NASA Astrophysics Data System (ADS)
Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.
2017-12-01
The Japan Meteorological Agency (JMA) has two operations for volcanic ash forecasts, the Volcanic Ash Fall Forecast (VAFF) and the Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models including the advection process, the turbulent diffusion process, the gravitational fall process and the (wet/dry) deposition process. The initial distribution of volcanic ash in the models is the most important but also the most uncertain factor. In operations, the model of Suzuki (1983), with many empirical assumptions, is adopted for the initial distribution. This adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radar and meteorological satellite observations in order to improve the initial distribution of the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational data assimilation method (3D-Var). The analysis variables are ash concentration and size distribution parameters, which are mutually independent. The radar observations are expected to provide three-dimensional parameters such as ash concentration and parameters of the ash particle size distribution. On the other hand, the satellite observations are anticipated to provide two-dimensional parameters of ash clouds such as mass loading, top height and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear of the JMA numerical weather prediction, and apply it to the volcanic ash data assimilation system.
Universal Distribution of Litter Decay Rates
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2008-12-01
Degradation of litter is the result of many physical, chemical and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best-fitting distribution p(k) with the fewest degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e. a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by a recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification and soil respiration have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find that the mean correlates strongly with temperature and precipitation, while the variance appears to be uncorrelated with the main environmental factors and is thus likely more correlated with chemical composition and/or ecology. The results indicate the possibility that the distribution of rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507 (1991). [2] M. Harmon, Forest Science Data Bank: TD023 [Database]. LTER Intersite Fine Litter Decomposition Experiment (LIDET): Long-Term Ecological Research (2007). [3] G. A. Beattie, S. E. Lindow, Phytopathology 89, 353 (1999). [4] R. A. May, Ecology and Evolution of Communities, A Pattern of Species Abundance and Diversity, 81 (1975). [5] T. B. Parkin, J. A. Robinson, Advances in Soil Science 20, Analysis of Lognormal Data, 194 (1992).
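As an illustration of the superposition idea described above, the following Python sketch builds a mass-time history from exponential decays whose rates k are drawn from a lognormal p(k), and then recovers the mean and variance of log k by least squares. The data, parameter values and fitting routine are hypothetical stand-ins, not the LIDET data or the regularized inversion used in the study.

```python
import numpy as np
from scipy import optimize

def remaining_mass(t, mu, sigma, n_rates=2000, seed=0):
    """Mass remaining at times t for a continuum of exponential decays whose
    rates k are lognormally distributed (mean mu, std sigma of log k)."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=mu, sigma=sigma, size=n_rates)
    # Superposition: average of exp(-k t) over the sampled rate distribution.
    return np.exp(-np.outer(t, k)).mean(axis=1)

# Synthetic "observed" mass-time history (stands in for a litter data set).
t_obs = np.linspace(0.0, 10.0, 25)                  # years
m_obs = remaining_mass(t_obs, mu=-1.0, sigma=1.2)   # hypothetical true parameters

# Recover the lognormal parameters of p(k) by least squares.
def residuals(params):
    mu, sigma = params
    return remaining_mass(t_obs, mu, abs(sigma)) - m_obs

fit = optimize.least_squares(residuals, x0=[0.0, 0.5])
mu_hat, sigma_hat = fit.x[0], abs(fit.x[1])
print(f"estimated mean of log k: {mu_hat:.2f}, std of log k: {sigma_hat:.2f}")
```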
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
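The hierarchical Dirichlet process model itself is not reproduced here, but the following Python sketch shows the general flavor of Dirichlet-process mixture density estimation on synthetic (phi, psi) angle pairs, using scikit-learn's truncated Dirichlet-process Gaussian mixture. It is a simplified, non-hierarchical stand-in for the authors' method, ignores the periodicity of the angles, and uses made-up data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Synthetic (phi, psi) pairs in degrees, mimicking two Ramachandran basins.
alpha = rng.normal(loc=[-63.0, -43.0], scale=10.0, size=(600, 2))   # helix-like
beta = rng.normal(loc=[-120.0, 130.0], scale=15.0, size=(400, 2))   # sheet-like
angles = np.vstack([alpha, beta])

# Truncated Dirichlet-process mixture of Gaussians: the number of effective
# components is inferred from the data rather than fixed in advance.
dpmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(angles)

# Evaluate the estimated density on a grid covering the Ramachandran map.
phi, psi = np.meshgrid(np.linspace(-180, 180, 73), np.linspace(-180, 180, 73))
grid = np.column_stack([phi.ravel(), psi.ravel()])
log_density = dpmm.score_samples(grid).reshape(phi.shape)
print("log-density peak:", log_density.max())
```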
Kalvelage, Thomas A.; Willems, Jennifer
2005-01-01
The US Geological Survey's EROS Data Center (EDC) hosts the Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC supports NASA's Earth Observing System (EOS), which is a series of polar-orbiting and low inclination satellites for long-term global observations of the land surface, biosphere, solid Earth, atmosphere, and oceans. The EOS Data and Information Systems (EOSDIS) was designed to acquire, archive, manage and distribute Earth observation data to the broadest possible user community. The LP DAAC is one of four DAACs that utilize the EOSDIS Core System (ECS) to manage and archive their data. Since the ECS was originally designed, significant changes have taken place in technology, user expectations, and user requirements. Therefore the LP DAAC has implemented additional systems to meet the evolving needs of scientific users, tailored to an integrated working environment. These systems provide a wide variety of services to improve data access and to enhance data usability through subsampling, reformatting, and reprojection. These systems also support the wide breadth of products that are handled by the LP DAAC. The LP DAAC is the primary archive for the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data; it is the only facility in the United States that archives, processes, and distributes data from the Advanced Spaceborne Thermal Emission/Reflection Radiometer (ASTER) on NASA's Terra spacecraft; and it is responsible for the archive and distribution of "land products" generated from data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra and Aqua satellites.
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Nikzad, Shouleh (Inventor); Monacos, Steve P. (Inventor); Hoenk, Michael E. (Inventor)
2013-01-01
Apparatuses and methods are disclosed that create a synthetic fovea in order to identify and highlight interesting portions of an image for further processing and rapid response. Synthetic foveal imaging implements a parallel processing architecture that uses reprogrammable logic to implement embedded, distributed, real-time foveal image processing from different sensor types while simultaneously allowing for lossless storage and retrieval of raw image data. Real-time, distributed, adaptive processing of multi-tap image sensors with coordinated processing hardware used for each output tap is enabled. In mosaic focal planes, a parallel-processing network can be implemented that treats the mosaic focal plane as a single ensemble rather than a set of isolated sensors. Various applications are enabled for imaging and robotic vision where processing and responding to enormous amounts of data quickly and efficiently is important.
A cost-effective line-based light-balancing technique using adaptive processing.
Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min
2006-09-01
Camera imaging systems are widely used; however, the displayed image often exhibits an uneven light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text image processing, we first estimate the background level and then process each pixel with a non-uniform gain. This algorithm can balance the light distribution while keeping a high contrast in the image. For graph image processing, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the method applicable to real-time systems.
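A minimal sketch of the text-image branch, assuming the background level is estimated with a coarse moving-average filter and each pixel is scaled by a non-uniform gain toward a common target level. The window size, target level and synthetic page are illustrative choices, not the parameters of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def balance_text_image(img, window=65, target=None, eps=1e-3):
    """Estimate a smooth background level and apply a non-uniform per-pixel gain
    so that the background becomes flat while local contrast is preserved."""
    img = img.astype(float)
    background = uniform_filter(img, size=window)     # coarse illumination estimate
    if target is None:
        target = background.mean()                    # desired uniform light level
    gain = target / np.maximum(background, eps)       # larger gain where darker
    return np.clip(img * gain, 0, 255)

# Usage on a synthetic unevenly lit "page": dark gradient across the image.
page = np.full((200, 300), 220.0)
page *= np.linspace(0.4, 1.0, 300)[None, :]           # uneven illumination
page[80:120, 100:200] -= 120                          # dark "text" block
balanced = balance_text_image(page)
print(balanced[:, :50].mean(), balanced[:, -50:].mean())  # roughly equal background
```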
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
Control charts are established as one of the most powerful tools in Statistical Process Control (SPC) and are widely used in industry. Conventional control charts rely on the normality assumption, which is not always the case for industrial data. This paper proposes a new S control chart for monitoring process dispersion using the skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with that of various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), the skewness correction R chart (SC-R), the weighted variance R chart (WV-R), the weighted variance S chart (WV-S) and the standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detections is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control performance (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion in terms of both Type I error and probability of detecting a shift.
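The skewness-correction limits themselves are not reproduced here; instead, the sketch below illustrates the kind of Monte Carlo comparison the paper performs, estimating the in-control false alarm rate of a standard three-sigma S chart when the data are normal versus Weibull (right-skewed). The sample size, subgroup counts and Weibull shape parameter are arbitrary illustrative values.

```python
import numpy as np
from scipy.special import gammaln

def c4(n):
    # Unbiasing constant for the sample standard deviation.
    return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

def s_chart_false_alarm_rate(sampler, n=5, m_phase1=50, m_phase2=10000, seed=0):
    """Monte Carlo estimate of the in-control false alarm rate of a standard
    three-sigma S chart when the process actually follows `sampler`."""
    rng = np.random.default_rng(seed)
    phase1 = sampler(rng, size=(m_phase1, n))
    sigma_hat = phase1.std(axis=1, ddof=1).mean() / c4(n)   # s-bar / c4
    center = c4(n) * sigma_hat
    half_width = 3.0 * sigma_hat * np.sqrt(1.0 - c4(n) ** 2)
    lcl, ucl = max(center - half_width, 0.0), center + half_width
    s2 = sampler(rng, size=(m_phase2, n)).std(axis=1, ddof=1)
    return np.mean((s2 < lcl) | (s2 > ucl))

normal = lambda rng, size: rng.normal(0.0, 1.0, size)
weibull = lambda rng, size: rng.weibull(1.2, size)           # right-skewed data
print("normal :", s_chart_false_alarm_rate(normal))
print("weibull:", s_chart_false_alarm_rate(weibull))
```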
Distributed parameterization of complex terrain
NASA Astrophysics Data System (ADS)
Band, Lawrence E.
1991-03-01
This paper addresses the incorporation of high resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACM limit the number of independent model runs and parameterizations that are feasible for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.
Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah
2011-01-01
This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in the magnetic flux distribution are detected by the 1-D Hall sensor array. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit before the real-time images are displayed on a netbook. Signal processing application software was developed to acquire and process the 1-D Hall sensor array signals and construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shape of specimens such as square, round and triangular shapes is determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also presented to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging to identify various ferromagnetic materials.
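A rough Python sketch of the imaging step, assuming successive 1-D Hall sensor readings are stacked into a 2-D matrix and the embedded object is outlined by flagging pixels whose flux deviates strongly from the background. The synthetic scan data, sensor count and threshold factor are invented for illustration and do not reflect the actual signal processing software.

```python
import numpy as np

def build_magnetic_image(line_scans):
    """Stack successive 1-D Hall sensor array readings (one per scan position)
    into a 2-D matrix of flux-density values."""
    return np.vstack(line_scans)

def detect_shape(image, k=2.0):
    """Flag pixels where the flux leakage deviates strongly from the background,
    giving a binary outline of the embedded ferromagnetic object."""
    background, spread = np.median(image), image.std()
    return np.abs(image - background) > k * spread

# Usage with synthetic data: 40 scan positions, 16 Hall sensors per line,
# with a square-shaped flux perturbation embedded in the middle.
rng = np.random.default_rng(0)
scans = [rng.normal(0.0, 0.02, 16) for _ in range(40)]
for row in range(15, 25):
    scans[row][5:11] += 0.5          # stronger leakage over the specimen
image = build_magnetic_image(scans)
mask = detect_shape(image)
print("detected area (pixels):", int(mask.sum()))
```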
Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Alruwaili, Manal
With the growth of technology, the number of processors is becoming massive. Today's supercomputer processing power will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-level parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not provide: 1) extensions for object distribution that exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; or 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object migration and object cloning; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work on different elements of a distributed data structure concurrently using remote method invocations. I present the new constructs, their grammar and their behavior. The new constructs are explained using simple programs that utilize them.
Distributed Aerodynamic Sensing and Processing Toolbox
NASA Technical Reports Server (NTRS)
Brenner, Martin; Jutte, Christine; Mangalam, Arun
2011-01-01
A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. This DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines correlation between aerodynamic observables (aero forces) and structural dynamics, and allows control authority increase through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as aerodynamic performance of wings for increased range/endurance of manned/ unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.
Thermal analysis of disc brakes using finite element method
NASA Astrophysics Data System (ADS)
Jaenudin, Jamari, J.; Tauviqirrahman, M.
2017-01-01
Disc brakes are components of a vehicle that serve to slow or stop the rotation of the wheel. This paper discusses the phenomenon of heat distribution on the brake disc during braking. The heat distributed over the brake disc is generated by the conversion of kinetic energy into thermal energy. This energy conversion occurs during the braking process due to friction between the surface of the disc and the disc pad, and the resulting frictional heating raises the temperature considerably. This thermal analysis of brake discs is aimed at evaluating the performance of an electric car during the braking process. The aim of this study is to analyze the thermal behavior of the brake discs using the Finite Element Method (FEM) by examining the heat distribution on the brake disc with 3-D modeling. Results obtained from the FEM reflect the effects of the high heat generated by friction between the disc pad and the disc rotor. The results of the simulation study are used to identify the effect of the heat distribution that occurs during the braking process.
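The paper uses a 3-D FEM model; the sketch below is only a 1-D explicit finite-difference analogue showing how a friction heat flux at the rubbing surface produces a transient temperature rise through the disc thickness. The material properties, heat flux and braking time are placeholder values, not those of the study.

```python
import numpy as np

def brake_disc_temperature(t_brake=4.0, thickness=0.012, nx=60,
                           rho=7200.0, cp=480.0, k=50.0, q_flux=4e5, T0=30.0):
    """Explicit finite-difference sketch of 1-D transient conduction through a
    brake disc half-thickness, with a friction heat flux applied at the rubbing
    surface and an insulated mid-plane. Material values are only illustrative."""
    dx = thickness / (nx - 1)
    alpha = k / (rho * cp)
    dt = 0.4 * dx**2 / alpha                 # stability limit for explicit scheme
    T = np.full(nx, T0)
    for _ in range(int(t_brake / dt)):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        # Friction heat flux boundary at the rubbing surface (node 0).
        T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0]) \
               + 2 * dt * q_flux / (rho * cp * dx)
        T[-1] = T[-2]                        # insulated mid-plane
    return T

profile = brake_disc_temperature()
print(f"surface {profile[0]:.1f} C, mid-plane {profile[-1]:.1f} C after braking")
```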
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
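A minimal sketch of bimodal-threshold target acquisition, assuming an Otsu-style threshold (maximum between-class variance) stands in for the bimodal distribution partitioning step, followed by merging of connected regions of interest and extraction of their centroids as acquisition points. The frame and target patterns are synthetic.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, bins=256):
    """Threshold that best separates a bimodal intensity histogram
    (maximizes the between-class variance)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers) / np.maximum(w0, 1e-12)
    mu_total = (p * centers).sum()
    mu1 = (mu_total - np.cumsum(p * centers)) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def acquisition_points(img):
    """Binarize with the bimodal threshold, merge connected regions of interest,
    and return their centroids as candidate target acquisition points."""
    mask = img > otsu_threshold(img)
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Usage: a dark frame containing two bright, roughly bimodal target patterns.
rng = np.random.default_rng(0)
frame = rng.normal(40, 5, (120, 160))
frame[20:40, 30:50] += 120
frame[70:95, 100:130] += 120
print(acquisition_points(frame))
```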
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
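The excess-time statistics are easy to check numerically. The sketch below generates a superposition of uncorrelated one-sided exponential pulses and estimates the rate of up-crossings above a threshold and the average time spent above it by direct counting. The pulse parameters and threshold are illustrative; the analytical expressions of the paper are not reproduced.

```python
import numpy as np

def shot_noise(gamma=10.0, duration=100.0, dt=2e-3, tau=1.0, seed=0):
    """Superposition of uncorrelated one-sided exponential pulses with
    exponentially distributed amplitudes, pulse duration tau and mean pulse
    rate gamma/tau (gamma plays the role of the intermittency parameter)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    n_pulses = rng.poisson(gamma * duration / tau)
    arrivals = rng.uniform(0.0, duration, n_pulses)
    amplitudes = rng.exponential(1.0, n_pulses)
    signal = np.zeros_like(t)
    for t0, a in zip(arrivals, amplitudes):
        after = t >= t0
        signal[after] += a * np.exp(-(t[after] - t0) / tau)
    return t, signal

def excess_time_statistics(t, x, threshold):
    """Rate of up-crossings above the threshold and average time spent above it."""
    above = x > threshold
    up_crossings = np.count_nonzero(~above[:-1] & above[1:])
    total_time = t[-1] - t[0]
    time_above = above.mean() * total_time
    rate = up_crossings / total_time
    avg_excess = time_above / max(up_crossings, 1)
    return rate, avg_excess

t, x = shot_noise()
rate, avg = excess_time_statistics(t, x, threshold=x.mean() + x.std())
print(f"crossing rate {rate:.3f} per unit time, mean time above {avg:.3f}")
```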
Posterior consistency in conditional distribution estimation
Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.
2014-01-01
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Marks, D. G.; Gurney, R. J.
2009-12-01
The spatial organization and scaling relationships of snow distribution in mountain environs is ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations to their scaling. A base model simulation characterized these processes with 10m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - were independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.
Exploiting Virtual Synchrony in Distributed Systems
1987-02-01
... for distributed systems yield the best performance relative to the level of synchronization guaranteed by the primitive. A programmer could then ... synchronization facility. Semaphores: replicated binary and general semaphores. Monitors: monitor lock, condition variables and signals. Deadlock detection: ... We describe applications of a new software abstraction called the virtually synchronous process group. Such a group consists of a set of processes ...
ERIC Educational Resources Information Center
Borowsky, Ron; Besner, Derek
2006-01-01
D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Doescher, Chris
2015-01-01
This presentation discusses 25 years of interactions between NASA and the USGS to manage a Land Processes Distributed Active Archive Center (LPDAAC) for the purpose of providing users access to NASA's rich collection of Earth Science data. The presentation addresses challenges, efforts and metrics on the performance.
Hazard function analysis for flood planning under nonstationarity
NASA Astrophysics Data System (ADS)
Read, Laura K.; Vogel, Richard M.
2016-05-01
The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools that are particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterizing the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
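A hedged numerical illustration of the same idea: annual maxima are drawn from a lognormal distribution whose log-mean drifts upward, the waiting time to the first exceedance of a fixed design level is recorded, and a two-parameter Weibull is fitted to the resulting return periods. The trend rate, variance and design level are invented for the example and are not the values analyzed in the paper.

```python
import numpy as np
from scipy import stats

def return_period_samples(design_level, trend=0.02, mu0=0.0, sigma=0.5,
                          n_series=5000, horizon=500, seed=0):
    """Sample the waiting time T until the first exceedance of a design level for
    annual maxima from a lognormal distribution whose log-mean drifts upward."""
    rng = np.random.default_rng(seed)
    years = np.arange(horizon)
    samples = np.empty(n_series)
    for i in range(n_series):
        x = rng.lognormal(mean=mu0 + trend * years, sigma=sigma)
        exceed = np.nonzero(x > design_level)[0]
        samples[i] = exceed[0] + 1 if exceed.size else horizon
    return samples

# Design level chosen as the stationary 2% annual exceedance flood (50-year event).
design = np.exp(stats.norm.ppf(0.98) * 0.5)
T = return_period_samples(design)
shape, loc, scale = stats.weibull_min.fit(T, floc=0)
print(f"mean return period {T.mean():.1f} years, fitted Weibull shape {shape:.2f}")
```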
CHANGES IN BACTERIAL COMPOSITION OF BIOFILM IN A METROPOLITAN DRINKING WATER DISTRIBUTION SYSTEM
This study examined the development of bacterial biofilms within a metropolitan distribution system. The distribution system is fed with different source water (i.e., groundwater, GW and surface water, SW) and undergoes different treatment processes in separate facilities. The b...
76 FR 70376 - Efficiency and Renewables Advisory Committee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
...-Voltage Dry-Type Distribution Transformers. The Liquid Immersed and Medium-Voltage Dry- Type Group (MV... of distribution transformers, as authorized by the Energy Policy Conservation Act (EPCA) of 1975, as... negotiated rulemaking process to develop proposed energy efficiency standards for distribution transformers...
Spatial distribution visualization of PWM continuous variable-rate spray
USDA-ARS?s Scientific Manuscript database
Chemical application is a dynamic spatial distribution process, during which spray liquid covers the targets with certain thickness and uniformity. Therefore, it is important to study the 2-D and 3-D (dimensional) spray distribution to evaluate spraying quality. The curve-surface generation methods ...
Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.
Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin
2017-11-01
The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly followed a single step with the mechanism function f(α) = (1-α)^3 and the kinetic parameter pair E_0 = 180.5 kJ/mol and A_0 = 1.5 × 10^13 s^-1. The logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although the reaction order n = 3.3 was in accordance with the iso-conversional analysis, the logistic DAEM could not resolve the weight loss features in detail, since it represents the process as a single-step reaction. In contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight fraction distribution in the discrete DAEM reflected the weight loss features, so both models could describe them. Copyright © 2017 Elsevier Ltd. All rights reserved.
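A sketch of how conversion can be computed under a DAEM with a logistic distribution of activation energies, using the kinetic pair quoted above. The logistic scale parameter, heating rate and temperature grid are assumptions made for illustration; the paper's fitting procedure is not reproduced.

```python
import numpy as np

R = 8.314  # J/(mol K)

def daem_conversion(T, A=1.5e13, beta=10 / 60.0, E_mean=180.5e3, E_scale=8e3,
                    n_E=400):
    """Conversion alpha(T) for a distributed activation energy model in which the
    activation energy follows a logistic distribution (location E_mean, scale
    E_scale, both in J/mol). beta is the heating rate in K/s; A in 1/s."""
    E = np.linspace(E_mean - 8 * E_scale, E_mean + 8 * E_scale, n_E)
    z = (E - E_mean) / E_scale
    f_E = np.exp(-z) / (E_scale * (1.0 + np.exp(-z)) ** 2)   # logistic density
    alpha = np.empty(len(T), dtype=float)
    for i, Ti in enumerate(T):
        Tgrid = np.linspace(300.0, Ti, 300)
        # Temperature integral of the Arrhenius rate for every E (columns).
        arr = np.exp(-E[None, :] / (R * Tgrid[:, None]))
        temp_integral = np.trapz(arr, Tgrid, axis=0)
        surviving = np.exp(-(A / beta) * temp_integral)      # unreacted fraction
        alpha[i] = 1.0 - np.trapz(f_E * surviving, E)
    return alpha

T = np.linspace(400.0, 900.0, 26)
for Ti, a in zip(T[::5], daem_conversion(T)[::5]):
    print(f"T = {Ti:5.0f} K  alpha = {a:.3f}")
```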
NASA Astrophysics Data System (ADS)
Sohrabi, M.; Habibi, M.; Ramezani, V.
2017-02-01
The paper presents an experimental study and analysis of full helium ion density angular distributions in a 4-kJ plasma focus device (PFD) at pressures of 10, 15, 25, and 30 mbar using large-area polycarbonate track detectors (PCTDs) (15-cm etchable diameter) processed by 50-Hz-HV electrochemical etching (ECE). Helium ion track distributions at different pressures, in particular along the main axis of the PFD, are presented. A maximum ion track density of 4.4 × 10^4 tracks/cm^2 was obtained in the PCTD placed 6 cm from the anode. The ion distributions for all applied pressures are ring-shaped, which is possibly due to the hollow cylindrical copper anode used. The large-area PCTD processed by ECE proves, at the present state of the art, to be a superior method for direct observation and analysis of ion distributions at a glance with minimum effort and time. Some observations of the ion density distributions at different pressures are reported and discussed.
NASA Astrophysics Data System (ADS)
Bao, Yi; Valipour, Mahdi; Meng, Weina; Khayat, Kamal H.; Chen, Genda
2017-08-01
This study develops a delamination detection system for smart ultra-high-performance concrete (UHPC) overlays using a fully distributed fiber optic sensor. Three 450 mm (length) × 200 mm (width) × 25 mm (thickness) UHPC overlays were cast over an existing 200 mm thick concrete substrate. The initiation and propagation of delamination due to early-age shrinkage of the UHPC overlay were detected as sudden increases and their extension in spatial distribution of shrinkage-induced strains measured from the sensor based on pulse pre-pump Brillouin optical time domain analysis. The distributed sensor is demonstrated effective in detecting delamination openings from microns to hundreds of microns. A three-dimensional finite element model with experimental material properties is proposed to understand the complete delamination process measured from the distributed sensor. The model is validated using the distributed sensor data. The finite element model with cohesive elements for the overlay-substrate interface can predict the complete delamination process.
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper shows the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution generated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.
NASA Astrophysics Data System (ADS)
Dong, Siqun; Zhao, Dianli
2018-01-01
This paper studies the subcritical, near-critical and supercritical asymptotic behavior of a reversible random coagulation-fragmentation polymerization process as N → ∞, with the number of distinct ways to form a k-cluster from k units satisfying f(k) = (1 + o(1)) c r^{-k} e^{-k^{α}} k^{-β}, where 0 < α < 1 and β > 0. When the cluster size is small, its distribution is proved to converge to a Gaussian distribution. For medium clusters, the distribution converges to a Poisson distribution in the supercritical stage, and no large clusters exist in this stage. Furthermore, the largest polymer length in a system of size N is of order ln N in the subcritical stage when α ⩽ 1/2.
Dynamical Aspects of Quasifission Process in Heavy-Ion Reactions
NASA Astrophysics Data System (ADS)
Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.
2015-06-01
The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm at energies below and above the Coulomb barrier is presented. For all the reactions, the main component of the distributions corresponds to an asymmetric mass division typical of the asymmetric quasifission process. To describe the quasifission mass distribution, a simple method has been proposed. This method is based on the driving potential of the system and a time-dependent mass drift. The procedure allows the quasifission (QF) time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.
Solar nebula heterogeneity in p-process samarium and neodymium isotopes.
Andreasen, Rasmus; Sharma, Mukul
2006-11-03
Bulk carbonaceous chondrites display a deficit of approximately 100 parts per million (ppm) in 144Sm with respect to other meteorites and terrestrial standards, leading to a decrease in their 142Nd/144Nd ratios by approximately 11 ppm. The data require that samarium and neodymium isotopes produced by the p process associated with photodisintegration reactions in supernovae were heterogeneously distributed in the solar nebula. Other samarium and neodymium isotopes produced by rapid neutron capture (r process) in supernovae and by slow neutron capture (s process) in red giants were homogeneously distributed. The supernova sources supplying the p- and r-process nuclides to the solar nebula were thus disconnected or only weakly connected.
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
NASA Astrophysics Data System (ADS)
Wiesmann, William P.; Pranger, L. Alex; Bogucki, Mary S.
1998-05-01
Remote monitoring of physiologic data from individual high-risk workers distributed over time and space is a considerable challenge. This is often due to an inadequate capability to accurately integrate large amounts of data into usable information in real time. In this report, we have used the vertical and horizontal organization of the 'fireground' as a framework to design a distributed network of sensors. In this system, sensor output is linked through a hierarchical object-oriented programming process to accurately interpret physiological data, incorporate these data into a synchronous model and relay processed data, trends and predictions to members of the fire incident command structure. There are several unique aspects to this approach. The first is a process that accounts for variability in vital parameter values in each individual's normal physiologic response by including an adaptive network in each data process. This information is used by the model in an iterative process to establish a baseline 'normal' physiologic response to a given stress for each individual and to detect deviations that indicate dysfunction or a significant insult. The second unique capability is that the system orders the information for each user, including the subject, local company officers, medical personnel and the incident commanders. Information can be retrieved and used for training exercises and after-action analysis. Finally, this system can easily be adapted to existing communication and processing links, and it can incorporate the best parts of current models through the use of object-oriented programming techniques. These modern software techniques are well suited to handling multiple data processes independently over time in a distributed network.
Falcone, U; Gilardi, Luisella; Pasqualini, O; Santoro, S; Coffano, Elena
2010-01-01
Exposure to carcinogens is still widespread in working environments. For the purpose of defining priority of interventions, it is necessary to estimate the number and the geographic distribution of workers potentially exposed to carcinogens. It could therefore be useful to test the use of tools and information sources already available in order to map the distribution of exposure to carcinogens. Formaldehyde is suggested as an example of an occupational carcinogen in this study. The study aimed at verifying and investigating the potential of 3 integrated databases: MATline, CAREX, and company databases resulting from occupational accident and disease claims (INAIL), in order to estimate the number of workers exposed to formaldehyde and map their distribution in the Piedmont Region. The list of manufacturing processes involving exposure to formaldehyde was sorted by MIATline; for each process the number of firms and employees were obtained from the INAIL archives. By applying the prevalence of exposed workers obtained with CAREX, an estimate of exposure for each process was determined. A map of the distribution of employees associated with a specific process was produced using ArcView GIS software. It was estimated that more than 13,000 employees are exposed to formaldehyde in the Piedmont Region. The manufacture of furniture was identified as the process with the highest number of workers exposed to formaldehyde (3,130),followed by metal workers (2,301 exposed) and synthetic resin processing (1,391 exposed). The results obtained from the integrated use of databases provide a basis for defining priority of preventive interventions required in the industrial processes involving exposure to carcinogens in the Piedmont Region.
A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems
2002-04-01
Based Approach to Intrusion Detection System Evaluation for Distributed Real - Time Systems Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and...Distributed, Security. 1 Introduction Processing and cost requirements are driving future naval combat platforms to use distributed, real - time systems of...distributed, real - time systems . As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained
Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.
1992-10-30
Wigner - Ville distribution ( WVD ), the Choi- Williams distribution , and the cone kernel distribution were compared with the spectrograms. Results were...ambiguity function. Figures A-18(c) and (d) are the Wigner - Ville Distribution ( WVD ) and CK-TFR Doppler maps. In this noiseless case all three exhibit...kernel is the basis for the well known Wigner - Ville distribution . In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [21 is described
Wavelet-Based Signal Processing for Monitoring Discomfort and Fatigue
2008-06-01
Wigner - Ville distribution ( WVD ), the short-time Fourier transform (STFT) or spectrogram, the Choi-Williams distribution (CWD), the smoothed pseudo Wigner ...has the advantage of being computationally less expensive than other standard techniques, such as the Wigner - Ville distribution ( WVD ), the spectrogram...slopes derived from the spectrogram and the smoothed pseudo Wigner - Ville distribution . Furthermore, slopes derived from the filter bank
A numerical study of zone-melting process for the thermoelectric material of Bi2Te3
NASA Astrophysics Data System (ADS)
Chen, W. C.; Wu, Y. C.; Hwang, W. S.; Hsieh, H. L.; Huang, J. Y.; Huang, T. K.
2015-06-01
In this study, a numerical model has been established by employing a commercial software; ProCAST, to simulate the variation/distribution of temperature and the subsequent microstructure of Bi2Te3 fabricated by zone-melting technique. Then an experiment is conducted to measure the temperature variation/distribution during the zone-melting process to validate the numerical system. Also, the effects of processing parameters on crystallization microstructure such as moving speed and temperature of heater are numerically evaluated. In the experiment, the Bi2Te3 powder are filled into a 30mm diameter quartz cylinder and the heater is set to 800°C with a moving speed 12.5 mm/hr. A thermocouple is inserted in the Bi2Te3 powder to measure the temperature variation/distribution of the zone-melting process. The temperature variation/distribution measured by experiment is compared to the results of numerical simulation. The results show that our model and the experiment are well matched. Then the model is used to evaluate the crystal formation for Bi2Te3 with a 30mm diameter process. It's found that when the moving speed is slower than 17.5 mm/hr, columnar crystal is obtained. In the end, we use this model to predict the crystal formation of zone-melting process for Bi2Te3 with a 45 mm diameter. The results show that it is difficult to grow columnar crystal when the diameter comes to 45mm.
2003-06-27
KENNEDY SPACE CENTER, FLA. - At Vandenberg Air Force Base, Calif., the Pegasus launch vehicle is moved toward its hangar. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-27
KENNEDY SPACE CENTER, FLA. - The Pegasus launch vehicle is moved back to its hangar at Vandenberg Air Force Base, Calif. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-26
KENNEDY SPACE CENTER, FLA. - The SciSat-1 spacecraft is uncrated at Vandenberg Air Force Base, Calif. SciSat-1 weighs approximately 330 pounds and will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-26
KENNEDY SPACE CENTER, FLA. - The SciSat-1 spacecraft is revealed after being uncrated at Vandenberg Air Force Base, Calif. SciSat-1 weighs approximately 330 pounds and will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-26
KENNEDY SPACE CENTER, FLA. - Workers at Vandenberg Air Force Base, Calif., prepare to move the SciSat-1 spacecraft. SciSat-1 weighs approximately 330 pounds and will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-27
KENNEDY SPACE CENTER, FLA. - At Vandenberg Air Force Base, Calif., the Pegasus launch vehicle is moved into its hangar. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. Main objective of the paper is to generalize the order waiting time process in order to properly model such empirical evidences. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data and implementation on the artificial stocks market can reproduce the trading activity in a realistic way.
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module is in communication with the database server and includes a website for viewing collected process data in a desired metrics form; the data display module also provides for editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urban, P., E-mail: purban@us.es; Montes, J. M.; Cintas, J.
2015-03-30
The effect of intensity and duration of the electrical resistance sintering process on the phase stability, porosity distribution and microstructural evolution of Al50Ti50 amorphous powders is studied. The phase transformations during the consolidation process were determined by X-ray diffraction. The porosity distribution was observed by optical and scanning electron microscopy. The amorphous phase is partially transformed to the crystalline phase during the sintering process, and formation of AlTi and AlTi3 intermetallic compounds occurs for temperatures higher than 300 °C. Finally, it is observed that the compact cores have lower porosity and a higher tendency toward the amorphous-crystalline phase transformation than the periphery.
Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Hachiya, Hiroyuki
1998-05-01
In the B-mode image of the liver obtained by an ultrasonic imaging system, the speckle pattern changes with the progression of diseases such as liver cirrhosis. In this paper we present the statistical characteristics of the echo envelope of the liver, and a technique to extract information about the scatterer distribution from normal and cirrhotic liver images using constant false alarm rate (CFAR) processing. We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processed signal exceeds the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis. It is found that the proposed technique is valid for the quantitative diagnosis of liver cirrhosis.
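A minimal sketch of CFAR-style thresholding and the area-ratio measure described above (a hypothetical illustration on synthetic speckle, not the authors' processing chain; the window size, threshold factor, and test image are assumptions):

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

# Synthetic echo-envelope image: Rayleigh speckle plus a few bright scatterers
# (a purely illustrative stand-in for a liver B-mode image).
env = rng.rayleigh(scale=1.0, size=(256, 256))
ys, xs = rng.integers(0, 256, 40), rng.integers(0, 256, 40)
env[ys, xs] += 6.0

# Cell-averaging CFAR: compare each pixel to the local mean background level.
background = uniform_filter(env, size=15)
alpha = 3.0                      # threshold factor (assumed)
detections = env > alpha * background

# Area ratio: fraction of the processed image exceeding the CFAR threshold,
# used as a quantitative descriptor of the scatterer distribution.
area_ratio = detections.mean()
print(f"fraction of image above CFAR threshold: {area_ratio:.4%}")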
Diffusion of Siderophile Elements in Fe Metal: Application to Zoned Metal Grains in Chondrites
NASA Technical Reports Server (NTRS)
Righter, K.; Campbell, A. J.; Humajun, M.
2003-01-01
The distribution of highly siderophile elements (HSE) in planetary materials is controlled mainly by metal. Diffusion processes can control the distribution or re-distribution of these elements within metals, yet there is little systematic or appropriate diffusion data that can be used to interpret HSE concentrations in such metals. Because our understanding of isotope chronometry, redox processes, kamacite/taenite-based cooling rates, and metal grain zoning would be enhanced with diffusion data, we have measured diffusion coefficients for Ni, Co, Ga, Ge, Ru, Pd, Ir and Au in Fe metal from 1200 to 1400 C and 1 bar and 10 kbar. These new data on refractory and volatile siderophile elements are used to evaluate the role of diffusional processes in controlling zoning patterns in metal-rich chondrites.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios.
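A toy sketch of this kind of discovery-process simulation (not the Fortran program itself): a lognormal parent population stands in for the three-parameter log gamma, and a single discoverability exponent stands in for the two-parameter discovery model; all values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Synthetic parent population of field sizes.
n_fields = 500
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=n_fields)

def simulate_discovery(sizes, beta, n_draw, rng):
    """Sample fields without replacement with probability proportional to
    size**beta (a simple one-parameter discovery process model)."""
    remaining = np.arange(len(sizes))
    sequence = []
    for _ in range(n_draw):
        weights = sizes[remaining] ** beta
        pick = rng.choice(len(remaining), p=weights / weights.sum())
        sequence.append(sizes[remaining[pick]])
        remaining = np.delete(remaining, pick)
    return np.array(sequence)

# Larger beta -> large fields tend to be discovered earlier in the sequence.
for beta in (0.0, 0.5, 1.0):
    seq = simulate_discovery(sizes, beta, n_draw=100, rng=rng)
    print(f"beta={beta:.1f}  mean size of first 20 discoveries: {seq[:20].mean():8.1f}")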
Distribution of iron, copper and manganese in the Arabian Sea
NASA Astrophysics Data System (ADS)
Moffett, James
2014-05-01
The distributions of iron, copper and manganese were studied on a zonal transect of the Arabian Sea during the SW monsoon in 2007. The distributions of metals at the eastern and western ends of the transect are completely different, with concentrations of Fe and Mn higher in the east, but copper much higher in the west. Redox cycling in the east and enhanced ventilation in the west contribute to these differences. It seems likely that blooms of Phaeocystis sp. contribute to the pronounced surface depletion and oxicline regeneration we observe, particularly for copper. The results are very different from those of similar surveys in the Peru upwelling, indicating control by very different processes. These results have important implications for carbon and nitrogen cycling, particularly for processes mediated by key Cu and Fe metalloenzymes.
NASA Astrophysics Data System (ADS)
Gedalin, M.; Liverts, M.; Balikhin, M. A.
2008-05-01
Field-aligned and gyrophase bunched ion beams are observed in the foreshock of the Earth bow shock. One of the mechanisms proposed for their production is non-specular reflection at the shock front. We study the distributions which are formed at the stationary quasi-perpendicular shock front within the same process which is responsible for the generation of reflected ions and transmitted gyrating ions. The test particle motion analysis in a model shock allows one to identify the parameters which control the efficiency of the process and the features of the escaping ion distribution. These parameters are: the angle between the shock normal and the upstream magnetic field, the ratio of the ion thermal velocity to the flow velocity upstream, and the cross-shock potential. A typical distribution of escaping ions exhibits a bimodal pitch angle distribution (in the plasma rest frame).
Nuclear parton distributions and the Drell-Yan process
NASA Astrophysics Data System (ADS)
Kulagin, S. A.; Petti, R.
2014-10-01
We study the nuclear parton distribution functions on the basis of our recently developed semimicroscopic model, which takes into account a number of nuclear effects including nuclear shadowing, Fermi motion and nuclear binding, nuclear meson-exchange currents, and off-shell corrections to bound nucleon distributions. We discuss in detail the dependencies of nuclear effects on the type of parton distribution (nuclear sea vs valence), as well as on the parton flavor (isospin). We apply the resulting nuclear parton distributions to calculate ratios of cross sections for proton-induced Drell-Yan production off different nuclear targets. We obtain a good agreement on the magnitude, target and projectile x, and the dimuon mass dependence of proton-nucleus Drell-Yan process data from the E772 and E866 experiments at Fermilab. We also provide nuclear corrections for the Drell-Yan data from the E605 experiment.
Modeling the VARTM Composite Manufacturing Process
NASA Technical Reports Server (NTRS)
Song, Xiao-Lan; Loos, Alfred C.; Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal
2004-01-01
A comprehensive simulation model of the Vacuum Assisted Resin Transfer Molding (VARTM) composite manufacturing process has been developed. For isothermal resin infiltration, the model incorporates submodels which describe cure of the resin and changes in resin viscosity due to cure, resin flow through the reinforcement preform and distribution medium, and compaction of the preform during the infiltration. The accuracy of the model was validated by measuring the flow patterns during resin infiltration of flat preforms. The modeling software was used to evaluate the effects of the distribution medium on resin infiltration of a flat preform. Different distribution medium configurations were examined using the model and the results were compared with data collected during resin infiltration of a carbon fabric preform. The results of the simulations show that the approach used to model the distribution medium can significantly affect the predicted resin infiltration times. Resin infiltration into the preform can be accurately predicted only when the distribution medium is modeled correctly.
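To give a feel for why the distribution medium matters so much, the following back-of-the-envelope sketch (not the VARTM simulation model itself) uses the one-dimensional constant-pressure Darcy estimate t_fill = porosity * viscosity * L^2 / (2 * K * dP); every number below is an illustrative assumption.

# One-dimensional Darcy-law estimate of resin infiltration time under constant
# injection pressure. All inputs are illustrative assumptions, not values from
# the paper; the distribution medium is represented only by a higher effective
# permeability K.

def fill_time(porosity, viscosity, length, permeability, delta_p):
    return porosity * viscosity * length**2 / (2.0 * permeability * delta_p)

preform_only = fill_time(porosity=0.5, viscosity=0.3, length=0.5,
                         permeability=1e-10, delta_p=1.0e5)
with_dist_medium = fill_time(porosity=0.5, viscosity=0.3, length=0.5,
                             permeability=1e-9, delta_p=1.0e5)

print(f"fill time, preform only:             {preform_only / 60:6.1f} min")
print(f"fill time, with distribution medium: {with_dist_medium / 60:6.1f} min")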
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
Sea-quark distributions in the pion
NASA Astrophysics Data System (ADS)
Hwang, W.-Y. P.; Speth, J.
1992-05-01
Using Sullivan processes with ρππ, K*+K̄0π, and K̄*0K+π vertices, we describe how the sea-quark distributions of a pion may be generated in a quantitative manner. The input valence-quark distributions are obtained using the leading Fock component of the light-cone wave function, which is in accord with results obtained from the QCD sum rules. The sample numerical results appear to be reasonable as far as the existing Drell-Yan production data are concerned, although the distributions as a function of x differ slightly from those obtained by imposing counting rules for x → 0 and x → 1. Our results lend additional support to the conjecture of Hwang, Speth, and Brown that the sea distributions of a hadron, at low and moderate Q2 (at least up to a few GeV2), may be attributed primarily to generalized Sullivan processes.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.
2005-01-01
Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions.
On the use of distributed sensing in control of large flexible spacecraft
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Ghosh, Dave
1990-01-01
Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The required computation of the distributed operators given the conventional Kalman filter and regulator is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.
Effect of the state of internal boundaries on granite fracture nature under quasi-static compression
NASA Astrophysics Data System (ADS)
Damaskinskaya, E. E.; Panteleev, I. A.; Kadomtsev, A. G.; Naimark, O. B.
2017-05-01
Based on an analysis of the spatial distribution of hypocenters of acoustic emission signal sources and an analysis of the energy distributions of acoustic emission signals, the effect of the liquid phase and a weak electric field on the spatiotemporal nature of granite sample fracture is studied. Experiments on uniaxial compression of granite samples of natural moisture showed that the damage accumulation process is two-stage: dispersed accumulation of damage is followed by localized accumulation of damage in the formed macrofracture nucleus region. In energy distributions of acoustic emission signals, this transition is accompanied by a change in the distribution shape from exponential to power-law. Granite water saturation qualitatively changes the damage accumulation nature: the process is delocalized until macrofracture, with the exponential energy distribution of acoustic emission signals. Exposure to a weak electric field results in a selective change in the damage accumulation nature in the sample volume.
Analysis and numerical simulation research of the heating process in the oven
NASA Astrophysics Data System (ADS)
Chen, Yawei; Lei, Dingyou
2016-10-01
How to use the oven to bake delicious food is the problem of greatest concern to the designers and users of the oven. To this end, this paper analyzes the heat distribution in the oven based on its basic operating principles and performs a numerical simulation of the temperature distribution on the rack section. A differential-equation model of the temperature distribution changes in the pan during oven operation is constructed from heat radiation and heat conduction; then, based on the idea of using a cellular automaton to simulate the heat-transfer process, ANSYS software is used to carry out numerical simulations for rectangular, round-cornered rectangular, elliptical and circular pans, giving the instantaneous temperature distribution for each pan shape. The temperature distributions of the rectangular and circular pans show that the product gets overcooked easily at the corners and edges of a rectangular pan but not in a round pan.
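A minimal explicit finite-difference stand-in for this kind of pan simulation (a hypothetical sketch, not the ANSYS model; grid size, diffusivity, and times are assumptions) illustrates the corner-overheating effect in a rectangular cross-section whose edges are held at oven temperature:

import numpy as np

# Explicit finite-difference (cellular-automaton-style) heat conduction in a
# rectangular pan cross-section; edges held at the oven air temperature.
nx, ny = 60, 40
T_oven, T_start = 200.0, 25.0
T = np.full((ny, nx), T_start)
alpha, dx, dt = 1.0e-6, 0.005, 2.0          # diffusivity m^2/s, spacing m, step s
r = alpha * dt / dx**2                       # stability requires r <= 0.25 in 2-D

for step in range(3000):
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_oven   # boundary = oven air
    T[1:-1, 1:-1] += r * (T[2:, 1:-1] + T[:-2, 1:-1] +
                          T[1:-1, 2:] + T[1:-1, :-2] - 4.0 * T[1:-1, 1:-1])

# Interior cells next to two edges (a corner) heat up faster than cells next to
# one edge, which in turn heat up faster than the centre of the pan.
print(f"corner region temperature: {T[1:4, 1:4].mean():6.1f} C")
print(f"edge mid-span temperature: {T[1:4, nx//2 - 1:nx//2 + 2].mean():6.1f} C")
print(f"centre temperature:        {T[ny//2, nx//2]:6.1f} C")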
NASA Technical Reports Server (NTRS)
Feigenbaum, Haim (Inventor); Pudick, Sheldon (Inventor)
1988-01-01
A process for forming an integral edge seal in a gas distribution plate for use in a fuel cell. A seal layer is formed along an edge of a porous gas distribution plate by impregnating the pores in the layer with a material adapted to provide a seal which is operative dry or when wetted by an electrolyte of a fuel cell. Vibratory energy is supplied to the sealing material during the step of impregnating the pores to provide a more uniform seal throughout the cross section of the plate.
NASA Astrophysics Data System (ADS)
Pan, Tianheng
2018-01-01
In recent years, the combination of workflow management systems and multi-agent technology has become an active research field. The lack of flexibility in workflow management systems can be improved by introducing multi-agent collaborative management. The workflow management system described here adopts a distributed structure, which addresses the fragility of the traditional centralized workflow structure. In this paper, the agents of the distributed workflow management system are divided according to their functions, and the execution process of each type of agent is analyzed. Key technologies such as process execution and resource management are also analyzed.
Calculation of the transverse parton distribution functions at next-to-next-to-leading order
NASA Astrophysics Data System (ADS)
Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin
2014-06-01
We describe the perturbative calculation of the transverse parton distribution functions in all partonic channels up to next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate the cancellation of light-cone divergences and show that universal process-independent transverse parton distribution functions can be obtained through a refactorization. Our results serve as the first explicit higher-order calculation of these functions starting from first principles, and can be used to perform next-to-next-to-next-to-leading logarithmic q_T resummation for a large class of processes at hadron colliders.
Phenomenology of the Z boson plus jet process at NNLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
Here, we present a detailed phenomenological study of Z-boson production in association with a jet through next-to-next-to-leading order (NNLO) in perturbative QCD. Fiducial cross sections and differential distributions for both 8 TeV and 13 TeV LHC collisions are presented. We study the impact of different parton distribution functions (PDFs) on predictions for the Z + jet process. Upon inclusion of the NNLO corrections, the residual scale uncertainty is reduced such that both the total rate and the transverse momentum distributions can be used to discriminate between various PDF sets.
Effect of process parameters on temperature distribution in twin-electrode TIG coupling arc
NASA Astrophysics Data System (ADS)
Zhang, Guangjun; Xiong, Jun; Gao, Hongming; Wu, Lin
2012-10-01
The twin-electrode TIG coupling arc is a new type of welding heat source, which is generated in a single welding torch that has two tungsten electrodes insulated from each other. This paper aims at determining the distribution of temperature for the coupling arc using the Fowler-Milne method under the assumption of local thermodynamic equilibrium. The influences of welding current, arc length, and distance between both electrode tips on temperature distribution of the coupling arc were analyzed. Based on the results, a better understanding of the twin-electrode TIG welding process was obtained.
Effect of molecular weight on polymer processability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karg, R.F.
1983-01-01
Differences in rheological behavior due to polymer molecular weight and molecular weight distribution have been shown with the MPT. SBR polymers having high molecular weight fractions develop higher stress relaxation time values due to the higher degree of polymer entanglement. Tests conducted at increasing temperatures show the diminishing influence of the polymer entanglements upon stress relaxation time. EPDM polymers show stress relaxation time and head pressure behavior which correlates with mill processability. As anticipated, compounded EPDM stocks with broad molecular weight distributions have higher stress relaxation time values than EPDM compounds with narrow molecular weight distributions.
40 CFR 763.169 - Distribution in commerce prohibitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos-Containing Products; Labeling Requirements § 763.169 Distribution in commerce... States or for export, any of the asbestos-containing products listed at § 763.165(a). (b) After August 25...
Evaluating the fundamental corrosion and passivation of metallic copper used in drinking water distribution materials is important in understanding the overall mechanism of the corrosion process. Copper pipes are widely used for drinking water distribution systems and although it...
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
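The brief states that CUMPOIS inserts temporary scaling factors to keep intermediate quantities representable. A sketch in the same spirit (not the CUMPOIS code itself) accumulates the terms of the cumulative sum in log space and rescales by the largest term, so neither lambda**k nor exp(-lambda) is ever formed directly:

import math

def cumulative_poisson(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated in log space so that very
    small or very large intermediate terms neither underflow nor overflow."""
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(k + 1)]
    m = max(log_terms)                       # temporary rescaling factor
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

# A regime where naive evaluation of lam**k / k! overflows and exp(-lam)
# underflows; the result should be well below 0.5 since 950 < lam.
print(cumulative_poisson(950, 1000.0))
# A benign regime for comparison (essentially 1).
print(cumulative_poisson(10, 2.5))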
14 CFR 1260.16 - Distribution.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Distribution. 1260.16 Section 1260.16... Pre-Award Requirements § 1260.16 Distribution. (a) Copies of grants and supplements will be provided... when delegated; (4) The NASA Center for AeroSpace Information (CASI), Attn: Document Processing Section...
Micro-organism distribution sampling for bioassays
NASA Technical Reports Server (NTRS)
Nelson, B. A.
1975-01-01
Purpose of sampling distribution is to characterize sample-to-sample variation so statistical tests may be applied, to estimate error due to sampling (confidence limits) and to evaluate observed differences between samples. Distribution could be used for bioassays taken in hospitals, breweries, food-processing plants, and pharmaceutical plants.
Size distribution of dust grains: A problem of self-similarity
NASA Technical Reports Server (NTRS)
Henning, TH.; Dorschner, J.; Guertler, J.
1989-01-01
Distribution functions describing the results of natural processes frequently show the shape of power laws, e.g., mass functions of stars and molecular clouds, the velocity spectrum of turbulence, and size distributions of asteroids, micrometeorites and also interstellar dust grains. It is an open question whether this behavior is simply a result of the chosen mathematical representation of the observational data or reflects a deep-seated principle of nature. The authors suppose the latter to be the case. Using a dust model consisting of silicate and graphite grains, Mathis et al. (1977) showed that the interstellar extinction curve can be reproduced by a grain radius distribution of power-law type, n(a) ∝ a^(-p) with 3.3 ≤ p ≤ 3.6 (example 1). A different approach to understanding power laws like that in example 1 becomes possible through the theory of self-similar processes (scale invariance). The beta model of turbulence (Frisch et al., 1978) leads in an elementary way to the concept of the self-similarity dimension D, a special case of Mandelbrot's (1977) fractal dimension. In the framework of this beta model, it is supposed that at each stage of a cascade the system decays into N clumps, of which only the fraction β (i.e., βN clumps on average) remains active further on. An important feature of this model is that the active eddies become less and less space-filling. In the following, the authors assume that grain-grain collisions are such a scale-invariant process and that the remaining grains are the inactive (frozen) clumps of the cascade. In this way, a size distribution n(a) da ∝ a^(-(D+1)) da (example 2) results. It seems highly probable that the power-law character of the size distribution of interstellar dust grains is the result of a self-similarity process. We cannot, however, exclude that the process leading to the interstellar grain size distribution is not fragmentation at all. It could be, e.g., diffusion-limited growth as discussed by Sander (1986), who applied the theory of fractal geometry to the classification of non-equilibrium growth processes. He obtained D = 2.4 for diffusion-limited aggregation in 3d space.
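A toy numerical version of such a beta-model cascade (a hypothetical illustration; N, β, the shrink factor N^(-1/3), and the number of generations are arbitrary assumptions) shows the frozen clumps accumulating a cumulative size distribution N(>a) ∝ a^(-D), the integral form of the differential law n(a) ∝ a^(-(D+1)) in example 2:

import numpy as np

rng = np.random.default_rng(3)

# Each active clump breaks into N pieces whose radius shrinks by N**(1/3);
# each piece stays active with probability beta, otherwise it freezes out and
# is counted as a grain. Start from several seed clumps to avoid extinction.
N, beta, generations = 4, 0.55, 12
active = np.full(20, 1.0)
frozen_pieces = []
for _ in range(generations):
    children = np.repeat(active * N ** (-1.0 / 3.0), N)
    keep = rng.random(children.size) < beta
    frozen_pieces.append(children[~keep])
    active = children[keep]
frozen = np.concatenate(frozen_pieces)

# Self-similarity dimension of the cascade and the fitted cumulative slope,
# which should come out close to -D.
D = 3.0 * np.log(beta * N) / np.log(N)
a_grid = np.logspace(np.log10(frozen.min()) + 0.25, np.log10(frozen.max()) - 0.25, 10)
counts = np.array([(frozen > a).sum() for a in a_grid])
slope = np.polyfit(np.log(a_grid), np.log(counts), 1)[0]
print(f"predicted N(>a) ~ a^-D with D = {D:.2f}; fitted log-log slope = {slope:.2f}")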
Improving flow distribution in influent channels using computational fluid dynamics.
Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae
2016-10-01
Although the flow distribution in an influent channel, where the inflow is split among the treatment processes of a wastewater treatment plant, greatly affects the efficiency of those processes, and although a weir is the typical structure used for flow distribution, to the authors' knowledge there is a paucity of research on flow distribution in an open channel with a weir. In this study, the influent channel of a real-scale wastewater treatment plant was used, installing a suppressed rectangular weir with a horizontal crest crossing the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the improvement procedure for the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable because it describes the free-surface fluctuations. To improve flow distribution, preventing short-circuit flow should be considered first, and the difference in kinetic energy with the inflow rate makes the flow distribution trends differ. The authors believe that this case study is helpful for improving flow distribution in an influent channel.
Simulating neutron star mergers as r-process sources in ultrafaint dwarf galaxies
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Scannapieco, Evan
2017-10-01
To explain the high observed abundances of r-process elements in local ultrafaint dwarf (UFD) galaxies, we perform cosmological zoom simulations that include r-process production from neutron star mergers (NSMs). We model star formation stochastically and simulate two different haloes with total masses ≈108 M⊙ at z = 6. We find that the final distribution of [Eu/H] versus [Fe/H] is relatively insensitive to the energy by which the r-process material is ejected into the interstellar medium, but strongly sensitive to the environment in which the NSM event occurs. In one halo, the NSM event takes place at the centre of the stellar distribution, leading to high levels of r-process enrichment such as seen in a local UFD, Reticulum II (Ret II). In a second halo, the NSM event takes place outside of the densest part of the galaxy, leading to a more extended r-process distribution. The subsequent star formation occurs in an interstellar medium with shallow levels of r-process enrichment that results in stars with low levels of [Eu/H] compared to Ret II stars even when the maximum possible r-process mass is assumed to be ejected. This suggests that the natal kicks of neutron stars may also play an important role in determining the r-process abundances in UFD galaxies, a topic that warrants further theoretical investigation.
Modeling Neutron stars as r-process sources in Ultra Faint Dwarf galaxies
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Scannapieco, Evan
2018-06-01
To explain the high observed abundances of r-process elements in local ultrafaint dwarf (UFD) galaxies, we perform cosmological zoom simulations that include r-process production from neutron star mergers (NSMs). We model star formation stochastically and simulate two different haloes with total masses ≈108 M⊙ at z = 6. We find that the final distribution of [Eu/H] versus [Fe/H] is relatively insensitive to the energy by which the r-process material is ejected into the interstellar medium, but strongly sensitive to the environment in which the NSM event occurs. In one halo, the NSM event takes place at the centre of the stellar distribution, leading to high levels of r-process enrichment such as seen in a local UFD, Reticulum II (Ret II). In a second halo, the NSM event takes place outside of the densest part of the galaxy, leading to a more extended r-process distribution. The subsequent star formation occurs in an interstellar medium with shallow levels of r-process enrichment that results in stars with low levels of [Eu/H] compared to Ret II stars even when the maximum possible r-process mass is assumed to be ejected. This suggests that the natal kicks of neutron stars may also play an important role in determining the r-process abundances in UFD galaxies, a topic that warrants further theoretical investigation.
Star Formation Quenching, How Fast And How Frequently? Inside-Out Or Not?
NASA Astrophysics Data System (ADS)
Lian, Jianhui; Yan, Renbin; Blanton, Michael; Zhang, Kai; Kong, Xu
2017-06-01
Star formation quenching is a critical process that drives galaxies from the blue star-forming stage to the red passive stage. This rapid quenching process is necessary in galaxy evolution models to explain the galaxy distribution in NUV-optical colour-colour diagrams [1,2] and the buildup of the red sequence from z = 1 to z = 0 [3-5]. Yet the mechanism of this quenching process is not fully understood and is hotly debated. Many candidate scenarios, such as strangulation due to shock heating in massive halos, AGN feedback, or gas stripping due to environmental effects, have been proposed. To differentiate these scenarios, more constraints on the quenching process, and thus on the potential physical mechanism, are badly needed. The first result we show in this poster is the set of properties of the quenching process obtained from the galaxy distribution in NUV-optical colour-colour diagrams. Aside from the unclear integrated star formation history (SFH) of galaxies, how the SFH of galaxies varies internally is still poorly understood. One direct probe of the internal variation of SFH is the spatial distribution of colours, i.e. the colour gradient. In the second part of the results of this poster, we explicitly illustrate the definitions of the 'inside-out growth' and 'inside-out quenching' scenarios and use the galaxy distribution in u-I colour gradients to see which one is more observationally favoured.
Droplet size prediction in ultrasonic nebulization for non-oxide ceramic powder synthesis.
Muñoz, Mariana; Goutier, Simon; Foucaud, Sylvie; Mariaux, Gilles; Poirier, Thierry
2018-03-01
A spray pyrolysis process has been used for the synthesis of non-oxide ceramic powders from liquid precursors in the Si/C/N system. Particles with high thermal stability and with variable composition and size distribution have been obtained. In this process, the mechanisms involved in precursor decomposition and gas-phase recombination of species are still unknown. The final aim of this work is to improve comprehension of the whole process by an experimental/modelling approach that helps connect the characteristics of the synthesized particles to the precursor properties and process operating parameters. It includes the following steps: aerosol formation by a piezoelectric nebulizer, its transport, and the chemical-physical phenomena involved in the reaction processes. This paper focuses on aerosol characterization to understand the relationship between the liquid precursor properties and the liquid droplet diameter distribution. Liquids with properties close to the precursor of interest (hexamethyldisilazane) have been used. Experiments have been performed using a shadowgraphy technique to determine the drop size distribution of the aerosol. For all operating parameters of the nebulizer device and liquids used, bimodal droplet size distributions have been obtained. Correlations proposed in the literature for droplet size prediction by ultrasonic nebulization were used and adapted to the specific nebulizer device used in this study, showing rather good agreement with experimental values.
Simulating galactic dust grain evolution on a moving mesh
NASA Astrophysics Data System (ADS)
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
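A minimal numerical illustration of the propagator-likelihood idea (a hypothetical sketch, not the systems studied in the paper; the three-state driven cycle, its rates, and the grid search are assumptions): the empirical distribution of steady-state snapshots is propagated one step with a candidate model, and the samples are scored against the propagated distribution.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

def transition_matrix(k, dt=0.5):
    """Finite-time propagator of a three-state driven cycle whose forward rate
    0 -> 1 is the unknown parameter k (all other rates are fixed assumptions)."""
    Q = np.zeros((3, 3))
    Q[0, 1], Q[1, 2], Q[2, 0] = k, 2.0, 1.0      # forward (driven) rates
    Q[1, 0], Q[2, 1], Q[0, 2] = 0.5, 0.3, 0.2    # backward rates
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return expm(Q * dt)

def steady_state(T):
    vals, vecs = np.linalg.eig(T.T)
    pi = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return pi / pi.sum()

# Independent snapshots drawn from the steady state of the "true" process.
k_true = 3.0
samples = rng.choice(3, size=100_000, p=steady_state(transition_matrix(k_true)))
p_emp = np.bincount(samples, minlength=3) / samples.size

def propagator_likelihood(k):
    # Propagate the empirical distribution one step with the candidate model,
    # then score the observed configurations against the propagated distribution.
    p_prop = p_emp @ transition_matrix(k)
    return np.mean(np.log(p_prop[samples]))

ks = np.linspace(0.5, 6.0, 111)
k_hat = ks[np.argmax([propagator_likelihood(k) for k in ks])]
print(f"true k = {k_true:.2f}, inferred k = {k_hat:.2f}")   # should land near k_true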
Caballero Morales, Santiago Omar
2013-01-01
The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices to achieve high product quality, a low frequency of failures, and cost reduction in a production process. However, there are some points about their joint application that have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies of control chart design consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082
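For orientation, the following sketch shows the standard (non-economic) construction of joint X-bar and S chart limits from subgroup data, with the unbiasing constant c4 computed from the gamma function; it is a baseline illustration under assumed data, not the ESD cost model of the paper.

import math
import numpy as np

rng = np.random.default_rng(5)

# Simulated process data: m subgroups of size n (illustrative values).
m, n = 25, 5
data = rng.normal(loc=10.0, scale=0.2, size=(m, n))

xbar = data.mean(axis=1)           # subgroup means
s = data.std(axis=1, ddof=1)       # subgroup standard deviations
xbarbar, sbar = xbar.mean(), s.mean()

# Standard control-chart constants for subgroup size n.
c4 = math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)
A3 = 3.0 / (c4 * math.sqrt(n))
B3 = max(0.0, 1.0 - 3.0 * math.sqrt(1.0 - c4**2) / c4)
B4 = 1.0 + 3.0 * math.sqrt(1.0 - c4**2) / c4

print(f"X-bar chart: centre {xbarbar:.3f}, "
      f"limits [{xbarbar - A3 * sbar:.3f}, {xbarbar + A3 * sbar:.3f}]")
print(f"S chart:     centre {sbar:.3f}, "
      f"limits [{B3 * sbar:.3f}, {B4 * sbar:.3f}]")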
Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale
NASA Astrophysics Data System (ADS)
Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-08-01
This study presents the simulation of hydrological processes and nutrient transport and turnover processes using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical and semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based and fully distributed numerical model. This work shows that both models satisfactorily reproduce water and nitrate export at the watershed scale on an annual and a daily basis, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID numerical model was the only one able to reproduce adequately the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model able to reproduce the spatial variation of the nutrient cycling processes that depend on the soil hydrological conditions, such as the denitrification process. This demonstrates the strength of fully distributed, physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.
Particle formation in the emulsion-solvent evaporation process.
Staff, Roland H; Schaeffel, David; Turshatov, Andrey; Donadio, Davide; Butt, Hans-Jürgen; Landfester, Katharina; Koynov, Kaloian; Crespy, Daniel
2013-10-25
The mechanism of particle formation from submicrometer emulsion droplets by solvent evaporation is revisited. A combination of dynamic light scattering, fluorescence resonance energy transfer, zeta potential measurements, and fluorescence cross-correlation spectroscopy is used to analyze the colloids during the evaporation process. It is shown that a combination of different methods yields reliable and quantitative data for describing the fate of the droplets during the process. The results indicate that coalescence plays a minor role during the process; the relatively broad size distribution of the obtained polymer colloids can be explained by the droplet distribution after their formation.
Particle dispersing system and method for testing semiconductor manufacturing equipment
Chandrachood, Madhavi; Ghanayem, Steve G.; Cantwell, Nancy; Rader, Daniel J.; Geller, Anthony S.
1998-01-01
The system and method prepare a gas stream comprising particles at a known concentration using a particle disperser for moving particles from a reservoir of particles into a stream of flowing carrier gas. The electrostatic charges on the particles entrained in the carrier gas are then neutralized or otherwise altered, and the resulting particle-laden gas stream is then diluted to provide an acceptable particle concentration. The diluted gas stream is then split into a calibration stream and the desired output stream. The particles in the calibration stream are detected to provide an indication of the actual size distribution and concentration of particles in the output stream that is supplied to a process chamber being analyzed. Particles flowing out of the process chamber within a vacuum pumping system are detected, and the output particle size distribution and concentration are compared with the particle size distribution and concentration of the calibration stream in order to determine the particle transport characteristics of a process chamber, or to determine the number of particles lodged in the process chamber as a function of manufacturing process parameters such as pressure, flowrate, temperature, process chamber geometry, particle size, particle charge, and gas composition.
The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees
NASA Astrophysics Data System (ADS)
Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard
2017-09-01
Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that each individual receives a finite number of children in its lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitness with unbounded support with at most exponential tails. In the absence of integrable aging, such processes are explosive.
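A discrete-time toy version of such a model (a hypothetical sketch, not the continuous-time branching process analysed in the paper; the fitness range, aging scale, and tree size are assumptions): each new node attaches to an existing node with weight proportional to fitness times (degree + 1) times an exponential aging factor, and the tail of the resulting degree distribution can be compared with and without aging.

import numpy as np

rng = np.random.default_rng(6)

def grow_tree(n_nodes, aging_scale, rng):
    """Grow a preferential-attachment tree: the node arriving at time t attaches
    to node i < t with weight fitness_i * (degree_i + 1) * exp(-age_i/aging_scale)
    (no aging factor when aging_scale is None)."""
    fitness = rng.uniform(0.5, 1.5, n_nodes)       # bounded fitness (assumption)
    degree = np.zeros(n_nodes, dtype=int)
    for t in range(1, n_nodes):
        w = fitness[:t] * (degree[:t] + 1.0)
        if aging_scale is not None:                 # integrable (exponential) aging
            w = w * np.exp(-(t - np.arange(t)) / aging_scale)
        parent = rng.choice(t, p=w / w.sum())
        degree[parent] += 1
    return degree

# Per the result summarised above, integrable aging with bounded fitness should
# thin the degree tail relative to the no-aging (power-law-like) case.
for label, aging in (("no aging", None), ("exp. aging", 50.0)):
    deg = grow_tree(10_000, aging, rng)
    print(f"{label:10s}  max degree = {deg.max():4d}   P(degree >= 20) = {np.mean(deg >= 20):.4f}")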
Analysis of haptic information in the cerebral cortex
2016-01-01
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level. PMID:27440247
Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Ju, Wenyun; Sun, Kai
In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when no data on them are available.
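A minimal two-type Galton-Watson sketch (a hypothetical illustration, not the paper's EM estimation from cascade data; the offspring mean matrix and the Poisson offspring assumption are made up) shows how the largest eigenvalue of the offspring mean matrix governs whether cascades die out or keep propagating:

import numpy as np

rng = np.random.default_rng(7)

# Offspring mean matrix M[i, j]: expected number of type-j outages generated by
# one type-i outage (type 0 = line outage, type 1 = load-shed event; the
# numbers are illustrative, not estimates from real cascade data).
M = np.array([[0.60, 0.25],
              [0.30, 0.20]])
print("largest eigenvalue (criticality index):", max(abs(np.linalg.eigvals(M))))

def simulate_cascade(M, initial, rng, max_gen=50):
    """Two-type branching process with Poisson offspring; returns the total
    number of outages of each type accumulated over all generations."""
    current = np.array(initial, dtype=int)
    total = current.copy()
    for _ in range(max_gen):
        if current.sum() == 0:
            break
        # Parents of type i jointly produce Poisson(sum_i current_i * M[i, j])
        # children of type j (sums of independent Poissons are Poisson).
        children = np.array([rng.poisson(current @ M[:, j]) for j in range(M.shape[1])])
        total += children
        current = children
    return total

totals = np.array([simulate_cascade(M, [1, 0], rng) for _ in range(5000)])
print("mean total line outages:    ", totals[:, 0].mean())
print("mean total load-shed events:", totals[:, 1].mean())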
Working toward integrated models of alpine plant distribution.
Carlson, Bradley Z; Randin, Christophe F; Boulangeat, Isabelle; Lavergne, Sébastien; Thuiller, Wilfried; Choler, Philippe
2013-10-01
Species distribution models (SDMs) have been frequently employed to forecast the response of alpine plants to global changes. Efforts to model alpine plant distribution have thus far been primarily based on a correlative approach, in which ecological processes are implicitly addressed through a statistical relationship between observed species occurrences and environmental predictors. Recent evidence, however, highlights the shortcomings of correlative SDMs, especially in alpine landscapes where plant species tend to be decoupled from atmospheric conditions in micro-topographic habitats and are particularly exposed to geomorphic disturbances. While alpine plants respond to the same limiting factors as plants found at lower elevations, alpine environments impose a particular set of scale-dependent and hierarchical drivers that shape the realized niche of species and that require explicit consideration in a modelling context. Several recent studies in the European Alps have successfully integrated both correlative and process-based elements into distribution models of alpine plants, but for the time being a single integrative modelling framework that includes all key drivers remains elusive. As a first step in working toward a comprehensive integrated model applicable to alpine plant communities, we propose a conceptual framework that structures the primary mechanisms affecting alpine plant distributions. We group processes into four categories, including multi-scalar abiotic drivers, gradient dependent species interactions, dispersal and spatial-temporal plant responses to disturbance. Finally, we propose a methodological framework aimed at developing an integrated model to better predict alpine plant distribution.
Krecar, Dragan; Vassileva, Vassilka; Danninger, Herbert; Hutter, Herbert
2004-06-01
Powder metallurgy is a highly developed method of manufacturing reliable ferrous parts. The main processing steps in a powder metallurgical line are pressing and sintering. Sintering can be strongly enhanced by the formation of a liquid phase during the sintering process when using phosphorus as a sintering activator. In this work the distribution (effect) of phosphorus was investigated by means of secondary ion mass spectrometry (SIMS) supported by Auger electron spectroscopy (AES) and electron probe micro analysis (EPMA). To verify the influence of the process conditions (phosphorus content, sintering atmosphere, time) on the mechanical properties, additional measurements of the microstructure (pore shape) and of impact energy were performed. Analysis of fracture surfaces was performed by means of scanning electron microscopy (SEM). The concentration of phosphorus in the samples ranges from 0 to 1% (w/w). Samples with higher phosphorus concentrations (1% (w/w) and above) are also measurable by EPMA, whereas the distributions of P at technically relevant concentrations and the distribution of possible impurities are only detectable (visible) by means of SIMS. The influence of the sintering time on the phosphorus distribution will be demonstrated. In addition, the grain-boundary segregation of P was measured by AES at the surface of in-situ broken samples. It will be shown that the distribution of phosphorus also depends on the concentration of carbon in the samples.
Real-Time Embedded High Performance Computing: Communications Scheduling.
1995-06-01
real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes... adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real...
ERIC Educational Resources Information Center
Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun
2017-01-01
Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…
Microphysical Processes Affecting the Pinatubo Volcanic Plume
NASA Technical Reports Server (NTRS)
Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia
1996-01-01
In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.
Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner
2015-01-01
Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...
Bibliography On Multiprocessors And Distributed Processing
NASA Technical Reports Server (NTRS)
Miya, Eugene N.
1988-01-01
Multiprocessor and Distributed Processing Bibliography package consists of large machine-readable bibliographic data base which, in addition to usual keyword searches, is used for producing citations, indexes, and cross-references. Data base contains UNIX(R) "refer"-formatted ASCII data and is implemented on any computer running under UNIX(R) operating system. Easily convertible to other operating systems. Requires approximately one megabyte of secondary storage. Bibliography compiled in 1985.
ERIC Educational Resources Information Center
Kliegl, Reinhold
2007-01-01
K. Rayner, A. Pollatsek, D. Drieghe, T. J. Slattery, and E. D. Reichle argued that the R. Kliegl, A. Nuthmann, and R. Engbert corpus-analytic evidence for distributed processing during reading should not be accepted because (a) there might be problems of multicollinearity, (b) the distinction between content and function words and the skipping…
Fault-Tolerant Signal Processing Architectures with Distributed Error Control.
1985-01-01
Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly , Error Detecting Codes. SeIf-Checkino Circuits and Applications ...However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to...significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and
Distributed Fusion in Sensor Networks with Information Genealogy
2011-06-28
image processing [2], acoustic and speech recognition [3], multitarget tracking [4], distributed fusion [5], and Bayesian inference [6-7]. For...Adaptation for Distant-Talking Speech Recognition." in Proc Acoustics. Speech , and Signal Processing, 2004 |4| Y Bar-Shalom and T 1-. Fortmann...used in speech recognition and other classification applications [8]. But their use in underwater mine classification is limited. In this paper, we
Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC
NASA Astrophysics Data System (ADS)
Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.
2016-12-01
The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost-effective manner. In order to best support processing and distribution of these large data sets for users, the ASF DAAC enhanced its data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band Synthetic Aperture Radar (SAR) imaging, enabling them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology employed for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store and download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and picked a path of most flexibility and software re-use. To that end, this software tools and services presentation will cover Web Object Storage (WOS) and the ability to seamlessly move from local sunk-cost hardware to public cloud, such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files, which will be even larger than SENTINEL-1A, ASF has embarked on a number of cloud initiatives, including processing in the cloud at scale, processing data on-demand, and processing end-user computations on DAAC data in the cloud.
Future electro-optical sensors and processing in urban operations
NASA Astrophysics Data System (ADS)
Grönwall, Christina; Schwering, Piet B.; Rantakokko, Jouni; Benoist, Koen W.; Kemp, Rob A. W.; Steinvall, Ove; Letalick, Dietmar; Björkert, Stefan
2013-10-01
In the electro-optical sensors and processing in urban operations (ESUO) study we pave the way for the European Defence Agency (EDA) group of Electro-Optics experts (IAP03) for a common understanding of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed and centralized processing are proposed. In this way one can match processing functionality to the required power, and available communication systems data rates, to obtain the desired reaction times. In the study, three priority scenarios were defined. For these scenarios, present-day and future sensors and signal processing technologies were studied. The priority scenarios were camp protection, patrol and house search. A method for analyzing information quality in single and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify scenarios, or be applied to other scenarios. Present day data processing is organized mainly locally. Very limited exchange of information with other platforms is present; this is performed mainly at a high information level. Main issues that arose from the analysis of present-day systems and methodology are the slow reaction time due to the limited field of view of present-day sensors and the lack of robust automated processing. Efficient handover schemes between wide and narrow field of view sensors may however reduce the delay times. The main effort in the study was in forecasting the signal processing of EO-sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle based sensors. This can be accompanied by cloud processing on board several vehicles. Additionally, to perform sensor fusion on sensor data originating from different platforms, and making full use of UAV imagery, a combination of distributed and centralized processing is essential. There is a central role for sensor fusion of heterogeneous sensors in future processing. The changes that occur in the urban operations of the future due to the application of these new technologies will be the improved quality of information, with shorter reaction time, and with lower operator load.
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
Modeling of First-Passage Processes in Financial Markets
NASA Astrophysics Data System (ADS)
Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico
2010-03-01
In this talk, we attempt a microscopic modeling of the first-passage process (or first-exit process) of the BUND future by means of a minority game with market history. We find that the first-passage process of the minority game with appropriate history length generates the same properties as the BTP future (the middle- and long-term Italian government bonds with fixed interest rates); namely, both first-passage time distributions have a crossover at some specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (or phenomenological) modeling of the first-passage process of the BTP future and show analytically that the first-passage time distribution of the simplest mixture of normal compound Poisson processes does not have such a crossover.
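A minimal simulation sketch (not the authors' model) of the macroscopic picture described above: first-passage times of a compound Poisson process with Gaussian jumps crossing a fixed barrier, where the jump rate is drawn from a two-component mixture. The rates, jump scale, barrier, and mixture weight below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(rate, jump_sigma, barrier, t_max=1e4):
    """First time |X(t)| exceeds `barrier` for a compound Poisson
    process with Gaussian jumps, simulated event by event."""
    t, x = 0.0, 0.0
    while t < t_max:
        t += rng.exponential(1.0 / rate)        # waiting time to next jump
        x += rng.normal(0.0, jump_sigma)        # Gaussian jump
        if abs(x) >= barrier:
            return t
    return np.nan                               # censored run

# Mixture of two compound Poisson components (illustrative parameters).
n, weight = 5000, 0.7
rates = np.where(rng.random(n) < weight, 2.0, 0.05)   # fast vs. slow component
fpt = np.array([first_passage_time(r, jump_sigma=1.0, barrier=5.0) for r in rates])
fpt = fpt[np.isfinite(fpt)]

# Survival function on a log grid; a crossover between the two components
# shows up as a change of slope at intermediate time scales.
grid = np.logspace(-1, 4, 30)
for g in grid:
    print(f"t = {g:10.2f}   P(FPT > t) = {(fpt > g).mean():.3f}")
```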
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
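The causal ordering semantics mentioned above can be illustrated with a generic vector-clock check for causal multicast delivery; this is a textbook-style sketch, not the Isis or Chorus/Mach implementation, and the process and message names are hypothetical.

```python
from dataclasses import dataclass, field

N = 3  # number of processes in the group (illustrative)

@dataclass
class Message:
    sender: int
    vclock: list  # sender's vector clock at send time

@dataclass
class Process:
    pid: int
    vclock: list = field(default_factory=lambda: [0] * N)
    pending: list = field(default_factory=list)

    def send(self):
        self.vclock[self.pid] += 1
        return Message(self.pid, self.vclock.copy())

    def _deliverable(self, m):
        # Causal delivery rule: the message is the next one expected from its
        # sender, and everything it causally depends on has been delivered here.
        return all(
            m.vclock[k] == self.vclock[k] + 1 if k == m.sender
            else m.vclock[k] <= self.vclock[k]
            for k in range(N)
        )

    def receive(self, m):
        self.pending.append(m)
        delivered, progress = [], True
        while progress:                      # drain every now-deliverable message
            progress = False
            for msg in list(self.pending):
                if self._deliverable(msg):
                    self.vclock[msg.sender] += 1
                    self.pending.remove(msg)
                    delivered.append(msg)
                    progress = True
        return delivered

# p0 multicasts m1; p1 sees m1 and multicasts m2; p2 receives m2 first,
# so it must buffer m2 until m1 arrives to respect causal order.
p0, p1, p2 = (Process(i) for i in range(N))
m1 = p0.send()
p1.receive(m1)
m2 = p1.send()
print(len(p2.receive(m2)))   # 0 -> m2 buffered
print(len(p2.receive(m1)))   # 2 -> m1 then m2 delivered
```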
NASA Astrophysics Data System (ADS)
Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji
Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their effort on creating simulation models to verify the design. This paper proposes a method, and a supporting tool, to navigate designers through the engineering process and to generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions using distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.
NASA Astrophysics Data System (ADS)
Kumlander, Deniss
The globalization of companies' operations and the competition between software vendors demand improved quality of delivered software and a decrease in overall cost. At the same time, globalization introduces many problems into the software development process, since it produces distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position, increasing its productivity in order to bridge the communication and workflow gap by managing the entire communication process rather than concentrating purely on the communication result.
The Internet and the Banks' Strategic Distribution Channel Decisions.
ERIC Educational Resources Information Center
Mols, Niels Peter
1998-01-01
Discusses two strategic distribution channel decisions facing banks, one regarding whether to target the Internet banking segment of customers versus the branch banking segment, and the other regarding the geographical area banks aim to serve. Future distribution channels, the change process, and local, national, and international strategies are…
The distribution of hillslope-channel interactions in a rangeland watershed
Leslie M. Reid
1998-01-01
The distribution of erosion and deposition in a basin--and thus of the major controls on basin evolution--is dependent upon the local balance between sediment transport and sediment supply. This balance, in turn, reflects the nature, strength, and distribution of interactions between hillslope and channel processes.
Implementing and Investigating Distributed Leadership in a National University Network--SaMnet
ERIC Educational Resources Information Center
Sharma, Manjula D.; Rifkin, Will; Tzioumis, Vicky; Hill, Matthew; Johnson, Elizabeth; Varsavsky, Cristina; Jones, Susan; Beames, Stephanie; Crampton, Andrea; Zadnik, Marjan; Pyke, Simon
2017-01-01
The literature suggests that collaborative approaches to leadership, such as distributed leadership, are essential for supporting educational innovators in leading change in teaching in universities. This paper briefly describes the array of activities, processes and resources to support distributed leadership in the implementation of a network,…
Mercury (Hg) species distribution patterns among ecosystem compartments in the Everglades were analyzed at the landscape level in order to explore the implications of Hg distribution for Hg bioaccumulation, and to investigate major biogeochemical processes that are pertinent to t...
NASA Technical Reports Server (NTRS)
Standfield, Clarence E.
1994-01-01
Resin-powder dispenser used at NASA's Langley Research Center for processing of composite-material prepregs. Dispenser evenly distributes powder (resin polymer and other matrix materials in powder form) onto wet uncured prepregs. Provides versatility in distribution of solid resin in prepreg operation. Used wherever there is requirement for even, continuous distribution of small amount of powder.
Challenges of Using CSCL in Open Distributed Learning.
ERIC Educational Resources Information Center
Nilsen, Anders Grov; Instefjord, Elen J.
As a compulsory part of the study in Pedagogical Information Science at the University of Bergen and Stord/Haugesund College (Norway) during the spring term of 1999, students participated in a distributed group activity that provided experience on distributed collaboration and use of online groupware systems. The group collaboration process was…
ERIC Educational Resources Information Center
Koch, James V.; Elkin, Randyl D.
This study was conducted to determine: (1) the stated criteria and priorities which the Illinois Vocational and Technical Education Division uses to determine the distribution of funds to districts, (2) how closely the actual distribution of funds match the stated criteria and priorities, (3) whether the actual distribution of funds reflect…
Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2005-01-01
In extensive form noncooperative game theory, at each instant t, each agent i sets its state x_i independently of the other agents by sampling an associated distribution, q_i(x_i). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q_i of the agents are most likely to be if the system were to have a particular expected value of the objective function G(x_1, x_2, ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G) and the convergence of the q_i to the most likely set of distributions consistent with that value. After this the target value for E_q(G) is lowered, and then the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x_i all have a finite number of possible values and G does not extend to multiple instants in time. That work also is based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions. This extends the applicability of that work to include continuous spaces and Reinforcement Learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.
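A minimal mean-field sketch of the iterative idea described above, for two agents with a handful of discrete moves: each q_i is repeatedly set to a Boltzmann distribution over the expected objective given the other agent's current distribution, while the effective target on E(G) is tightened by lowering a temperature. The objective G and all parameters are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

moves = np.linspace(-1.0, 1.0, 5)                 # each agent's discrete move set
G = lambda x1, x2: (x1 - 0.3) ** 2 + (x2 + 0.5) ** 2 + x1 * x2   # toy objective
Gmat = np.array([[G(a, b) for b in moves] for a in moves])

q1 = np.full(len(moves), 1.0 / len(moves))        # independent distributions q_i
q2 = np.full(len(moves), 1.0 / len(moves))

def boltzmann(expected_g, T):
    """Maximum-entropy (Boltzmann) distribution for a given expected cost and temperature."""
    w = np.exp(-(expected_g - expected_g.min()) / T)
    return w / w.sum()

T = 1.0
for step in range(60):
    # Expected G for each of agent 1's moves, averaging over q2 (and vice versa).
    q1 = boltzmann(Gmat @ q2, T)
    q2 = boltzmann(Gmat.T @ q1, T)
    T *= 0.9                                      # tighten the target E(G) by annealing

E_G = q1 @ Gmat @ q2
print("E_q(G) =", round(float(E_G), 4))
print("most likely joint move:", moves[q1.argmax()], moves[q2.argmax()])
```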
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, which is coupled with the FEM simulation system INDEED for simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.
Cetinceviz, Yucel; Bayindir, Ramazan
2012-05-01
The network requirements of control systems in industrial applications increase day by day. Internet-based control systems and various fieldbus systems have been designed in order to meet these requirements. This paper describes an Internet-based control system with wireless fieldbus communication designed for distributed processes. The system was implemented as an experimental setup in a laboratory. In industrial facilities, fieldbus networks provide the process control layer and the remote connection of the distributed control devices at the lowest levels of the industrial production environment. In this paper, an Internet-based control system that meets the system requirements with a new-generation communication structure, called a wired/wireless hybrid system, has been designed at the field level and implemented to cover all sectors of distributed automation, from process control to distributed input/output (I/O). The hardware structure comprises a programmable logic controller (PLC), a communication processor (CP) module, two industrial wireless modules, a distributed I/O module and a Motor Protection Package (MPP); the software structure comprises the WinCC flexible program used for the SCADA (Supervisory Control And Data Acquisition) screens and the SIMATIC MANAGER package program ("STEP7") used for the hardware and network configuration and for downloading the control program to the PLC. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Vercruysse, Jurgen; Toiviainen, Maunu; Fonteyne, Margot; Helkimo, Niko; Ketolainen, Jarkko; Juuti, Mikko; Delaet, Urbain; Van Assche, Ivo; Remon, Jean Paul; Vervaet, Chris; De Beer, Thomas
2014-04-01
Over the last decade, there has been increased interest in the application of twin screw granulation as a continuous wet granulation technique for pharmaceutical drug formulations. However, the mixing of granulation liquid and powder material during the short residence time inside the screw chamber and the atypical particle size distribution (PSD) of granules produced by twin screw granulation are not yet fully understood. Therefore, this study aims at visualizing the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging. First, the residence time of material inside the barrel was investigated as a function of screw speed and moisture content, followed by visualization of the granulation liquid distribution as a function of different formulation and process parameters (liquid feed rate, liquid addition method, screw configuration, moisture content and barrel filling degree). The link between moisture uniformity and granule size distributions was also studied. For residence time analysis, increased screw speed and lower moisture content resulted in a shorter mean residence time and a narrower residence time distribution. In addition, the distribution of granulation liquid was more homogeneous at higher moisture content and with more kneading zones on the granulator screws. After optimization of the screw configuration, a two-level full factorial experimental design was performed to evaluate the influence of moisture content, screw speed and powder feed rate on the mixing efficiency of the powder and liquid phase. From these results, it was concluded that only increasing the moisture content significantly improved the granulation liquid distribution. This study demonstrates that NIR chemical imaging is a fast and adequate measurement tool for process visualization and hence for providing better understanding of a continuous twin screw granulation system. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Janković, Bojan
2009-10-01
The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) value was approximately constant (E_a,int = 95.2 kJ mol^-1 and E_a,diff = 96.6 kJ mol^-1, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol^-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used for estimation of the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(E_a)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
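A small numerical sketch of the reported relationships, assuming an illustrative pre-exponential scale factor (not given in the abstract): the isothermal Weibull conversion curve α(t) = 1 − exp(−(t/η)^β), an Arrhenius dependence of the scale parameter η on temperature, and the fitted Šesták-Berggren conversion function.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
Ea = 94.3e3        # J mol^-1, from the abstract
beta = 1.07        # Weibull shape parameter, from the abstract
eta0 = 1.0e-9      # min; assumed pre-exponential scale, not given in the abstract

def eta(T):
    """Weibull scale parameter assumed to follow an Arrhenius law, eta = eta0 * exp(Ea / RT)."""
    return eta0 * np.exp(Ea / (R * T))

def alpha(t, T):
    """Isothermal Weibull conversion curve alpha(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - np.exp(-(t / eta(T)) ** beta)

def f_SB(a, m=0.18, n=1.19):
    """Fitted Sestak-Berggren conversion function f(alpha) = alpha^m * (1-alpha)^n."""
    return a ** m * (1.0 - a) ** n

for T in (380.0, 400.0, 420.0, 440.0):             # operating temperatures from the abstract
    t_half = eta(T) * np.log(2.0) ** (1.0 / beta)   # time to reach alpha = 0.5
    print(f"T = {T:5.0f} K   eta = {eta(T):10.1f} min   t(alpha=0.5) = {t_half:10.1f} min")

a = np.linspace(0.05, 0.95, 7)
print("f(alpha) =", np.round(f_SB(a), 3))
```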
NASA Astrophysics Data System (ADS)
Grujicic, M.; Snipes, J. S.; Galgalikar, R.; Ramaswami, S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.
2014-09-01
In our recent work, a multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process was introduced. The model is of a modular type and comprises five modules, each designed to handle a specific aspect of the GMAW process, i.e.: (i) electro-dynamics of the welding-gun; (ii) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler-metal consumable electrode to the weld; (iii) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (iv) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; and (v) spatial distribution of the as-welded material mechanical properties. In the present work, the GMAW process model has been upgraded with respect to its predictive capabilities regarding the spatial distribution of the mechanical properties controlling the ballistic limit (i.e., penetration resistance) of the weld. The model is upgraded through the introduction of a sixth module, in recognition of the fact that in thick steel GMAW weldments, the overall ballistic performance of the armor may become controlled by the (often inferior) ballistic limits of its weld (fusion and heat-affected) zones. To demonstrate the utility of the upgraded GMAW process model, it is next applied to the case of butt-welding of a prototypical high-hardness armor-grade martensitic steel, MIL A46100. The model predictions concerning the spatial distribution of the material microstructure and ballistic-limit-controlling mechanical properties within the MIL A46100 butt-weld are found to be consistent with prior observations and general expectations.
The Birth-Death-Mutation Process: A New Paradigm for Fat Tailed Distributions
Maruvka, Yosef E.; Kessler, David A.; Shnerb, Nadav M.
2011-01-01
Fat tailed statistics and power-laws are ubiquitous in many complex systems. Usually the appearance of a few anomalously successful individuals (bio-species, investors, websites) is interpreted as reflecting some inherent “quality” (fitness, talent, giftedness), as in Darwin's theory of natural selection. Here we adopt the opposite, “neutral”, outlook, suggesting that the main factor explaining success is merely luck. The statistics emerging from the neutral birth-death-mutation (BDM) process is shown to fit marvelously many empirical distributions. While previous neutral theories have focused on the power-law tail, our theory economically and accurately explains the entire distribution. We thus suggest the BDM distribution as a standard neutral model: effects of fitness and selection are to be identified by substantial deviations from it. PMID:22069453
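A toy discrete-event sketch of a neutral birth-death-mutation process along the lines described above: each individual reproduces or dies at equal per-capita rates, and each birth founds a new family with a small mutation probability. All rates and sizes are illustrative, not the authors' parameterization.

```python
import random
from collections import Counter

random.seed(1)

def simulate_bdm(steps=200_000, mu=0.01):
    """Neutral BDM: pick a random individual; with prob. 1/2 it dies, otherwise it
    gives birth, and the newborn founds a new family with probability mu."""
    families = Counter({0: 10})          # family id -> size; small seed population
    next_id = 1
    for _ in range(steps):
        total = sum(families.values())
        if total == 0:
            families[next_id] = 1        # re-seed so the population never dies out
            next_id += 1
            continue
        # choose a family with probability proportional to its size
        r = random.randrange(total)
        for fam, size in families.items():
            if r < size:
                break
            r -= size
        if random.random() < 0.5:
            families[fam] -= 1           # death
            if families[fam] == 0:
                del families[fam]
        elif random.random() < mu:       # birth with mutation -> new family
            families[next_id] = 1
            next_id += 1
        else:
            families[fam] += 1           # ordinary birth
    return families

sizes = sorted(simulate_bdm().values(), reverse=True)
print("number of families:", len(sizes))
print("largest family sizes:", sizes[:10])   # heavy-tailed under the neutral dynamics
```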
The effect of bean origin and temperature on grinding roasted coffee
NASA Astrophysics Data System (ADS)
Uman, Erol; Colonna-Dashwood, Maxwell; Colonna-Dashwood, Lesley; Perger, Matthew; Klatt, Christian; Leighton, Stephen; Miller, Brian; Butler, Keith T.; Melot, Brent C.; Speirs, Rory W.; Hendon, Christopher H.
2016-04-01
Coffee is prepared by the extraction of a complex array of organic molecules from the roasted bean, which has been ground into fine particulates. The extraction depends on temperature, water chemistry and also the accessible surface area of the coffee. Here we investigate whether variations in the production processes of single origin coffee beans affects the particle size distribution upon grinding. We find that the particle size distribution is independent of the bean origin and processing method. Furthermore, we elucidate the influence of bean temperature on particle size distribution, concluding that grinding cold results in a narrower particle size distribution, and reduced mean particle size. We anticipate these results will influence the production of coffee industrially, as well as contribute to how we store and use coffee daily.
The effect of bean origin and temperature on grinding roasted coffee.
Uman, Erol; Colonna-Dashwood, Maxwell; Colonna-Dashwood, Lesley; Perger, Matthew; Klatt, Christian; Leighton, Stephen; Miller, Brian; Butler, Keith T; Melot, Brent C; Speirs, Rory W; Hendon, Christopher H
2016-04-18
Coffee is prepared by the extraction of a complex array of organic molecules from the roasted bean, which has been ground into fine particulates. The extraction depends on temperature, water chemistry and also the accessible surface area of the coffee. Here we investigate whether variations in the production processes of single origin coffee beans affects the particle size distribution upon grinding. We find that the particle size distribution is independent of the bean origin and processing method. Furthermore, we elucidate the influence of bean temperature on particle size distribution, concluding that grinding cold results in a narrower particle size distribution, and reduced mean particle size. We anticipate these results will influence the production of coffee industrially, as well as contribute to how we store and use coffee daily.
The effect of bean origin and temperature on grinding roasted coffee
Uman, Erol; Colonna-Dashwood, Maxwell; Colonna-Dashwood, Lesley; Perger, Matthew; Klatt, Christian; Leighton, Stephen; Miller, Brian; Butler, Keith T.; Melot, Brent C.; Speirs, Rory W.; Hendon, Christopher H.
2016-01-01
Coffee is prepared by the extraction of a complex array of organic molecules from the roasted bean, which has been ground into fine particulates. The extraction depends on temperature, water chemistry and also the accessible surface area of the coffee. Here we investigate whether variations in the production processes of single origin coffee beans affects the particle size distribution upon grinding. We find that the particle size distribution is independent of the bean origin and processing method. Furthermore, we elucidate the influence of bean temperature on particle size distribution, concluding that grinding cold results in a narrower particle size distribution, and reduced mean particle size. We anticipate these results will influence the production of coffee industrially, as well as contribute to how we store and use coffee daily. PMID:27086837
Statistical methods for investigating quiescence and other temporal seismicity patterns
Matthews, M.V.; Reasenberg, P.A.
1988-01-01
We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
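An illustrative sketch (not the authors' Brownian-bridge-based statistic) of the underlying idea: model event times as a Poisson process and compare, via a likelihood ratio, a constant-rate fit against a piecewise-constant fit with a reduced rate inside a candidate quiescence window. The synthetic catalogue and window length are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def loglik_const(times, T):
    """Maximized Poisson log-likelihood (up to a constant) for a single rate over [0, T]."""
    n = len(times)
    rate = n / T
    return n * np.log(rate) - rate * T

def loglik_quiescent(times, T, w0, w1):
    """Piecewise-constant intensity: one rate inside [w0, w1), another outside."""
    times = np.asarray(times)
    n_in = np.sum((times >= w0) & (times < w1))
    n_out = len(times) - n_in
    d_in, d_out = w1 - w0, T - (w1 - w0)
    ll = 0.0
    for n, d in ((n_in, d_in), (n_out, d_out)):
        if n > 0:                       # a segment with no events contributes 0 at the optimum
            ll += n * np.log(n / d) - n
    return ll

# Synthetic catalogue: background rate 1.0, rate drops to 0.2 during [60, 80).
T = 100.0
t = np.sort(rng.uniform(0, T, rng.poisson(1.0 * T)))
t = t[(t < 60) | (t >= 80) | (rng.random(len(t)) < 0.2)]

# Scan candidate 20-unit windows and report the largest likelihood-ratio statistic.
best = max(
    ((2 * (loglik_quiescent(t, T, a, a + 20) - loglik_const(t, T)), a)
     for a in np.arange(0, 81, 5)),
    key=lambda x: x[0],
)
print(f"best 20-unit window starts at t = {best[1]:.0f}, LR statistic = {best[0]:.2f}")
```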
The ATLAS PanDA Pilot in Operation
NASA Astrophysics Data System (ADS)
Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration
2011-12-01
The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.
Mathematical model of whole-process calculation for bottom-blowing copper smelting
NASA Astrophysics Data System (ADS)
Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song
2017-11-01
The distribution law of materials in smelting products is key to cost accounting and contaminant control. However, the distribution law is difficult to determine quickly and accurately by mere sampling and analysis. Mathematical models for material and heat balance in bottom-blowing smelting, converting, anode furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control index constraint in copper bottom-blowing smelting. A simulation of the entire process of bottom-blowing copper smelting was implemented on the self-developed MetCal software platform. A whole-process simulation for an enterprise in China was then conducted. Results indicated that the quantity and composition of unknown materials, as well as heat balance information, can be quickly calculated using the model. Comparison with production data revealed that the model can basically reflect the distribution law of the materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.
Dynamics of assembly production flow
NASA Astrophysics Data System (ADS)
Ezaki, Takahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro
2015-06-01
Despite recent developments in management theory, maintaining a manufacturing schedule remains difficult because of production delays and fluctuations in demand and supply of materials. The dynamic response of manufacturing systems to such disruptions has rarely been studied. To capture these responses, we investigate a process that models the assembly of parts into end products. The complete assembly process is represented by a directed tree, where the smallest parts are injected at leaves and the end products are removed at the root. A discrete assembly process, represented by a node on the network, integrates parts, which are then sent to the next downstream node as a single part. The model exhibits some intriguing phenomena, including overstock cascades, a phase transition in terms of demand and supply fluctuations, a nonmonotonic distribution of stockouts in the network, and the formation of stockout paths and stockout chains. Surprisingly, these rich phenomena result solely from the nature of distributed assembly processes. From a physical perspective, these phenomena provide insight into delay dynamics and inventory distributions in large-scale manufacturing systems.
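A compact sketch of the modeled dynamics under illustrative assumptions: a small directed assembly tree in which parts arrive stochastically at the leaves, each internal node assembles one unit per step when every input buffer is non-empty, and demand removes finished products at the root. The tree shape, buffer cap, and rates are invented for the example.

```python
import random

random.seed(7)

# Assembly tree: node -> list of upstream (child) nodes; leaves have no children.
tree = {"root": ["sub1", "sub2"], "sub1": ["leafA", "leafB"], "sub2": ["leafC"]}
leaves = ["leafA", "leafB", "leafC"]
buffers = {n: 0 for n in list(tree) + leaves}    # finished-part stock at each node
CAP = 10                                         # buffer capacity (overstock limit)
stockouts = {n: 0 for n in tree}                 # assembly steps blocked by missing parts

def assemble(node):
    """Post-order pass: children assemble first, then this node consumes one part
    from every child buffer to add one unit to its own buffer."""
    for child in tree.get(node, []):
        assemble(child)
    if node in tree:
        if all(buffers[c] > 0 for c in tree[node]):
            if buffers[node] < CAP:
                for c in tree[node]:
                    buffers[c] -= 1
                buffers[node] += 1
        else:
            stockouts[node] += 1                 # blocked: at least one input missing

shipped = 0
for step in range(2000):
    for leaf in leaves:                          # fluctuating supply of raw parts
        if random.random() < 0.7 and buffers[leaf] < CAP:
            buffers[leaf] += 1
    assemble("root")
    if random.random() < 0.6 and buffers["root"] > 0:   # fluctuating demand
        buffers["root"] -= 1
        shipped += 1

print("shipped products:", shipped)
print("final buffers:", buffers)
print("stockout counts:", stockouts)
```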
NASA Astrophysics Data System (ADS)
Jain, Rahul; Pal, Surjya Kanta; Singh, Shiv Brat
2017-02-01
Friction Stir Welding (FSW) is a solid state joining process that is well suited for welding aluminum alloys. The Finite Element Method (FEM) is an important tool for predicting the state variables of the process, but numerical simulation of FSW is highly complex due to non-linear contact interactions between tool and workpiece and the interdependency of displacement and temperature. In the present work, a three-dimensional coupled thermo-mechanical method based on the Lagrangian implicit method is proposed to study the thermal history, strain distribution and thermo-mechanical process in butt welding of aluminum alloy 2024 using the DEFORM-3D software. The workpiece is modeled as a rigid-viscoplastic material, and a sticking condition between tool and workpiece is assumed. Adaptive re-meshing is used to tackle high mesh distortion. The effect of tool rotational speed and welding speed on plastic strain is studied, and insight is given into the asymmetric nature of the FSW process. The temperature distribution in the workpiece and tool is predicted, and the maximum temperature is found at the workpiece top surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goriely, S.; Chamel, N.; Pearson, J. M.
The rapid neutron-capture process, or r-process, is known to be of fundamental importance for explaining the origin of approximately half of the A>60 stable nuclei observed in nature. In recent years nuclear astrophysicists have developed more and more sophisticated r-process models, eagerly trying to add new astrophysical or nuclear physics ingredients to explain the solar system composition in a satisfactory way. We show here that the decompression of neutron star matter may provide suitable conditions for a robust r-processing. After decompression, the inner crust material gives rise to an abundance distribution for A>130 nuclei similar to the one observed in the solar system. Similarly, the outer crust, if heated to a temperature of about 8 × 10^9 K before decompression, is made of exotic neutron-rich nuclei with a mass distribution close to the solar one for 80 ≤ A ≤ 130. During the decompression, the free neutrons (initially liberated by the high temperatures) are re-captured, leading to a final pattern similar to the solar system distribution.
NASA Technical Reports Server (NTRS)
Newton, G. P.
1973-01-01
Previous solutions of the problem of the distribution of vibrationally excited molecular nitrogen in the thermosphere have either assumed a Boltzmann distribution and considered diffusion as one of the loss processes or solved for the energy level populations and neglected diffusion. Both of the previous approaches are combined by solving the time dependent continuity equations, including the diffusion process, for the first six energy levels of molecular nitrogen for conditions in the thermosphere corresponding to a stable auroral red arc. The primary source of molecular nitrogen excitation was subexcitation, and inelastic collisions between thermal electrons and molecular nitrogen. The reaction rates for this process were calculated from published cross section calculations. The loss processes for vibrational energy were electron and atomic oxygen quenching and vibrational energy exchange. The coupled sets of nonlinear, partial differential equations were solved numerically by employing finite difference equations.
Gundersen, H J; Seefeldt, T; Osterby, R
1980-01-01
The width of individual glomerular epithelial foot processes appears very different on electron micrographs. A method for obtaining distributions of the true width of foot processes from that of their apparent width on electron micrographs has been developed, based on geometric probability theory pertaining to a specific geometric model. Analyses of foot process width in humans and rats show a remarkable interindividual invariance, implying rigid control and therefore great biological significance of foot process width or a derivative thereof. The very low interindividual variation of the true width, shown in the present paper, makes it possible to demonstrate slight changes in rather small groups of patients or experimental animals.
Doležal, Pavel; Zapletal, Josef; Fintová, Stanislava; Trojanová, Zuzanka; Greger, Miroslav; Roupcová, Pavla; Podrábský, Tomáš
2016-01-01
A new Mg-3Zn-2Ca magnesium alloy was prepared using different processing techniques: gravity casting as well as squeeze casting in the liquid and semisolid states. The materials were further thermally treated; thermal treatment of the gravity cast alloy was additionally combined with equal channel angular pressing (ECAP). The alloy processed by squeeze casting in the liquid as well as the semisolid state exhibits improved plasticity; ECAP processing positively influenced both the tensile and compressive characteristics of the alloy. The applied heat treatment influenced the distribution and chemical composition of the intermetallic phases present. The influence of the particular processing techniques, heat treatment, and intermetallic phase distribution is thoroughly discussed in relation to the mechanical behavior of the presented alloys. PMID:28774000
Processing of Aluminum-Graphite Particulate Metal Matrix Composites by Advanced Shear Technology
NASA Astrophysics Data System (ADS)
Barekar, N.; Tzamtzis, S.; Dhindaw, B. K.; Patel, J.; Hari Babu, N.; Fan, Z.
2009-12-01
To extend the possibilities of using aluminum/graphite composites as structural materials, a novel process is developed. The conventional methods often produce agglomerated structures exhibiting lower strength and ductility. To overcome the cohesive force of the agglomerates, a melt conditioned high-pressure die casting (MC-HPDC) process innovatively adapts the well-established, high-shear dispersive mixing action of a twin screw mechanism. The distribution of particles and properties of composites are quantitatively evaluated. The adopted rheo process significantly improved the distribution of the reinforcement in the matrix with a strong interfacial bond between the two. A good combination of improved ultimate tensile strength (UTS) and tensile elongation (ɛ) is obtained compared with composites produced by conventional processes.
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
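A numerical sketch of the equivalence described above, under simplifying assumptions: presence points on a gridded landscape are converted to per-cell counts and fit with a plain Poisson regression (log-linear intensity) by Newton-Raphson; up to the scale-dependent intercept, the fitted coefficients play the role of MAXENT feature weights. This is an illustration, not the authors' code, and all data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Gridded landscape with two environmental covariates per cell.
n_cells = 2000
env = rng.normal(size=(n_cells, 2))
X = np.column_stack([np.ones(n_cells), env])          # intercept + covariates

# Simulate a presence point pattern: Poisson counts per cell with
# log-linear intensity (true coefficients are illustrative).
beta_true = np.array([-3.0, 1.0, -0.5])
counts = rng.poisson(np.exp(X @ beta_true))

def fit_poisson(X, y, iters=25):
    """Poisson regression fit by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * mu[:, None])        # observed/expected information
        beta += np.linalg.solve(hess, grad)
    return beta

beta_hat = fit_poisson(X, counts)
print("true slopes:  ", beta_true[1:])
print("fitted slopes:", np.round(beta_hat[1:], 3))    # slopes match; intercept is scale-dependent
```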
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732
Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M
2006-12-01
This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
Signal processing for distributed sensor concept: DISCO
NASA Astrophysics Data System (ADS)
Rafailov, Michael K.
2007-04-01
The Distributed Sensor Concept (DISCO) is proposed for multiplying individual sensor capabilities through cooperative target engagement. DISCO relies on the ability of signal processing software to format, process, transmit, and receive sensor data and to exploit those data in the signal synthesis process. Each sensor's data are synchronized, formatted, signal-to-noise ratio (SNR) enhanced, and distributed within the sensor network. The signal processing technique for DISCO is Recursive Adaptive Frame Integration of Limited data (RAFIL), a technique initially proposed [1] as a way to improve SNR, reduce data rate, and mitigate FPA correlated noise in an individual sensor's digital video-signal processing. In the Distributed Sensor Concept, RAFIL is used in a segmented way, with its constituent stages spatially and/or temporally separated between transmitters and receivers. Those stages include, but are not limited to, two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - with limited frame integration placed between the thresholds, as well as formatters, conventional integrators, and more. RAFIL allows a non-linear integration that, along with SNR gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability [2]. The DISCO architecture allows flexible optimization of SNR gain, data rates, and noise suppression on the sensor side, and limited integration, re-formatting, and the final threshold on the node side. DISCO with RAFIL may have a flexible architecture that allows the hardware and software to be segmented to best suit specific DISCO applications and sensing needs - whether air- or space-based platforms, ground terminals, or integration of a sensor network.
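A much-simplified numerical sketch of the two-threshold, limited-integration idea attributed to RAFIL above (the actual algorithm in [1] is more elaborate): frames are thresholded with a low per-frame threshold that favors detection, the binary exceedances are integrated over a limited window, and a second threshold on the integrated count manages the false alarm rate. The SNR, thresholds, and frame stack are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

frames, pixels = 16, 10_000
snr = 2.0                      # weak per-frame target signal (illustrative)
target_pixel = 1234

# Synthetic frame stack: unit-variance noise everywhere, weak target in one pixel.
stack = rng.normal(size=(frames, pixels))
stack[:, target_pixel] += snr

thr1 = 1.0                     # low first threshold: favors probability of detection
thr2 = 11                      # second threshold on the integrated count: controls false alarms

hits = stack > thr1                         # per-frame binary exceedances
integrated = hits.sum(axis=0)               # limited frame integration (16 frames)
detections = np.flatnonzero(integrated >= thr2)

p_exceed_noise = 1 - 0.8413                 # P(N(0,1) > 1.0)
print("expected noise exceedances per pixel:", round(frames * p_exceed_noise, 2))
print("integrated count at target pixel:", integrated[target_pixel])
print("declared detections:", detections)
```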
NASA Astrophysics Data System (ADS)
Gallant, Frederick M.
A novel method of fabricating functionally graded extruded composite materials is proposed for propellant applications using the technology of continuous processing with a Twin-Screw Extruder. The method is applied to the manufacturing of grains for solid rocket motors in an end-burning configuration with an axial gradient in ammonium perchlorate volume fraction and relative coarse/fine particle size distributions. The fabrication of functionally graded extruded polymer composites with either inert or energetic ingredients has yet to be investigated. The lack of knowledge concerning the processing of these novel materials has necessitated that a number of research issues be addressed. Of primary concern is characterizing and modeling the relationship between the extruder screw geometry, transient processing conditions, and the gradient architecture that evolves in the extruder. Recent interpretations of the Residence Time Distributions (RTDs) and Residence Volume Distributions (RVDs) for polymer composites in the TSE are used to develop new process models for predicting gradient architectures in the direction of extrusion. An approach is developed for characterizing the sections of the extrudate using optical, mechanical, and compositional analysis to determine the gradient architectures. The effects of processing on the burning rate properties of extruded energetic polymer composites are characterized for homogeneous formulations over a range of compositions to determine realistic gradient architectures for solid rocket motor applications. The new process models and burning rate properties that have been characterized in this research effort will be the basis for an inverse design procedure that is capable of determining gradient architectures for grains in solid rocket motors that possess tailored burning rate distributions that conform to user-defined performance specifications.
Optimization of chlorine fluxing process for magnesium removal from molten aluminum
NASA Astrophysics Data System (ADS)
Fu, Qian
High throughput and low operational cost are the keys to a successful industrial process. Much aluminum is now recycled in the form of used beverage cans, and this aluminum consists of alloys that contain high levels of magnesium. It is common practice to "demag" the metal by injecting chlorine that preferentially reacts with the magnesium. In conventional chlorine fluxing processes, low reaction efficiency results in excessive reactive gas emissions. In this study, through an experimental investigation of the reaction kinetics involved in this process, a mathematical model is set up for the purpose of process optimization. A feedback-controlled chlorine reduction process strategy is suggested for demagging the molten aluminum to the desired magnesium level without significant gas emissions. This strategy also requires the least modification of the existing process facility. The suggested process time will be only slightly longer than that of conventional methods, and chlorine usage and emissions will be reduced. In order to achieve process optimization through novel designs in any fluxing process, a system is necessary for measuring the bubble distribution in liquid metals. An electro-resistivity probe described in the literature has low accuracy, and its capability to measure bubble distribution has not yet been fully demonstrated. A capacitance bubble probe was designed for bubble measurements in molten metals. The probe signal was collected and processed digitally. Higher accuracy was obtained by stronger discrimination against corrupted signals. A single-size bubble experiment in Belmont metal was designed to reveal the characteristic response of the capacitance probe. This characteristic response fits well with a theoretical model. It is suggested that, using a properly designed deconvolution process, the actual bubble size distribution can be calculated. The capacitance probe was used to study some practical bubble generation devices. Preliminary results on the bubble distribution generated by a porous plug in Belmont metal showed bubbles much bigger than those in a water model. Preliminary results in molten aluminum showed that the probe was applicable in this harsh environment. An interesting bubble coalescence phenomenon was also observed in both Belmont metal and molten aluminum.
Dou, Zhiying; Li, Kefeng; Wang, Ping; Cao, Liu
2012-01-18
Vinegar and wine processing of medicinal plants are two traditional pharmaceutical techniques which have been used for thousands of years in China. Tetrahydropalmatine (THP), dehydrocorydaline (DHC) and protopine are three major bioactive molecules in Rhizoma Corydalis. In this study, a simple and reliable HPLC method was developed for simultaneous analysis of THP, DHC and protopine in rat tissues after gastric gavage administration of Rhizoma Corydalis. The validated HPLC method was successfully applied to investigate the effect of wine and vinegar processing on the compounds' distribution in rat tissues. Our results showed that processing mainly affects the T(max) and mean residence time (MRT) of the molecules without changing their C(max) and AUC(0-24 h). Vinegar processing significantly increased the T(max) of DHC in heart, kidney, cerebrum, cerebellum, brain stem and striatum and prolonged the T(max) of protopine in brain. No significant changes were observed in the T(max) of THP in rat tissues after vinegar processing. Wine processing reduced the T(max) of protopine and DHC in liver and spleen and the T(max) of protopine in lung, but increased the T(max) of THP in all the rat tissues examined. To our knowledge, this is the first report on the effects of processing on the tissue distribution of the bioactive molecules from Rhizoma Corydalis.
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
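A small sketch of the statistical point being made, with synthetic data standing in for measurements: RMS roughness values drawn from a right-skewed population are fit with both a Gaussian and a lognormal model, and the implied most-probable value (mode) is compared. The lognormal is used here only as an example of an asymmetric distribution; the paper's specific choice is not assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic skewed "RMS surface roughness" sample (nm), standing in for metrology data.
data = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=500)

# Symmetric model: Gaussian. Its mode equals its mean.
mu, sigma = stats.norm.fit(data)
gaussian_mode = mu

# Asymmetric model: lognormal. Mode = scale * exp(-shape^2), with scale = exp(mu_log).
shape, loc, scale = stats.lognorm.fit(data, floc=0)
lognorm_mode = scale * np.exp(-shape ** 2)

print(f"sample mean          : {data.mean():.3f} nm")
print(f"Gaussian mode (=mean): {gaussian_mode:.3f} nm")
print(f"lognormal mode       : {lognorm_mode:.3f} nm  (lower: skew pulls the mean above the mode)")
```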
NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's demands for high distribution, extensibility and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design idea. We also give a concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding the extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of the extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point of earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
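A brief numerical sketch of the convergence issue raised above, under illustrative parameter choices: magnitudes are drawn from a truncated exponential distribution, peaks over a threshold are fit with a generalized Pareto distribution, and the fitted shape parameter is inspected as the threshold rises. For a bounded (truncated) tail the limiting GPD shape is negative, but the fit approaches it only slowly, which is the poor convergence discussed in the comment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)

# Truncated exponential "magnitudes": exponential with rate b, truncated at m_max.
b, m_min, m_max = np.log(10.0), 4.0, 7.5          # Gutenberg-Richter-like values (illustrative)
u = rng.uniform(size=200_000)
mags = m_min - np.log(1 - u * (1 - np.exp(-b * (m_max - m_min)))) / b   # inverse-CDF sampling

for threshold in (5.0, 5.5, 6.0, 6.5):
    exceed = mags[mags > threshold] - threshold   # peaks over threshold (POT)
    # Fit a GPD to the exceedances (location fixed at 0).
    shape, loc, scale = stats.genpareto.fit(exceed, floc=0)
    print(f"threshold {threshold:.1f}: {exceed.size:6d} exceedances, "
          f"fitted GPD shape = {shape:+.3f}")
```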
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
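A schematic sketch of the hierarchical screening described above, using a toy model in place of DHSVM: candidate parameter sets are sampled, run through a surrogate simulator, and successively filtered by a regional signature range, a hydrograph fit criterion, an internal (snow-like) state criterion, and a final expert-knowledge pattern check. The toy model, thresholds, and synthetic "observations" are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def toy_model(p, t):
    """Stand-in for a distributed model: returns (streamflow, internal state)."""
    q = p[0] * np.exp(-p[1] * t) + p[2]            # recession-like hydrograph
    s = p[3] * np.maximum(0.0, 1.0 - t / 200.0)    # snow-like internal state
    return q, s

t = np.arange(0, 365.0)
p_true = np.array([5.0, 0.02, 0.5, 80.0])
q_obs, s_obs = toy_model(p_true, t)
q_obs = q_obs + rng.normal(0, 0.1, t.size)         # noisy "observations"

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# 10,000 candidate parameter sets, as in the study (ranges are illustrative).
params = rng.uniform([1, 0.005, 0, 20], [10, 0.05, 2, 150], size=(10_000, 4))
sims = [toy_model(p, t) for p in params]

stage1 = [i for i, (q, s) in enumerate(sims)                  # regional signature: mean-flow ratio
          if 0.8 < q.mean() / q_obs.mean() < 1.2]
stage2 = [i for i in stage1 if nse(sims[i][0], q_obs) > 0.7]  # hydrograph fit
stage3 = [i for i in stage2                                   # internal (snow-like) state fit
          if abs(sims[i][1].max() - s_obs.max()) < 15.0]
stage4 = [i for i in stage3                                   # expert knowledge: late-season baseflow pattern
          if sims[i][0][-1] < 0.15 * sims[i][0][0]]

for name, kept in [("regional", stage1), ("hydrograph", stage2),
                   ("internal state", stage3), ("expert pattern", stage4)]:
    print(f"after {name:15s} constraint: {len(kept):5d} behavioral sets remain")
```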
Distribution and interplay of geologic processes on Titan from Cassini radar data
Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.
2010-01-01
The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ~350 m to ~2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30 degrees), with no dunes being present above 60 degrees. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30 degrees and 60 degrees north. We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the most recent, while tectonic processes that led to the formation of mountains and Xanadu are likely the most ancient.
Yackulic, Charles B.
2016-01-01
There is considerable debate about the role of competition in shaping species distributions over broad spatial extents. This debate has practical implications because predicting changes in species' geographic ranges in response to ongoing environmental change would be simpler if competition could be ignored. While this debate has been the subject of many reviews, recent literature has not addressed the rates of relevant processes. This omission is surprising in that ecologists hypothesized decades ago that regional competitive exclusion is a slow process. The goal of this review is to reassess the debate under the hypothesis that competitive exclusion over broad spatial extents is a slow process. Available evidence, including simulations presented for the first time here, suggests that competitive exclusion over broad spatial extents occurs slowly over temporal extents of many decades to millennia. Ecologists arguing against an important role for competition frequently study modern patterns and/or range dynamics over periods of decades, while much of the evidence for competition shaping geographic ranges at broad spatial extents comes from paleoecological studies over time scales of centuries or longer. If competition is slow, as evidence suggests, the geographic distributions of some, perhaps many, species would continue to change over time scales of decades to millennia, even if environmental conditions did not continue to change. If the distributions of competing species are at equilibrium, it is possible to predict species distributions based on observed species–environment relationships. However, disequilibrium is widespread as a result of competition and many other processes. Studies whose goal is accurate predictions over intermediate time scales (decades to centuries) should focus on factors associated with range expansion (colonization) and loss (local extinction), as opposed to current patterns. In general, understanding of modern range dynamics would be enhanced by considering the rates of relevant processes.
2003-06-27
KENNEDY SPACE CENTER, FLA. - Inside the hangar at Vandenberg Air Force Base, Calif., workers wait for the Pegasus launch vehicle to be moved inside. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
Descending motion of particle and its effect on ozone hole chemistry
NASA Technical Reports Server (NTRS)
Iwasaka, Y.
1988-01-01
Particle descending motion is one possible process which causes ozone loss near the tropopause in the Antarctic spring. However, the size distribution of these particles has not yet been measured. Particle settling is an important redistribution process for the chemical constituents contained in the particles. To understand particle settling effects on the Ozone Hole, information on the size distribution and the chemical composition of the particles is necessary.
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets
NASA Technical Reports Server (NTRS)
Braverman, Amy; Fetzer, Eric
2007-01-01
Empirical probability distributions derived from the data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data in a way that preserves information, focusing on rate-distortion theory and using the entropy-constrained vector quantization algorithm.
A Research Program in Computer Technology. 1987 Annual Technical Report
1990-07-01
1987 Annual Technical Report: A Research Program in Computer Technology (Unclassified). Keywords: distributed processing, survivable networks, local networks, personal computers, workstation environment. The views expressed are those of the authors and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any person or agency.