A hybrid heuristic for the multiple choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd
2013-08-01
In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
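For reference, the problem described above is commonly written as the following 0-1 program (the standard textbook formulation, with notation chosen here rather than taken from the paper): class i contains items j in N_i, item (i,j) has profit p_ij and consumes w^k_ij units of resource k, and c_k is the capacity of dimension k.

```latex
\max \sum_{i=1}^{n} \sum_{j \in N_i} p_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i=1}^{n} \sum_{j \in N_i} w^{k}_{ij}\, x_{ij} \le c_k \;\; (k=1,\dots,m), \qquad
\sum_{j \in N_i} x_{ij} = 1 \;\; (i=1,\dots,n), \qquad
x_{ij} \in \{0,1\}.
```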
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
NASA Astrophysics Data System (ADS)
Quan, Zhe; Wu, Lei
2017-09-01
This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smolinski, B.
The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
An Improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems
Feng, Yanhong; Jia, Ke; He, Yichao
2014-01-01
Cuckoo search (CS) is a new robust swarm intelligence method that is based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First of all, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed, based on the idea of individual hybrid encoding, into a synchronous evolutionary search over a discrete space. Subsequently, the concept of confidence interval (CI) is introduced; hence, a new position-updating rule is designed and genetic mutation with a small probability is introduced. The former enables the population to move towards the global best solution rapidly in every generation, and the latter can effectively prevent the ICS from getting trapped in a local optimum. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Experiments with a large number of KP instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions. PMID:24527026
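The abstract does not spell out the greedy transform; a minimal sketch of one common repair-and-improve routine for the 0-1 knapsack problem (drop the least profitable-per-unit-weight selected items until the capacity holds, then re-add dense items that still fit) is given below. The function name, the density-based ordering and the tie-breaking are illustrative assumptions, not the authors' exact procedure.

```python
def greedy_repair(x, values, weights, capacity):
    """Repair an infeasible 0-1 knapsack solution, then greedily improve it.

    x is a list of 0/1 decisions; items are ranked by value-to-weight ratio
    (an assumed but common choice for a 'greedy transform' style repair).
    """
    order = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
    load = sum(w for w, xi in zip(weights, x) if xi)

    # Repair phase: remove the least dense selected items until feasible.
    for i in order:
        if load <= capacity:
            break
        if x[i]:
            x[i] = 0
            load -= weights[i]

    # Improvement phase: add the densest unselected items that still fit.
    for i in reversed(order):
        if not x[i] and load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x


if __name__ == "__main__":
    values, weights, capacity = [10, 7, 4, 3], [6, 5, 3, 2], 8
    print(greedy_repair([1, 1, 1, 1], values, weights, capacity))  # -> [1, 0, 0, 1]
```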
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickell, E.F.; Simmons, G.J.
In the period since 1976, when Diffie and Hellman published the first discussion of two-key cryptography to appear in the open literature, only a handful of two-key cryptoalgorithms have been proposed - two of which are based on the knapsack problem. Consequently there was enormous interest when Shamir announced in early 1982 a cryptanalytic technique that could break many Merkle-Hellman knapsacks. In a rapid sequence of developments, Simmons and Brickell, Adleman, and Lagarias all announced other attacks on knapsack-based cryptosystems that were either computationally much more efficient or else directed at other knapsack schemes such as the Graham-Shamir or iterated systems. This paper analyzes the common features of knapsack-based cryptosystems and presents all of the cryptanalytic attacks made in 1982 from a unified viewpoint.
Li, Yanjie; Li, Yifan; Pan, Xiang; Li, Qing X; Chen, Ronghua; Li, Xuesheng; Pan, Canping; Song, Jianli
2018-02-01
Plant protection products (PPPs) are applied in China and many other developing countries with knapsack sprayers at high volumes with coarse spray quality, resulting in a high percentage of pesticide losses. In this study, a new air-assisted electric knapsack sprayer and two conventional knapsack sprayers were evaluated in terms of pesticide deposition, residues and loss into the soil. Artificial targets fixed to the upper side and underside of the leaf surface in six zones (at two depths and three heights) were used to collect the deposition, which was analyzed by liquid chromatography triple-quadrupole mass spectrometry. The air-assisted electric knapsack sprayer produced more deposition and better penetrability and uniformity than the two traditional spraying methods. In particular, the air-assisted electric knapsack sprayer reduced pesticide losses to the soil by roughly 37% to 75% and deposited 1.18 and 1.24 times more pesticide than the manual air-pressure and battery-powered knapsack sprayers, respectively. The residues of azoxystrobin and tebuconazole in tomato and cucumber were below the maximum residue limits (MRLs). In general, use of the air-assisted electric knapsack sprayer in tomato and cucumber crops could improve the effectiveness of PPPs, reduce the risk of contamination and protect food safety. © 2017 Society of Chemical Industry.
A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems
Tuo, Shouheng; Yong, Longquan; Deng, Fang'an
2014-01-01
To enhance the performance of harmony search (HS) algorithm on solving the discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, firstly, a method is presented to adjust dimension dynamically for selected harmony vector in optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of HS algorithm. Another improvement in HSTL method is that the dynamic strategies are adopted to change the parameters, which maintains the proper balance effectively between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 programming problem with linear constraints and a quadratic objective function can be solved. This problem, denoted as the generalized quadratic knapsack problem (GQKP), includes as particular cases well-known problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter-setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
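The paper's actual energy function is not reproduced in this abstract; as orientation only, a generic CHN energy for a 0-1 program with a quadratic objective and linear constraints combines the (negated) objective with a quadratic penalty, along the lines of the standard form below, where the penalty weight mu is chosen so that local minima correspond to feasible 0-1 points. The symbols here are generic placeholders, not the paper's notation.

```latex
E(\mathbf{v}) \;=\; -\tfrac{1}{2}\,\mathbf{v}^{\top} Q\,\mathbf{v} \;-\; \mathbf{q}^{\top}\mathbf{v}
\;+\; \tfrac{\mu}{2}\,\lVert A\mathbf{v}-\mathbf{b}\rVert^{2},
\qquad \mathbf{v}\in[0,1]^{n}.
```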
Development and Mining of a Volatile Organic Compound Database
Abdullah, Azian Azamimi; Ono, Naoaki; Sugiura, Tadao; Morita, Aki Hirai; Katsuragi, Tetsuo; Muto, Ai; Nishioka, Takaaki; Kanaya, Shigehiko
2015-01-01
Volatile organic compounds (VOCs) are small molecules that exhibit high vapor pressure under ambient conditions and have low boiling points. Although VOCs contribute only a small proportion of the total metabolites produced by living organisms, they play an important role in chemical ecology, specifically in the biological interactions between organisms and ecosystems. VOCs are also important in the health care field, as they are presently used as biomarkers to detect various human diseases. Until now, information on VOCs has been scattered across the literature, and no database describing VOCs and their biological activities has been available. To fill this gap, we have developed the KNApSAcK Metabolite Ecology Database, which contains information on the relationships between VOCs and their emitting organisms. The KNApSAcK Metabolite Ecology is also linked with the KNApSAcK Core and KNApSAcK Metabolite Activity Database to provide further information on the metabolites and their biological activities. The VOC database can be accessed online. PMID:26495281
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
NASA Technical Reports Server (NTRS)
Baluja, Shumeet
1995-01-01
This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.
Straight Privilege: Unpacking the (Still) Invisible Knapsack
ERIC Educational Resources Information Center
Tollefson, Kaia
2010-01-01
Several unearned benefits attending straight privilege are listed, prefaced by two main arguments. First, it is argued that the rampant heterosexism in the U.S. is largely attributable to many Americans' framing of heterosexism as a matter of religious freedom rather than as a form of bigotry. It is further argued that educators' elimination of…
Unpacking Teachers' Invisible Knapsacks: Social Identity and Privilege in Higher Education
ERIC Educational Resources Information Center
Barnett, Pamela E.
2013-01-01
Peggy McIntosh (1988) famously unpacked what she called an "invisible knapsack" of privileges socially conferred upon whites, men, and heterosexuals. She argued that not only are women and minorities at a disadvantage, but those with social power enjoy benefits that are both unearned and unjustified. We often accept those…
Tu, Zhi-bin; Cui, Meng-jing; Yao, Hong-yan; Hu, Guo-qing; Xiang, Hui-yun; Stallones, Lorann; Zhang, Xu-jun
2012-04-01
To explore the risk factors for work-related acute pesticide poisoning among farmers of Jiangsu province, a population-based, 1:2 matched case-control study was carried out, with 121 patients as the case group matched to 242 controls of the same gender and district and with an age difference of less than 3 years. Cases were those who had suffered from work-related acute pesticide poisoning. A unified questionnaire was used. The database was established in EpiData 3.1, and SPSS 16.0 was used for single-factor and multivariate conditional logistic regression analyses. Results from the single-factor logistic regression analysis showed that the related risk factors were: lack of safety guidance, lack of readable labels before spraying pesticides, no regression during application, using the hand to wipe sweat, using a leaking knapsack, body contamination during application and continuing to work when feeling ill after contact with pesticides. Results from the multivariate conditional logistic regression analysis indicated that lack of safety guidance (OR=2.25, 95%CI: 1.35-3.74), no readable labels before spraying pesticides (OR=1.95, 95%CI: 1.19-3.18), wiping sweat by hand during application (OR=1.97, 95%CI: 1.20-3.24) and using a leaking knapsack during application (OR=1.82, 95%CI: 1.10-3.01) were risk factors for the occurrence of work-related acute pesticide poisoning. Lack of safety guidance, no readable labels before spraying pesticides, and wiping sweat by hand or using a leaking knapsack during application were correlated with the occurrence of work-related acute pesticide poisoning.
Bas, Esra
2014-07-01
In this paper, an integrated methodology for Quality Function Deployment (QFD) and a 0-1 knapsack model is proposed for occupational safety and health as a systems thinking approach. The House of Quality (HoQ) in QFD methodology is a systematic tool to consider the inter-relationships between two factors. In this paper, three HoQs are used to consider the interrelationships between tasks and hazards, hazards and events, and events and preventive/protective measures. The final priority weights of events are defined by considering their project-specific preliminary weights, probability of occurrence, and effects on the victim and the company. The priority weights of the preventive/protective measures obtained in the last HoQ are fed into a 0-1 knapsack model for the investment decision. Then, the selected preventive/protective measures can be adapted to the task design. The proposed step-by-step methodology can be applied to any stage of a project to design the workplace for occupational safety and health, and continuous improvement for safety is endorsed by the closed loop characteristic of the integrated methodology. Copyright © 2013 Elsevier Ltd. All rights reserved.
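As a hedged illustration of the final step described above, the investment decision over preventive/protective measures reduces to a standard 0-1 knapsack model; here w_j stands for the priority weight of measure j from the last HoQ, c_j for its implementation cost and B for the safety budget (the symbols are ours, not the paper's):

```latex
\max \sum_{j=1}^{n} w_j\, x_j
\quad \text{s.t.} \quad \sum_{j=1}^{n} c_j\, x_j \le B, \qquad x_j \in \{0,1\}.
```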
Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun
2014-01-01
An effective hybrid cuckoo search algorithm (CS) with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping and information exchange between individual frogs with genetic mutation applied with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results have shown the effectiveness of the proposed algorithm and its ability to achieve good quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, which is often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on the problem. Assuming an oracle for the shortest vector problem of lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When we define the density in the context of knapsack-type cryptosystems, weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that weights are chosen from different intervals, and analyse the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when we reduce the data size of public keys.
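For context, the density mentioned here is usually defined for weights a_1, ..., a_n as below; the Lagarias-Odlyzko attack succeeds with high probability when this quantity is below roughly 0.645, a bound later raised to 0.9408 by Coster et al.

```latex
d \;=\; \frac{n}{\log_2 \max_i a_i}.
```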
Knapsack--TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network.
Malathy, E M; Muthuswamy, Vijayalakshmi
2015-01-01
In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, which is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method achieves better performance and QoS support for users and greatly reduces handover failures when compared to the traditional TOPSIS method. The decision arrived at by the zone gateway using this operational research analytical method (the dynamic programming knapsack approach together with the Technique for Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of network performance measures such as throughput and delay.
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-01-01
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012
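A minimal sketch of the alternating selection process described above, with both agents playing a simple greedy rule (always insert your largest remaining item that still fits); this is one natural heuristic in the spirit of those the paper analyses, and all names and the stopping test are illustrative assumptions.

```python
def play_greedy(items_a, items_b, capacity):
    """Simulate the two-agent subset-sum game with both agents playing greedily.

    Each agent, in turn, inserts its largest remaining item that still fits.
    Returns the total weight packed by each agent.
    """
    remaining = {"A": sorted(items_a), "B": sorted(items_b)}
    packed = {"A": 0, "B": 0}
    turn = "A"
    while True:
        pool = remaining[turn]
        # Largest item of the current agent that still fits, if any.
        choice = next((w for w in reversed(pool) if w <= capacity), None)
        if choice is not None:
            pool.remove(choice)
            packed[turn] += choice
            capacity -= choice
        # Stop when neither agent can place any remaining item.
        if all(w > capacity for w in remaining["A"] + remaining["B"]):
            break
        turn = "B" if turn == "A" else "A"
    return packed


if __name__ == "__main__":
    print(play_greedy([5, 4, 3], [6, 2, 1], capacity=10))  # -> {'A': 8, 'B': 2}
```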
a Genetic Algorithm Based on Sexual Selection for the Multidimensional 0/1 Knapsack Problems
NASA Astrophysics Data System (ADS)
Varnamkhasti, Mohammad Jalali; Lee, Lai Soon
In this study, a new technique is presented for choosing mate chromosomes during sexual selection in a genetic algorithm. The population is divided into groups of males and females. During the sexual selection, the female chromosome is selected by tournament selection, while the male chromosome is selected based on its Hamming distance from the selected female chromosome, its fitness value, or its active genes. Computational experiments are conducted on the proposed technique and the results are compared with some selection mechanisms commonly used for solving multidimensional 0/1 knapsack problems published in the literature.
On local search for bi-objective knapsack problems.
Liefooghe, Arnaud; Paquete, Luís; Figueira, José Rui
2013-01-01
In this article, a local search approach is proposed for three variants of the bi-objective binary knapsack problem, with the aim of maximizing the total profit and minimizing the total weight. First, an experimental study on a given structural property of connectedness of the efficient set is conducted. Based on this property, a local search algorithm is proposed and its performance is compared to exact algorithms in terms of runtime and quality metrics. The experimental results indicate that this simple local search algorithm is able to find a representative set of optimal solutions in most of the cases, and in much less time than exact algorithms.
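In the bi-objective setting above, solution quality is judged by Pareto dominance (higher profit, lower weight); a minimal helper for filtering a set of candidate (profit, weight) pairs down to the non-dominated ones, which any such local search has to do internally, could look like the sketch below (illustrative only, not the authors' code).

```python
def pareto_filter(points):
    """Keep only non-dominated (profit, weight) pairs: maximize profit, minimize weight."""
    nondominated = []
    for p, w in points:
        # A point is dominated if some other point is at least as good in both objectives.
        if not any(p2 >= p and w2 <= w and (p2, w2) != (p, w) for p2, w2 in points):
            nondominated.append((p, w))
    return nondominated


if __name__ == "__main__":
    print(pareto_filter([(10, 8), (9, 5), (7, 5), (4, 2)]))  # -> [(10, 8), (9, 5), (4, 2)]
```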
Ikeda, Shun; Abe, Takashi; Nakamura, Yukiko; Kibinge, Nelson; Hirai Morita, Aki; Nakatani, Atsushi; Ono, Naoaki; Ikemura, Toshimichi; Nakamura, Kensuke; Altaf-Ul-Amin, Md; Kanaya, Shigehiko
2013-05-01
Biology is increasingly becoming a data-intensive science with the recent progress of the omics fields, e.g. genomics, transcriptomics, proteomics and metabolomics. The species-metabolite relationship database, KNApSAcK Core, has been widely utilized and cited in metabolomics research, and chronological analysis of that research work has helped to reveal recent trends in metabolomics research. To meet the needs of these trends, the KNApSAcK database has been extended by incorporating a secondary metabolic pathway database called Motorcycle DB. We examined the enzyme sequence diversity related to secondary metabolism by means of batch-learning self-organizing maps (BL-SOMs). Initially, we constructed a map by using a big data matrix consisting of the frequencies of all possible dipeptides in the protein sequence segments of plants and bacteria. The enzyme sequence diversity of the secondary metabolic pathways was examined by identifying clusters of segments associated with certain enzyme groups in the resulting map. The extent of diversity of 15 secondary metabolic enzyme groups is discussed. Data-intensive approaches such as BL-SOM applied to big data matrices are needed for systematizing protein sequences. Handling big data has become an inevitable part of biology.
NASA Astrophysics Data System (ADS)
Amalia; Budiman, M. A.; Sitepu, R.
2018-03-01
Cryptography is one of the best methods to keep information safe from security attacks by unauthorized people. Many studies have been conducted by previous researchers to develop more robust cryptographic algorithms that provide high security for data communication. One way to strengthen data security is the hybrid cryptosystem method, which combines symmetric and asymmetric algorithms. In this study, we examine a hybrid cryptosystem that uses a Modified Playfair Cipher 16x16 algorithm as the symmetric algorithm and the Knapsack Naccache-Stern as the asymmetric algorithm. We measured the running time of this hybrid algorithm in several experiments, testing messages of 10, 100, 1,000, 10,000 and 100,000 characters and key lengths of 10, 20, 30 and 40. The results show that the encryption and decryption times of each algorithm are linearly proportional to the message length: the longer the message, the more time is needed to encrypt and decrypt it. Encryption with the Knapsack Naccache-Stern algorithm takes longer than its decryption, while encryption with the Modified Playfair Cipher 16x16 algorithm takes less time than its decryption.
Quantification of emissions from knapsack sprayers: 'the weight method
NASA Astrophysics Data System (ADS)
Garcia-Santos, Glenda; Binder, Claudia R.
2010-05-01
Misuse of pesticides kills or seriously sickens thousands of people every year and poisons the natural environment. Investigations of occupational and environmental risk have received considerable interest over the last decades. Yet a lack of staff and analytical equipment, as well as the cost of chemical analyses, makes it difficult, if not impossible, to control pesticide contamination and residues in humans, air, water, and soils in developing countries. To assess pesticide emissions (transport and deposition) during spray application and the resulting risks to human health and the environment, tracers can be useful tools. Uranine was used to quantify airborne drift and its subsequent deposition on the neighbouring field and on the applicator's clothes after spraying with a knapsack sprayer in one of the largest potato-producing areas of Colombia. With the same setup, the amount of wet drift was also measured from the difference in weight of highly absorbent papers used to collect the tracer. Surprisingly, this weight method (Weight-HAP) was able to explain 71% of the drift variance measured with the tracer. The weight method is therefore presented as a suitable, rapid, low-cost screening tool, complementary to toxicological tests, for assessing air pollution and the occupational and environmental exposure generated by emissions from knapsack sprayers during pesticide application. This technique might be important in places where there is a lack of analytical instruments.
Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.
Huang, Xingwang; Zeng, Xuewen; Han, Rui
2017-01-01
Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases, this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numeric results obtained by benchmark functions experiment prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Compared with several other heuristic algorithms on zero-one knapsack problems, it also verifies that the proposed algorithm is more able to avoid local minima.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickell, E.F.; Davis, J.A.; Simmons, G.J.
A study of the algorithm and the underlying mathematical concepts of A Polynomial Time Algorithm for Breaking Merkle-Hellman Cryptosystems, by Adi Shamir, is presented. Ways of protecting the Merkle-Hellman knapsack from cryptanalysis are given with derivations. (GHT)
NASA Astrophysics Data System (ADS)
Rahman, P. A.
2018-05-01
This paper deals with a model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in Boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper also provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, a formula proposed by the author for estimating the theoretical maximum computational speed-up, achievable by parallelizing the search zone into search spheres over an unlimited number of processor cores, is given.
An automated system for reduction of the firm's employees under maximal overall efficiency
NASA Astrophysics Data System (ADS)
Yonchev, Yoncho; Nikolov, Simeon; Baeva, Silvia
2012-11-01
Achieving maximal overall efficiency is a priority in all companies. This problem is formulated first as a knapsack problem and afterwards as a linear assignment problem. An automated system is created for solving this problem.
Making Unseen Privilege Visible in Mathematics Education Research
ERIC Educational Resources Information Center
Bartell, Tonya Gau; Johnson, Kate R.
2013-01-01
In this essay, the authors begin to "unpack the invisible knapsack" of mathematics education research privilege. They present short statements representing the multiplicity of their respective identities; acknowledging that efforts to understand privilege and oppression are often supported and constrained by identities. The authors then…
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community, as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions on its own, or to be partially included in algorithmic schemes, is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
Drift from the Use of Hand-Held Knapsack Pesticide Sprayers in Boyacá (Colombian Andes).
García-Santos, Glenda; Feola, Giuseppe; Nuyttens, David; Diaz, Jaime
2016-05-25
Offsite pesticide losses in tropical mountainous regions have been little studied. One example is measuring pesticide drift soil deposition, which can support pesticide risk assessment for surface water, soil, bystanders, and off-target plants and fauna. This is considered a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data of drift deposition of a pesticide surrogate, Uranine tracer, within one of the highest potato-producing regions in Colombia, characterized by small plots and mountain orography, is presented. High drift values encountered in this study reflect the actual spray conditions using hand-held knapsack sprayers. Comparison between measured and predicted drift values using three existing empirical equations showed important underestimation. However, after their optimization based on measured drift information, the equations showed a strong predictive power for this study area and the study conditions. The most suitable curve to assess mean relative drift was the IMAG calculator after optimization.
Multiple Choice Knapsack Problem: example of planning choice in transportation.
Zhong, Tao; Young, Rhonda
2010-05-01
Transportation programming, a process of selecting projects for funding given budget and other constraints, is becoming more complex as a result of new federal laws, local planning regulations, and increased public involvement. This article describes the use of an integer programming tool, Multiple Choice Knapsack Problem (MCKP), to provide optimal solutions to transportation programming problems in cases where alternative versions of projects are under consideration. In this paper, optimization methods for use in the transportation programming process are compared and then the process of building and solving the optimization problems is discussed. The concepts about the use of MCKP are presented and a real-world transportation programming example at various budget levels is provided. This article illustrates how the use of MCKP addresses the modern complexities and provides timely solutions in transportation programming practice. While the article uses transportation programming as a case study, MCKP can be useful in other fields where a similar decision among a subset of the alternatives is required. Copyright 2009 Elsevier Ltd. All rights reserved.
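To make the MCKP structure concrete, a compact dynamic program over the budget, where exactly one alternative must be chosen from each project's list of (cost, benefit) versions, might be sketched as follows. This is a textbook formulation rather than the tool used in the study; a "do nothing" version can be modelled as a zero-cost, zero-benefit alternative, and all names are illustrative.

```python
def solve_mckp(classes, budget):
    """Multiple Choice Knapsack: pick exactly one (cost, benefit) option per class.

    classes: list of lists of integer (cost, benefit) tuples.
    Returns the maximum total benefit within the budget, or None if infeasible.
    """
    NEG = float("-inf")
    dp = [0] * (budget + 1)  # dp[b] = best benefit with total cost <= b, no classes yet
    for options in classes:
        new_dp = [NEG] * (budget + 1)
        for b in range(budget + 1):
            for cost, benefit in options:
                if cost <= b and dp[b - cost] != NEG:
                    new_dp[b] = max(new_dp[b], dp[b - cost] + benefit)
        dp = new_dp
    return dp[budget] if dp[budget] != NEG else None


if __name__ == "__main__":
    projects = [[(3, 8), (5, 11)],            # two versions of project 1
                [(2, 4), (4, 9), (0, 0)]]     # project 2, incl. a "do nothing" option
    print(solve_mckp(projects, budget=7))     # -> 17: versions (3, 8) and (4, 9)
```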
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosibility, subject to specified weight, volume, power, and cost constraints is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
Control of the white-pine weevil with insecticidal emulsions
David Crosby
1958-01-01
Excellent control of the white-pine weevil in young white pine plantations, by applying concentrated lead arsenate spray with knapsack sprayers, was demonstrated and reported several years ago. Since then, research has shown that a number of newer insecticides, used as emulsions, are also very effective.
McIntosh as Synecdoche: How Teacher Education's Focus on White Privilege Undermines Antiracism
ERIC Educational Resources Information Center
Lensmire, Timothy J.; McManimon, Shannon K.; Tierney, Jessica Dockter; Lee-Nichols, Mary E.; Casey, Zachary A.; Lensmire, Audrey; Davis, Bryan M.
2013-01-01
In this article, members of the Midwest Critical Whiteness Collective argue that Peggy McIntosh's seminal "knapsack" article acts as a synecdoche, or as a stand-in, for all the antiracist work to be done in teacher education and that this limits our understanding and possibilities for action. The authors develop this argument by…
Ohtana, Yuki; Abdullah, Azian Azamimi; Altaf-Ul-Amin, Md; Huang, Ming; Ono, Naoaki; Sato, Tetsuo; Sugiura, Tadao; Horai, Hisayuki; Nakamura, Yukiko; Morita Hirai, Aki; Lange, Klaus W; Kibinge, Nelson K; Katsuragi, Tetsuo; Shirai, Tsuyoshi; Kanaya, Shigehiko
2014-12-01
Developing database systems connecting diverse species based on omics is the most important theme in big data biology. To attain this purpose, we have developed KNApSAcK Family Databases, which are utilized in a number of researches in metabolomics. In the present study, we have developed a network-based approach to analyze relationships between 3D structure and biological activity of metabolites consisting of four steps as follows: construction of a network of metabolites based on structural similarity (Step 1), classification of metabolites into structure groups (Step 2), assessment of statistically significant relations between structure groups and biological activities (Step 3), and 2-dimensional clustering of the constructed data matrix based on statistically significant relations between structure groups and biological activities (Step 4). Applying this method to a data set consisting of 2072 secondary metabolites and 140 biological activities reported in KNApSAcK Metabolite Activity DB, we obtained 983 statistically significant structure group-biological activity pairs. As a whole, we systematically analyzed the relationship between 3D-chemical structures of metabolites and biological activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
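A hedged sketch of a greedy algorithm for the cardinality-constrained QKP described above: repeatedly add the location whose marginal gain (its node coefficient plus interaction terms with already selected nodes) is largest, up to the turbine limit. The variable names and the stopping rule are illustrative assumptions, and in the wind farm formulation the wake-loss coefficients would appear as negative interaction terms.

```python
def greedy_qkp(node_value, edge_value, max_nodes):
    """Greedy heuristic for a 0-1 quadratic knapsack with a cardinality limit.

    node_value[i]: profit of selecting node i.
    edge_value[i][j]: pairwise profit (e.g. negative wake loss) if both i and j are selected.
    """
    n = len(node_value)
    selected = []
    for _ in range(max_nodes):
        best_gain, best_node = None, None
        for i in range(n):
            if i in selected:
                continue
            gain = node_value[i] + sum(edge_value[i][j] for j in selected)
            if best_gain is None or gain > best_gain:
                best_gain, best_node = gain, i
        if best_node is None or best_gain <= 0:
            break  # no remaining node improves the objective
        selected.append(best_node)
    return selected


if __name__ == "__main__":
    values = [5.0, 4.0, 4.5]
    interactions = [[0, -1.0, -3.0], [-1.0, 0, -0.5], [-3.0, -0.5, 0]]
    print(greedy_qkp(values, interactions, max_nodes=2))  # -> [0, 1]
```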
Lifetime evaluation of large format CMOS mixed signal infrared devices
NASA Astrophysics Data System (ADS)
Linder, A.; Glines, Eddie
2015-09-01
New large-scale foundry processes continue to produce reliable products, and devices built on these processes use industry best practice to screen for failure mechanisms and validate their long lifetime. Failure-in-Time analysis in conjunction with foundry qualification information can be used to evaluate large-format device lifetimes. This analysis is a helpful tool when zero-failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry-accepted methods.
Solving a Class of Stochastic Mixed-Integer Programs With Branch and Price
2006-01-01
…a two-dimensional knapsack problem, but for a given m, the objective value g_i does not depend on the variance index v. This will be used in a final … optimization. … for solution by a branch-and-price algorithm (B&P). We then survey a number of examples, and use a stochastic facility-location problem (SFLP) for a…
Commanders Responsibilities in the Operations Process During the 1864 Red River Expedition
2015-05-21
…Major General Frederick Steele, 17,000 troops from the Department of the… Assistant Quartermaster Captain D.N. Welch noted "the navy is seizing all that cotton they can get hold of. Every gun-boat is loaded with cotton…" …then precipitated an increasing withdrawal, turned into an all-out rout, of the Union forces as "Guns, knapsacks, blankets—everything was thrown away by…"
A Comparision of Heuristic Methods Used in Hierarchical Production Planning.
1979-03-01
literature: Equalization-of-Run-Out-Times (EROT) [12], Winters [18], Hax and Meal [11], and Knapsack [1]. Figure 2 displays a flow chart… forecast error must be greater than 24% for this difference to be noticed. Hax-Meal outperformed the EROT with forecast error between 18 and 23% by more… planning framework: the Equalization-of-Run-Out-Time approach [12], the Winters approach [18], the Hax-Meal approach [11], and the Knapsack approach [1]…
Oesterlund, Anna H; Thomsen, Jane F; Sekimpi, Deogratias K; Maziina, James; Racheal, Apio; Jørs, Erik
2014-06-01
Over the past years there has been an increase in the use of pesticides in developing countries. This study describes pesticide use among small-scale farmers in Uganda and analyses predictors of pesticide poisoning (intoxication) symptoms. A cross-sectional study was conducted using a standardized questionnaire. Some 317 small-scale farmers in two districts in Uganda were interviewed about pesticide use, knowledge and attitude, symptoms of intoxication, personal protective equipment (PPE) and hygiene. The risk of reporting symptoms was analysed using logistic regression analysis. The most frequently used pesticides belonged to WHO class II. The farmers had poor knowledge about pesticide toxicity, and the majority did not use appropriate PPE or good hygiene when handling pesticides. There was no significant association between the number of times of spraying with pesticides and self-reported symptoms of pesticide poisoning. The only significant association was between blowing and sucking the nozzle of the knapsack sprayer and self-reported symptoms of pesticide intoxication (OR: 2.13, 95% CI: 1.09-4.18). Unlike the practice in several other developing countries, small-scale farmers in Uganda do not use the most hazardous pesticides (WHO class 1a and 1b). However, the use of WHO class II pesticides and those of lower toxicity, combined with inadequate knowledge and practice among the farmers, poses a danger of acute intoxications, chronic health problems and environmental pollution. Training of farmers in Integrated Pest Management (IPM) methods and in the use of proper hygiene and personal protective equipment when handling pesticides should be promoted.
Drake, John H; Özcan, Ender; Burke, Edmund K
2016-01-01
Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
General Quantum Meet-in-the-Middle Search Algorithm Based on Target Solution of Fixed Weight
NASA Astrophysics Data System (ADS)
Fu, Xiang-Qun; Bao, Wan-Su; Wang, Xiang; Shi, Jian-Hong
2016-10-01
Similar to the classical meet-in-the-middle algorithm, the storage and computation complexity are the key factors that decide the efficiency of the quantum meet-in-the-middle algorithm. Aiming at the target vector of fixed weight, based on the quantum meet-in-the-middle algorithm, the algorithm for searching all n-product vectors with the same weight is presented, whose complexity is better than the exhaustive search algorithm. And the algorithm can reduce the storage complexity of the quantum meet-in-the-middle search algorithm. Then based on the algorithm and the knapsack vector of the Chor-Rivest public-key crypto of fixed weight d, we present a general quantum meet-in-the-middle search algorithm based on the target solution of fixed weight, whose computational complexity is \sum_{j=0}^{d}\left( O\!\left(\sqrt{C_{n-k+1}^{\,d-j}}\right) + O\!\left(C_{k}^{\,j}\log C_{k}^{\,j}\right)\right) with \sum_{i=0}^{d} C_{k}^{\,i} memory cost. And the optimal value of k is given. Compared to the quantum meet-in-the-middle search algorithm for knapsack problem and the quantum algorithm for searching a target solution of fixed weight, the computational complexity of the algorithm is lower. And its storage complexity is smaller than the quantum meet-in-the-middle algorithm. Supported by the National Basic Research Program of China under Grant No. 2013CB338002 and the National Natural Science Foundation of China under Grant No. 61502526
Automating the packing heuristic design process with genetic programming.
Burke, Edmund K; Hyde, Matthew R; Kendall, Graham; Woodward, John
2012-01-01
The literature shows that one-, two-, and three-dimensional bin packing and knapsack packing are difficult problems in operational research. Many techniques, including exact, heuristic, and metaheuristic approaches, have been investigated to solve these problems and it is often not clear which method to use when presented with a new instance. This paper presents an approach which is motivated by the goal of building computer systems which can design heuristic methods. The overall aim is to explore the possibilities for automating the heuristic design process. We present a genetic programming system to automatically generate a good quality heuristic for each instance. It is not necessary to change the methodology depending on the problem type (one-, two-, or three-dimensional knapsack and bin packing problems), and it therefore has a level of generality unmatched by other systems in the literature. We carry out an extensive suite of experiments and compare with the best human designed heuristics in the literature. Note that our heuristic design methodology uses the same parameters for all the experiments. The contribution of this paper is to present a more general packing methodology than those currently available, and to show that, by using this methodology, it is possible for a computer system to design heuristics which are competitive with the human designed heuristics from the literature. This represents the first packing algorithm in the literature able to claim human competitive results in such a wide variety of packing domains.
Optimal reconfiguration strategy for a degradable multimodule computing system
NASA Technical Reports Server (NTRS)
Lee, Yann-Hang; Shin, Kang G.
1987-01-01
The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, there emerges an active reconfiguration strategy based not only on the occurrence of failures but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration, and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.
Liquid Fertilizer Spraying Performance Using A Knapsack Power Sprayer On Soybean Field
NASA Astrophysics Data System (ADS)
Gatot, P.; Anang, R.
2018-05-01
An effort to increase soybean production can be made by applying liquid fertilizer on soybean cultivation fields. The objective of this research was to determine the liquid fertilizer spraying performance of a TASCO TF-900 knapsack power sprayer on a soybean cultivation field. Performance tests were conducted in the Laboratory of Spraying Test and on a soybean cultivation field to determine (1) effective spraying width, (2) droplet diameter, (3) droplet density, (4) effective spraying discharge rate, and (5) effective field capacity of spraying. The research was conducted using two methods: (1) one-nozzle spraying and (2) four-nozzle spraying. The results showed that, at a constant pressure of 900 kPa, the effective spraying widths using one-nozzle and four-nozzle spraying were 0.62 m and 1.10 m, respectively. The larger effective spraying width resulted in a larger average effective spraying discharge rate and average effective field capacity of 4.52 l/min and 83.92 m²/min over a forward walking speed range of 0.94 m/s to 1.77 m/s. In contrast, the larger effective spraying width resulted in a larger droplet diameter of 502.73 μm and a lower droplet density of 98.39 droplets/cm², whereas the smaller effective spraying width resulted in a smaller droplet diameter of 367.09 μm and a higher droplet density of 350.53 droplets/cm². The one-nozzle spraying method produced better spraying quality than the four-nozzle method, although four-nozzle spraying resulted in a larger effective field capacity.
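As a consistency check on the figures above (our arithmetic, not a calculation from the paper), the effective field capacity is essentially effective width times forward speed; with the 1.10 m width, the reported 83.92 m²/min corresponds to a mean forward speed of about 1.27 m/s, which lies inside the stated 0.94-1.77 m/s range:

```latex
v \;=\; \frac{83.92\ \mathrm{m^2/min}}{1.10\ \mathrm{m}\times 60\ \mathrm{s/min}} \;\approx\; 1.27\ \mathrm{m/s}.
```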
Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin
2007-02-01
This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fractions of each web site that are successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found, even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
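The following toy sketch illustrates the flavour of the discretized random-walk idea (it is not the authors' learning automata scheme; the two-page setting, the diminishing-returns payoff model, and all numbers are invented). A fixed polling budget is split into discrete units between two pages, and one unit is moved toward whichever page returns the better sampled marginal reward, so the allocation drifts toward the point where marginal rewards balance, as in the fractional knapsack optimum:

import random

random.seed(1)
p = [0.9, 0.5]               # base "new content" probabilities, unknown to the algorithm
N = 20                        # discretization resolution of the polling budget
alloc = [N // 2, N - N // 2]  # units of polling capacity assigned to each page

def marginal_sample(i):
    # Bernoulli sample of the marginal reward of one extra poll on page i,
    # with diminishing returns as more capacity is already assigned to it.
    return random.random() < p[i] * 0.9 ** alloc[i]

for _ in range(20000):
    r0, r1 = marginal_sample(0), marginal_sample(1)
    if r0 and not r1 and alloc[1] > 1:
        alloc[0] += 1; alloc[1] -= 1
    elif r1 and not r0 and alloc[0] > 1:
        alloc[1] += 1; alloc[0] -= 1

print("final allocation:", alloc)  # settles near the equal-marginal-reward split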
ERIC Educational Resources Information Center
Burgin, Rick A.
2012-01-01
Large-scale crises continue to surprise, overwhelm, and shatter college and university campuses. While the devastation to physical plants and persons is often evident and is addressed with crisis management plans, the number of emotional casualties left in the wake of these large-scale crises may not be apparent and is often not addressed with…
Tools for understanding landscapes: combining large-scale surveys to characterize change. Chapter 9.
W. Keith Moser; Janine Bolliger; Don C. Bragg; Mark H. Hansen; Mark A. Hatfield; Timothy A. Nigh; Lisa A. Schulte
2008-01-01
All landscapes change continuously. Since change is perceived and interpreted through measures of scale, any quantitative analysis of landscapes must identify and describe the spatiotemporal mosaics shaped by large-scale structures and processes. This process is controlled by core influences, or "drivers," that shape the change and affect the outcome...
A modeling tool to support decision making in future hydropower development in Chile
NASA Astrophysics Data System (ADS)
Vicuna, S.; Hermansen, C.; Cerda, J. P.; Olivares, M. A.; Gomez, T. I.; Toha, E.; Poblete, D.; Mao, L.; Falvey, M. J.; Pliscoff, P.; Melo, O.; Lacy, S.; Peredo, M.; Marquet, P. A.; Maturana, J.; Gironas, J. A.
2017-12-01
Modeling tools support planning by providing transparent means to assess the outcomes of natural resources management alternatives within technical frameworks in the presence of conflicting objectives. Such tools, when employed to model different scenarios, complement discussion in a policy-making context. Examples of practical use of this type of tool exist, such as in Canadian public forest management, but they are not common, especially in developing countries. We present a tool to support the selection from a portfolio of potential future hydropower projects in Chile. This tool, developed by a large team of researchers under the guidance of the Chilean Energy Ministry, is especially relevant in a context of evident regionalism, skepticism and changing societal values in a country that has achieved sustained growth alongside increased demands from society. The tool operates at the scale of a river reach, between 1 and 5 km long, on a domain that can be defined according to the scale needs of the related discussion; its application can vary from river basins to regions or other spatial configurations of interest. The tool addresses both the available hydropower potential and the existence (inferred or observed) of other ecological, social, cultural and productive characteristics of the territory which are valuable to society, and provides a means to evaluate their interaction. The occurrence of each of these other valuable characteristics in the territory is measured by generating a presence-density score for each. Considering the level of constraint each characteristic imposes on hydropower development, the characteristics are weighted against each other and an aggregate score is computed. With this information, optimal trade-offs between additional hydropower capacity and valuable local characteristics are computed over the entire domain using the classical 0-1 knapsack optimization algorithm. Various scenarios with different weightings and hydropower development targets are tested and compared. The results illustrate the capabilities of the tool to identify promising hydropower development strategies and to aid public policy discussions aimed at establishing incentives and regulations, thereby providing decision makers with supporting material that allows a more informed discussion.
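A minimal sketch of the 0-1 knapsack step mentioned above (the project data, the integer impact budget, and the variable names are invented; the actual tool derives its scores from the presence-density layers described in the abstract): capacity added by each candidate project is maximised subject to a budget on the aggregate impact score.

def knapsack_01(values, weights, budget):
    # Classic dynamic program over items and integer budget, with backtracking
    # to recover the selected item indices.
    n = len(values)
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]
            if weights[i - 1] <= b:
                dp[i][b] = max(dp[i][b], dp[i - 1][b - weights[i - 1]] + values[i - 1])
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:       # item i-1 was taken in an optimal solution
            chosen.append(i - 1)
            b -= weights[i - 1]
    return dp[n][budget], chosen[::-1]

capacity_mw = [120, 80, 45, 200, 60]       # hypothetical added capacity per project
impact      = [7, 3, 2, 9, 4]              # hypothetical aggregate impact scores
best_mw, selected = knapsack_01(capacity_mw, impact, budget=12)
print(best_mw, selected)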
Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice
NASA Astrophysics Data System (ADS)
Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank
2018-03-01
We use large scale quantum Monte Carlo simulations to study an extended Hubbard model of hard core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP^1 gauge theory describing the phase transition at 1/3 filling. Utilizing large scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system makes it possible to derive very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported in traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
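As a toy illustration of the last step of such a model chain (this is not the study's weather generator or SWIM model; the synthetic discharge below is plain random noise and all numbers are arbitrary), flood quantiles for a given return period can be read directly from the empirical distribution of annual maxima once a sufficiently long simulated discharge series is available:

import numpy as np

rng = np.random.default_rng(0)
years = 10_000
daily_q = rng.gamma(shape=2.0, scale=50.0, size=(years, 365))  # fake daily discharge, m^3/s
annual_max = daily_q.max(axis=1)

for T in (10, 100, 1000):                        # return periods in years
    q_T = np.quantile(annual_max, 1 - 1 / T)     # empirical T-year flood
    print(f"{T}-year flood: {q_T:.0f} m^3/s")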
SCALING-UP INFORMATION IN LAND-COVER DATA FOR LARGE-SCALE ENVIRONMENTAL ASSESSMENTS
The NLCD project provides national-scope land-cover data for the conterminous United States. The first land-cover data set was completed in 2000, and the continuing need for recent land-cover information has motivated continuation of the project to provide current and change info...
Learning dominance relations in combinatorial search problems
NASA Technical Reports Server (NTRS)
Yu, Chee-Fen; Wah, Benjamin W.
1988-01-01
Dominance relations commonly are used to prune unnecessary nodes in search graphs, but they are problem-dependent and cannot be derived by a general procedure. The authors identify the machine learning of dominance relations as a promising direction and describe the learning mechanisms applicable to it. A study of learning dominance relations through learning by experimentation is described. This system has been able to learn dominance relations for the 0/1-knapsack problem, an inventory problem, the reliability-by-replication problem, the two-machine flow shop problem, a number of single-machine scheduling problems, and a two-machine scheduling problem. It is considered that the same methodology can be extended to learn dominance relations in general.
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
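A hedged sketch of the linearisation idea described for the R&D portfolio problem (the data and function names are invented; the thesis gives the precise procedure): once marginal project values are fixed for the current portfolio, the budgeted selection is a knapsack whose LP relaxation can be solved greedily by value-to-cost ratio, with at most one fractional project.

def knapsack_lp_relaxation(marginal_value, cost, budget):
    # Greedy solution of the LP relaxation of a 0-1 knapsack: take projects in
    # decreasing value/cost order, allowing a fraction of the last one taken.
    order = sorted(range(len(cost)),
                   key=lambda i: marginal_value[i] / cost[i], reverse=True)
    x, remaining = [0.0] * len(cost), float(budget)
    for i in order:
        take = min(1.0, remaining / cost[i])
        x[i] = take
        remaining -= take * cost[i]
        if remaining <= 0:
            break
    return x

# hypothetical marginal values (for the current portfolio) and project costs
print(knapsack_lp_relaxation([9.0, 6.0, 5.0, 3.0], [4.0, 3.0, 2.0, 2.0], budget=6))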
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Benard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents, based on coarse scale observations. Application of this approach is likely in generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
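A minimal one-dimensional sketch of the nudging term described above (not the HYDRUS configuration; the toy "physics", grid sizes, nudging coefficient and truth field are all invented): the fine-grid state is relaxed toward an interpolant of coarse observations, so the large scales track the data while the fine scales evolve freely.

import numpy as np

n_fine, n_coarse, mu, dt = 200, 10, 0.5, 0.1
x_fine = np.linspace(0.0, 1.0, n_fine)
x_coarse = np.linspace(0.0, 1.0, n_coarse)

def truth(x, t):
    # stand-in for the "true" soil moisture field
    return 0.3 + 0.1 * np.sin(2 * np.pi * (x - 0.05 * t))

state = np.full(n_fine, 0.25)                 # model first guess on the fine grid
for k in range(200):
    t = k * dt
    obs = truth(x_coarse, t)                              # coarse-resolution observations
    interp_obs = np.interp(x_fine, x_coarse, obs)         # interpolant of the observations
    tendency = -0.05 * (state - 0.3)                      # stand-in for the model physics
    state = state + dt * (tendency + mu * (interp_obs - state))  # nudged model update

print("RMSE vs truth:", float(np.sqrt(np.mean((state - truth(x_fine, t)) ** 2))))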
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provide a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows us to extract the infinite-time and infinite-size limit of these estimators.
NASA Astrophysics Data System (ADS)
Li, Jianping; Xia, Xiangsheng
2015-09-01
In order to improve the understanding of the hot deformation and dynamic recrystallization (DRX) behaviors of large-scaled AZ80 magnesium alloy fabricated by semi-continuous casting, compression tests were carried out in the temperature range from 250 to 400 °C and the strain rate range from 0.001 to 0.1 s-1 on a Gleeble 1500 thermo-mechanical machine. The effects of the temperature and strain rate on the hot deformation behavior have been expressed by means of the conventional hyperbolic sine equation, and the influence of the strain has been incorporated in the equation by considering its effect on the different material constants for large-scaled AZ80 magnesium alloy. In addition, the DRX behavior has been discussed. The results show that the deformation temperature and strain rate exerted remarkable influences on the flow stress. The constitutive equation of large-scaled AZ80 magnesium alloy for hot deformation at the steady-state stage (ε = 0.5) was established, and the true stress-true strain curves predicted by the extracted model were in good agreement with the experimental results, thereby confirming the validity of the developed constitutive relation. The DRX kinetic model of large-scaled AZ80 magnesium alloy was established as X_d = 1 - exp[-0.95((ε - ε_c)/ε*)^2.4904]. The rate of DRX increases with increasing deformation temperature, and high temperature is beneficial for achieving complete DRX in the large-scaled AZ80 magnesium alloy.
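The reported DRX kinetics can be transcribed directly as a small function (the critical strain eps_c and characteristic strain eps_star below are placeholder values, since the abstract does not list them):

import math

def drx_fraction(eps, eps_c, eps_star):
    # X_d = 1 - exp[-0.95 * ((eps - eps_c) / eps_star) ** 2.4904], for eps >= eps_c
    if eps <= eps_c:
        return 0.0
    return 1.0 - math.exp(-0.95 * ((eps - eps_c) / eps_star) ** 2.4904)

# hypothetical strain values, for illustration only
print(drx_fraction(0.5, eps_c=0.08, eps_star=0.35))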
Model for the design of distributed data bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ram, S.
This research focuses on developing a model to solve the File Allocation Problem (FAP). The model integrates two major design issues, namely concurrency control and data distribution. The central node locking mechanism is incorporated in developing a nonlinear integer programming model. Two solution algorithms are proposed, one of which was implemented in FORTRAN V. The allocation of data bases and programs is examined using this heuristic. Several decision rules were also formulated based on the results of the heuristic. A second, more comprehensive heuristic was proposed, based on the knapsack problem. The development and implementation of this algorithm has been left as a topic for future research.
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of the scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
Continuous evolutionary change in Plio-Pleistocene mammals of eastern Africa
NASA Astrophysics Data System (ADS)
Bibi, Faysal; Kiessling, Wolfgang
2015-08-01
Much debate has revolved around the question of whether the mode of evolutionary and ecological turnover in the fossil record of African mammals was continuous or pulsed, and the degree to which faunal turnover tracked changes in global climate. Here, we assembled and analyzed large specimen databases of the fossil record of eastern African Bovidae (antelopes) and Turkana Basin large mammals. Our results indicate that speciation and extinction proceeded continuously throughout the Pliocene and Pleistocene, as did increases in the relative abundance of arid-adapted bovids, and in bovid body mass. Species durations were similar among clades with different ecological attributes. Occupancy patterns were unimodal, with long and nearly symmetrical origination and extinction phases. A single origination pulse may be present at 2.0-1.75 Ma, but besides this, there is no evidence that evolutionary or ecological changes in the eastern African record tracked rapid, 100,000-y-scale changes in global climate. Rather, eastern African large mammal evolution tracked global or regional climatic trends at long (million year) time scales, while local, basin-scale changes (e.g., tectonic or hydrographic) and biotic interactions ruled at shorter timescales.
Daniel J. Isaak; Russell F. Thurow
2006-01-01
Spatially continuous sampling designs, when temporally replicated, provide analytical flexibility and are unmatched in their ability to provide a dynamic system view. We have compiled such a data set by georeferencing the network-scale distribution of Chinook salmon (Oncorhynchus tshawytscha) redds across a large wilderness basin (7330 km2) in...
Diazo compounds in continuous-flow technology.
Müller, Simon T R; Wirth, Thomas
2015-01-01
Diazo compounds are very versatile reagents in organic chemistry and meet the challenge of selective assembly of structurally complex molecules. Their leaving group is dinitrogen; therefore, they are very clean and atom-efficient reagents. However, diazo compounds are potentially explosive and extremely difficult to handle on an industrial scale. In this review, it is discussed how continuous flow technology can help to make these powerful reagents accessible on a large scale. Microstructured devices can improve heat transfer greatly and help with handling dangerous reagents safely. The in situ formation and subsequent consumption of diazo compounds are discussed along with advances in handling diazomethane and ethyl diazoacetate. The potential large-scale applications of a given methodology are emphasized. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Large-scale production of lipoplexes with long shelf-life.
Clement, Jule; Kiefer, Karin; Kimpfler, Andrea; Garidel, Patrick; Peschka-Süss, Regine
2005-01-01
The instability of lipoplex formulations is a major obstacle to overcome before their commercial application in gene therapy. In this study, a continuous mixing technique for the large-scale preparation of lipoplexes followed by lyophilisation for increased stability and shelf-life has been developed. Lipoplexes were analysed for transfection efficiency and cytotoxicity in human aorta smooth muscle cells (HASMC) and a rat smooth muscle cell line (A-10 SMC). Homogeneity of lipid/DNA-products was investigated by photon correlation spectroscopy (PCS) and cryotransmission electron microscopy (cryo-TEM). Studies have been undertaken with DAC-30, a composition of 3beta-[N-(N,N'-dimethylaminoethane)-carbamoyl]-cholesterol (DAC-Chol) and dioleylphosphatidylethanolamine (DOPE) and a green fluorescent protein (GFP) expressing marker plasmid. A continuous mixing technique was compared to the small-scale preparation of lipoplexes by pipetting. Individual steps of the continuous mixing process were evaluated in order to optimise the manufacturing technique: lipid/plasmid ratio, composition of transfection medium, pre-treatment of the lipid, size of the mixing device, mixing procedure and the influence of the lyophilisation process. It could be shown that the method developed for production of lipoplexes on a large scale under sterile conditions led to lipoplexes with good transfection efficiencies combined with low cytotoxicity, improved characteristics and long shelf-life.
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. Contact: hyrkas@cs.washington.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
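A small-scale analogue of the core classification step (the paper's pipeline runs large-scale mixture models on Hadoop in Java; the scikit-learn sketch below, with fabricated three-channel data, is only meant to show what Gaussian-mixture classification of cytometry events looks like):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# fake cytometry events: three optical channels, three underlying populations
X = np.vstack([rng.normal(loc=m, scale=0.2, size=(1000, 3))
               for m in ([0, 0, 0], [2, 1, 0], [1, 2, 2])])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)            # population assignment per particle
print(np.bincount(labels))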
The cost-constrained traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokkappa, P.R.
1990-10-01
The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
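A hedged sketch of a knapsack-style upper bound in the spirit described above (the report's bounding procedure is more elaborate, and the numbers below are invented): each city is charged a per-city lower bound on the travel cost its inclusion must incur (for example, half the sum of its two cheapest incident edges), and the resulting fractional knapsack is solved greedily to give a quick optimistic bound usable inside branch and bound.

def fractional_knapsack_bound(values, city_cost_lb, budget):
    # Greedy fractional knapsack over per-city cost lower bounds: an optimistic
    # (upper) bound on the value attainable within the cost constraint.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / city_cost_lb[i], reverse=True)
    bound, remaining = 0.0, float(budget)
    for i in order:
        if city_cost_lb[i] <= remaining:
            bound += values[i]
            remaining -= city_cost_lb[i]
        else:
            bound += values[i] * remaining / city_cost_lb[i]   # fraction of the last city
            break
    return bound

# hypothetical city values and per-city travel-cost lower bounds
print(fractional_knapsack_bound([10, 7, 5, 3], [4, 3, 2, 2], budget=6))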
NASA Astrophysics Data System (ADS)
Gu, Xiaodan; Zhou, Yan; Gu, Kevin; Kurosawa, Tadanori; Yan, Hongping; Wang, Cheng; Toney, Michael; Bao, Zhenan
The challenge of continuous printing in high efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution coated all-polymer bulk heterojunction (BHJ) solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity and mostly amorphous blends. Based on experiments using donors and acceptors with different degree of crystallinity, our results showed that microphase separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. We were able to continuously roll-to-roll slot die print large area all-polymer solar cells with power conversion efficiencies of 5%, with combined cell area up to 10 cm2. This is among the highest efficiencies realized with R2R coated active layer organic materials on flexible substrate. DOE BRIDGE sunshot program. Office of Naval Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Xiaodan; Zhou, Yan; Gu, Kevin
The challenge of continuous printing in high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity, and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, the results show that microphase separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This particular methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. Large-area all-polymer solar cells are continuously roll-to-roll slot die printed with power conversion efficiencies of 5%, with combined cell area up to 10 cm2. This is among the highest efficiencies realized with R2R-coated active layer organic materials on flexible substrate.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it has the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large scale geoscience phenomena, including large scale atmosphere change, large scale ocean change, large scale land surface dynamic change, solid earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we already have a plan to study the following five aspects: mechanisms and models of moon-based observation of macroscopic Earth science phenomena; sensor parameter optimization and methods of moon-based Earth observation; site selection and environment of moon-based Earth observation; the Moon-based Earth observation platform; and the Moon-based Earth observation fundamental scientific framework.
Adapting viral safety assurance strategies to continuous processing of biological products.
Johnson, Sarah A; Brown, Matthew R; Lute, Scott C; Brorson, Kurt A
2017-01-01
There has been a recent drive in commercial large-scale production of biotechnology products to convert current batch mode processing to continuous processing manufacturing. There have been reports of model systems capable of adapting and linking upstream and downstream technologies into a continuous manufacturing pipeline. However, in many of these proposed continuous processing model systems, viral safety has not been comprehensively addressed. Viral safety and detection is a highly important and often expensive regulatory requirement for any new biological product. To ensure success in the adaptation of continuous processing to large-scale production, there is a need to consider the development of approaches that allow for seamless incorporation of viral testing and clearance/inactivation methods. In this review, we outline potential strategies to apply current viral testing and clearance/inactivation technologies to continuous processing, as well as modifications of existing unit operations to ensure the successful integration of viral clearance into the continuous processing of biological products. Biotechnol. Bioeng. 2017;114: 21-32. © 2016 Wiley Periodicals, Inc.
Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward
NASA Astrophysics Data System (ADS)
Daley, T. M.
2012-12-01
The injection of CO2 into subsurface formations is at least 50 years old, with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2-induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g. multi-phase flow) and constraining of reservoir models. Early monitoring of small scale pilots associated with CO2-EOR (e.g., the McElroy field and the Lost Hills field) developed many of the methodologies, including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller scale pilots can allow more frequent measurements, as either individual time-lapse 'snapshots' or continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, there is typically integration of characterization and monitoring, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g. pressure) over large scales, while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM) where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site, where an MBM package with seismic, fluid sampling and distributed fiber sensing was deployed. For future large scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.
Weegman, Bradley P.; Nash, Peter; Carlson, Alexandra L.; Voltzke, Kristin J.; Geng, Zhaohui; Jahani, Marjan; Becker, Benjamin B.; Papas, Klearchos K.; Firpo, Meri T.
2013-01-01
Cellular therapies are emerging as a standard approach for the treatment of several diseases. However, realizing the promise of cellular therapies across the full range of treatable disorders will require large-scale, controlled, reproducible culture methods. Bioreactor systems offer the scale-up and monitoring needed, but standard stirred bioreactor cultures do not allow for the real-time regulation of key nutrients in the medium. In this study, β-TC6 insulinoma cells were aggregated and cultured for 3 weeks as a model of manufacturing a mammalian cell product. Cell expansion rates and medium nutrient levels were compared in static, stirred suspension bioreactor (SSB), and continuously fed (CF) SSB cultures. While SSB cultures facilitated increased culture volumes, no increase in cell yields was observed, partly due to limitations in key nutrients such as glucose, which were consumed by the cultures between feedings. Even when glucose levels were increased to prevent depletion between feedings, dramatic fluctuations in glucose levels were observed. Continuous feeding eliminated fluctuations and improved cell expansion when compared with both static and SSB culture methods. Further improvements in growth rates were observed after adjusting the feed rate based on calculated nutrient depletion, which maintained physiological glucose levels for the duration of the expansion. Adjusting the feed rate in a continuous medium replacement system can maintain the consistent nutrient levels required for the large-scale application of many cell products. Continuously fed bioreactor systems combined with nutrient regulation can be used to improve the yield and reproducibility of mammalian cells for biological products and cellular therapies and will facilitate the translation of cell culture from the research lab to clinical applications. PMID:24204645
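The feed-rate adjustment described above amounts to a simple mass balance; as a back-of-the-envelope sketch (not the study's protocol, and all numbers below are hypothetical), the continuous feed rate that holds a nutrient at its set point is the rate that replaces it exactly as fast as the culture consumes it:

def steady_feed_rate(consumption_mmol_per_h, feed_conc_mmol_per_ml):
    # Continuous feed rate (mL/h) that replaces the nutrient as fast as it is consumed.
    return consumption_mmol_per_h / feed_conc_mmol_per_ml

# hypothetical: culture consumes 2.5 mmol glucose per hour, feed contains 0.25 mmol/mL
print(steady_feed_rate(2.5, 0.25), "mL/h")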
Kotamäki, Niina; Thessler, Sirpa; Koskiaho, Jari; Hannukkala, Asko O.; Huitu, Hanna; Huttula, Timo; Havento, Jukka; Järvenpää, Markku
2009-01-01
Sensor networks are increasingly being implemented for environmental monitoring and agriculture to provide spatially accurate and continuous environmental information and (near) real-time applications. These networks provide a large amount of data, which poses challenges for ensuring data quality and extracting relevant information. In the present paper we describe a river basin scale wireless sensor network for agriculture and water monitoring. The network, called SoilWeather, is unique and the first of this type in Finland. The performance of the network is assessed from the user and maintainer perspectives, concentrating on data quality, network maintenance and applications. The results showed that the SoilWeather network has been functioning in a relatively reliable way, but also that maintenance and data quality assurance by automatic algorithms and calibration samples require a lot of effort, especially in continuous water monitoring over large areas. We see great benefits in sensor networks enabling continuous, real-time monitoring, while data quality control and maintenance efforts highlight the need for tight collaboration between sensor and sensor network owners to decrease costs and increase the quality of the sensor data in large scale applications. PMID:22574050
Lim, Hosub; Woo, Ju Young; Lee, Doh C; Lee, Jinkee; Jeong, Sohee; Kim, Duckjong
2017-02-27
Colloidal quantum dots (QDs) afford huge potential in numerous applications owing to their excellent optical and electronic properties. After the synthesis of QDs, separating QDs from unreacted impurities at large scale is one of the biggest issues in achieving scalable, high performance optoelectronic applications. Thus far, however, continuous purification methods, which are essential for mass production, have rarely been reported. In this study, we developed a new continuous purification process that is suited to the mass production of high-quality QDs. As-synthesized QDs are driven by electrophoresis in a flow channel, captured by porous electrodes, and finally separated from the unreacted impurities. Nuclear magnetic resonance and ultraviolet/visible/near-infrared absorption spectroscopic data clearly showed that the impurities were efficiently removed from QDs with a purification yield, defined as the ratio of the mass of purified QDs to that of QDs in the crude solution, of up to 87%. Also, we could successfully predict the purification yield for different purification conditions with a simple theoretical model. The proposed large-scale purification process could be an important cornerstone for the mass production and industrial use of high-quality QDs.
Co-governing decentralised water systems: an analytical framework.
Yu, C; Brown, R; Morison, P
2012-01-01
Current discourses in urban water management emphasise a diversity of water sources and scales of infrastructure for resilience and adaptability. During the last two decades, in particular, various small-scale systems have emerged and developed, so that the debate has largely moved from centralised versus decentralised water systems toward governing integrated and networked systems of provision and consumption in which small-scale technologies are embedded in large-scale centralised infrastructures. However, while centralised systems have established boundaries of ownership and management, decentralised water systems (such as stormwater harvesting technologies at the street and allotment/house scales) do not; the viability for adoption and/or continued use of decentralised water systems is therefore challenged. This paper brings together insights from the literature on public sector governance, co-production and the social practices model to develop an analytical framework for co-governing such systems. The framework provides urban water practitioners with guidance when designing co-governance arrangements for decentralised water systems so that these systems continue to exist, and become widely adopted, within the established urban water regime.
Pirotte, Geert; Kesters, Jurgen; Verstappen, Pieter; Govaerts, Sanne; Manca, Jean; Lutsen, Laurence; Vanderzande, Dirk; Maes, Wouter
2015-10-12
Organic photovoltaics (OPV) have attracted great interest as a solar cell technology with appealing mechanical, aesthetic, and economies-of-scale features. To drive OPV toward economic viability, low-cost, large-scale module production has to be realized in combination with increased availability of top-quality materials and minimal batch-to-batch variation. To this end, continuous flow chemistry can serve as a powerful tool. In this contribution, a flow protocol is optimized for the high-performance benzodithiophene-thienopyrroledione copolymer PBDTTPD and the material quality is probed through systematic solar-cell evaluation. A stepwise approach is adopted to turn the batch process into a reproducible and scalable continuous flow procedure. Solar cell devices fabricated using the obtained polymer batches deliver an average power conversion efficiency of 7.2%. Upon incorporation of an ionic polythiophene-based cathodic interlayer, the photovoltaic performance could be enhanced to a maximum efficiency of 9.1%. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ubiquitous and Continuous Propagating Disturbances in the Solar Corona
NASA Astrophysics Data System (ADS)
Morgan, Huw; Hutton, Joseph
2018-02-01
A new processing method applied to Atmospheric Imaging Assembly/Solar Dynamics Observatory observations reveals continuous propagating faint motions throughout the corona. The amplitudes are small, typically 2% of the background intensity. An hour's data are processed from four AIA channels for a region near disk center, and the motions are characterized using an optical flow method. The motions trace the underlying large-scale magnetic field. The motion vector field describes large-scale coherent regions that tend to converge at narrow corridors. Large-scale vortices can also be seen. The hotter channels have larger-scale regions of coherent motion compared to the cooler channels, interpreted as reflecting the typical length of magnetic loops at different heights. Regions of low mean and high time variance in velocity are where the dominant motion component is along the line of sight as a result of a largely vertical magnetic field. The mean apparent magnitude of the optical velocities is a few tens of km s^-1, with different distributions in different channels. Over time, the velocities vary smoothly between a few km s^-1 and 100 km s^-1 or higher, varying on timescales of minutes. A clear bias of a few km s^-1 toward positive x-velocities is due to solar rotation and may be used as calibration in future work. All regions of the low corona thus experience a continuous stream of propagating disturbances at the limit of both spatial resolution and signal level. The method provides a powerful new diagnostic tool for tracing the magnetic field and for probing motions at sub-pixel scales, with important implications for models of heating and of the magnetic field.
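For readers who want to experiment with the general technique, a dense optical-flow routine such as OpenCV's Farneback method can characterise apparent motions between two frames (this is a generic example, not the authors' processing method; the random test frames below merely stand in for calibrated AIA images):

import numpy as np
import cv2

prev_frame = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in frame 1
next_frame = np.roll(prev_frame, shift=2, axis=1)                 # frame 2: shifted by 2 px

# arguments: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
speed = np.hypot(flow[..., 0], flow[..., 1])   # apparent speed in pixels per frame
print("mean apparent speed (px/frame):", float(speed.mean()))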
U.S. spent fuel shipment experience by rail
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colborn, K.
2007-07-01
As planning for the large-scale shipment of spent nuclear fuel to Yucca Mountain proceeds to address these challenges, actual shipments of spent fuel in other venues continue to provide proof that domestic rail spent fuel shipments can proceed safely and effectively. This paper presents some examples of recently completed spent fuel shipments, as well as large low-level radioactive waste shipments, offering lessons learned that may be beneficial to the planning process for large-scale spent fuel shipments in the US. (authors)
Amplification of large scale magnetic fields in a decaying MHD system
NASA Astrophysics Data System (ADS)
Park, Kiwan
2017-10-01
Dynamo theory explains the amplification of magnetic fields in conducting fluids (plasmas) driven by continuous external energy input. It is known that nonhelical continuous kinetic or magnetic energy amplifies the small scale magnetic field, while helical energy, instability, or shear combined with rotation amplifies the large scale magnetic field. However, it was recently reported that decaying magnetic energy, independent of helicity or instability, can generate the large scale magnetic field. This phenomenon may look somewhat contradictory to the conventional dynamo theory, but it gives us some clues to the fundamental mechanism of energy transfer in magnetized conducting fluids. It also implies that an ephemeral astrophysical event emitting magnetic and kinetic energy can be a direct cause of the large scale magnetic field observed in space. As of now, the exact physical mechanism is not yet understood in spite of several numerical results. The plasma motion coupled with a nearly conserved vector potential in the magnetohydrodynamic (MHD) system may transfer magnetic energy to the large scale. Also, the intrinsic properties of the scaling-invariant MHD equations may decide the direction of energy transfer. In this paper we present simulation results for inversely transferred helical and nonhelical energy in a decaying MHD system. We introduce a field structure model based on the MHD equations to show that the transfer of magnetic energy is essentially bidirectional, depending on the plasma motion and the initial energy distribution. We then derive the α coefficient algebraically, in line with the field structure model, to explain how the large scale magnetic field is induced by the helical energy in the system regardless of an external forcing source. For the algebraic analysis of nonhelical magnetic energy, we use the eddy-damped quasinormal Markovian approximation to show the inverse transfer of magnetic energy.
Online Learning Experiences of New versus Continuing Learners: A Large-Scale Replication Study
ERIC Educational Resources Information Center
Li, Nai; Marsh, Vicky; Rienties, Bart; Whitelock, Denise
2017-01-01
A vast body of research has indicated the importance of distinguishing new vs. continuing students' learning experiences in blended and online environments. Continuing learners may have developed learning and coping mechanisms for "surviving" in such learning environments, while new learners might still need to adjust their learning…
Continuing Professional Education: Status, Trends, and Issues Related to Electronic Delivery.
ERIC Educational Resources Information Center
Rothenberg, Donna
Continuing professional education for teachers, doctors, lawyers, and engineers is examined in terms of its potential for large-scale electronic technology. For each profession, a profile is provided, and current continuing education programs and use of electronics in each field are described. These include satellite projects, in-house and closed…
NASA Astrophysics Data System (ADS)
de Boer, D. H.; Hassan, M. A.; MacVicar, B.; Stone, M.
2005-01-01
Contributions by Canadian fluvial geomorphologists between 1999 and 2003 are discussed under four major themes: sediment yield and sediment dynamics of large rivers; cohesive sediment transport; turbulent flow structure and sediment transport; and bed material transport and channel morphology. The paper concludes with a section on recent technical advances. During the review period, substantial progress has been made in investigating the details of fluvial processes at relatively small scales. Examples of this emphasis are the studies of flow structure, turbulence characteristics and bedload transport, which continue to form central themes in fluvial research in Canada. Translating the knowledge of small-scale, process-related research to an understanding of the behaviour of large-scale fluvial systems, however, continues to be a formidable challenge. Models play a prominent role in elucidating the link between small-scale processes and large-scale fluvial geomorphology, and, as a result, a number of papers describing models and modelling results have been published during the review period. In addition, a number of investigators are now approaching the problem by directly investigating changes in the system of interest at larger scales, e.g. a channel reach over tens of years, and attempting to infer what processes may have led to the result. It is to be expected that these complementary approaches will contribute to an increased understanding of fluvial systems at a variety of spatial and temporal scales. Copyright
Large-Scale Aerosol Modeling and Analysis
2007-09-30
deserts of the world: Arabian Gulf, Sea of Japan, China Sea, Mediterranean Sea, and the Tropical Atlantic Ocean. NAAPS also accurately predicts the ... fate of large-scale smoke and pollution plumes. With its global and continuous coverage, ... origin of dust plumes impacting naval operations in the Red Sea, Mediterranean, eastern Atlantic, Gulf of Guinea, Sea of Japan, Yellow Sea, and East ...
Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models
NASA Technical Reports Server (NTRS)
Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.
2018-01-01
The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05 -NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) have been conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.
Validity Issues in Standard-Setting Studies
ERIC Educational Resources Information Center
Pant, Hans A.; Rupp, Andre A.; Tiffin-Richards, Simon P.; Koller, Olaf
2009-01-01
Standard-setting procedures are a key component within many large-scale educational assessment systems. They are consensual approaches in which committees of experts set cut-scores on continuous proficiency scales, which facilitate communication of proficiency distributions of students to a wide variety of stakeholders. This communicative function…
A family of conjugate gradient methods for large-scale nonlinear equations.
Feng, Dexiang; Sun, Min; Wang, Xueyong
2017-01-01
In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
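As a concrete illustration of the kind of low-storage iteration the abstract refers to, the following Python sketch implements a generic derivative-free conjugate gradient projection method with a Solodov–Svaiter-style hyperplane projection step for monotone equations F(x) = 0. The β formula, line-search constants, and test function are illustrative assumptions, not the specific family proposed in the paper.

```python
import numpy as np

def cg_projection(F, x0, tol=1e-6, max_iter=5000, sigma=1e-4, rho=0.5):
    """Sketch of a derivative-free CG projection method for monotone F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                   # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        # backtracking line search: accept t when -F(x + t d)^T d >= sigma * t * ||d||^2
        t, ok = 1.0, False
        while t > 1e-12:
            z = x + t * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * t * (d @ d):
                ok = True
                break
            t *= rho
        if not ok:                            # direction failed: restart from steepest direction
            d = -Fx
            continue
        if np.linalg.norm(Fz) <= tol:         # z is already (numerically) a zero of F
            return z
        # hyperplane projection of x onto {y : F(z)^T (y - z) = 0}
        x_new = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
        F_new = F(x_new)
        # Fletcher-Reeves-type beta; the family in the paper uses other formulas
        beta = (F_new @ F_new) / (Fx @ Fx)
        d = -F_new + beta * d
        x, Fx = x_new, F_new
    return x

# toy check: F(x) = x + sin(x) is monotone with root at 0
x_star = cg_projection(lambda x: x + np.sin(x), np.full(1000, 2.0))
print(np.linalg.norm(x_star))
```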
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.
Large-scale dynamos in rapidly rotating plane layer convection
NASA Astrophysics Data System (ADS)
Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.
2018-05-01
Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
Matsuda, Fumio; Nakabayashi, Ryo; Sawada, Yuji; Suzuki, Makoto; Hirai, Masami Y.; Kanaya, Shigehiko; Saito, Kazuki
2011-01-01
A novel framework for automated elucidation of metabolite structures in liquid chromatography–mass spectrometer metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method. PMID:22645535
Fidelity-Based Ant Colony Algorithm with Q-learning of Quantum System
NASA Astrophysics Data System (ADS)
Liao, Qin; Guo, Ying; Tu, Yifeng; Zhang, Hang
2018-03-01
The quantum ant colony algorithm (ACA) has potential applications in quantum information processing, such as solving the traveling salesman problem, the zero-one knapsack problem, the robot route planning problem, and so on. To shorten the search time of the ACA, we suggest the fidelity-based ant colony algorithm (FACA) for the control of quantum systems. Motivated by the structure of the Q-learning algorithm, we demonstrate the combination of a FACA with Q-learning and suggest the design of a fidelity-based ant colony algorithm with Q-learning to improve the performance of the FACA in a spin-1/2 quantum system. The numerical simulation results show that the FACA with Q-learning can efficiently avoid becoming trapped in locally optimal policies and speeds up the convergence of the quantum system.
Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.
Baars, W J; Hutchins, N; Marusic, I
2017-03-13
Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re_τ ≈ 2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
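To make the amplitude-modulation idea concrete, the Python sketch below computes a simple single-point modulation coefficient from a hot-wire-like time series: the signal is split into large- and small-scale parts with a low-pass filter, and the large-scale content of the small-scale envelope is correlated with the large-scale signal. The cutoff frequency, filter order, and synthetic signal are illustrative assumptions, not the decomposition used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def amplitude_modulation_coefficient(u, fs, f_cut):
    """Correlation between the large-scale signal and the (large-scale-filtered)
    envelope of the small-scale residual: a single-point modulation coefficient."""
    u = u - np.mean(u)
    sos = butter(4, f_cut, btype="low", fs=fs, output="sos")    # illustrative 4th-order filter
    u_large = sosfiltfilt(sos, u)                               # large-scale component
    u_small = u - u_large                                       # small-scale component
    envelope = np.abs(hilbert(u_small))                         # small-scale amplitude envelope
    env_large = sosfiltfilt(sos, envelope - np.mean(envelope))  # keep its large-scale content
    return np.corrcoef(u_large, env_large)[0, 1]

# synthetic check: small scales whose amplitude follows the large-scale signal give R near 1
fs = 10_000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
modulator = np.sin(2.0 * np.pi * 2.0 * t)                              # large-scale content
carrier = (1.0 + 0.5 * modulator) * np.sin(2.0 * np.pi * 400.0 * t)    # modulated small scales
print(amplitude_modulation_coefficient(modulator + 0.1 * carrier, fs, f_cut=20.0))
```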
Multi-Criteria Knapsack Problem for Disease Selection in an Observation Ward
NASA Astrophysics Data System (ADS)
Lurkittikul, N.; Kittithreerapronchai, O.
2014-06-01
The aging population and the introduction of universal healthcare in Thailand have increased the numbers of inpatients and outpatients at public hospitals, particularly at hospitals that provide special and comprehensive health services. Many inpatient wards have experienced a large influx of inpatients, as the hospitals have to admit all patients regardless of their conditions. These overcrowded wards cause stress for medical staff, blocked access between medical departments, hospital-acquired infections, and ineffective use of resources. One way to manage such an inundated inpatient flow is to select patients whose conditions require less clinical attention, or whose lengths of stay are short and predictable, and then place them in an observation ward. This intermediate ward increases the turnover of beds and reduces unnecessary paperwork, as the patients are considered to be outpatients. In this article, we studied inpatient data of a tertiary care hospital in which an observation ward was considered to alleviate the overcrowding problem at the Internal Medicine Department. The analysis of the data showed that the hospital can balance inpatient flow by managing the group of patients who are admitted for treatments ordered by its special clinics. Having explored several alternatives, we suggested patient selection criteria and proposed a layout for an observation ward. The hospital should add medical beds in a new ward building, because the current observation ward can handle only 27.3% of total short-stay patients, while the proposed ward is projected to handle 80% of total short-stay patients.
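To illustrate the selection mechanics behind a knapsack-based formulation of this kind, here is a minimal single-criterion 0/1 knapsack in Python that picks patient groups to route to an observation ward under a bed-day budget. The groups, numbers, and capacity are hypothetical, and the paper's multi-criteria model is richer than this sketch.

```python
def knapsack_select(groups, capacity):
    """0/1 knapsack DP: pick patient groups maximising short-stay admissions served
    without exceeding the observation ward's bed-day capacity (hypothetical data)."""
    n = len(groups)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (_, bed_days, admissions) in enumerate(groups, start=1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if bed_days <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - bed_days] + admissions)
    # recover which groups were chosen
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            name, bed_days, _ = groups[i - 1]
            chosen.append(name)
            c -= bed_days
    return best[n][capacity], chosen

# hypothetical groups: (name, expected bed-days per month, short-stay admissions served)
groups = [("chemo follow-up", 40, 25), ("minor infection", 30, 18),
          ("transfusion", 20, 15), ("observation post-procedure", 35, 20)]
print(knapsack_select(groups, capacity=80))
```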
The Developmental Evaluation of School Improvement Networks
ERIC Educational Resources Information Center
Peurach, Donald J.; Glazer, Joshua L.; Winchell Lenhoff, Sarah
2016-01-01
The national education reform agenda has rapidly expanded to include attention to continuous improvement research in education. The purpose of this analysis is to propose a new approach to "developmental evaluation" aimed at building a foundation for continuous improvement in large-scale school improvement networks, on the argument that…
ERIC Educational Resources Information Center
Pleguezuelos, E. M.; Hornos, E.; Dory, V.; Gagnon, R.; Malagrino, P.; Brailovsky, C. A.; Charlin, B.
2013-01-01
Context: The PRACTICUM Institute has developed large-scale international programs of on-line continuing professional development (CPD) based on self-testing and feedback using the Practicum Script Concordance Test© (PSCT). Aims: To examine the psychometric consequences of pooling the responses of panelists from different countries (composite…
Outcomes and Process in Reading Tutoring
ERIC Educational Resources Information Center
Topping, K. J.; Thurston, A.; McGavock, K.; Conlin, N.
2012-01-01
Background: Large-scale randomised controlled trials are relatively rare in education. The present study approximates to, but is not exactly, a randomised controlled trial. It was an attempt to scale up previous small peer tutoring projects, while investing only modestly in continuing professional development for teachers. Purpose: A two-year…
The continuous large-scale preparation of several 1-methylimidazole based ionic liquids was carried out using a Spinning Tube-in-Tube (STT) reactor (manufactured by Kreido Laboratories). This reactor, which embodies and facilitates the use of Green Chemistry principles and Proce...
NASA Astrophysics Data System (ADS)
Wu, Qiujie; Tan, Liu; Xu, Sen; Liu, Dabin; Min, Li
2018-04-01
Numerous accidents involving emulsion explosive (EE) are attributed to uncontrolled thermal decomposition of ammonium nitrate emulsion (ANE, the intermediate of EE) and of EE at large scale. In order to study the thermal decomposition characteristics of ANE and EE at different scales, a large-scale modified vented pipe test (MVPT) and two laboratory-scale tests, differential scanning calorimetry (DSC) and accelerating rate calorimetry (ARC), were applied in the present study. The scale effect and the water effect both play an important role in the thermal stability of ANE and EE. The measured decomposition temperatures of ANE and EE in the MVPT are 146°C and 144°C, respectively, much lower than those in DSC and ARC. As the size of the same sample in DSC, ARC, and MVPT successively increases, the onset temperatures decrease. In the same test, the measured onset temperature of ANE is higher than that of EE. The water content of the sample has a stabilizing effect. The large-scale MVPT can provide information relevant to real-life operations; large-scale operations carry more risk, and continuous overheating should be avoided.
"Parents a dead end life": The main experiences of parents of children with leukemia.
Jadidi, Rahmatollah; Hekmatpou, Davood; Eghbali, Aziz; Memari, Fereshteh; Anbari, Zohreh
2014-11-01
The quantitative studies show that due to the widespread prevalence, high death rate, high treatment expenses, and long hospital stay, leukemia influences the families and their children to a great extent. In this regard, no qualitative study has been conducted in Iran. So, this study was conducted in Arak in 2011 with the aim of expressing the experiences of the parents whose children suffered from leukemia. Using a qualitative research approach and applying the content analysis method, 22 participants were interviewed in two educational hospitals during 2 months. The study was started by purposive sampling and continued with theoretical sampling. The data were analyzed based on the content analysis method. Data analysis showed that insolvency, knapsack problems, cancer secrecy, trust on God, self-sacrifice, adaptation, medical malpractice, and hospital facilities were the level 3 codes of parents' experiences and "parents a dead end life" was the main theme of this study. In this study, the experiences of the parents whose children suffered from cancer were studied deeply by the use of a qualitative method, especially by the use of resources syncretism rather than studying quantitatively. Parents a dead end life emerged as the main theme of this study, emphasizing the necessity of paying further attention to the parents. On the other hand, making more use of parents' experiences and encouraging them helps make the treatment more effective. It is suggested that these experiences be shared with parents in the form of pamphlets distributed right at the beginning of the treatment process.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
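For reference, the wide-angle limit mentioned at the end is the first-order small-perturbation (Bragg) result, which in the two-scale picture is then averaged over the tilts and shadowing of the large-scale surface; normalization conventions vary between authors, so this is only a schematic form:

\[
\sigma^{0}_{pp}(\theta) \;\propto\; k^{4} \cos^{4}\theta \,\left| g_{pp}(\theta) \right|^{2}\, W\!\left( 2k\sin\theta,\, 0 \right),
\]

where k is the electromagnetic wavenumber, θ the incidence angle, g_{pp} the polarization- and slope-dependent factor, and W the small-scale roughness spectrum evaluated at the Bragg wavenumber 2k sin θ.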
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size of remote microwave measurements is much coarser than that of the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting a simple zeroth-order estimate of each small-scale model state with each large-scale measurement, using a straightforward method based on Kalman filtering.
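A minimal sketch of the idea, assuming the coarse microwave retrieval observes the mean of the fine-scale model pixels: a single Kalman update spreads the coarse innovation over the fine grid according to the prior error covariance. The observation operator, covariances, and numbers below are illustrative assumptions, not values from the study.

```python
import numpy as np

def disaggregate_update(x_prior, P_prior, y_coarse, r_coarse):
    """One Kalman update: a coarse-pixel measurement y (the mean of the fine pixels)
    corrects the fine-scale prior x_prior. H is a simple averaging operator."""
    n = x_prior.size
    H = np.full((1, n), 1.0 / n)                      # coarse observation = mean of fine pixels
    S = H @ P_prior @ H.T + r_coarse                  # innovation covariance (1x1)
    K = P_prior @ H.T / S                             # Kalman gain (n x 1)
    x_post = x_prior + (K * (y_coarse - H @ x_prior)).ravel()
    P_post = (np.eye(n) - K @ H) @ P_prior
    return x_post, P_post

# illustrative fine-grid prior from a hydrological model, corrected by one coarse retrieval
x_prior = np.array([0.18, 0.22, 0.25, 0.20])          # volumetric soil moisture on the model grid
P_prior = 0.02**2 * (np.eye(4) + 0.5 * (np.ones((4, 4)) - np.eye(4)))   # correlated prior errors
x_post, _ = disaggregate_update(x_prior, P_prior, y_coarse=0.24, r_coarse=0.01**2)
print(x_post)
```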
Effects on aquatic and human health due to large scale bioenergy crop expansion.
Love, Bradley J; Einheuser, Matthew D; Nejadhashemi, A Pouyan
2011-08-01
In this study, the environmental impacts of large-scale bioenergy crops were evaluated using the Soil and Water Assessment Tool (SWAT). Daily pesticide concentration data for a study area consisting of four large watersheds located in Michigan (totaling 53,358 km²) were estimated over a six-year period (2000-2005). Model outputs for atrazine, bromoxynil, glyphosate, metolachlor, pendimethalin, sethoxydim, trifluralin, and 2,4-D were used to predict the possible long-term implications that large-scale bioenergy crop expansion may have on the bluegill (Lepomis macrochirus) and humans. Threshold toxicity levels were obtained for the bluegill and for human consumption for all pesticides being evaluated through an extensive literature review. Model output was compared to each toxicity level for the suggested exposure time (96-hour for bluegill and 24-hour for humans). The results suggest that traditional intensive row crops such as canola, corn and sorghum may negatively impact aquatic life, and in most cases affect safe drinking water availability. The continuous corn rotation, the most representative rotation for current agricultural practices for a starch-based ethanol economy, delivers the highest concentrations of glyphosate to the stream. In addition, continuous canola contributed to a concentration of 1.11 ppm of trifluralin, a highly toxic herbicide, which is 8.7 times the 96-hour ecotoxicity of bluegills and 21 times the safe drinking water level. Also during the period of study, continuous corn resulted in the impairment of 541,152 km of stream. However, there is promise with second-generation lignocellulosic bioenergy crops such as switchgrass, which resulted in a 171,667 km reduction in total stream length that exceeds the human threshold criteria, as compared to the base scenario. Results of this study may be useful in determining the suitability of bioenergy crop rotations and aid in decision making regarding the adaptation of large-scale bioenergy cropping systems. Published by Elsevier B.V.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
Numerous nature-inspired optimization techniques have been proposed in the literature for solving continuous nonlinear optimization problems; they can be applied to real-life problems where conventional techniques cannot. The Grey Wolf Optimizer is one such technique, and it has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley, and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except for Rosenbrock, which is a unimodal function.
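For readers unfamiliar with the method, the following Python sketch implements the basic Grey Wolf Optimizer update, in which each wolf moves toward the three best solutions (alpha, beta, delta) while the exploration parameter a decays from 2 to 0. Population size, iteration count, and the use of the current population's top three (rather than best-so-far leaders) are simplifying assumptions of this sketch.

```python
import numpy as np

def grey_wolf_optimizer(f, dim, bounds, n_wolves=30, n_iter=500, seed=0):
    """Minimal Grey Wolf Optimizer sketch: every wolf moves toward the three best
    current solutions (alpha, beta, delta); a decays linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(x) for x in X])
        leaders = X[np.argsort(fitness)[:3]]           # alpha, beta, delta of this iteration
        a = 2.0 * (1.0 - t / n_iter)
        pulls = np.zeros_like(X)
        for leader in leaders:
            A = 2.0 * a * rng.random((n_wolves, dim)) - a
            C = 2.0 * rng.random((n_wolves, dim))
            D = np.abs(C * leader - X)                  # encircling distance to the leader
            pulls += leader - A * D
        X = np.clip(pulls / 3.0, lo, hi)                # average of the three candidate moves
    fitness = np.array([f(x) for x in X])
    best = int(np.argmin(fitness))
    return X[best], float(fitness[best])

# example on the 100-dimensional sphere function
best_x, best_f = grey_wolf_optimizer(lambda x: float(np.sum(x * x)), dim=100, bounds=(-100.0, 100.0))
print(best_f)
```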
Recent developments in large-scale ozone generation with dielectric barrier discharges
NASA Astrophysics Data System (ADS)
Lopez, Jose L.
2014-10-01
Large-scale ozone generation for industrial applications has been entirely based on microplasmas or microdischarges created using dielectric barrier discharge (DBD) reactors. Although versions of DBD-generated ozone have been in continuous use for over a hundred years, especially in water treatment, recent changes in environmental awareness and sustainability have led to a surge of ozone-generating facilities throughout the world. As a result of this enhanced global usage of this environmental cleaning application, various new discoveries have emerged in the science and technology of ozone generation. This presentation will describe some of the most recent breakthrough developments in large-scale ozone generation while further addressing some of the current scientific and engineering challenges of this technology.
TARGET Publication Guidelines | Office of Cancer Genomics
Like other NCI large-scale genomics initiatives, TARGET is a community resource project and data are made available rapidly after validation for use by other researchers. To act in accord with the Fort Lauderdale principles and support the continued prompt public release of large-scale genomic data prior to publication, researchers who plan to prepare manuscripts containing descriptions of TARGET pediatric cancer data that would be of comparable scope to an initial TARGET disease-specific comprehensive, global analysis publication, and journal editors who receive such manuscripts, are
1985-01-01
Aquatic Plant Control Research Program: Large-Scale Operations Management. U.S. Army Engineer Waterways Experiment Station, PO Box 631, Vicksburg, Mississippi 39180-0631, and University of Tennessee-Chattanooga, Chattanooga. Key words: aquatic plant control, Louisiana, biological control.
2012-05-01
pressures on supply that led to the global food crisis of 2007 and 2008, allowing prices to fall from their peak in August 2008, the foundational...involved in the acquisition of farmland. This trend is also unlikely to slow, with food prices continuing to climb, surpassing the highs of 2007 and...and general secrecy in most large-scale land acquisition contracts, exact data regarding the number of deals and amount of land transferred are
Erin L. Landguth; Michael K. Schwartz
2014-01-01
One of the most pressing issues in spatial genetics concerns sampling. Traditionally, substructure and gene flow are estimated for individuals sampled within discrete populations. Because many species may be continuously distributed across a landscape without discrete boundaries, understanding sampling issues becomes paramount. Given large-scale, geographically broad...
You Have One Hour: Developing a Standardized Library Orientation and Evaluating Student Learning
ERIC Educational Resources Information Center
Brown, Elizabeth
2017-01-01
Library orientations continue to excite, or plague, instruction librarians everywhere. Reaching first year students early can preempt academic heartache and research woes, yet the question of "what students really need" continues to evolve. This article presents a case study of a large-scale implementation of library orientations. The…
Outcomes in a Randomised Controlled Trial of Mathematics Tutoring
ERIC Educational Resources Information Center
Topping, K. J.; Miller, D.; Murray, P.; Henderson, S.; Fortuna, C.; Conlin, N.
2011-01-01
Background: Large-scale randomised controlled trials (RCT) are relatively rare in education. The present study was an attempt to scale up previous small peer tutoring projects, while investing only modestly in continuing professional development for teachers. Purpose: A two-year RCT of peer tutoring in mathematics was undertaken in one local…
Cooperation from the Perspective of an Outsourced Maintenance Provider
NASA Astrophysics Data System (ADS)
Grüßer, Stefan; Loeven, Heinz-Wilhelm
Lasting corporate success can only be achieved with a progressive maintenance operation. The enormous cost pressure resulting from globalization, together with leaps in innovation on the technical side, also raises the question of the appropriate modern organizational form for maintenance. One option for cost optimization is the outsourcing of maintenance services; here it is essential that the employees develop into service providers. This contribution describes the development of InfraServ Knapsack from an internal maintenance department into an industrial service provider and outlines aspects of the cooperation with external customers from the perspective of the outsourced maintenance provider. The important development steps toward service orientation of the former in-house maintenance organization are presented. This contribution is not to be understood as a "royal road"; rather, based on the experience of an outsourced in-house maintenance operation, it is intended to provide suggestions for developing one's own maintenance organization.
Various Numerical Applications on Tropical Convective Systems Using a Cloud Resolving Model
NASA Technical Reports Server (NTRS)
Shie, C.-L.; Tao, W.-K.; Simpson, J.
2003-01-01
In recent years, increasing attention has been given to cloud resolving models (CRMs or cloud ensemble models, CEMs) for their ability to simulate the radiative-convective system, which plays a significant role in determining the regional heat and moisture budgets in the Tropics. The growing popularity of CRM usage can be credited to its inclusion of crucial and physically relatively realistic features such as explicit cloud-scale dynamics, sophisticated microphysical processes, and explicit cloud-radiation interaction. On the other hand, impacts of the environmental conditions (for example, the large-scale wind fields, heat and moisture advections as well as sea surface temperature) on the convective system can also be plausibly investigated using the CRMs with imposed explicit forcing. In this paper, by basically using a Goddard Cumulus Ensemble (GCE) model, three different studies on tropical convective systems are briefly presented. Each of these studies serves a different goal as well as uses a different approach. In the first study, which uses more of an idealized approach, the respective impacts of the large-scale horizontal wind shear and surface fluxes on the modeled tropical quasi-equilibrium states of temperature and water vapor are examined. In this 2-D study, the imposed large-scale horizontal wind shear is ideally either nudged (wind shear maintained strong) or mixed (wind shear weakened), while the minimum surface wind speed used for computing surface fluxes varies among various numerical experiments. For the second study, a handful of real tropical episodes (TRMM Kwajalein Experiment - KWAJEX, 1999; TRMM South China Sea Monsoon Experiment - SCSMEX, 1998) have been simulated such that several major atmospheric characteristics, such as the rainfall amount and its associated stratiform contribution, and the Q1/heat and Q2/moisture budgets are investigated. In this study, the observed large-scale heat and moisture advections are continuously applied to the 2-D model. The modeled cloud generated from such an approach is termed continuously forced convection or continuous large-scale forced convection. A third study, which focuses on the respective impact of atmospheric components on upper-ocean heat and salt budgets, is presented at the end. Unlike the two previous 2-D studies, this study employs the 3-D GCE-simulated diabatic source terms (using TOGA COARE observations) - radiation (longwave and shortwave), surface fluxes (sensible and latent heat, and wind stress), and precipitation as input for an ocean mixed-layer (OML) model.
Spiking neural network simulation: memory-optimal synaptic event scheduling.
Stewart, Robert D; Gurney, Kevin N
2011-06-01
Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
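To illustrate how scheduling memory can scale with neurons rather than synapses, here is a toy Python sketch in which each neuron owns a circular buffer of length max_delay and each spike deposits its synaptic contribution into the slot corresponding to its delivery time step. This is only an illustration of the general idea, not the discrete- or continuous-delivery algorithms introduced in the paper.

```python
import numpy as np

class RingBufferScheduler:
    """Per-neuron circular buffers of length max_delay (in time steps): memory grows with
    the number of neurons, not with the number of scheduled synaptic events."""
    def __init__(self, n_neurons, max_delay):
        self.max_delay = max_delay
        self.buffers = np.zeros((n_neurons, max_delay))    # one slot per future time step

    def schedule(self, target, weight, delay, t):
        """Deposit a synaptic contribution arriving 'delay' steps after time step t."""
        self.buffers[target, (t + delay) % self.max_delay] += weight

    def deliver(self, t):
        """Return and clear all contributions due at time step t."""
        slot = t % self.max_delay
        due = self.buffers[:, slot].copy()
        self.buffers[:, slot] = 0.0
        return due

# toy usage: neuron 0 spikes at t=0 and drives neurons 1 and 2 with different delays
sched = RingBufferScheduler(n_neurons=3, max_delay=16)
sched.schedule(target=1, weight=0.5, delay=3, t=0)
sched.schedule(target=2, weight=0.2, delay=7, t=0)
for t in range(8):
    inputs = sched.deliver(t)
    if inputs.any():
        print(t, inputs)
```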
Large-scale flow experiments for managing river systems
Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.
2011-01-01
Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.
Structure of large-scale flows and their oscillation in the thermal convection of liquid gallium.
Yanagisawa, Takatoshi; Yamagishi, Yasuko; Hamano, Yozo; Tasaka, Yuji; Yoshida, Masataka; Yano, Kanako; Takeda, Yasushi
2010-07-01
This investigation observed large-scale flows and their oscillation in Rayleigh-Bénard convection of liquid gallium. An ultrasonic velocity profiling method was used to visualize the spatiotemporal flow pattern of the liquid gallium in a horizontally long rectangular vessel. Measuring the horizontal component of the flow velocity along several lines, an organized roll-like structure with four cells was observed in the 1×10⁴-2×10⁵ range of Rayleigh numbers, and the rolls show clear oscillatory behavior. The long-term fluctuations in temperature observed in point measurements correspond to the oscillations of the organized roll structure. This flow structure can be interpreted as the continuous development of the oscillatory instability of two-dimensional roll convection that has been theoretically investigated around the critical Rayleigh number. Both the velocity of the large-scale flows and the frequency of the oscillation increase in proportion to the square root of the Rayleigh number. This indicates that the oscillation is closely related to the circulation of the large-scale flow.
Large-scale horizontal flows from SOUP observations of solar granulation
NASA Technical Reports Server (NTRS)
November, L. J.; Simon, G. W.; Tarbell, T. D.; Title, A. M.; Ferguson, S. H.
1987-01-01
Using high resolution time sequence photographs of solar granulation from the SOUP experiment on Spacelab 2, large scale horizontal flows were observed in the solar surface. The measurement method is based upon a local spatial cross correlation analysis. The horizontal motions have amplitudes in the range 300 to 1000 m/s. Radial outflow of granulation from a sunspot penumbra into surrounding photosphere is a striking new discovery. Both the supergranulation pattern and cellular structures having the scale of mesogranulation are seen. The vertical flows that are inferred by continuity of mass from these observed horizontal flows have larger upflow amplitudes in cell centers than downflow amplitudes at cell boundaries.
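The local spatial cross-correlation idea can be sketched in a few lines of Python: the displacement of a small window between two frames is taken as the location of the peak of their cross-correlation. Window size, the synthetic data, and the integer-pixel peak search are illustrative simplifications; the actual analysis works at sub-pixel precision over many windows.

```python
import numpy as np

def local_shift(frame_a, frame_b):
    """Integer-pixel displacement of frame_b relative to frame_a, from the peak of the
    FFT-based circular cross-correlation of the two mean-removed windows."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.irfft2(np.fft.rfft2(a).conj() * np.fft.rfft2(b), s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = a.shape
    if dy > ny // 2: dy -= ny          # wrap shifts larger than half the window to negatives
    if dx > nx // 2: dx -= nx
    return dy, dx

# synthetic granulation-like pattern seen through a window that drifts between exposures
rng = np.random.default_rng(1)
big = rng.standard_normal((256, 256))
frame_a = big[100:164, 100:164]
frame_b = big[102:166, 97:161]         # same underlying pattern, window moved 2 px down, 3 px left
print(local_shift(frame_a, frame_b))   # -> (-2, 3): pattern appears 2 px up, 3 px right in frame_b
```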
The end-stage renal disease industry and exit strategies for nephrologists.
Sullivan, John D
2006-01-01
The purpose of this presentation is to identify exit strategies for nephrologists under changing conditions in the dialysis market. The end-stage renal disease service provider market continues to be highly receptive to consolidation. Taking advantage of large economies of scale, large for-profit dialysis chains have surpassed independent operators in both number of clinics and total patients. With relatively low barriers to entry, new smaller clinics continue to open, serving a niche outside the larger chains. Additional competition comes in the form of medium players funded by venture capitalists with the added pressure of rapid growth and financial return. To ensure market power in both dialysis products and managed care negotiation leverage, medium and large service providers will continue to seek out attractive acquisition targets. For nephrologists to capitalize on investment, clinic and business preparation will continue to be the driving force for these divestitures.
Source imaging of potential fields through a matrix space-domain algorithm
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio
2017-01-01
Imaging of potential fields yields a fast 3D representation of their source distribution. Imaging methods are all based on multiscale approaches that allow the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. Here we describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of each matrix needs to be computed, which dramatically decreases the computational cost. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as is instead required for Fourier-domain computation. It also allows level-to-drape upward continuation and stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multiscale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate strongly at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
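As a small illustration of why only one row of the operator is needed, the upward-continuation kernel depends only on the horizontal offset between grid nodes, so the whole matrix is determined by a single row (a convolution structure). The Python sketch below applies the classical plane-level upward-continuation integral by direct space-domain summation; grid size, heights, and the point-source check are illustrative, and no edge extension or tapering is attempted.

```python
import numpy as np

def upward_continue(field, dx, dy, h):
    """Space-domain upward continuation of gridded data by height h, i.e. a direct
    discretisation of U_up = (h / 2*pi) * integral U / ((x-x')^2 + (y-y')^2 + h^2)^(3/2).
    The kernel depends only on the node offset, so one kernel 'row' defines the operator."""
    ny, nx = field.shape
    yy = (np.arange(-ny + 1, ny) * dy)[:, None]
    xx = (np.arange(-nx + 1, nx) * dx)[None, :]
    kernel = h / (2.0 * np.pi) * (xx**2 + yy**2 + h**2) ** (-1.5) * dx * dy
    out = np.empty_like(field, dtype=float)
    for i in range(ny):
        for j in range(nx):
            # correlate the data with the (even) kernel centred on node (i, j)
            out[i, j] = np.sum(field * kernel[ny - 1 - i : 2 * ny - 1 - i,
                                              nx - 1 - j : 2 * nx - 1 - j])
    return out

# check: the field of a point source at depth d, continued upward by h, should resemble
# the field of the same source evaluated from depth d + h (up to discretisation and edges)
ny, nx, step = 64, 64, 1.0
y, x = np.meshgrid(np.arange(ny) * step, np.arange(nx) * step, indexing="ij")
def point_source(depth):
    return depth / ((x - 32.0) ** 2 + (y - 32.0) ** 2 + depth**2) ** 1.5
up = upward_continue(point_source(10.0), step, step, h=5.0)
print(np.abs(up - point_source(15.0)).max())
```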
NASA Astrophysics Data System (ADS)
Kendall, E. A.; Bhatt, A.
2017-12-01
The Midlatitude Allsky-imaging Network for GeoSpace Observations (MANGO) is a network of imagers filtered at 630 nm spread across the continental United States. MANGO is used to image large-scale airglow and aurora features and observes the generation, propagation, and dissipation of medium and large-scale wave activity in the subauroral, mid and low-latitude thermosphere. This network consists of seven all-sky imagers providing continuous coverage over the United States and extending south into Mexico. This network sees high levels of medium and large scale wave activity due to both neutral and geomagnetic storm forcing. The geomagnetic storm observations largely fall into two categories: Stable Auroral Red (SAR) arcs and Large-scale traveling ionospheric disturbances (LSTIDs). In addition, less-often observed effects include anomalous airglow brightening, bright swirls, and frozen-in traveling structures. We will present an analysis of multiple events observed over four years of MANGO network operation. We will provide both statistics on the cumulative observations and a case study of the "Memorial Day Storm" on May 27, 2017.
A holistic approach for large-scale derived flood frequency analysis
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno
2017-04-01
Spatial consistency, which has usually been disregarded because of the reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for deriving flood frequencies consistently at large scales. A large-scale two-component model has been established for simulating very long-term multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km² including Germany and parts of nearby countries. The model performance has been examined multi-objectively with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
Targeted enrichment strategies for next-generation plant biology
Richard Cronn; Brian J. Knaus; Aaron Liston; Peter J. Maughan; Matthew Parks; John V. Syring; Joshua Udall
2012-01-01
The dramatic advances offered by modern DNA sequencers continue to redefine the limits of what can be accomplished in comparative plant biology. Even with recent achievements, however, plant genomes present obstacles that can make it difficult to execute large-scale population and phylogenetic studies on next-generation sequencing platforms. Factors like large genome...
USDA-ARS?s Scientific Manuscript database
The objective of this paper is to study shedding patterns of cows infected with Mycobacterium avium subsp. paratuberculosis (MAP). While multiple single-farm studies of MAP dynamics have been reported, there is no large-scale meta-analysis of both natural and experimental infections. Large difference...
Continuing professional education: Status, trends, and issues related to electronic delivery
NASA Technical Reports Server (NTRS)
Rothenberg, D.
1975-01-01
Four professional groups, teachers, doctors, lawyers, and engineers were examined to determine if they constitute a potential market for continuing professional education via large scale electronic technology. Data were collected in view of social and economic forces, such as mandatory periodic relicensure, additional course requirements for certification, or the economic health of supporting industries.
USDA-ARS?s Scientific Manuscript database
A new stackable modular system was developed for continuous in-vivo production of phytoseiid mites. The system consists of cage units that are filled with lima bean, Phaseolus lunatus, or red beans, P. vulgaris, leaves infested with high levels of the two-spotted spider mites, Tetranychus urticae. T...
Native fish conservation areas: a vision for large-scale conservation of native fish communities
Jack E. Williams; Richard N. Williams; Russell F. Thurow; Leah Elwell; David P. Philipp; Fred A. Harris; Jeffrey L. Kershner; Patrick J. Martinez; Dirk Miller; Gordon H. Reeves; Christopher A. Frissell; James R. Sedell
2011-01-01
The status of freshwater fishes continues to decline despite substantial conservation efforts to reverse this trend and recover threatened and endangered aquatic species. Lack of success is partially due to working at smaller spatial scales and focusing on habitats and species that are already degraded. Protecting entire watersheds and aquatic communities, which we...
NASA Technical Reports Server (NTRS)
Allen, N. C.
1978-01-01
Implementation of SOLARES will input large quantities of heat continuously into a stationary location on the Earth's surface. The quantity of heat released by each of the SOLARES ground receivers, for a reflector orbit height of 6378 km, is 30 times that released by the large power parks that were studied in detail. Using atmospheric models, estimates are presented for the local weather effects, the synoptic-scale effects, and the global-scale effects of such intense thermal radiation.
Solar-Power System Produces High-Pressure Steam
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1985-01-01
A combination of three multistaged solar collectors produces high-pressure steam for large-scale, continuously operating turbines that generate mechanical or electrical energy. Superheated water vapor drives the turbines, attaining an overall system efficiency of about 22 percent.
Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft
NASA Astrophysics Data System (ADS)
Carfang, Anthony
This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios, and assessed analytically and through Monte Carlo simulation, showing better throughput performance in the same order of magnitude of computation time in comparison to other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of results on trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving RF models, the cascaded planner can continually improve the ferrying system's overall performance.
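To make the flavor of the link-scheduling heuristics concrete, here is a toy Python sketch of a most-data-first-style policy for one ferrying pass with a single relay buffer: at each step the aircraft serves whichever link (source uplink or destination downlink) has more data it could move. The rates, backlog, and tie-breaking rule are hypothetical and the sketch ignores vehicle dynamics, so it illustrates the policy class rather than the dissertation's algorithms.

```python
def most_data_first_ferry(up_rate, down_rate, source_backlog):
    """Most-data-first sketch for one ferrying pass: at each discrete step, activate the
    source->aircraft or aircraft->destination link, whichever queue has more data."""
    buffer_on_aircraft, delivered, schedule = 0, 0, []
    for t in range(len(up_rate)):
        can_up = min(source_backlog, up_rate[t])        # data movable over the uplink now
        can_down = min(buffer_on_aircraft, down_rate[t])  # data movable over the downlink now
        if can_up == 0 and can_down == 0:
            schedule.append(None)
            continue
        if can_down == 0 or (can_up > 0 and source_backlog >= buffer_on_aircraft):
            source_backlog -= can_up
            buffer_on_aircraft += can_up
            schedule.append("up")
        else:
            buffer_on_aircraft -= can_down
            delivered += can_down
            schedule.append("down")
    return schedule, delivered

# hypothetical pass: the uplink is strong near the source early in the orbit,
# the downlink is strong near the destination later in the orbit
up_rate   = [8, 8, 6, 2, 0, 0, 0, 0]
down_rate = [0, 0, 1, 3, 6, 8, 8, 4]
print(most_data_first_ferry(up_rate, down_rate, source_backlog=20))
```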
Cyclicity in Upper Mississippian Bangor Limestone, Blount County, Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronner, R.L.
1988-01-01
The Upper Mississippian (Chesterian) Bangor Limestone in Alabama consists of a thick, complex sequence of carbonate platform deposits. A continuous core through the Bangor on Blount Mountain in north-central Alabama provides the opportunity to analyze the unit for cyclicity and to identify controls on vertical facies sequence. Lithologies from the core represent four general environments of deposition: (1) subwave-base, open marine, (2) shoal, (3) lagoon, and (4) peritidal. Analysis of the vertical sequence of lithologies in the core indicates the presence of eight large-scale cycles dominated by subtidal deposits, but defined on the basis of peritidal caps. These large-scale cycles can be subdivided into 16 small-scale cycles that may be entirely subtidal but illustrate upward shallowing followed by rapid deepening. Large-scale cycles range from 33 to 136 ft thick, averaging 68 ft; small-scale cycles range from 5 to 80 ft thick and average 34 ft. Small-scale cycles have an average duration of approximately 125,000 years, which is compatible with Milankovitch periodicity. The large-scale cycles have an average duration of approximately 250,000 years, which may simply reflect variations in amplitude of sea level fluctuation or the influence of tectonic subsidence along the southeastern margin of the North American craton.
Sánchez, R; Carreras, B A; van Milligen, B Ph
2005-01-01
The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.
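For orientation, fluid limits of continuous-time random walks are typically written as space-time fractional diffusion equations of the generic form below; the nonlinear, multi-mechanism equations derived in the paper are more general, so this is only an illustrative special case:

\[
\frac{\partial^{\beta} n(x,t)}{\partial t^{\beta}}
= D \, \frac{\partial^{\alpha} n(x,t)}{\partial |x|^{\alpha}},
\qquad 0 < \beta \le 1, \;\; 0 < \alpha \le 2,
\]

where \( \partial^{\beta}/\partial t^{\beta} \) is a Caputo time-fractional derivative capturing waiting-time memory, \( \partial^{\alpha}/\partial |x|^{\alpha} \) a Riesz space-fractional derivative capturing long jumps, and the choice α = 2, β = 1 recovers ordinary diffusion.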
Large-scale horizontal flows from SOUP observations of solar granulation
NASA Astrophysics Data System (ADS)
November, L. J.; Simon, G. W.; Tarbell, T. D.; Title, A. M.; Ferguson, S. H.
1987-09-01
Using high-resolution time-sequence photographs of solar granulation from the SOUP experiment on Spacelab 2 the authors observed large-scale horizontal flows in the solar surface. The measurement method is based upon a local spatial cross correlation analysis. The horizontal motions have amplitudes in the range 300 to 1000 m/s. Radial outflow of granulation from a sunspot penumbra into the surrounding photosphere is a striking new discovery. Both the supergranulation pattern and cellular structures having the scale of mesogranulation are seen. The vertical flows that are inferred by continuity of mass from these observed horizontal flows have larger upflow amplitudes in cell centers than downflow amplitudes at cell boundaries.
ERIC Educational Resources Information Center
Walsh, Christopher S.; Power, Tom; Khatoon, Masuda; Biswas, Sudeb Kumar; Paul, Ashok Kumar; Sarkar, Bikash Chandra; Griffiths, Malcolm
2013-01-01
Examples of mobile phones being used with teachers to provide continuing professional development (CPD) in emerging economies at scale are largely absent from the research literature. We outline English in Action's (EIA) model for providing 80,000 teachers with CPD to improve their communicative language teaching in Bangladesh over nine years.…
Status and Prospects of Small Farmers in the South.
ERIC Educational Resources Information Center
Marshall, Ray; Thompson, Allen
The large scale displacement of small farmers in the South is an important concern to all persons interested in the problems of low-income people. Despite a decline in the numerical significance of farming, a large part of the South remains rural, and agriculture continues to significantly influence the rural economy and rural labor markets. The…
D.J. Hayes; W.B. Cohen
2006-01-01
This article describes the development of a methodology for scaling observations of changes in tropical forest cover to large areas at high temporal frequency from coarse-resolution satellite imagery. The approach for estimating proportional forest cover change as a continuous variable is based on a regression model that relates multispectral, multitemporal Moderate...
NASA Astrophysics Data System (ADS)
Pratt, Lawrence M.; Strothers, Joel; Pinnock, Travis; Hilaire, Dickens Saint; Bacolod, Beatrice; Cai, Zhuo Biao; Sim, Yoke-Leng
2017-04-01
Brown grease is a generic term for the oily solids and semi-solids that accumulate in the sewer system and in sewage treatment plants. It has previously been shown that brown grease undergoes pyrolysis to form a homologous series of alkanes and 1-alkenes between 7 and 17 carbon atoms, with smaller amounts of higher hydrocarbons and ketones up to about 30 carbon atoms. The initial study was performed in batch mode on a scale of up to 50 grams of starting material. However, continuous processes are usually more efficient for large scale production of fuels and commodity chemicals. This work describes the research and development of a continuous process. The first step was to determine the required reactor temperature. Brown grease consists largely of saturated and unsaturated fatty acids, and they react at different rates, and produce different products and intermediates. Intermediates include ketones, alcohols, and aldehydes, and Fe(III) ion catalyzes at least some of the reactions. By monitoring the pyrolysis of brown grease, its individual components, and intermediates, it was determined that a reactor temperature of at least 340 °C is required. A small scale (1 L) continuous stirred tank reactor was built and its performance is described.
Relatedness-based Multi-Entity Summarization
Gunaratna, Kalpa; Yazdavar, Amir Hossein; Thirunarayan, Krishnaprasad; Sheth, Amit; Cheng, Gong
2017-01-01
Representing world knowledge in a machine processable format is important as entities and their descriptions have fueled tremendous growth in knowledge-rich information processing platforms, services, and systems. Prominent applications of knowledge graphs include search engines (e.g., Google Search and Microsoft Bing), email clients (e.g., Gmail), and intelligent personal assistants (e.g., Google Now, Amazon Echo, and Apple’s Siri). In this paper, we present an approach that can summarize facts about a collection of entities by analyzing their relatedness in preference to summarizing each entity in isolation. Specifically, we generate informative entity summaries by selecting: (i) inter-entity facts that are similar and (ii) intra-entity facts that are important and diverse. We employ a constrained knapsack problem solving approach to efficiently compute entity summaries. We perform both qualitative and quantitative experiments and demonstrate that our approach yields promising results compared to two other stand-alone state-of-the-art entity summarization approaches. PMID:29051696
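The selection step can be illustrated with a much-simplified greedy sketch in Python: facts are added to the summary in order of importance minus a redundancy penalty until a size budget is reached. The facts, scores, and penalty are made up, and the paper's actual method solves a constrained knapsack formulation over inter- and intra-entity facts rather than this greedy approximation.

```python
def summarize(facts, budget, diversity_weight=0.5):
    """Greedy budgeted selection sketch: repeatedly add the fact with the best
    importance-minus-redundancy score until the summary budget is filled."""
    chosen = []
    while len(chosen) < budget:
        best, best_score = None, float("-inf")
        for fact in facts:
            if fact in chosen:
                continue
            prop, value, importance = fact
            # redundancy: how many already-chosen facts share the same property
            redundancy = sum(1 for p, _, _ in chosen if p == prop)
            score = importance - diversity_weight * redundancy
            if score > best_score:
                best, best_score = fact, score
        if best is None:
            break
        chosen.append(best)
    return chosen

# hypothetical facts about one entity: (property, value, importance score)
facts = [("type", "Person", 0.90), ("type", "Agent", 0.80),
         ("birthPlace", "CityA", 0.70), ("occupation", "Scientist", 0.65),
         ("almaMater", "UnivB", 0.60)]
print(summarize(facts, budget=3))
```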
High Fidelity Simulations of Large-Scale Wireless Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onunkwo, Uzoma; Benz, Zachary
The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulations (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
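For readers unfamiliar with the underlying machinery, here is a minimal sequential discrete event simulation loop in Python of the kind that PDES parallelizes; a PDES engine partitions the event queue across processes, which then must synchronize so that events are still handled in timestamp order, and that synchronization is the communication overhead discussed above. The scenario is a toy and is unrelated to MINIMEGA or FIREWHEEL.

```python
import heapq
from itertools import count

def run_des(initial_events, handlers, t_end):
    """Minimal sequential DES: pop the earliest event, handle it, push any new events."""
    seq = count()                                  # tie-breaker so the heap never compares payloads
    queue = [(t, next(seq), kind, payload) for (t, kind, payload) in initial_events]
    heapq.heapify(queue)
    while queue:
        t, _, kind, payload = heapq.heappop(queue)
        if t > t_end:
            break
        for new_t, new_kind, new_payload in handlers[kind](t, payload):
            heapq.heappush(queue, (new_t, next(seq), new_kind, new_payload))

# toy wireless-flavoured scenario: node 0 broadcasts every 2 s and a neighbour
# receives each packet after a 0.3 s transmission-plus-propagation latency
def on_send(t, node):
    print(f"{t:4.1f} s  node {node} sends")
    return [(t + 0.3, "recv", (node, 1)), (t + 2.0, "send", node)]

def on_recv(t, link):
    src, dst = link
    print(f"{t:4.1f} s  node {dst} receives from node {src}")
    return []

run_des([(0.0, "send", 0)], {"send": on_send, "recv": on_recv}, t_end=6.0)
```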
“Parents a dead end life”: The main experiences of parents of children with leukemia
Jadidi, Rahmatollah; Hekmatpou, Davood; Eghbali, Aziz; Memari, Fereshteh; Anbari, Zohreh
2014-01-01
Background: Quantitative studies show that, due to its widespread prevalence, high death rate, high treatment expenses, and long hospital stays, leukemia influences families and their children to a great extent. In this regard, no qualitative study has been conducted in Iran. So, this study was conducted in Arak in 2011 with the aim of expressing the experiences of parents whose children suffered from leukemia. Materials and Methods: Using a qualitative research approach and applying the content analysis method, 22 participants were interviewed in two educational hospitals during 2 months. The study was started by purposive sampling and continued by theoretical sampling. The data were analyzed based on the content analysis method. Results: Data analysis showed that insolvency, knapsack problems, cancer secrecy, trust in God, self-sacrifice, adaptation, medical malpractice, and hospital facilities were the level 3 codes of parents’ experiences, and “parents a dead end life” was the main theme of this study. Conclusion: In this study, the experiences of parents whose children suffered from cancer were studied deeply by the use of a qualitative method, especially by the use of resources syncretism, rather than studying them quantitatively. Parents a dead end life emerged as the main theme of this study, emphasizing the necessity of paying further attention to the parents. On the other hand, making more use of parents’ experiences and encouraging them helps make the treatment more effective. It is suggested that these experiences be shared with parents in the form of pamphlets distributed right at the beginning of the treatment process. PMID:25558257
NASA Astrophysics Data System (ADS)
Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.
2017-05-01
Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr-1 in the first half of the study period, to >7900 kha yr-1 in the second half of the study period, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.
Mechanisation of large-scale agricultural fields in developing countries - a review.
Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila
2016-09-01
Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate the technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high cost of production, among others. Therefore this paper attempts to evaluate the application of present-day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present-day technology, its management and application to large-scale agricultural fields. This review also emphasizes a future outlook that would enable a gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry.
Klein, Brennan J; Li, Zhi; Durgin, Frank H
2016-04-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Klein, Brennan J.; Li, Zhi; Durgin, Frank H.
2015-01-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides in order to dissociate egocentric from allocentric reference frames. In Experiment 1 it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. PMID:26594884
Wiley, Joshua S; Shelley, Jacob T; Cooks, R Graham
2013-07-16
We describe a handheld, wireless low-temperature plasma (LTP) ambient ionization source and its performance on a benchtop and a miniature mass spectrometer. The source, which is inexpensive to build and operate, is battery-powered and utilizes miniature helium cylinders or air as the discharge gas. Comparison of a conventional, large-scale LTP source against the handheld LTP source, which uses less helium and power than the large-scale version, revealed that the handheld source had similar or slightly better analytical performance. Another advantage of the handheld LTP source is the ability to quickly interrogate a gaseous, liquid, or solid sample without requiring any setup time. A small, 7.4-V Li-polymer battery is able to sustain plasma for 2 h continuously, while the miniature helium cylinder supplies gas flow for approximately 8 continuous hours. Long-distance ion transfer was achieved for distances up to 1 m.
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries covering several million square kilometres. This paper presents the research and technological development conducted to support national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The use of many data sources and the integration of these data to form a high-accuracy, quality-checked product were required. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modelling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different geographic places in the country.
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for the continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
Large-Scale Analysis of Network Bistability for Human Cancers
Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki
2010-01-01
Protein–protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618
Performance of Sweetpotato for Bioregenerative Life Support
NASA Technical Reports Server (NTRS)
Barta, Daniel J.; Henderson, Keith E.; Mortley, Desmond G.; Henninger, Donald L.
2001-01-01
Sweetpotato was successfully grown to harvest maturity in a large-scale atmospherically-closed controlled environment chamber. Yield of edible biomass and capacity for contributing to air revitalization and water recovery were documented. Yield was slightly less than that found in smaller-scale studies, but this is not unusual (Wheeler 1999). Continued work is suggested to improve control of storage root initiation, bulking and vine growth.
Large scale in vivo recordings to study neuronal biophysics.
Giocomo, Lisa M
2015-06-01
Over the last several years, technological advances have enabled researchers to more readily observe single-cell membrane biophysics in awake, behaving animals. Studies utilizing these technologies have provided important insights into the mechanisms generating functional neural codes in both sensory and non-sensory cortical circuits. Crucial for a deeper understanding of how membrane biophysics controls circuit dynamics, however, is a continued effort to move toward large-scale studies of membrane biophysics, in terms of the numbers of neurons and ion channels examined. Future work faces a number of theoretical and technical challenges on this front, but recent technological developments hold great promise for a larger-scale understanding of how membrane biophysics contributes to circuit coding and computation. Copyright © 2014 Elsevier Ltd. All rights reserved.
The stability properties of cylindrical force-free fields - Effect of an external potential field
NASA Technical Reports Server (NTRS)
Chiuderi, C.; Einaudi, G.; Ma, S. S.; Van Hoven, G.
1980-01-01
A large-scale potential field with an embedded smaller-scale force-free structure, ∇ × B = αB, is studied in cylindrical geometry. Cases in which α goes continuously from a constant value α0 on the axis to zero at large r are considered. Such a choice of α(r) produces fields which are realistic (few field reversals) but not completely stable. The MHD-unstable wavenumber regime is found. Since the considered equilibrium field exhibits a certain amount of magnetic shear, resistive instabilities can arise. The growth rates of the tearing mode in the limited MHD-stable region of k space are calculated, showing time-scales much shorter than the resistive decay time.
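For orientation, the component form of the force-free condition in this cylindrical geometry is the standard one sketched below; this is textbook background rather than anything specific to the paper, and the radial profile α(r) is left unspecified.

```latex
% Force-free condition \nabla \times \mathbf{B} = \alpha(r)\,\mathbf{B} for
% \mathbf{B} = (0,\, B_\theta(r),\, B_z(r)) in cylindrical coordinates:
\begin{aligned}
 -\frac{dB_z}{dr} &= \alpha(r)\, B_\theta(r), \\
 \frac{1}{r}\frac{d}{dr}\bigl(r\, B_\theta(r)\bigr) &= \alpha(r)\, B_z(r).
\end{aligned}
```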
Enzymatic regeneration of adenosine triphosphate cofactor
NASA Technical Reports Server (NTRS)
Marshall, D. L.
1974-01-01
Regenerating adenosine triphosphate (ATP) from adenosine diphosphate (ADP) by an enzymatic process that utilizes carbamyl phosphate as the phosphoryl donor is a technique used to regenerate expensive cofactors. The process allows complex enzymatic reactions to be considered as candidates for large-scale continuous processes.
Constant Stress Drop Fits Earthquake Surface Slip-Length Data
NASA Astrophysics Data System (ADS)
Shaw, B. E.
2011-12-01
Slip at the surface of the Earth provides a direct window into the earthquake source. A longstanding controversy surrounds the scaling of average surface slip with rupture length, which shows the puzzling feature of continuing to increase with rupture length for lengths many times the seismogenic width. Here we show that a more careful treatment of how ruptures transition from small circular ruptures to large rectangular ruptures combined with an assumption of constant stress drop provides a new scaling law for slip versus length which (1) does an excellent job fitting the data, (2) gives an explanation for the large crossover lengthscale at which slip begins to saturate, and (3) supports constant stress drop scaling which matches that seen for small earthquakes. We additionally discuss how the new scaling can be usefully applied to seismic hazard estimates.
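As background for the scaling argument, the standard constant-stress-drop relations for the two end-member rupture geometries are reproduced below; they are textbook approximations, not the paper's new scaling law (which is not given in the abstract).

```latex
% Small, roughly circular rupture of radius a (Eshelby crack), so average slip grows with length:
\Delta\sigma \;\approx\; \frac{7\pi}{16}\,\mu\,\frac{\bar{D}}{a}
\qquad\Longrightarrow\qquad \bar{D} \propto L .
% Very long rupture of down-dip width W (strike-slip approximation), so slip would saturate:
\Delta\sigma \;\approx\; \frac{2}{\pi}\,\mu\,\frac{\bar{D}}{W}.
```

The abstract's point is that a more careful treatment of the transition between these two regimes, still assuming constant stress drop, reproduces the observed continued growth of surface slip well beyond the seismogenic width.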
Bioprocessing Data for the Production of Marine Enzymes
Sarkar, Sreyashi; Pramanik, Arnab; Mitra, Anindita; Mukherjee, Joydeep
2010-01-01
This review is a synopsis of different bioprocess engineering approaches adopted for the production of marine enzymes. Three major modes of operation: batch, fed-batch and continuous have been used for production of enzymes (such as protease, chitinase, agarase, peroxidase) mainly from marine bacteria and fungi on a laboratory bioreactor and pilot plant scales. Submerged, immobilized and solid-state processes in batch mode were widely employed. The fed-batch process was also applied in several bioprocesses. Continuous processes with suspended cells as well as with immobilized cells have been used. Investigations in shake flasks were conducted with the prospect of large-scale processing in reactors. PMID:20479981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C., E-mail: terwilliger@lanl.gov; Bricogne, Gerard; Los Alamos National Laboratory, Mail Stop M888, Los Alamos, NM 87507
Macromolecular structures deposited in the PDB can and should be continually reinterpreted and improved on the basis of their accompanying experimental X-ray data, exploiting the steady progress in methods and software that the deposition of such data into the PDB on a massive scale has made possible. Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering.
Laboratory Experiments On Continually Forced 2d Turbulence
NASA Astrophysics Data System (ADS)
Wells, M. G.; Clercx, H. J. H.; Van Heijst, G. J. F.
There has been much recent interest in the advection of tracers by 2D turbulence in geophysical flows. While there is a large body of literature on decaying 2D turbulence or forced 2D turbulence in unbounded domains, there have been very few studies of forced turbulence in bounded domains. In this study we present new experimental results from a continuously forced quasi-2D turbulent field. The experiments are performed in a square Perspex tank filled with water. The flow is made quasi-2D by a steady background rotation. The rotation rate of the tank has a small (<8%) sinusoidal perturbation which leads to the periodic formation of eddies in the corners of the tank. When the oscillation period of the perturbation is greater than an eddy roll-up time-scale, dipole structures are observed to form. The dipoles can migrate away from the walls, and the interior of the tank is continually filled with vortices. From experimental visualizations, the length scale of the vortices appears to be largely controlled by the initial formation mechanism, and large-scale structures are not observed to form at large times. Thus the experiments provide a simple way of creating a continuously forced 2D turbulent field. The resulting structures are in contrast with most previous laboratory experiments on 2D turbulence, which have investigated decaying turbulence and have observed the formation of large-scale structure. In those experiments, decaying turbulence has been produced by a variety of methods, such as the decay of turbulence in the wake of a comb of rods (Maassen et al. 1999), the organization of vortices in thin conducting liquids (Cardoso et al. 1994), or rotating systems subjected to sudden changes in angular rotation rate (Konijnenberg et al. 1998). Results of dye visualizations, particle tracking experiments and a direct numerical simulation will be presented and discussed in terms of their oceanographic application. Bibliography: Cardoso, O., Marteau, D. & Tabeling, P., Quantitative experimental study of the free decay of quasi-two-dimensional turbulence, Phys. Rev. E 49, 454 (1994). Maassen, S. R., Clercx, H. J. H. & van Heijst, G. J. F., Decaying quasi-2D turbulence in a stratified fluid with circular boundaries, Europhys. Lett. 46, 339-345 (1999). van de Konijnenberg, J. A., Flor, J. B. & van Heijst, G. J. F., Decaying quasi-two-dimensional viscous flow on a square domain, Phys. Fluids 10, 595-606 (1998).
Mechanisms of diurnal precipitation over the US Great Plains: a cloud resolving model perspective
NASA Astrophysics Data System (ADS)
Lee, Myong-In; Choi, Ildae; Tao, Wei-Kuo; Schubert, Siegfried D.; Kang, In-Sik
2010-02-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program’s Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.
NASA Technical Reports Server (NTRS)
Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.
2010-01-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
A new large-scale manufacturing platform for complex biopharmaceuticals.
Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer
2012-12-01
Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such molecules, which are sometimes inherently unstable, it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which require innovative solutions. In order to maximize yield, process efficiency, and facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with a 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.
Testing the gravitational instability hypothesis?
NASA Technical Reports Server (NTRS)
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Ω (or, more generally, an estimate of β ≡ Ω^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Ω or β estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated β approaches the true value in such cases, and in our numerical simulations the estimated β values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
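In linear gravitational instability theory, the relation referred to here takes the familiar form below (with galaxy overdensity δ_g = bδ and distances measured in velocity units so that the Hubble factor drops out); this is standard background included only to unpack the β ≡ Ω^0.6/b notation, not a result of the paper.

```latex
\nabla \cdot \mathbf{v} \;=\; -\,f(\Omega)\,\delta
\;\approx\; -\,\Omega^{0.6}\,\delta
\;=\; -\,\beta\,\delta_g ,
\qquad \beta \equiv \frac{\Omega^{0.6}}{b}.
```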
Quality Control for Scoring Tests Administered in Continuous Mode: An NCME Instructional Module
ERIC Educational Resources Information Center
Allalouf, Avi; Gutentag, Tony; Baumer, Michal
2017-01-01
Quality control (QC) in testing is paramount. QC procedures for tests can be divided into two types. The first type, one that has been well researched, is QC for tests administered to large population groups on few administration dates using a small set of test forms (e.g., large-scale assessment). The second type is QC for tests, usually…
Large-scale fiber release and equipment exposure experiments. [aircraft fires
NASA Technical Reports Server (NTRS)
Pride, R. A.
1980-01-01
Outdoor tests were conducted to determine the amount of fiber released in a full scale fire and trace its dissemination away from the fire. Equipment vulnerability to fire released fibers was assessed through shock tests. The greatest fiber release was observed in the shock tube where the composite was burned with a continuous agitation to total consumption. The largest average fiber length obtained outdoors was 5 mm.
Global-scale hydrological response to future glacier mass loss
NASA Astrophysics Data System (ADS)
Huss, Matthias; Hock, Regine
2018-01-01
Worldwide glacier retreat and associated future runoff changes raise major concerns over the sustainability of global water resources [1-4], but global-scale assessments of glacier decline and the resulting hydrological consequences are scarce [5,6]. Here we compute global glacier runoff changes for 56 large-scale glacierized drainage basins to 2100 and analyse the glacial impact on streamflow. In roughly half of the investigated basins, the modelled annual glacier runoff continues to rise until a maximum ('peak water') is reached, beyond which runoff steadily declines. In the remaining basins, this tipping point has already been passed. Peak water occurs later in basins with larger glaciers and higher ice-cover fractions. Typically, future glacier runoff increases in early summer but decreases in late summer. Although most of the 56 basins have less than 2% ice coverage, by 2100 one-third of them might experience runoff decreases greater than 10% due to glacier mass loss in at least one month of the melt season, with the largest reductions in central Asia and the Andes. We conclude that, even in large-scale basins with minimal ice-cover fraction, the downstream hydrological effects of continued glacier wastage can be substantial, but the magnitudes vary greatly among basins and throughout the melt season.
Rolling up of Large-scale Laminar Vortex Ring from Synthetic Jet Impinging onto a Wall
NASA Astrophysics Data System (ADS)
Xu, Yang; Pan, Chong; Wang, Jinjun; Flow Control Lab Team
2015-11-01
A vortex ring impinging onto a wall exhibits a wide range of interesting behaviors. The present work is devoted to an experimental investigation of a series of small-scale vortex rings impinging onto a wall. These laminar vortex rings were generated by a piston-cylinder driven synthetic jet in a water tank. Laser Induced Fluorescence (LIF) and Particle Image Velocimetry (PIV) were used for flow visualization and quantification. A special scenario of vortex dynamics was observed for the first time: a large-scale laminar vortex ring is formed above the wall, on the outboard side of the jet. This large-scale structure is stable in topology and continuously grows in strength and size over time, thus dominating the dynamics of the near-wall flow. To quantify its spatial and temporal characteristics, Finite-Time Lyapunov Exponent (FTLE) fields were calculated from the PIV velocity fields. It is shown that the flow pattern revealed by the FTLE fields is similar to the visualization. The size of this large-scale vortex ring can be up to one order of magnitude larger than the jet vortices, and its rolling-up speed and entrainment strength were correlated with the constant vorticity flux issued from the jet. This work was supported by the National Natural Science Foundation of China (Grants No. 11202015 and 11327202).
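As an illustration of the FTLE diagnostic mentioned above, the sketch below computes a forward-time FTLE field for an analytically prescribed double-gyre flow used as a stand-in for the PIV velocity data; the flow, grid, integration time and all parameter values are assumptions for demonstration and not the authors' setup.

```python
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Classic unsteady double-gyre test flow (stand-in for measured PIV fields)."""
    a = eps * np.sin(om * t)
    b = 1.0 - 2.0 * a
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle_field(nx=201, ny=101, T=15.0, dt=0.1):
    """Advect a grid of tracers, then take the largest eigenvalue of the
    Cauchy-Green tensor of the flow map to obtain the forward-time FTLE."""
    x = np.linspace(0.0, 2.0, nx)
    y = np.linspace(0.0, 1.0, ny)
    X, Y = np.meshgrid(x, y)                 # arrays of shape (ny, nx)
    Xp, Yp = X.copy(), Y.copy()
    t = 0.0
    for _ in range(int(T / dt)):             # midpoint (RK2) tracer integration
        u1, v1 = velocity(Xp, Yp, t)
        u2, v2 = velocity(Xp + 0.5 * dt * u1, Yp + 0.5 * dt * v1, t + 0.5 * dt)
        Xp += dt * u2
        Yp += dt * v2
        t += dt
    dXdy, dXdx = np.gradient(Xp, y, x)       # flow-map gradients
    dYdy, dYdx = np.gradient(Yp, y, x)
    C11 = dXdx**2 + dYdx**2                  # Cauchy-Green tensor components
    C12 = dXdx * dXdy + dYdx * dYdy
    C22 = dXdy**2 + dYdy**2
    lam_max = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22) ** 2 + C12**2)
    return np.log(lam_max) / (2.0 * abs(T))

ftle = ftle_field()                          # 2D array; ridges mark transport barriers
```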
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) family enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations, starting with high-frequency fluctuations. We make use of the Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales, in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
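A much-simplified sketch of scale-by-scale fluctuation analysis with Daubechies wavelets is given below; it uses the PyWavelets package on a synthetic fat-tailed return series and only reports the RMS detail coefficient per dyadic scale, so it is a rough stand-in for, not a reproduction of, the Db4/Db6 trend-removal procedure described above.

```python
import numpy as np
import pywt

def detail_rms_by_scale(returns, wavelet="db4"):
    """Discrete wavelet decomposition of a return series; returns the RMS of the
    detail coefficients at each dyadic level (level 1 = finest scale)."""
    level = pywt.dwt_max_level(len(returns), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(returns, wavelet, level=level)
    details = coeffs[1:][::-1]        # reorder as D1 (finest) ... Dn (coarsest)
    return {j + 1: float(np.sqrt(np.mean(d**2))) for j, d in enumerate(details)}

# Toy usage: synthetic fat-tailed log-returns standing in for high-frequency data.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=3, size=4096)
print(detail_rms_by_scale(returns, wavelet="db4"))
print(detail_rms_by_scale(returns, wavelet="db6"))
```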
Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa'avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa
2017-06-01
Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG), and hypoxaemia is the major complication causing death in childhood pneumonia; hypoxaemia is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low- and middle-income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult-to-access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by lack of reliable power, staff training and other basic services. We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation because the considerations and steps involved have wider implications for health systems in other countries. The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post-intervention. Taking a continuous quality improvement approach can be transformational for remote health services.
Kongelf, Anine; Bandewar, Sunita V S; Bharat, Shalini; Collumbien, Martine
2015-01-01
In the last decade, community mobilisation (CM) interventions targeting female sex workers (FSWs) have been scaled-up in India's national response to the HIV epidemic. This included the Bill and Melinda Gates Foundation's Avahan programme which adopted a business approach to plan and manage implementation at scale. With the focus of evaluation efforts on measuring effectiveness and health impacts there has been little analysis thus far of the interaction of the CM interventions with the sex work industry in complex urban environments. Between March and July 2012 semi-structured, in-depth interviews and focus group discussions were conducted with 63 HIV intervention implementers, to explore challenges of HIV prevention among FSWs in Mumbai. A thematic analysis identified contextual factors that impact CM implementation. Large-scale interventions are not only impacted by, but were shown to shape the dynamic social context. Registration practices and programme monitoring were experienced as stigmatising, reflected in shifting client preferences towards women not disclosing as 'sex workers'. This combined with urban redevelopment and gentrification of traditional red light areas, forcing dispersal and more 'hidden' ways of solicitation, further challenging outreach and collectivisation. Participants reported that brothel owners and 'pimps' continued to restrict access to sex workers and the heterogeneous 'community' of FSWs remains fragmented with high levels of mobility. Stakeholder engagement was poor and mobilising around HIV prevention not compelling. Interventions largely failed to respond to community needs as strong target-orientation skewed activities towards those most easily measured and reported. Large-scale interventions have been impacted by and contributed to an increasingly complex sex work environment in Mumbai, challenging outreach and mobilisation efforts. Sex workers remain a vulnerable and disempowered group needing continued support and more comprehensive services.
Gu, Xun; Wang, Yufeng; Gu, Jianying
2002-06-01
The classical (two-round) hypothesis of vertebrate genome duplication proposes two successive whole-genome duplication(s) (polyploidizations) predating the origin of fishes, a view now being seriously challenged. As the debate largely concerns the relative merits of the 'big-bang mode' theory (large-scale duplication) and the 'continuous mode' theory (constant creation by small-scale duplications), we tested whether a significant proportion of paralogous genes in the contemporary human genome was indeed generated in the early stage of vertebrate evolution. After an extensive search of major databases, we dated 1,739 gene duplication events from the phylogenetic analysis of 749 vertebrate gene families. We found a pattern characterized by two waves (I, II) and an ancient component. Wave I represents a recent gene family expansion by tandem or segmental duplications, whereas wave II, a rapid paralogous gene increase in the early stage of vertebrate evolution, supports the idea of genome duplication(s) (the big-bang mode). Further analysis indicated that large- and small-scale gene duplications both make a significant contribution during the early stage of vertebrate evolution to build the current hierarchy of the human proteome.
Fermi Gamma-Ray Space Telescope Science Overview
NASA Technical Reports Server (NTRS)
Thompson, David J.
2010-01-01
After more than 2 years of science operations, the Fermi Gamma-ray Space Telescope continues to survey the high-energy sky on a daily basis. In addition to the more than 1400 sources found in the first Fermi Large Area Telescope Catalog (1FGL), new results continue to emerge. Some of these are: (1) Large-scale diffuse emission suggests possible activity from the Galactic Center region in the past; (2) a gamma-ray nova was found, indicating particle acceleration in this binary system; and (3) the Crab Nebula, long thought to be a steady source, has varied in the energy ranges seen by both Fermi instruments.
Dorazio, Robert; Delampady, Mohan; Dey, Soumen; Gopalaswamy, Arjun M.; Karanth, K. Ullas; Nichols, James D.
2017-01-01
Conservationists and managers are continually under pressure from the public, the media, and political policy makers to provide “tiger numbers,” not just for protected reserves, but also for large spatial scales, including landscapes, regions, states, nations, and even globally. Estimating the abundance of tigers within relatively small areas (e.g., protected reserves) is becoming increasingly tractable (see Chaps. 9 and 10), but doing so for larger spatial scales still presents a formidable challenge. Those who seek “tiger numbers” are often not satisfied by estimates of tiger occupancy alone, regardless of the reliability of the estimates (see Chaps. 4 and 5). As a result, wherever tiger conservation efforts are underway, either substantially or nominally, scientists and managers are frequently asked to provide putative large-scale tiger numbers based either on a total count or on an extrapolation of some sort (see Chaps. 1 and 2).
DIESEL EXHAUST PARTICLES ENHANCE INFLUENZA VIRUS INFECTIVITY BY INCREASING VIRUS ATTACHMENT
Despite vaccination and antiviral therapies, influenza infections continue to cause large scale morbidity and mortality every year. Several factors, such as age and nutritional status can affect the incidence and severity of influenza infections. Moreover, exposure to air polluta...
Computation and Theory in Large-Scale Optimization
1993-01-13
Sang Jin Lee, Research Assistant; Laura Morley, Research Assistant; Yonca A. Ozge, Research Assistant; Stephen M. Robinson, Professor; Hichem... other participants. M.N. Azadez, S.J. Lee, Y.A. Ozge and H. Sellami are continuing students in the doctoral program (in Industrial Engineering except
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey
2017-04-01
Analysis on the Critical Rainfall Value for Predicting Large Scale Landslides Caused by Heavy Rainfall in Taiwan. An accumulated rainfall amount of more than 2,900 mm was recorded within 3 continuous days during Typhoon Morakot in August 2009. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event indicated more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized based on their disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. Characteristics and mechanisms of large-scale landslides were collected on the basis of field investigation integrated with GPS/GIS/RS techniques. In order to decrease the risk of large-scale landslides on slopeland, a slopeland conservation strategy and a critical rainfall database should be established and put into practice as soon as possible. Meanwhile, establishing critical rainfall values for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under extreme climate change during the past 10 years are the required issues addressed by this research. Hopefully, the results developed from this research can be used as a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Wang, Minghuai; Ghan, Steven J.
Aerosol-cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on the liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamical regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < -25 hPa/d) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is as large as that in stratocumulus regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (> 0.1 mm/d) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes than globally, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
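A minimal sketch of the regime-conditioned sensitivity analysis described here might look like the following; the ω500 bin edges other than the -25 hPa/day ascent threshold are invented for illustration, and regressing ln(LWP) on ln(CCN) is only one simple way to define the sensitivity.

```python
import numpy as np

def lwp_sensitivity_by_regime(omega500, ccn, lwp,
                              bin_edges=(-100.0, -25.0, 0.0, 25.0, 100.0)):
    """Bin monthly-mean columns by 500 hPa vertical pressure velocity (hPa/day) and
    estimate d ln(LWP) / d ln(CCN) in each bin with a least-squares fit."""
    omega500, ccn, lwp = (np.ravel(a) for a in (omega500, ccn, lwp))
    sensitivity = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (omega500 >= lo) & (omega500 < hi) & (ccn > 0) & (lwp > 0)
        if mask.sum() < 10:                      # skip sparsely populated regimes
            continue
        slope, _ = np.polyfit(np.log(ccn[mask]), np.log(lwp[mask]), 1)
        sensitivity[(lo, hi)] = slope
    return sensitivity

# Toy usage with random stand-in fields (one value per grid cell and month).
rng = np.random.default_rng(1)
w = rng.uniform(-60, 60, 5000)
n = rng.lognormal(4, 0.5, 5000)
q = rng.lognormal(-2, 0.3, 5000)
print(lwp_sensitivity_by_regime(w, n, q))
```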
NASA Astrophysics Data System (ADS)
Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.
2013-04-01
The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.
NASA Technical Reports Server (NTRS)
Kopardekar, Parimal Hemchandra
2016-01-01
Just a year ago we laid out the UTM challenges and NASA's proposed solutions. During the past year, NASA's goal has continued to be to conduct research, development and testing to identify airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace. Significant progress has been made, and NASA is continuing to move forward.
Wang, Guan; Zhao, Junfei; Haringa, Cees; Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Deshmukh, Amit T; van Gulik, Walter; Heijnen, Joseph J; Noorman, Henk J
2018-05-01
In a 54 m3 large-scale penicillin fermentor, the cells experience substrate gradient cycles at timescales comparable to the global mixing time of about 20-40 s. Here, we used an intermittent feeding regime (IFR) and a two-compartment reactor (TCR) to mimic these substrate gradients in laboratory-scale continuous cultures. The IFR was applied to simulate the substrate dynamics experienced by the cells at full scale at timescales of tens of seconds to minutes (30 s, 3 min and 6 min), while the TCR was designed to simulate substrate gradients at an applied mean residence time (τc) of 6 min. A biological systems analysis of the response of an industrial high-yielding P. chrysogenum strain has been performed in these continuous cultures. Compared to an undisturbed continuous feeding regime in a single reactor, the penicillin productivity (qPenG) was reduced in all scale-down simulators. The dynamic metabolomics data indicated that in the IFRs, the cells accumulated high levels of the central metabolites during the feast phase to actively cope with external substrate deprivation during the famine phase. In contrast, in the TCR system, the storage pool (e.g. mannitol and arabitol) constituted a large contribution of the carbon supply in the non-feed compartment. Further, transcript analysis revealed that all scale-down simulators gave different expression levels of the glucose/hexose transporter genes and the penicillin gene clusters. The results showed that qPenG did not correlate well with exposure to the substrate regimes (excess, limitation and starvation), but there was a clear inverse relation between qPenG and the intracellular glucose level. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
NASA Astrophysics Data System (ADS)
Santillán, Moisés; Qian, Hong
2013-01-01
We investigate the internal consistency of a recently developed mathematical thermodynamic structure across scales, between a continuous stochastic nonlinear dynamical system, i.e., a diffusion process with Langevin and Fokker-Planck equations, and its emergent discrete, inter-attractoral Markov jump process. We analyze how the system’s thermodynamic state functions, e.g., free energy F, entropy S, entropy production e_p, free energy dissipation Ḟ, etc., are related when the continuous system is described with coarse-grained discrete variables. It is shown that the thermodynamics derived from the underlying, detailed continuous dynamics gives rise to exactly the free-energy representation of Gibbs and Helmholtz. That is, the system’s thermodynamic structure is the same as if one only takes a middle road and starts with the natural discrete description, with the corresponding transition rates empirically determined. By natural we mean in the thermodynamic limit of a large system, with an inherent separation of time scales between inter- and intra-attractoral dynamics. This result generalizes a fundamental idea from chemistry, and the theory of Kramers, by incorporating thermodynamics: while a mechanical description of a molecule is in terms of continuous bond lengths and angles, chemical reactions are phenomenologically described by a discrete representation, in terms of exponential rate laws and a stochastic thermodynamics.
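As a minimal numerical illustration of the discrete, coarse-grained side of this correspondence, the sketch below computes the stationary distribution, Gibbs entropy, and steady-state entropy production rate of a small Markov jump process; the three-state rate matrix is an assumed toy example, not taken from the paper.

```python
# Hedged sketch: thermodynamic state functions of a coarse-grained (discrete)
# Markov jump process, using a hypothetical 3-state rate matrix.
import numpy as np

# Hypothetical transition rates k[i, j]: rate of jumping from state i to state j.
k = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])

# Build the generator and solve for the stationary distribution p (p @ Q = 0).
Q = k.copy()
np.fill_diagonal(Q, -k.sum(axis=1))
w, v = np.linalg.eig(Q.T)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

# Steady-state entropy production rate (Schnakenberg form):
# e_p = sum_{i<j} (p_i k_ij - p_j k_ji) * ln(p_i k_ij / (p_j k_ji))
ep = 0.0
for i in range(3):
    for j in range(i + 1, 3):
        if k[i, j] > 0 and k[j, i] > 0:
            J_ij = p[i] * k[i, j] - p[j] * k[j, i]
            ep += J_ij * np.log((p[i] * k[i, j]) / (p[j] * k[j, i]))

S = -np.sum(p * np.log(p))   # Gibbs/Shannon entropy of the coarse-grained state
print("stationary p:", p, " entropy S:", S, " entropy production e_p:", ep)
```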
De-mything the Cahokia catlinite trade
Emerson, T.E.; Hughes, R.E.
2001-01-01
Western-derived formalistic economic models continue to pervade much of the discussion relating to the political and economic history of noncapitalist societies. The rise of complex societies across the world has been intimately tied to such economic variables. In North America, the emergence of Cahokia and other Mississippian chiefdoms is also often linked to these factors. Such models rely on the large-scale movement of materials between distant locales. Critical to these approaches is the demonstration that items identified as "exotic" are nonlocal. Only archaeometric analysis can make this determination. This paper continues our research in geologic sourcing through X-ray diffraction and spectroscopic analysis (Emerson and Hughes 2000). We examine red stone from the American Bottom that was identified macroscopically as catlinite and as part of the panregional Cahokia trade network. We prove that the Cahokian "catlinite," in fact, is not catlinite and is from one or more other possible sources. This proof demonstrates catlinite, at the earliest, entered the American Bottom with Oneota peoples in the fourteenth century, and more likely, with protohistoric or historic groups in the sixteenth to seventeenth centuries. This geologic sourcing research continues to cast doubt on the role and importance of large-scale, long-distance economic transactions in Cahokian history.
Diego A. Riveros-Iregui; Brian L. McGlynn; Howard E. Epstein; Daniel L. Welsch
2008-01-01
Soil CO2 efflux is a large respiratory flux from terrestrial ecosystems and a critical component of the global carbon (C) cycle. Lack of process understanding of the spatiotemporal controls on soil CO2 efflux limits our ability to extrapolate from fluxes measured at point scales to scales useful for corroboration with other ecosystem level measures of C exchange....
Compliant Robotic Structures. Part 2
1986-07-01
Contents (excerpt): Nonaxially Homogeneous Stresses and Strains; Parametric Studies; References; III. Large Deflections of Continuous Elastic Structures; ... Appendix C: Computer Program for the Element String. SUMMARY: This is the second year report, which is part of a three-year study on compliant ... ratios as high as 10/1 for laboratory-scale models and up to 3/1 for full-scale prototype arms. The first two years of this study have involved the ...
Wenchi Jin; Hong S. He; Stephen R. Shifley; Wen J. Wang; John M. Kabrick; Brian K. Davidson
2018-01-01
Historical fire regimes in the central United States maintained open-canopy shortleaf pine-oak woodlands on xeric sites. Following large-scale harvest and fire suppression, those woodlands grew denser with more continuous canopy cover, and they gained mesic species at the expense of shortleaf pine. There is high interest in restoring shortleaf pine-oak woodlands; most...
Coastal Aerosol Distribution by Data Assimilation
2006-09-30
... useful for forecasts of dust storms in areas downwind of the large deserts of the world: Arabian Gulf, Sea of Japan, China Sea, Mediterranean Sea ... and the Tropical Atlantic Ocean. NAAPS also accurately predicts the fate of large-scale smoke and pollution plumes. With its global and continuous ... The collaboration with Scripps Institution of Oceanography and the University of Warsaw has led to the addition of a sea salt component to NAAPS. The ...
JPRS Report, China: QIUSHI SEEKING TRUTH no 14, 16 July 1989.
1989-08-28
... permit the development of the individual and private sectors of the economy, and the development of Sino-foreign joint ventures and foreign enterprises ... a large quantity of rural labor force is shifting into nonfarming industries and moving into cities. The concentration of land and large-scale ... stabilized, and land can also be continuously concentrated as more and more rural labor force is shifted to nonfarming industries. Therefore, in my ...
Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective
NASA Astrophysics Data System (ADS)
Cheng, W.; Samtaney, R.
2014-01-01
The debate whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling law assumptions (the log-law or the power-law by George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Re_θ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
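To make the two scaling hypotheses concrete, the following sketch evaluates a log-law and a power-law mean-velocity profile together with the scaled gradient y+ dU+/dy+, which is constant only under the log law. The constants kappa, B, C_p and gamma are illustrative assumptions, not the values fitted by George and Castillo or used in the LES.

```python
# Hedged sketch: log-law vs. power-law mean-velocity profiles and their
# scaled gradients, with assumed (illustrative) constants.
import numpy as np

kappa, B = 0.41, 5.0          # assumed log-law constants
C_p, gamma = 8.5, 0.14        # assumed power-law constants

y_plus = np.logspace(2, 5, 50)            # inner-scaled wall distance
u_log = np.log(y_plus) / kappa + B        # U+ = (1/kappa) ln y+ + B
u_pow = C_p * y_plus**gamma               # U+ = C (y+)^gamma

# Scaled gradient y+ dU+/dy+: constant (1/kappa) for the log law,
# growing like gamma*C*(y+)^gamma for the power law.
grad_log = np.full_like(y_plus, 1.0 / kappa)
grad_pow = gamma * C_p * y_plus**gamma

for yp, gl, gp in zip(y_plus[::10], grad_log[::10], grad_pow[::10]):
    print(f"y+={yp:9.1f}  y+ dU+/dy+ (log)={gl:.3f}  (power)={gp:.3f}")
```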
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
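As a hedged illustration of how a 0-1 knapsack can mediate the diversity/accuracy trade-off, the sketch below selects base classifiers by dynamic programming, treating accuracy as value and an integer redundancy score as weight; the scores and the capacity are hypothetical, and this is not the paper's tailored algorithm.

```python
# Hedged sketch: 0-1 knapsack selection of base classifiers.
def knapsack_select(values, weights, capacity):
    """Classic 0-1 knapsack DP; returns (best total value, chosen indices)."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if weights[i - 1] <= c:
                cand = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                if cand > dp[i][c]:
                    dp[i][c] = cand
    # Backtrack to recover the selected classifiers.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

accuracies = [0.72, 0.68, 0.75, 0.70, 0.66]   # value of each base classifier (assumed)
redundancy = [3, 2, 4, 2, 1]                   # integer "diversity cost" per classifier (assumed)
best, picked = knapsack_select(accuracies, redundancy, capacity=7)
print("total value", round(best, 2), "selected classifiers", picked)
```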
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Loads (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that the NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near optimal solutions. The best solution obtained among all the GA executions compared had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.
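The decision problem underlying such studies can be stated compactly: choose a treatment level per subwatershed so that loads at a compliance point stay below target at minimum cost. The brute-force sketch below assumes a single compliance point and linear load-response coefficients purely for illustration; it is the combinatorial growth of exactly this search that motivates GA- or NIMS-type methods.

```python
# Hedged sketch: minimum-cost BMP treatment levels meeting a load target.
# Single compliance point, linear responses, and all numbers are assumptions.
import itertools

levels = [0.0, 0.25, 0.5, 0.75, 1.0]        # candidate BMP treatment levels
base_load = [40.0, 25.0, 35.0]              # untreated load from each subwatershed (assumed)
unit_cost = [1.0, 2.0, 1.5]                 # cost per unit of treatment (assumed)
target = 60.0                               # allowable total load at the compliance point

best = None
for combo in itertools.product(levels, repeat=len(base_load)):
    load = sum(b * (1.0 - x) for b, x in zip(base_load, combo))
    if load <= target:
        cost = sum(c * x for c, x in zip(unit_cost, combo))
        if best is None or cost < best[0]:
            best = (cost, combo, load)

print("min cost %.2f at levels %s (resulting load %.1f)" % best)
```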
The Large Ring Laser G for Continuous Earth Rotation Monitoring
NASA Astrophysics Data System (ADS)
Schreiber, K. U.; Klügel, T.; Velikoseltsev, A.; Schlüter, W.; Stedman, G. E.; Wells, J.-P. R.
2009-09-01
Ring laser gyroscopes exploit the Sagnac effect and measure absolute rotation. They do not require an external reference frame and therefore provide an independent method to monitor Earth rotation. Large-scale versions of these gyroscopes promise to eventually provide a resolution for measuring variations in the Earth rotation rate similar to that of the established methods based on VLBI and GNSS. This would open the door to a continuous monitoring of LOD (Length of Day) and polar motion, which is not yet available today. Another advantage is the access to the sub-daily frequency regime of Earth rotation. The ring laser “G” (Grossring), located at the Geodetic Observatory Wettzell (Germany), is the most advanced realization of such a large gyroscope. This paper outlines the current sensor design and properties.
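For orientation, the sketch below evaluates the standard Sagnac relation delta_f = 4 A (n · Omega) / (lambda * P) for a square ring; the side length, wavelength, and latitude are assumed illustrative values rather than the exact G specifications.

```python
# Hedged sketch: Sagnac beat frequency of a large horizontally mounted ring laser.
import math

side = 4.0                         # ring side length [m] (assumed)
A = side**2                        # enclosed area [m^2]
P = 4 * side                       # perimeter [m]
lam = 633e-9                       # HeNe laser wavelength [m] (assumed)
omega_earth = 7.2921150e-5         # Earth rotation rate [rad/s]
lat = math.radians(49.14)          # instrument latitude (assumed)

# For a horizontal ring, the effective rotation rate is the projection of
# Earth rotation onto the ring normal: Omega * sin(latitude).
delta_f = 4 * A * omega_earth * math.sin(lat) / (lam * P)
print(f"Sagnac beat frequency: {delta_f:.1f} Hz")   # of order a few hundred Hz
```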
NASA Astrophysics Data System (ADS)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterization schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent updrafts and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of updrafts and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
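A much-reduced, steady-state single-plume fragment of the mass-flux idea is sketched below, dM/dz = (eps - delta) M and dphi_u/dz = -eps (phi_u - phi_env); the entrainment/detrainment rates and profiles are assumptions, and this is not the prognostic, multi-plume EDMF scheme developed in the paper.

```python
# Hedged sketch: steady-state plume mass flux and tracer with entrainment and
# detrainment, integrated upward with a simple Euler step.
import numpy as np

z = np.linspace(0.0, 2000.0, 201)         # height [m]
dz = z[1] - z[0]
eps, delta = 1.0e-3, 0.6e-3               # entrainment/detrainment rates [1/m] (assumed)
phi_env = 300.0 + 0.003 * z               # environmental tracer profile (assumed)

M = np.empty_like(z)
phi_u = np.empty_like(z)
M[0], phi_u[0] = 0.05, 301.0              # surface mass flux and updraft value (assumed)
for k in range(len(z) - 1):
    M[k + 1] = M[k] + dz * (eps - delta) * M[k]
    phi_u[k + 1] = phi_u[k] - dz * eps * (phi_u[k] - phi_env[k])

print("at z = 500 m: M =", round(M[50], 4), " phi_u =", round(phi_u[50], 2))
```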
Ice stream motion facilitated by a shallow-deforming and accreting bed
Spagnolo, Matteo; Phillips, Emrys; Piotrowski, Jan A.; Rea, Brice R.; Clark, Chris D.; Stokes, Chris R.; Carr, Simon J.; Ely, Jeremy C.; Ribolini, Adriano; Wysota, Wojciech; Szuman, Izabela
2016-01-01
Ice streams drain large portions of ice sheets and play a fundamental role in governing their response to atmospheric and oceanic forcing, with implications for sea-level change. The mechanisms that generate ice stream flow remain elusive. Basal sliding and/or bed deformation have been hypothesized, but ice stream beds are largely inaccessible. Here we present a comprehensive, multi-scale study of the internal structure of mega-scale glacial lineations (MSGLs) formed at the bed of a palaeo ice stream. Analyses were undertaken at macro- and microscales, using multiple techniques including X-ray tomography, thin sections and ground penetrating radar (GPR) acquisitions. Results reveal homogeneity in stratigraphy, kinematics, granulometry and petrography. The consistency of the physical and geological properties demonstrates a continuously accreting, shallow-deforming, bed and invariant basal conditions. This implies that ice stream basal motion on soft sediment beds during MSGL formation is accommodated by plastic deformation, facilitated by continuous sediment supply and an inefficient drainage system. PMID:26898399
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
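The core bookkeeping behind local time stepping can be sketched independently of the Newmark discretization: compute a CFL-limited step per element and assign elements to power-of-two substepping levels. The element sizes, wave speed, and CFL constant below are assumptions, and the sketch does not implement the multilevel LTS-Newmark update itself.

```python
# Hedged sketch: per-element stable time steps and power-of-two LTS levels.
import math

wave_speed = 3000.0                    # [m/s] (assumed)
cfl = 0.5                              # stability constant (assumed)
element_sizes = [50.0, 48.0, 10.0, 2.0, 1.5, 45.0]   # local mesh sizes [m] (assumed)

dt_local = [cfl * h / wave_speed for h in element_sizes]
dt_global = max(dt_local)              # the coarsest elements set the outer step

levels = []
for dt in dt_local:
    p = max(1, math.ceil(dt_global / dt))      # substeps this element needs
    level = math.ceil(math.log2(p))            # round up to a power-of-two level
    levels.append(level)

for h, dt, lvl in zip(element_sizes, dt_local, levels):
    print(f"h={h:6.1f} m  dt_stable={dt:.2e} s  level={lvl}  substeps={2**lvl}")
```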
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
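A minimal numerical check of the replica identity E[ln Z] = lim_{n->0} (E[Z^n] - 1)/n is sketched below for an assumed toy partition function Z = exp(-beta x^2) with Gaussian x, for which both sides equal -beta; this only illustrates the analytic continuation in n and is not one of the models analyzed in the paper.

```python
# Hedged sketch: Monte Carlo check of the replica limit for a toy model.
import numpy as np

rng = np.random.default_rng(0)
beta = 0.3
x = rng.standard_normal(2_000_000)
Z = np.exp(-beta * x**2)

lhs = np.mean(np.log(Z))                       # quenched average E[ln Z] (= -beta here)
for n in [0.5, 0.1, 0.01, 0.001]:              # approach n -> 0 from above
    rhs = (np.mean(Z**n) - 1.0) / n            # annealed-moment estimator
    print(f"n={n:>6}: (E[Z^n]-1)/n = {rhs:+.4f}   E[ln Z] = {lhs:+.4f}")
```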
Direct and inverse energy cascades in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe
2014-12-01
We present experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps, which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic particle image velocimetry in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical three-dimensional turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in two-dimensional turbulence. At the largest rotation rate, the flow is nearly two-dimensional, and a pure inverse energy cascade is found for the horizontal energy. To describe the scale-by-scale energy budget, we consider a generalization of the Kármán-Howarth-Monin equation to inhomogeneous turbulent flows, in which the energy input is explicitly described as the advection of turbulent energy from the flaps through the surface of the control volume where the measurements are performed.
Universal scaling function in discrete time asymmetric exclusion processes
NASA Astrophysics Data System (ADS)
Chia, Nicholas; Bundschuh, Ralf
2005-03-01
In the universality class of the one-dimensional Kardar-Parisi-Zhang surface growth, Derrida and Lebowitz conjectured the universality of not only the scaling exponents, but of an entire scaling function. Since Derrida and Lebowitz's original publication, this universality has been verified for a variety of continuous time systems in the KPZ universality class. We study the Derrida-Lebowitz scaling function for multi-particle versions of the discrete time Asymmetric Exclusion Process. We find that in this discrete time system the Derrida-Lebowitz scaling function not only properly characterizes the large system size limit, but even accurately describes surprisingly small systems. These results have immediate applications in searching biological sequence databases.
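For concreteness, the sketch below simulates a discrete-time TASEP on a ring with a fully parallel update and measures the mean particle current; the system size, density, and hopping probability are assumed values, and extracting the Derrida-Lebowitz scaling function itself requires substantially more current-fluctuation statistics than this.

```python
# Hedged sketch: discrete-time totally asymmetric exclusion process (TASEP)
# on a ring with a fully parallel update; reports the average current.
import numpy as np

rng = np.random.default_rng(1)
L, density, p_hop, steps = 200, 0.3, 0.75, 5000
occ = np.zeros(L, dtype=bool)
occ[rng.choice(L, int(density * L), replace=False)] = True

hops = 0
for _ in range(steps):
    right = np.roll(occ, -1)
    movable = occ & ~right                          # particle with an empty right neighbor
    move = movable & (rng.random(L) < p_hop)        # parallel stochastic update
    occ = (occ & ~move) | np.roll(move, 1)          # vacate old sites, fill right neighbors
    hops += move.sum()

print("average current per site per step:", hops / (steps * L))
```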
Experimental investigation of large-scale vortices in a freely spreading gravity current
NASA Astrophysics Data System (ADS)
Yuan, Yeping; Horner-Devine, Alexander R.
2017-10-01
A series of laboratory experiments are presented to compare the dynamics of constant-source buoyant gravity currents propagating into laterally confined (channelized) and unconfined (spreading) environments. The plan-form structure of the spreading current and the vertical density and velocity structures on the interface are quantified using the optical thickness method and a combined particle image velocimetry and planar laser-induced fluorescence method, respectively. With lateral boundaries, the buoyant current thickness is approximately constant and Kelvin-Helmholtz instabilities are generated within the shear layer. The buoyant current structure is significantly different in the spreading case. As the current spreads laterally, nonlinear large-scale vortex structures are observed at the interface, which maintain a coherent shape as they propagate away from the source. These structures are continuously generated near the river mouth, have amplitudes close to the buoyant layer thickness, and propagate offshore at speeds approximately equal to the internal wave speed. The observed depth and propagation speed of the instabilities match well with the fastest growing mode predicted by linear stability analysis, but with a shorter wavelength. The spreading flows have much higher vorticity, which is aggregated within the large-scale structures. Secondary instabilities are generated on the leading edge of the braids between the large-scale vortex structures and ultimately break and mix on the lee side of the structures. Analysis of the vortex dynamics shows that lateral stretching intensifies the vorticity in the spreading currents, contributing to higher vorticity within the large-scale structures in the buoyant plume. The large-scale instabilities and vortex structures observed in the present study provide new insights into the origin of internal frontal structures frequently observed in coastal river plumes.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
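The extrapolation step can be illustrated with a simple least-squares fit of estimator values psi(N, T) to the ansatz psi_inf + a/N + b/T; the data below are synthetic by assumption, whereas the paper applies the idea to population-dynamics estimators for the contact process.

```python
# Hedged sketch: infinite-size, infinite-time extrapolation of an estimator
# psi(N, T) ~ psi_inf + a/N + b/T using linear least squares on synthetic data.
import numpy as np

psi_true, a, b = -0.42, 3.0, 8.0               # assumed "true" values for the synthetic data
Ns = np.array([50, 100, 200, 400, 800, 50, 100, 200, 400, 800], dtype=float)
Ts = np.array([100, 100, 100, 100, 100, 400, 400, 400, 400, 400], dtype=float)
rng = np.random.default_rng(2)
psi = psi_true + a / Ns + b / Ts + 0.002 * rng.standard_normal(Ns.size)

# Linear least squares in the unknowns (psi_inf, a, b).
A = np.column_stack([np.ones_like(Ns), 1.0 / Ns, 1.0 / Ts])
coef, *_ = np.linalg.lstsq(A, psi, rcond=None)
print("extrapolated psi_inf = %.4f (true %.4f)" % (coef[0], psi_true))
```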
Large-eddy simulation of a boundary layer with concave streamwise curvature
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1994-01-01
Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.
Self-Organized Evolution of Sandy Coastline Shapes: Connections with Shoreline Erosion Problems
NASA Astrophysics Data System (ADS)
Murray, A. B.; Ashton, A.
2002-12-01
Landward movement of the shoreline severely impacts property owners and communities where structures and infrastructure are built near the coast. While sea level rise will increase the average rate of coastal erosion, even a slight gradient in wave-driven alongshore sediment flux will locally overwhelm that effect, causing either shoreline accretion or enhanced erosion. Recent analysis shows that because of the nonlinear relationship between alongshore sediment flux and the angle between deep water wave crests and local shoreline orientation, in some wave climates a straight coastline is unstable (Ashton et al., Nature, 2001). When deep-water waves approach from angles greater than the one that maximizes alongshore flux, in concave-seaward shoreline segments sediment flux will diverge, causing erosion. Similarly, convex regions such as the crests of perturbations on an otherwise straight shoreline will experience accretion; perturbations will grow. When waves approach from smaller angles, the sign of the relationship between shoreline curvature and shoreline change is reversed, but any deviation from a perfectly straight coastline will still result in alongshore-inhomogeneous shoreline change. A numerical model designed to explore the long-term effects of this instability operating over a spatially extended alongshore domain has shown that as perturbations grow to finite amplitude and interact with each other, large-scale coastline structures can emerge. The character of the local and non-local interactions, and the resulting emergent structures, depends on the wave climate. The 100-km scale capes and cuspate forelands that form much of the coast of the Carolinas, USA, provide one possible natural example. Our modeling suggests that on such a shoreline, continued interactions between large-scale structures will cause continued large-scale change in coastline shape. Consequently, some coastline segments will tend to experience accentuated erosion. Communities established in these areas face discouraging future prospects. Attempts can be made to arrest the shoreline retreat on large scales, for example through large beach nourishment projects or policies that allow pervasive hard stabilization (e.g. seawalls, jetties) along a coastline segment. However, even if such attempts are successful for a significant period of time, the pinning in place of some parts of an otherwise dynamic system will change the large-scale evolution of the coastline, altering the future erosion/accretion experienced at other, perhaps distant, locations. Simple properties of alongshore sediment transport could also be relevant to alongshore-inhomogeneous shoreline change (including erosion 'hot spots') on shorter time scales and smaller spatial scales. We are comparing predictions arising from the modeling, and from analysis of alongshore transport as a function of shoreline orientation, to recent observations of shoreline change ranging across spatial scales from 100s of meters to 10s of kilometers, and time scales from days to decades (List and Farris, Coastal Sediments, 1999; Tebbens et al., PNAS, 2002). Considering that many other processes and factors can also influence shoreline change, initial results show a surprising degree of correlation between observations and predictions.
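The instability criterion hinges on the alongshore flux having an interior maximum in the wave approach angle. The sketch below assumes a deep-water flux of the form Q ∝ cos^(6/5)(alpha) sin(alpha) (wave height omitted) and locates the maximizing angle and the sign change of dQ/dalpha; the exponent and the simplification are assumptions made for illustration.

```python
# Hedged sketch: alongshore flux versus deep-water wave approach angle, and
# the sign of its gradient on either side of the flux maximum.
import numpy as np

alpha = np.radians(np.linspace(0.1, 89.9, 1000))   # relative wave approach angle
Q = np.cos(alpha) ** 1.2 * np.sin(alpha)           # relative flux (arbitrary units)

alpha_max = alpha[np.argmax(Q)]
print("flux-maximizing approach angle ~ %.1f degrees" % np.degrees(alpha_max))

# Positive dQ/dalpha below the maximum (perturbations smoothed),
# negative above it (perturbations grow).
dQ = np.gradient(Q, alpha)
print("dQ/dalpha at 20 deg: %+.3f, at 60 deg: %+.3f"
      % (dQ[np.searchsorted(alpha, np.radians(20))],
         dQ[np.searchsorted(alpha, np.radians(60))]))
```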
Fish Gill Inspired Crossflow for Efficient and Continuous Collection of Spilled Oil.
Dou, Yuhai; Tian, Dongliang; Sun, Ziqi; Liu, Qiannan; Zhang, Na; Kim, Jung Ho; Jiang, Lei; Dou, Shi Xue
2017-03-28
Developing an effective system to clean up large-scale oil spills is of great significance due to their contribution to severe environmental pollution and destruction. Superwetting membranes have been widely studied for oil/water separation. The separation, however, adopts a gravity-driven approach that is inefficient and discontinuous due to quick fouling of the membrane by oil. Herein, inspired by the crossflow filtration behavior in fish gills, we propose a crossflow approach via a hydrophilic, tilted gradient membrane for spilled oil collection. In crossflow collection, as the oil/water flows parallel to the hydrophilic membrane surface, water is gradually filtered through the pores, while oil is repelled, transported, and finally collected for storage. Owing to the selective gating behavior of the water-sealed gradient membrane, the large pores at the bottom with high water flux favor fast water filtration, while the small pores at the top with strong oil repellency allow easy oil transportation. In addition, the gradient membrane exhibits excellent antifouling properties due to the protection of the water layer. Therefore, this bioinspired crossflow approach enables highly efficient and continuous spilled oil collection, which is very promising for the cleanup of large-scale oil spills.
Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.
2003-01-01
We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ≤ M) earthquakes [...]. Beyond about 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r^-γ) can fit. Instead, we use an attenuation function that combines a fixed power law (r^-0.7) with a fitted exponential dependence on distance, which is estimated as exp(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes, and as 0.58M and 0.31M for large earthquakes.
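Using only the numbers quoted above, the sketch below evaluates the shape of the moderate-earthquake attenuation form, PGV ∝ 10^(1.06M) · r^(-0.7) · exp(-0.0063 r); the additive constant is an assumed placeholder, so the values are relative, not absolute predictions.

```python
# Hedged sketch: relative PGV from the moderate-earthquake attenuation form.
import math

def rel_pgv(r_km, mag, c0=0.0):
    """Relative PGV (arbitrary units); c0 is an assumed placeholder constant."""
    return 10 ** (c0 + 1.06 * mag) * r_km ** (-0.7) * math.exp(-0.0063 * r_km)

for r in (10, 30, 100, 200):
    print(f"M5.0 at {r:4d} km: relative PGV = {rel_pgv(r, 5.0):.3e}")
```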
Low-Temperature Wafer-Scale Deposition of Continuous 2D SnS2 Films.
Mattinen, Miika; King, Peter J; Khriachtchev, Leonid; Meinander, Kristoffer; Gibbon, James T; Dhanak, Vin R; Räisänen, Jyrki; Ritala, Mikko; Leskelä, Markku
2018-04-19
Semiconducting 2D materials, such as SnS2, hold immense potential for many applications ranging from electronics to catalysis. However, deposition of few-layer SnS2 films has remained a great challenge. Herein, continuous wafer-scale 2D SnS2 films with accurately controlled thickness (2 to 10 monolayers) are realized by combining a new atomic layer deposition process with low-temperature (250 °C) postdeposition annealing. Uniform coating of large-area and 3D substrates is demonstrated owing to the unique self-limiting growth mechanism of atomic layer deposition. Detailed characterization confirms the 1T-type crystal structure and composition, smoothness, and continuity of the SnS2 films. A two-stage deposition process is also introduced to improve the texture of the films. Successful deposition of continuous, high-quality SnS2 films at low temperatures constitutes a crucial step toward various applications of 2D semiconductors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Terwilliger, Thomas C; Bricogne, Gerard
2014-10-01
Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering.
Predictors of Sustainability of Social Programs
ERIC Educational Resources Information Center
Savaya, Riki; Spiro, Shimon E.
2012-01-01
This article presents the findings of a large scale study that tested a comprehensive model of predictors of three manifestations of sustainability: continuation, institutionalization, and duration. Based on the literature the predictors were arrayed in four groups: variables pertaining to the project, the auspice organization, the community, and…
Reclamation with trees in Illinois
Brad Evilsizer
1980-01-01
Through private initiative, Illinois citizens have historically invented and conducted large-scale tree planting programs, starting with hedgerow fences and farmstead windbreaks and continuing with surface mine reclamation and farm woodlands. With invaluable help from public and private scientific personnel, the old and new programs hold promise of enlargement and...
The Vital Program: Transforming ICT Professional Development
ERIC Educational Resources Information Center
Bradshaw, Pete; Twining, Peter; Walsh, Christopher S.
2012-01-01
Developing a model for effective large-scale continuous professional development (CPD) for teachers remains a significant obstacle for many governments worldwide. This article describes the development and evolution of Vital--a CPD program designed to enhance the teaching of information communication technology in state-funded primary and…
Paramedics on the job: dynamic trunk motion assessment at the workplace.
Prairie, Jérôme; Corbeil, Philippe
2014-07-01
Many paramedics' work accidents are related to physical aspects of the job, and the most affected body part is the low back. This study documents the trunk motion exposure of paramedics on the job. Nine paramedics were observed over 12 shifts (120 h). Trunk postures were recorded with the computer-assisted CUELA measurement system worn on the back like a knapsack. Average duration of an emergency call was 23.5 min. Sagittal trunk flexion of >40° and twisting rotation of >24° were observed in 21% and 17% of time-sampled postures. Medical care on the scene (44% of total time) involved prolonged flexed and twisted postures (∼10 s). The highest extreme sagittal trunk flexion (63°) and twisting rotation (40°) were observed during lifting activities, which lasted 2% of the total time. Paramedics adopted trunk motions that may significantly increase the risk of low back disorders during medical care and patient-handling activities. Copyright © 2013. Published by Elsevier Ltd.
Spatial interpolation of pesticide drift from hand-held knapsack sprayers used in potato production
NASA Astrophysics Data System (ADS)
Garcia-Santos, Glenda; Pleschberger, Martin; Scheiber, Michael; Pilz, Jürgen
2017-04-01
Tropical mountainous regions in developing countries are often neglected in research and policy but represent key areas to be considered if sustainable agricultural and rural development is to be promoted. One example is the lack of information on pesticide drift soil deposition, which can support pesticide risk assessment for soil, surface water, bystanders, and off-target plants and fauna. This is considered a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data on drift deposition of a pesticide surrogate, the tracer Uranine, were obtained within one of the highest potato producing regions in Colombia. Based on the empirical data, different spatial interpolation techniques, i.e. Thiessen polygons, inverse distance squared weighting, co-kriging, pair-copulas, and drift curves depending on distance and wind speed, were tested and optimized. Results of the best performing spatial interpolation methods, suitable curves to assess mean relative drift, and implications for risk assessment studies will be presented.
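As a small illustration of one of the compared techniques, the sketch below implements inverse distance squared weighting (IDW) on a handful of assumed sampler locations and deposition values; it is not the study's optimized interpolator and uses no real Uranine data.

```python
# Hedged sketch: inverse distance squared weighting (IDW) interpolation.
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

xy_obs = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])   # sampler positions [m] (assumed)
z_obs = np.array([12.0, 4.0, 6.0, 1.5])                               # drift deposition (assumed units)
xy_query = np.array([[2.5, 2.5], [1.0, 4.0]])
print("interpolated drift:", idw(xy_obs, z_obs, xy_query))
```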
Proceedings of the second SISAL users' conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feo, J T; Frerking, C; Miller, P J
1992-12-01
This report contains papers on the following topics: A sisal code for computing the Fourier transform on S_N; five ways to fill your knapsack; simulating material dislocation motion in sisal; candis as an interface for sisal; parallelisation and performance of the burg algorithm on a shared-memory multiprocessor; use of genetic algorithm in sisal to solve the file design problem; implementing FFTs in sisal; programming and evaluating the performance of signal processing applications in the sisal programming environment; sisal and Von Neumann-based languages: translation and intercommunication; an IF2 code generator for ADAM architecture; program partitioning for NUMA multiprocessor computer systems; mapping functional parallelism on distributed memory machines; implicit array copying: prevention is better than cure; mathematical syntax for sisal; an approach for optimizing recursive functions; implementing arrays in sisal 2.0; Fol: an object oriented extension to the sisal language; twine: a portable, extensible sisal execution kernel; and investigating the memory performance of the optimizing sisal compiler.
The synthesis of active pharmaceutical ingredients (APIs) using continuous flow chemistry.
Baumann, Marcus; Baxendale, Ian R
2015-01-01
The implementation of continuous flow processing as a key enabling technology has transformed the way we conduct chemistry and has expanded our synthetic capabilities. As a result many new preparative routes have been designed towards commercially relevant drug compounds achieving more efficient and reproducible manufacture. This review article aims to illustrate the holistic systems approach and diverse applications of flow chemistry to the preparation of pharmaceutically active molecules, demonstrating the value of this strategy towards every aspect ranging from synthesis, in-line analysis and purification to final formulation and tableting. Although this review will primarily concentrate on large scale continuous processing, additional selected syntheses using micro or meso-scaled flow reactors will be exemplified for key transformations and process control. It is hoped that the reader will gain an appreciation of the innovative technology and transformational nature that flow chemistry can leverage to an overall process.
Wearable sensor-based objective assessment of motor symptoms in Parkinson's disease.
Ossig, Christiana; Antonini, Angelo; Buhmann, Carsten; Classen, Joseph; Csoti, Ilona; Falkenburger, Björn; Schwarz, Michael; Winkler, Jürgen; Storch, Alexander
2016-01-01
Effective management and development of new treatment strategies for motor symptoms in Parkinson's disease (PD) largely depend on clinical rating instruments like the Unified PD Rating Scale (UPDRS) and the modified Abnormal Involuntary Movement Scale (mAIMS). Regarding inter-rater variability and continuous monitoring, clinical rating scales have various limitations. Patient-administered questionnaires such as the PD home diary to assess motor stages and fluctuations in late-stage PD are frequently used in clinical routine and as clinical trial endpoints, but diaries and questionnaires are tiring, and recall bias impacts data quality, particularly in patients with cognitive dysfunction or depression. Consequently, there is a strong need for continuous and objective monitoring of motor symptoms in PD for improving the therapeutic regimen and for usage in clinical trials. Recent advances in battery technology, movement sensors such as gyroscopes and accelerometers, and information technology have boosted the field of objective measurement of movement in everyday life and medicine using wearable sensors that allow continuous (long-term) monitoring. This systematic review summarizes the current wearable sensor-based devices to objectively assess the various motor symptoms of PD.
Bertucco, Alberto; Beraldi, Mariaelena; Sforza, Eleonora
2014-08-01
In this work, the production of Scenedesmus obliquus in a continuous flat-plate laboratory-scale photobioreactor (PBR) under alternating day-night cycles was tested both experimentally and theoretically. Variations of light intensity according to the four seasons of the year were simulated experimentally by a tunable LED lamp, and effects on microalgal growth and productivity were measured to evaluate the conversion efficiency of light energy into biomass during the different seasons. These results were used to validate a mathematical model for algae growth that can be applied to simulate a large-scale production unit, carried out in a flat-plate PBR of similar geometry. The cellular concentration in the PBR was calculated in both steady-state and transient conditions, and the value of the maintenance kinetic term was correlated with experimental profiles. The relevance of this parameter was finally outlined.
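A minimal sketch of the kind of growth balance involved, assuming a chemostat-like biomass equation dX/dt = (mu(I) - m - D)·X with a saturating light response and a simple 12 h day-night cycle, is given below; all parameter values and the light-response form are assumptions, not the validated model of the study.

```python
# Hedged sketch: continuous-culture biomass balance with a maintenance term
# under an assumed day-night light cycle, integrated by forward Euler.
import math

mu_max, K_I, m, D = 1.2, 150.0, 0.05, 0.3   # [1/day], [umol m^-2 s^-1], [1/day], [1/day] (assumed)
X, dt, days = 0.1, 0.01, 20.0               # biomass [g/L], Euler step and horizon [day]

t = 0.0
while t < days:
    hour = (t * 24.0) % 24.0
    # Simple day-night forcing: half-sinusoid of light for 12 h, darkness for 12 h.
    I = 800.0 * max(0.0, math.sin(math.pi * hour / 12.0))
    mu = mu_max * I / (K_I + I)             # saturating light response (assumed form)
    X += dt * (mu - m - D) * X              # growth minus maintenance minus dilution
    t += dt

print("biomass after %.0f days: %.3f g/L" % (days, X))
```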
Future changes in large-scale transport and stratosphere-troposphere exchange
NASA Astrophysics Data System (ADS)
Abalos, M.; Randel, W. J.; Kinnison, D. E.; Garcia, R. R.
2017-12-01
Future changes in large-scale transport are investigated in long-term (1955-2099) simulations of the Community Earth System Model - Whole Atmosphere Community Climate Model (CESM-WACCM) under an RCP6.0 climate change scenario. We examine artificial passive tracers in order to isolate transport changes from future changes in emissions and chemical processes. The model suggests enhanced stratosphere-troposphere exchange (STE) in both directions, with concentrations of tracers of tropospheric origin decreasing and tracers of stratospheric origin increasing in the troposphere. Changes in the different transport processes are evaluated using the Transformed Eulerian Mean continuity equation, including parameterized convective transport. Dynamical changes associated with the rise of the tropopause height are shown to play a crucial role in future transport trends.
Souza, Juliana M DE; Galaverna, Renan; Souza, Aline A N DE; Brocksom, Timothy J; Pastre, Julio C; Souza, Rodrigo O M A DE; Oliveira, Kleber T DE
2018-01-01
We present a comprehensive review of the advent and impact of continuous flow chemistry on the synthesis of natural products and drugs, important pharmaceutical products that have been responsible for a revolution in modern healthcare. We detail the beginnings of modern drugs and the large-scale batch mode of production, both chemical and microbiological. The introduction of modern continuous flow chemistry is then presented, both as a technological tool for enabling organic chemistry and as a fundamental research endeavor. This part details the syntheses of bioactive natural products and commercial drugs.
Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David
2013-01-01
The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence generation on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces such a road safety multi-country initiative, the Road Safety in 10 Countries Project (RS-10). By building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. This also draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; the importance of flexibility and a stepwise approach; continuous monitoring; providing feedback to implementers, policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches for a real-world, large-scale road safety evaluation and generate new knowledge for the field of road safety.
Shock propagation in locally driven granular systems
NASA Astrophysics Data System (ADS)
Joy, Jilmy P.; Pathak, Sudhir N.; Das, Dibyendu; Rajesh, R.
2017-09-01
We study shock propagation in a system of initially stationary hard spheres that is driven by a continuous injection of particles at the origin. The disturbance created by the injection of energy spreads radially outward through collisions between particles. Using scaling arguments, we determine the exponent characterizing the power-law growth of this disturbance in all dimensions. The scaling functions describing the various physical quantities are determined using large-scale event-driven simulations in two and three dimensions for both elastic and inelastic systems. The results are shown to describe well the data from two different experiments on granular systems that are similarly driven.
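A small sketch of the measurement side of such an analysis, fitting a power-law growth exponent R(t) ∝ t^alpha by log-log least squares, is shown below; the synthetic data assume alpha = 0.6 purely for illustration, while the study obtains the exponent from scaling arguments and event-driven simulations.

```python
# Hedged sketch: estimating a power-law growth exponent from radius-vs-time data.
import numpy as np

rng = np.random.default_rng(3)
t = np.logspace(0, 3, 40)
R = 2.0 * t ** 0.6 * np.exp(0.02 * rng.standard_normal(t.size))  # noisy synthetic radii (assumed)

slope, intercept = np.polyfit(np.log(t), np.log(R), 1)
print("fitted growth exponent alpha = %.3f" % slope)
```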
USDA-ARS's Scientific Manuscript database
Landscape plant community transitions across the Great Basin and Intermountain West have altered fire regimes and present large-scale consequences relative to rangeland hydrology. Extensive conversion of Great Basin shrub steppe to annual grasslands has increased fuel continuity and the frequency, ...
Effects of Landscape Conditions and Management Practices on Lakes in Northeastern USA.
Lakes continue to face escalating pressures associated with land cover change and growing human populations. The U.S. EPA National Lakes Assessment, which sampled 1,028 lakes during the summer of 2007 using a probabilistic survey, was the first large scale effort to determine the...
USDA-ARS's Scientific Manuscript database
Background: As the population of older adults continues to increase, the dissemination of strategies to maintain independence of older persons is of critical public health importance. Recent large-scale clinical trial evidence has definitively shown intervention of moderate-intensity physical activi...
An integrated approach to mapping forest conditions in the Southern Appalachians (North Carolina)
Weimin Xi; Lei Wang; Andrew G Birt; Maria D. Tchakerian; Robert N. Coulson; Kier D. Klepzig
2009-01-01
Accurate and continuous forest cover information is essential for forest management and restoration (SAMAB 1996, Xi et al. 2007). Ground-truthed, spatially explicit forest data, however, are often limited to federally managed land or large-scale commercial forestry operations where forest inventories are regularly collected. Moreover,...
Neurobehavioral studies pose unique challenges for dose-response modeling, including small sample size and relatively large intra-subject variation, repeated measurements over time, multiple endpoints with both continuous and ordinal scales, and time dependence of risk characteri...
Workforce Development Analysis | Energy Analysis | NREL
One-half of surveyed firms reported needing workers with the skills, training, and experience that will enable continued large-scale deployment of wind and solar technologies, including workers in customer service, construction, and electrical projects; engineers; and project managers. Standardized education and training at all levels, from primary school through ...
USDA-ARS's Scientific Manuscript database
In this study, we use a combination of electrical resistivity profiling and radon (222Rn) measurements to characterize a shallow groundwater system beneath the last remaining, large-scale sugarcane plantation on Maui, Hawaii. Hawaiian Commercial & Sugar Company has continuously operated a sugarcane...
The Economic Condition of the Mexican-American.
ERIC Educational Resources Information Center
Schmidt, Fred H.; Koford, Kenneth
Persons of Spanish heritage constitute the only minority in the United States whose numbers continue to grow through large-scale immigration. Mexican nationals, the "invisible people", incessantly infiltrate the U.S. population from Mexico. From 1939 through 1969, more than 7.4 million nationals entered the country unlawfully and were…
BACKGROUND: The evidence for epigenome-wide associations between smoking and DNA methylation continues to grow through cross-sectional studies. However, few large scale investigations have explored the associations using observations for individuals at multiple time-points. ...
Reardon, Sean F.; Farrell, Chad R.; Matthews, Stephen A.; O'Sullivan, David; Bischoff, Kendra; Firebaugh, Glenn
2014-01-01
We use newly developed methods of measuring spatial segregation across a range of spatial scales to assess changes in racial residential segregation patterns in the 100 largest U.S. metropolitan areas from 1990 to 2000. Our results point to three notable trends in segregation from 1990 to 2000: 1) Hispanic-white and Asian-white segregation levels increased at both micro- and macro-scales; 2) black-white segregation declined at a micro-scale, but was unchanged at a macro-scale; and 3) for all three racial groups and for almost all metropolitan areas, macro-scale segregation accounted for more of the total metropolitan area segregation in 2000 than in 1990. Our examination of the variation in these trends among the metropolitan areas suggests that Hispanic-white and Asian-white segregation changes have been driven largely by increases in macro-scale segregation resulting from the rapid growth of the Hispanic and Asian populations in central cities. The changes in black-white segregation, in contrast, appear to be driven by the continuation of a 30-year trend in declining micro-segregation, coupled with persistent and largely stable patterns of macro-segregation. PMID:19569292
Ensuring IT service continuity in the face of increasing threats.
Nair, Vishwanath
2014-01-01
How is IT service continuity related to business continuity management? Is it just a glorified disaster recovery procedure? Will IT service continuity help increase the assurance of IT services from the business owner to the customer? This paper is an attempt at answering these and many such questions. It is presented as a case study of IT service continuity management implementation at Emirates Group IT, Dubai. It takes the reader through the need for the process as felt by the business, through the learning acquired during implementation, to the practices deployed for managing the process on an ongoing basis. It provides a detailed view of the kind of pitfalls that could be encountered during implementation of the IT service continuity management process in a large-scale enterprise.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
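To make the comparison in the abstract concrete, the two suppression factors can be written schematically as follows, with λ the system-bath coupling, Γ a Markovian decoherence rate, and κ the error-correction rate; these symbols and proportionalities paraphrase the abstract and are not formulas quoted from the paper:

```latex
% Non-Markovian (Hamiltonian) bath, Zeno regime present:
\[
  \lambda_{\mathrm{eff}} \;\propto\; \frac{\lambda}{\kappa^{2}}
  \qquad \text{(quadratic enhancement)},
\]
% Markovian bath:
\[
  \Gamma_{\mathrm{eff}} \;\propto\; \frac{\Gamma}{\kappa}
  \qquad \text{(linear suppression only)}.
\]
```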
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
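A minimal numerical sketch of a smooth, robust clustering objective of the kind described above. This is not the published algorithm (which has its own graph construction, representation, and optimizer); it is a toy with an assumed Geman-McClure penalty, written only to show how a saturating pairwise coupling between representatives lets mixed clusters untangle:

```python
import numpy as np

def rcc_like(X, edges, lam=1.0, mu=1.0, n_iter=500, lr=0.01):
    """Toy robust continuous clustering by plain gradient descent.

    Minimizes  sum_i ||x_i - u_i||^2 + lam * sum_{(p,q) in edges} rho(||u_p - u_q||)
    with the Geman-McClure penalty rho(r) = mu * r^2 / (mu + r^2).
    Representatives u_i of points that belong together collapse onto a
    common location; the saturating penalty lets mixed clusters separate.
    """
    U = X.astype(float).copy()
    for _ in range(n_iter):
        grad = 2.0 * (U - X)                          # data-fidelity term
        for p, q in edges:                            # robust pairwise coupling
            d = U[p] - U[q]
            r2 = float(d @ d)
            w = 2.0 * lam * mu**2 / (mu + r2) ** 2    # gradient weight of the penalty
            grad[p] += w * d
            grad[q] -= w * d
        U -= lr * grad
    return U

# Usage sketch: two noisy blobs, edges from a simple distance threshold
# (a mutual k-nearest-neighbour graph would be the more usual choice).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
edges = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X))
         if np.linalg.norm(X[i] - X[j]) < 0.5]
U = rcc_like(X, edges)
print(np.round(U[:2], 2), np.round(U[-2:], 2))   # representatives gather near 0 and 3
```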
1982-02-01
control unit will detect and classify submerged submarines transiting within PJ The EnCAPsulated TORpedo augments air, surface and submarine anti...vidicon (data link video enhancement). Conduct Operational Test and Evaluation. Complete Large Scale Integration Receiver-Decoder improvement. Continue...analysis, and data link video enhancement focusing on application of a new silicon vidicon was continued; data link improvements such as adaptive null
Coupled continuous time-random walks in quenched random environment
NASA Astrophysics Data System (ADS)
Magdziarz, M.; Szczotka, W.
2018-02-01
We introduce a coupled continuous-time random walk with coupling which is characteristic for Lévy walks. Additionally we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of uncoupled quenched trap model for Lévy flights.
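A small simulation sketch of the two ingredients named here, Lévy-walk coupling plus site disorder frozen in time, might look as follows; the Pareto-tailed trapping times, the one-dimensional lattice, and all parameter values are illustrative assumptions rather than the authors' model specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Quenched disorder: a heavy-tailed trapping scale fixed once per lattice site.
alpha, n_sites = 0.7, 20_001
tau_site = 1.0 + rng.pareto(alpha, size=n_sites)   # frozen in time

def coupled_quenched_walk(n_steps=5_000, v=1.0):
    """One trajectory of a coupled CTRW: the waiting time is read off the
    (quenched) current site, and the jump length is coupled to it,
    Levy-walk style (displacement = v * waiting time, random sign)."""
    site = n_sites // 2
    x, t = 0.0, 0.0
    xs, ts = [x], [t]
    for _ in range(n_steps):
        wait = tau_site[site]
        step = rng.choice((-1.0, 1.0)) * v * wait
        x += step
        t += wait
        site = int(round(site + step)) % n_sites
        xs.append(x)
        ts.append(t)
    return np.array(ts), np.array(xs)

ts, xs = coupled_quenched_walk()
print("final time", ts[-1], "final position", xs[-1])
```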
Creation of current filaments in the solar corona
NASA Technical Reports Server (NTRS)
Mikic, Z.; Schnack, D. D.; Van Hoven, G.
1989-01-01
It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.
Improving efficiency of polystyrene concrete production with composite binders
NASA Astrophysics Data System (ADS)
Lesovik, R. V.; Ageeva, M. S.; Lesovik, G. A.; Sopin, D. M.; Kazlitina, O. V.; Mitrokhina, A. A.
2018-03-01
According to leading marketing researchers, the construction market in Russia and the CIS will continue growing at a rapid rate; this applies not only to large-scale major construction, but also to the construction of single-family houses and small-scale industrial facilities. Due to this, there are increased requirements for heat insulation of building enclosures and a significant demand for efficient walling materials with high thermal performance. All these developments have led to higher requirements imposed on the equipment that produces such materials.
Large Scale Application of Vibration Sensors for Fan Monitoring at Commercial Layer Hen Houses
Chen, Yan; Ni, Ji-Qin; Diehl, Claude A.; Heber, Albert J.; Bogan, Bill W.; Chai, Li-Long
2010-01-01
Continuously monitoring the operation of each individual fan can significantly improve the measurement quality of aerial pollutant emissions from animal buildings that have a large number of fans. To monitor the fan operation by detecting the fan vibration is a relatively new technique. A low-cost electronic vibration sensor was developed and commercialized. However, its large scale application has not yet been evaluated. This paper presents long-term performance results of this vibration sensor at two large commercial layer houses. Vibration sensors were installed on 164 fans of 130 cm diameter to continuously monitor the fan on/off status for two years. The performance of the vibration sensors was compared with fan rotational speed (FRS) sensors. The vibration sensors exhibited quick response and high sensitivity to fan operations and therefore satisfied the general requirements of air quality research. The study proved that detecting fan vibration was an effective method to monitor the on/off status of a large number of single-speed fans. The vibration sensor itself was $2 more expensive than a magnetic proximity FRS sensor but the overall cost including installation and data acquisition hardware was $77 less expensive than the FRS sensor. A total of nine vibration sensors failed during the study and the failure rate was related to the batches of product. A few sensors also exhibited unsteady sensitivity. As a new product, the quality of the sensor should be improved to make it more reliable and acceptable. PMID:22163544
Effect of small scale transport processes on phytoplankton distribution in coastal seas.
Hernández-Carrasco, Ismael; Orfila, Alejandro; Rossi, Vincent; Garçon, Veronique
2018-06-05
Coastal ocean ecosystems are major contributors to the global biogeochemical cycles and biological productivity. Physical factors induced by the turbulent flow play a crucial role in regulating marine ecosystems. However, while large-scale open-ocean dynamics is well described by geostrophy, the role of multiscale transport processes in coastal regions is still poorly understood due to the lack of continuous high-resolution observations. Here, the influence of small-scale dynamics (O(3.5-25) km, i.e. spanning upper submesoscale and mesoscale processes) on surface phytoplankton derived from satellite chlorophyll-a (Chl-a) is studied using Lagrangian metrics computed from High-Frequency Radar currents. The combination of complementary Lagrangian diagnostics, including the Lagrangian divergence along fluid trajectories, provides an improved description of the 3D flow geometry which facilitates the interpretation of two non-exclusive physical mechanisms affecting phytoplankton dynamics and patchiness. Attracting small-scale fronts, unveiled by backwards Lagrangian Coherent Structures, are associated to negative divergence where particles and Chl-a standing stocks cluster. Filaments of positive divergence, representing large accumulated upward vertical velocities and suggesting accrued injection of subsurface nutrients, match areas with large Chl-a concentrations. Our findings demonstrate that an accurate characterization of small-scale transport processes is necessary to comprehend bio-physical interactions in coastal seas.
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progresses in long-distance quantum key distribution based on discrete variables have led to the secure quantum communication in real-world conditions being available. However, the alternative approach implemented with continuous variables has not yet reached the secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long distance continuous-variable quantum key distribution experiment. Our result paves the road to the large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for quantum network.
Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo
2018-04-01
We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.
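For reference, the one-loop matter power spectrum in the Effective Field Theory of Large-Scale Structure is usually organized as below; this is the standard schematic form, and in the dark-energy/modified-gravity case treated in the paper the loop integrands and the counterterm acquire additional time-dependent pieces:

```latex
\[
  P_{\mathrm{1\text{-}loop}}(k,z) \;=\;
  P_{\mathrm{lin}}(k,z) \;+\; P_{22}(k,z) \;+\; P_{13}(k,z)
  \;-\; 2\, c_{\mathrm{s}}^{2}(z)\, k^{2}\, P_{\mathrm{lin}}(k,z),
\]
% P_22 and P_13 are the standard one-loop integrals built from the
% perturbative kernels, and the c_s^2 counterterm absorbs the effect of
% short-scale physics (the effective stress tensor) on mildly nonlinear scales.
```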
Lara, Alvaro R; Galindo, Enrique; Ramírez, Octavio T; Palomares, Laura A
2006-11-01
The presence of spatial gradients in fundamental culture parameters, such as dissolved gases, pH, concentration of substrates, and shear rate, among others, is an important problem that frequently occurs in large-scale bioreactors. This problem is caused by a deficient mixing that results from limitations inherent to traditional scale-up methods and practical constraints during large-scale bioreactor design and operation. When cultured in a heterogeneous environment, cells are continuously exposed to fluctuating conditions as they travel through the various zones of a bioreactor. Such fluctuations can affect cell metabolism, yields, and quality of the products of interest. In this review, the theoretical analyses that predict the existence of environmental gradients in bioreactors and their experimental confirmation are reviewed. The origins of gradients in common culture parameters and their effects on various organisms of biotechnological importance are discussed. In particular, studies based on the scale-down methodology, a convenient tool for assessing the effect of environmental heterogeneities, are surveyed.
Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel
2018-03-01
Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. This study aimed to review the evidence of the impact of new forms of large-scale general practice provider collaborations in England, using a systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria, and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study, of a large-scale multisite general practice organisation, showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.
Finite-size scaling above the upper critical dimension in Ising models with long-range interactions
NASA Astrophysics Data System (ADS)
Flores-Sola, Emilio J.; Berche, Bertrand; Kenna, Ralph; Weigel, Martin
2015-01-01
The correlation length plays a pivotal role in finite-size scaling and hyperscaling at continuous phase transitions. Below the upper critical dimension, where the correlation length is proportional to the system length, both finite-size scaling and hyperscaling take conventional forms. Above the upper critical dimension these forms break down and a new scaling scenario appears. Here we investigate this scaling behaviour by simulating one-dimensional Ising ferromagnets with long-range interactions. We show that the correlation length scales as a non-trivial power of the linear system size and investigate the scaling forms. For interactions of sufficiently long range, the disparity between the correlation length and the system length can be made arbitrarily large, while maintaining the new scaling scenarios. We also investigate the behavior of the correlation function above the upper critical dimension and the modifications imposed by the new scaling scenario onto the associated Fisher relation.
Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa’avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa
2017-01-01
Background Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG), and hypoxaemia is the major complication causing death in childhood pneumonia; hypoxaemia is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low- and middle-income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult-to-access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by lack of reliable power, staff training and other basic services. Methods We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation as the considerations and steps involved have wider implications in health systems in other countries. Results The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. Conclusions This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post-intervention. Taking a continuous quality improvement approach can be transformational for remote health services. PMID:28567280
Direct and inverse energy cascades in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe
2014-11-01
Turbulence in a rotating frame provides a remarkable system where 2D and 3D properties may coexist, with a possible tuning between direct and inverse cascades. We present here experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic PIV in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical 3D turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in 2D turbulence. At the largest rotation rate, the flow is nearly 2D and a pure inverse energy cascade is found for the horizontal energy.
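As background on how an energy flux and its direction are read off from two-point third-order structure functions, a commonly used relation is the homogeneous, isotropic "4/3 law" shown below; it is quoted here only as a guide, since the rotating, anisotropic experiment requires the full scale-by-scale budget:

```latex
\[
  \left\langle \delta u_{L}(\mathbf{r})\, \lvert \delta \mathbf{u}(\mathbf{r}) \rvert^{2} \right\rangle
  \;=\; -\,\frac{4}{3}\, \varepsilon\, r ,
\]
% \delta u_L is the longitudinal velocity increment over separation r.
% A negative third-order moment indicates a direct (large-to-small scale)
% energy cascade; a positive sign signals an inverse cascade, as in 2D turbulence.
```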
2009-01-01
Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and generates a large volume of experimental data continuously. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which is developed to serve the on-going large-scale PB insertional mutagenesis project. A lightweight enterprise-level development framework, Struts-Spring-Hibernate, is used here to ensure constructive and flexible support to the application. The MP-PBmice database system has three major features: strict access-control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on a daily basis, and is capable of providing real-time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses
NASA Astrophysics Data System (ADS)
Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.
2014-12-01
Extreme conditions and events have always been a long-standing concern in weather forecasting and national security. While some evidence indicates extreme weather will increase in global change scenarios, extremes are often related to the large-scale atmospheric circulation and yet occur infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with greater than 30 years of data, reanalyses can include multiple cases of extreme events, and thereby identify commonality among the weather to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and also, the indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we will compare the large-scale environment for Northeastern US extreme precipitation with that of highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.
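One straightforward way to build such composites is to average the large-scale fields over the dates on which an indicator or extreme event occurs and subtract the climatology. The sketch below is a generic illustration with invented array shapes and variable names; it does not reflect any particular MERRA file format or project tool:

```python
import numpy as np

def composite_anomaly(field, dates, event_dates):
    """Composite a reanalysis field over event days.

    field        : array of shape (n_days, n_lat, n_lon), e.g. 500 hPa height
    dates        : array of n_days dates (datetime64) matching `field`
    event_dates  : dates on which the extreme/indicator occurred

    Returns the event-day mean minus the all-day climatology, i.e. the
    large-scale anomaly pattern associated with the events.
    """
    dates = np.asarray(dates)
    mask = np.isin(dates, np.asarray(event_dates))
    if not mask.any():
        raise ValueError("no event dates found in the record")
    climatology = field.mean(axis=0)
    composite = field[mask].mean(axis=0)
    return composite - climatology

# Usage sketch with random data standing in for a reanalysis record:
rng = np.random.default_rng(0)
days = np.arange("2000-01-01", "2000-12-31", dtype="datetime64[D]")
z500 = rng.normal(5500.0, 50.0, size=(days.size, 10, 20))
events = days[::30]            # pretend these are extreme-precipitation days
anom = composite_anomaly(z500, days, events)
print(anom.shape)
```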
Ecological impacts of large-scale disposal of mining waste in the deep sea
Hughes, David J.; Shimmield, Tracy M.; Black, Kenneth D.; Howe, John A.
2015-01-01
Deep-Sea Tailings Placement (DSTP) from terrestrial mines is one of several large-scale industrial activities now taking place in the deep sea. The scale and persistence of its impacts on seabed biota are unknown. We sampled around the Lihir and Misima island mines in Papua New Guinea to measure the impacts of ongoing DSTP and assess the state of benthic infaunal communities after its conclusion. At Lihir, where DSTP has operated continuously since 1996, abundance of sediment infauna was substantially reduced across the sampled depth range (800–2020 m), accompanied by changes in higher-taxon community structure, in comparison with unimpacted reference stations. At Misima, where DSTP took place for 15 years, ending in 2004, effects on community composition persisted 3.5 years after its conclusion. Active tailings deposition has severe impacts on deep-sea infaunal communities and these impacts are detectable at a coarse level of taxonomic resolution. PMID:25939397
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara
2000-01-01
We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.
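The four steps can be illustrated with a compact numerical sketch: cluster the time courses, then fit a continuous-time recurrent network to the cluster means by simulated annealing with weight decay. The network form, the annealing schedule, the use of scikit-learn for clustering, and all parameter values below are assumptions made for illustration, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed available for step (1)

rng = np.random.default_rng(0)

def simulate(T, h, lam, v0, n_steps, dt=0.1):
    """Euler integration of dv/dt = -lam*v + tanh(T @ v + h) for cluster means v."""
    v, out = v0.copy(), [v0.copy()]
    for _ in range(n_steps - 1):
        v = v + dt * (-lam * v + np.tanh(T @ v + h))
        out.append(v.copy())
    return np.array(out)                        # (n_steps, n_clusters)

def cost(params, data, decay=1e-3):
    T, h, lam = params
    pred = simulate(T, h, lam, data[0], n_steps=data.shape[0])
    return np.mean((pred - data) ** 2) + decay * np.sum(T ** 2)   # fit error + weight decay

def anneal(data, n_clusters, n_iter=2000, temp0=1.0):
    """Simulated annealing over (connection matrix T, biases h, decay rates lam)."""
    cur = (rng.normal(0, 0.1, (n_clusters, n_clusters)),
           np.zeros(n_clusters), np.ones(n_clusters))
    cur_c = cost(cur, data)
    best, best_c = cur, cur_c
    for k in range(n_iter):
        temp = temp0 * (1.0 - k / n_iter) + 1e-6
        cand = (cur[0] + rng.normal(0, 0.05, cur[0].shape),
                cur[1] + rng.normal(0, 0.05, cur[1].shape),
                np.abs(cur[2] + rng.normal(0, 0.02, cur[2].shape)))
        c = cost(cand, data)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
    return best

# Step (1): cluster genes (rows) by their expression time courses (toy data here).
expr = rng.normal(size=(200, 50))
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(expr)
cluster_means = np.array([expr[labels == c].mean(axis=0) for c in range(5)]).T  # (time, cluster)

# Steps (2)-(3): fit the continuous-time recurrent model to the cluster means.
T_fit, h_fit, lam_fit = anneal(cluster_means, n_clusters=5)
# Step (4): compare connection matrices T_fit across repeated fits for commonalities.
print(np.round(T_fit, 2))
```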
The Case for Modular Redundancy in Large-Scale High Performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L
2009-01-01
Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
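The availability argument can be made concrete with a few lines of arithmetic: a node's steady-state availability is MTTF/(MTTF+MTTR), a system that needs all N nodes up has availability A^N, and dual modular redundancy replaces A with 1-(1-A)^2 per node pair. The numbers below are invented for illustration and are not taken from the paper:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability of a single component."""
    return mttf_hours / (mttf_hours + mttr_hours)

def system_availability(node_avail, n_nodes):
    """System that requires every one of n_nodes to be up."""
    return node_avail ** n_nodes

# Illustrative numbers only: a cheap, less reliable node with a long repair time.
a_node = availability(mttf_hours=5_000, mttr_hours=10)
a_pair = 1 - (1 - a_node) ** 2           # dual modular redundancy per node pair

n = 100_000                               # large-scale HPC system
print(f"single node availability       : {a_node:.6f}")
print(f"plain system of {n} nodes      : {system_availability(a_node, n):.3e}")
print(f"redundant-pair availability    : {a_pair:.9f}")
print(f"system of {n} redundant pairs  : {system_availability(a_pair, n):.4f}")
```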
Extending large-scale forest inventories to assess urban forests.
Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter
2012-03-01
Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential for significantly improving the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.
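A minimal sketch of the kind of first-phase, design-based estimator in question: with n sample points over a region of known total area, the proportion of points falling on urban forest yields approximately unbiased coverage and area estimates with a simple variance estimator. This is a generic illustration, not the specific estimators derived in the paper:

```python
import math

def first_phase_estimates(n_points, n_hits, total_area_ha):
    """Point-sampling (first-phase) estimates of urban-forest coverage.

    n_points      : number of sample points in the first phase
    n_hits        : points classified as urban forest (e.g. on aerial imagery)
    total_area_ha : size of the study region in hectares
    """
    p_hat = n_hits / n_points                      # estimated coverage proportion
    var_p = p_hat * (1 - p_hat) / (n_points - 1)   # estimated variance of p_hat
    area_hat = p_hat * total_area_ha               # estimated urban-forest area
    se_area = math.sqrt(var_p) * total_area_ha
    return p_hat, var_p, area_hat, se_area

p, v, a, se = first_phase_estimates(n_points=20_000, n_hits=430, total_area_ha=1_500_000)
print(f"coverage {p:.4f} (+/- {math.sqrt(v):.4f}), area {a:,.0f} ha (SE {se:,.0f} ha)")
```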
Kongelf, Anine; Bandewar, Sunita V. S.; Bharat, Shalini; Collumbien, Martine
2015-01-01
Background In the last decade, community mobilisation (CM) interventions targeting female sex workers (FSWs) have been scaled-up in India’s national response to the HIV epidemic. This included the Bill and Melinda Gates Foundation’s Avahan programme which adopted a business approach to plan and manage implementation at scale. With the focus of evaluation efforts on measuring effectiveness and health impacts there has been little analysis thus far of the interaction of the CM interventions with the sex work industry in complex urban environments. Methods and Findings Between March and July 2012 semi-structured, in-depth interviews and focus group discussions were conducted with 63 HIV intervention implementers, to explore challenges of HIV prevention among FSWs in Mumbai. A thematic analysis identified contextual factors that impact CM implementation. Large-scale interventions are not only impacted by, but were shown to shape the dynamic social context. Registration practices and programme monitoring were experienced as stigmatising, reflected in shifting client preferences towards women not disclosing as ‘sex workers’. This combined with urban redevelopment and gentrification of traditional red light areas, forcing dispersal and more ‘hidden’ ways of solicitation, further challenging outreach and collectivisation. Participants reported that brothel owners and ‘pimps’ continued to restrict access to sex workers and the heterogeneous ‘community’ of FSWs remains fragmented with high levels of mobility. Stakeholder engagement was poor and mobilising around HIV prevention not compelling. Interventions largely failed to respond to community needs as strong target-orientation skewed activities towards those most easily measured and reported. Conclusion Large-scale interventions have been impacted by and contributed to an increasingly complex sex work environment in Mumbai, challenging outreach and mobilisation efforts. Sex workers remain a vulnerable and disempowered group needing continued support and more comprehensive services. PMID:25811484
Chambers, Jeffrey Q; Negron-Juarez, Robinson I; Marra, Daniel Magnabosco; Di Vittorio, Alan; Tews, Joerg; Roberts, Dar; Ribeiro, Gabriel H P M; Trumbore, Susan E; Higuchi, Niro
2013-03-05
Old-growth forest ecosystems comprise a mosaic of patches in different successional stages, with the fraction of the landscape in any particular state relatively constant over large temporal and spatial scales. The size distribution and return frequency of disturbance events, and subsequent recovery processes, determine to a large extent the spatial scale over which this old-growth steady state develops. Here, we characterize this mosaic for a Central Amazon forest by integrating field plot data, remote sensing disturbance probability distribution functions, and individual-based simulation modeling. Results demonstrate that a steady state of patches of varying successional age occurs over a relatively large spatial scale, with important implications for detecting temporal trends on plots that sample a small fraction of the landscape. Long highly significant stochastic runs averaging 1.0 Mg biomass ha⁻¹ y⁻¹ were often punctuated by episodic disturbance events, resulting in a sawtooth time series of hectare-scale tree biomass. To maximize the detection of temporal trends for this Central Amazon site (e.g., driven by CO2 fertilization), plots larger than 10 ha would provide the greatest sensitivity. A model-based analysis of fractional mortality across all gap sizes demonstrated that 9.1-16.9% of tree mortality was missing from plot-based approaches, underscoring the need to combine plot and remote-sensing methods for estimating net landscape carbon balance. Old-growth tropical forests can exhibit complex large-scale structure driven by disturbance and recovery cycles, with ecosystem and community attributes of hectare-scale plots exhibiting continuous dynamic departures from a steady-state condition.
Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow
Suzuki, Kazuya; Miyazaki, Makito; Takagi, Jun; Itabashi, Takeshi; Ishiwata, Shin’ichi
2017-01-01
Collective behaviors of motile units through hydrodynamic interactions induce directed fluid flow on a larger length scale than individual units. In cells, active cytoskeletal systems composed of polar filaments and molecular motors drive fluid flow, a process known as cytoplasmic streaming. The motor-driven elongation of microtubule bundles generates turbulent-like flow in purified systems; however, it remains unclear whether and how microtubule bundles induce large-scale directed flow like the cytoplasmic streaming observed in cells. Here, we adopted Xenopus egg extracts as a model system of the cytoplasm and found that microtubule bundle elongation induces directed flow for which the length scale and timescale depend on the existence of geometrical constraints. At the lower activity of dynein, kinesins bundle and slide microtubules, organizing extensile microtubule bundles. In bulk extracts, the extensile bundles connected with each other and formed a random network, and vortex flows with a length scale comparable to the bundle length continually emerged and persisted for 1 min at multiple places. When the extracts were encapsulated in droplets, the extensile bundles pushed the droplet boundary. This pushing force initiated symmetry breaking of the randomly oriented bundle network, leading to bundles aligning into a rotating vortex structure. This vortex induced rotational cytoplasmic flows on the length scale and timescale that were 10- to 100-fold longer than the vortex flows emerging in bulk extracts. Our results suggest that microtubule systems use not only hydrodynamic interactions but also mechanical interactions to induce large-scale temporally stable cytoplasmic flow. PMID:28265076
NASA Astrophysics Data System (ADS)
Tenney, Andrew; Coleman, Thomas; Berry, Matthew; Magstadt, Andy; Gogineni, Sivaram; Kiel, Barry
2015-11-01
Shock cells and large-scale structures present in a three-stream non-axisymmetric jet are studied both qualitatively and quantitatively. Large Eddy Simulation is utilized first to gain an understanding of the underlying physics of the flow and direct the focus of the physical experiment. The flow in the experiment is visualized using long-exposure Schlieren photography, with time-resolved Schlieren photography also a possibility. Velocity derivative diagnostics calculated from the grey-scale Schlieren images are analyzed using continuous wavelet transforms. Pressure signals are also captured in the near-field of the jet to correlate with the velocity derivative diagnostics and assist in unraveling this complex flow. We acknowledge the support of AFRL through an SBIR grant.
Lectures on algebraic system theory: Linear systems over rings
NASA Technical Reports Server (NTRS)
Kamen, E. W.
1978-01-01
The presentation centers on four classes of systems that can be treated as linear systems over a ring. These are: (1) discrete-time systems over a ring of scalars such as the integers; (2) continuous-time systems containing time delays; (3) large-scale discrete-time systems; and (4) time-varying discrete-time systems.
The Politics of Education Revisited: Anthony Crosland and Michael Gove in Historical Perspective
ERIC Educational Resources Information Center
Finn, Mike
2015-01-01
This article traces continuity and change in the governance of British education through the comparison of two ministers, Anthony Crosland and Michael Gove. Taking Maurice Kogan's seminal "The Politics of Education" as the point of departure, the article highlights the role of political ideology in large-scale educational change, taking…
ERIC Educational Resources Information Center
Patel, Vimla L.; Branch, Timothy; Gutnik, Lily; Arocha, Jose F.
2006-01-01
High-risk behavior in youths related to HIV transmission continues to occur despite large-scale efforts to disseminate information about safe sexual practices through education. Our study examined the relationships among knowledge, decision-making strategies, and risk assessment about HIV by youths during peer group focused discussions. Two focus…
Robert. R. Ziemer
1991-01-01
Summary - There is increasing concern about how land management practices influence the frequency of mass erosion and sedimentation over large temporal and spatial scales. Monte Carlo simulations can identify fruitful areas for continuing cooperation between scientists in the U.S.A. and Japan.
Effective CPD on a Large Scale: Examining the Development of Multipliers
ERIC Educational Resources Information Center
Roesken-Winter, Bettina; Schüler, Sven; Stahnke, Rebekka; Blömeke, Sigrid
2015-01-01
Much research has been conducted on exploring teacher learning and constituting Continuous Professional Development (CPD) designs for teachers. Yet, little is known about appropriate design principles of CPD for teacher trainers/multipliers who in turn are supposed to provide CPD for teachers. The German Center for Mathematics Teacher Education…
USDA-ARS's Scientific Manuscript database
1) Considerable research is currently focused on restoring the World’s degraded grasslands by introducing species from seed. The research is continually providing valuable new insights into early seeded plant establishment, but more emphasis on longer, larger studies is needed to better quantify s...
USDA-ARS's Scientific Manuscript database
Large-scale disturbances such as fire and woodland encroachment continue to plague the sustainability of semi-arid regions around the world. Land managers are challenged with predicting and mitigating such disturbances to stabilize soil and ecological degradation of vast landscapes. Scientists fro...
Implementing Intensive Intervention: How Do We Get There from Here?
ERIC Educational Resources Information Center
Zumeta, Rebecca O.
2015-01-01
Despite years of school reform intended to help students reach high academic standards, students with disabilities continue to struggle, suggesting a need for more intensive intervention as a part of special education and multi-tiered systems of support. At the same time, greater inclusion of students with disabilities in large-scale assessment,…
Lakes continue to face escalating pressures associated with land cover change and growing human populations. The U.S. EPA National Lakes Assessment, which sampled more than 1000 lakes in a probabilistic survey, was the first large scale effort to characterize the condition of lak...
"Scientifically-Based Research": The Art of Politics and the Distortion of Science
ERIC Educational Resources Information Center
Shaker, Paul; Ruitenberg, Claudia
2007-01-01
The US Federal Government is forcefully prescribing a narrow definition of "scientifically-based" educational research. US policy, emerging from contemporary neoliberal and technocratic viewpoints and funded and propagated on a large scale, has the potential to influence international thinking on educational research. In this article we continue a…
Microfabrication techniques for integrated sensors and microsystems.
Wise, K D; Najafi, K
1991-11-29
Integrated sensors and actuators are rapidly evolving to provide an important link between very large scale integrated circuits and nonelectronic monitoring and control applications ranging from biomedicine to automated manufacturing. As they continue to expand, entire microsystems merging electrical, mechanical, thermal, optical, magnetic, and perhaps chemical components should be possible on a common substrate.
Zhu, Gang-Tian; Li, Xiao-Shui; Fu, Xiao-Meng; Wu, Jian-Yuan; Yuan, Bi-Feng; Feng, Yu-Qi
2012-10-14
Silica fiber with a highly ordered mesoporous structure and a continuous, long fibrous morphology was synthesized on a large scale for the first time. It can be applied to the rapid (less than 3 min) and effective enrichment of endogenous peptides with a novel lab-in-syringe approach.
The Cosmic Battery in Astrophysical Accretion Disks
NASA Astrophysics Data System (ADS)
Contopoulos, Ioannis; Nathanail, Antonios; Katsanikas, Matthaios
2015-06-01
The aberrated radiation pressure at the inner edge of the accretion disk around an astrophysical black hole imparts a relative azimuthal velocity on the electrons with respect to the ions which gives rise to a ring electric current that generates large-scale poloidal magnetic field loops. This is the Cosmic Battery established by Contopoulos and Kazanas in 1998. In the present work we perform realistic numerical simulations of this important astrophysical mechanism in advection-dominated accretion flows, ADAFs. We confirm the original prediction that the inner parts of the loops are continuously advected toward the central black hole and contribute to the growth of the large-scale magnetic field, whereas the outer parts of the loops are continuously diffusing outward through the turbulent accretion flow. This process of inward advection of the axial field and outward diffusion of the return field proceeds all the way to equipartition, thus generating astrophysically significant magnetic fields on astrophysically relevant timescales. We confirm that there exists a critical value of the magnetic Prandtl number between unity and 10 in the outer disk above which the Cosmic Battery mechanism is suppressed.
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
Diffuse pollution of soil and water: Long term trends at large scales?
NASA Astrophysics Data System (ADS)
Grathwohl, P.
2012-04-01
Industrialization and urbanization have increased pressure on the environment for more than a century, causing degradation of soil and water quality, and this pressure is still growing. The number of potential environmental contaminants detected in surface water and groundwater is continuously increasing, from classical industrial and agricultural chemicals to flame retardants, pharmaceuticals, and personal care products. While point sources of pollution can in principle be managed, diffuse pollution is reversible only at very long time scales, if at all. Compounds which were phased out many decades ago, such as PCBs or DDT, are still abundant in soils, sediments and biota. How diffuse pollution is processed at large scales in space (e.g. catchments) and time (centuries) is unknown, and it is not clear how relevant processes that are well investigated at the laboratory scale (e.g. sorption/desorption and (bio)degradation kinetics) are in the field. Transport of compounds is often coupled to the water cycle, and in order to assess trends in diffuse pollution, detailed knowledge about the hydrology and the solute fluxes at the catchment scale is required (e.g. input/output fluxes, transformation rates at the field scale). This is also a prerequisite for assessing management options for reversal of adverse trends.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menapace, J A; Davis, P J; Dixit, S
2007-03-07
Over the past four years we have advanced Magnetorheological Finishing (MRF) techniques and tools to imprint complex continuously varying topographical structures onto large-aperture (430 x 430 mm) optical surfaces. These optics, known as continuous phase plates (CPPs), are important for high-power laser applications requiring precise manipulation and control of beam-shape, energy distribution, and wavefront profile. MRF's unique deterministic sub-aperture polishing characteristics make it possible to imprint complex topographical information onto optical surfaces at spatial scale-lengths approaching 1 mm and surface peak-to-valleys as high as 22 µm. During this discussion, we will present the evolution of the MRF imprinting technology and the MRF tools designed to manufacture large-aperture 430 x 430 mm CPPs. Our results will show how the MRF removal function impacts and limits imprint fidelity and what must be done to arrive at a high-quality surface. We also present several examples of this imprinting technology for fabrication of phase correction plates and CPPs for use in high-power laser applications.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
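The finite-volume idea described above, cell averages updated by fluxes through cell faces, can be written down in a few lines. The sketch below is an illustrative one-dimensional scalar advection update in Python with a simple upwind flux; it is not GENASIS code, which is an object-oriented Fortran 2003 implementation:

```python
import numpy as np

def upwind_step(u, speed, dx, dt):
    """One finite-volume update of du/dt + d(speed*u)/dx = 0 on a periodic mesh.

    u holds cell averages; the face flux is taken from the upwind cell, and each
    cell average changes by the net flux through its two faces (a discrete
    divergence), mirroring the description in the abstract.
    """
    flux = speed * (u if speed >= 0 else np.roll(u, -1))    # flux at right faces
    return u - dt / dx * (flux - np.roll(flux, 1))           # net flux per cell

# Usage: advect a bump once around a periodic domain and check conservation.
n, speed = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / speed                                        # CFL-limited step
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
total0 = u.sum() * dx
for _ in range(int(1.0 / (speed * dt))):
    u = upwind_step(u, speed, dx, dt)
print("conserved total:", np.isclose(u.sum() * dx, total0))
```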
Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration
Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...
2016-02-11
A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors and densely packed, highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. As a result, the tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Shipeng; Wang, Minghuai; Ghan, Steven J.
Aerosol–cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on the liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < −25 hPa day⁻¹) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is close to that in subsidence regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with a high monthly large-scale surface precipitation rate (> 0.1 mm day⁻¹) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes compared to the uncertainty in its global mean values, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
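As an illustration of the regime-sorting methodology described above, the following sketch bins hypothetical monthly-mean model output by ω500 and estimates a per-regime LWP sensitivity d ln(LWP)/d ln(CCN) by least squares; the synthetic data, the regime boundaries other than the −25 hPa day⁻¹ ascent threshold, and all variable names are illustrative assumptions, not the published analysis:

import numpy as np

# Regime sorting by omega500 and a per-regime LWP sensitivity
# lambda = d ln(LWP) / d ln(CCN), fitted by least squares.
# All data below are synthetic placeholders, not model output.
rng = np.random.default_rng(0)
omega500 = rng.normal(0.0, 30.0, 5000)                 # hPa/day, positive = subsidence
ccn = rng.lognormal(5.0, 0.5, 5000)                    # cm^-3
lwp = 50.0 * (ccn / 100.0) ** 0.3 * rng.lognormal(0.0, 0.2, 5000)   # g m^-2

edges = [-np.inf, -25.0, 0.0, 25.0, np.inf]            # ascent ... subsidence regimes
for lo, hi in zip(edges[:-1], edges[1:]):
    in_regime = (omega500 >= lo) & (omega500 < hi)
    slope = np.polyfit(np.log(ccn[in_regime]), np.log(lwp[in_regime]), 1)[0]
    print(f"{lo} <= omega500 < {hi}: d lnLWP / d lnCCN = {slope:.2f}")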
Cross-National Patterns of Intergenerational Continuities in Childbearing in Developed Countries
Murphy, Michael
2013-01-01
Earlier work has shown that the association between the fertility of parents and the fertility of children has become stronger over time in some societies. This article updates and broadens the geographic coverage to assess the magnitude of intergenerational continuities in childbearing in developed and middle-income societies using data for 46 populations from 28 developed countries drawn from a number of recent large-scale survey programs. Robust positive intergenerational fertility correlations are found across these countries into the most recent period, and although there is no indication that the strength of the relationship is declining, the increasing trend does not appear to be continuing. PMID:24215254
Reproducing continuous radio blackout using glow discharge plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Kai; Li, Xiaoping; Liu, Donglin
2013-10-15
A novel plasma generator is described that offers large-scale, continuous, non-magnetized plasma with a 30-cm-diameter hollow structure, which provides a path for an electromagnetic wave. The plasma is excited by a low-pressure glow discharge, with varying electron densities ranging from 10⁹ to 2.5 × 10¹¹ cm⁻³. An electromagnetic wave propagation experiment reproduced a continuous radio blackout in the UHF, L, and S bands. The results are consistent with theoretical expectations. The proposed method is suitable for simulating a plasma sheath, and for researching communications, navigation, electromagnetic mitigation, and antenna compensation in plasma sheaths.
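The connection between the quoted electron densities and blackout in the UHF, L and S bands can be checked with the standard plasma cutoff frequency, f_pe ≈ 8.98 kHz × sqrt(n_e/cm⁻³); the short calculation below is an illustrative back-of-the-envelope check, not part of the reported experiment:

import math

# Plasma (cutoff) frequency for the quoted electron densities:
# f_pe [Hz] ~ 8980 * sqrt(n_e [cm^-3]); waves below f_pe are reflected.
for n_e in (1e9, 2.5e11):                       # cm^-3, the quoted density range
    f_pe = 8980.0 * math.sqrt(n_e)
    print(f"n_e = {n_e:.1e} cm^-3  ->  f_pe ~ {f_pe / 1e9:.2f} GHz")
# ~0.28 GHz and ~4.5 GHz: the upper end lies above the UHF, L and S bands,
# consistent with the continuous blackout reported for those bands.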
NASA Astrophysics Data System (ADS)
Lauterbach, S.; Strasser, M.; Tjallingii, R.; Kowarik, K.; Reschreiter, H.; Spatl, C.; Brauer, A.
2017-12-01
The cultural importance of underground salt mining in Hallstatt (Austria), which is documented since the Middle Bronze Age, has been recognized already 20 years ago by assigning the status of a UNESCO World Cultural Heritage Site to the Hallstatt area, particularly because of the wealth of archaeological artefacts from the Early Iron Age. Local mining activity is well documented for prehistoric times and known to have been repeatedly affected by large-scale mass movements, for example at the end of the Bronze Age and during the Late Iron Age. In contrast, evidence of mining activity between the 5th and late 13th century AD is scarce, which could be related to socio-economic changes but also to continued mass movement activity, possibly biasing the archaeological record. Within the present study, a 15.63-m-long 14C-dated sediment core from Hallstätter See has been investigated with respect to the deposits of large-scale mass movements. Most of the lake sediment sequence consists of cm- to sub-mm-scale laminated carbonate mud with frequently intercalated small-scale turbidites, reflecting seasonally variable detrital input from the tributaries, but two major event layers clearly stand out. The upper one comprises a 2.45-m-thick basal mass transport deposit (containing folded laminated sediments, homogenized sediments with liquefaction structures, and coarse gravel) and an overlying 1.45-m-thick co-genetic turbidite. From the lower event layer only the topmost part of the turbiditic sequence with a (minimum) thickness of 1.49 m was recovered. Based on their sedimentological characteristics, both event layers are interpreted as the subaqueous continuation of large-scale mass movements, which occurred at ca. 1050 and 2300 cal. years BP and possibly originated from the rock walls along the western lake shore where also the salt mining area is located. This indicates that mass movement activity not only threatened prehistoric salt mining, but occurred also repeatedly during the Common Era, possibly explaining the lack of archaeological evidence of mining activity between the 5th and late 13th century AD. However, a direct spatial and temporal relationship between documented mass movements in the mining area and those recorded in the lake sediments cannot be proven at present and requires further investigations.
Walther, Andreas; Bjurhager, Ingela; Malho, Jani-Markus; Pere, Jaakko; Ruokolainen, Janne; Berglund, Lars A; Ikkala, Olli
2010-08-11
Although remarkable success has been achieved to mimic the mechanically excellent structure of nacre in laboratory-scale models, it remains difficult to foresee mainstream applications due to time-consuming sequential depositions or energy-intensive processes. Here, we introduce a surprisingly simple and rapid methodology for large-area, lightweight, and thick nacre-mimetic films and laminates with superior material properties. Nanoclay sheets with soft polymer coatings are used as ideal building blocks with intrinsic hard/soft character. They are forced to rapidly self-assemble into aligned nacre-mimetic films via paper-making, doctor-blading or simple painting, giving rise to strong and thick films with tensile modulus of 45 GPa and strength of 250 MPa, that is, partly exceeding nacre. The concepts are environmentally friendly, energy-efficient, and economic and are ready for scale-up via continuous roll-to-roll processes. Excellent gas barrier properties, optical translucency, and extraordinary shape-persistent fire-resistance are demonstrated. We foresee advanced large-scale biomimetic materials, relevant for lightweight sustainable construction and energy-efficient transportation.
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rates (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that FDR could be improved by using a smaller compound database with more complete entries. Conclusions/Significance High accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
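The paper's exact FDR formula is not reproduced here, but the general idea of estimating how often a query mass hits a formula database by chance can be sketched with a decoy-style Monte Carlo, as below; the database size, mass range, tolerance and observed hit rate are all hypothetical placeholders:

import numpy as np

# Decoy-style Monte Carlo sketch of a false discovery rate for mass-based
# elemental-composition searches (illustrative only, not the paper's formula).
rng = np.random.default_rng(1)
db_masses = np.sort(rng.uniform(100.0, 1000.0, 20000))   # hypothetical theoretical masses
tol_ppm = 5.0                                            # hypothetical mass tolerance

def hit_rate(queries):
    tol = queries * tol_ppm * 1e-6
    lo = np.searchsorted(db_masses, queries - tol)
    hi = np.searchsorted(db_masses, queries + tol)
    return np.mean(hi > lo)                              # fraction of queries with >= 1 hit

decoy_rate = hit_rate(rng.uniform(100.0, 1000.0, 100000))  # chance hit rate of random masses
observed_rate = 0.80                                        # hypothetical hit rate of real peaks
print(f"chance hit rate {decoy_rate:.2%}, FDR estimate ~ {decoy_rate / observed_rate:.2%}")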
Large-Scale and Global Hydrology. Chapter 92
NASA Technical Reports Server (NTRS)
Rodell, Matthew; Beaudoing, Hiroko Kato; Koster, Randal; Peters-Lidard, Christa D.; Famiglietti, James S.; Lakshmi, Venkat
2016-01-01
Powered by the sun, water moves continuously between and through Earth's oceanic, atmospheric, and terrestrial reservoirs. It enables life, shapes Earth's surface, and responds to and influences climate change. Scientists measure various features of the water cycle using a combination of ground, airborne, and space-based observations, and seek to characterize it at multiple scales with the aid of numerical models. Over time our understanding of the water cycle and ability to quantify it have improved, owing to advances in observational capabilities, the extension of the data record, and increases in computing power and storage. Here we present some of the most recent estimates of global and continental ocean basin scale water cycle stocks and fluxes and provide examples of modern numerical modeling systems and reanalyses. Further, we discuss prospects for predicting water cycle variability at seasonal and longer scales, which is complicated by a changing climate and direct human impacts related to water management and agriculture. Changes to the water cycle will be among the most obvious and important facets of climate change, thus it is crucial that we continue to invest in our ability to monitor it.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail technique that delivers performance of multiple frames per second even for planetary-scale terrain models.
Facilitating large-scale clinical trials: in Asia.
Choi, Han Yong; Ko, Jae-Wook
2010-01-01
The number of clinical trials conducted in Asian countries has started to increase as a result of expansion of the pharmaceutical market in this area. There is a growing opportunity for large-scale clinical trials because of the large number of patients, significant market potential, good quality of data, and the cost effective and qualified medical infrastructure. However, for carrying out large-scale clinical trials in Asia, there are several major challenges, including the quality control of data, budget control, laboratory validation, monitoring capacity, authorship, staff training, and nonstandard treatment that need to be considered. There are also several difficulties in collaborating on international trials in Asia because Asia is an extremely diverse continent. The major challenges are language differences, diversity of patterns of disease, and current treatments, a large gap in the experience with performing multinational trials, and regulatory differences among the Asian countries. In addition, there are also differences in the understanding of global clinical trials, medical facilities, indemnity assurance, and culture, including food and religion. To make regional and local data provide evidence for efficacy through the standardization of these differences, unlimited effort is required. At this time, there are no large clinical trials led by urologists in Asia, but it is anticipated that the role of urologists in clinical trials will continue to increase. Copyright © 2010 Elsevier Inc. All rights reserved.
Thinking big: Towards ideal strains and processes for large-scale aerobic biofuels production
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, James D.; Beckham, Gregg T.
In this study, global concerns about anthropogenic climate change, energy security and independence, and environmental consequences of continued fossil fuel exploitation are driving significant public and private sector interest and financing to hasten development and deployment of processes to produce renewable fuels, as well as bio-based chemicals and materials, towards scales commensurate with current fossil fuel-based production. Over the past two decades, anaerobic microbial production of ethanol from first-generation hexose sugars derived primarily from sugarcane and starch has reached significant market share worldwide, with fermentation bioreactor sizes often exceeding the million litre scale. More recently, industrial-scale lignocellulosic ethanol plants are emerging that produce ethanol from pentose and hexose sugars using genetically engineered microbes and bioreactor scales similar to first-generation biorefineries.
Continuous downstream processing for high value biological products: A Review.
Zydney, Andrew L
2016-03-01
There is growing interest in the possibility of developing truly continuous processes for the large-scale production of high value biological products. Continuous processing has the potential to provide significant reductions in cost and facility size while improving product quality and facilitating the design of flexible multi-product manufacturing facilities. This paper reviews the current state-of-the-art in separations technology suitable for continuous downstream bioprocessing, focusing on unit operations that would be most appropriate for the production of secreted proteins like monoclonal antibodies. This includes cell separation/recycle from the perfusion bioreactor, initial product recovery (capture), product purification (polishing), and formulation. Of particular importance are the available options, and alternatives, for continuous chromatographic separations. Although there are still significant challenges in developing integrated continuous bioprocesses, recent technological advances have provided process developers with a number of attractive options for development of truly continuous bioprocessing operations. © 2015 Wiley Periodicals, Inc.
Jimena: efficient computing and system state identification for genetic regulatory networks.
Karl, Stefan; Dandekar, Thomas
2013-10-11
Boolean networks capture switching behavior of many naturally occurring regulatory networks. For semi-quantitative modeling, interpolation between ON and OFF states is necessary. The high degree polynomial interpolation of Boolean genetic regulatory networks (GRNs) in cellular processes such as apoptosis or proliferation allows for the modeling of a wider range of node interactions than continuous activator-inhibitor models, but suffers from scaling problems for networks which contain nodes with more than ~10 inputs. Many GRNs from literature or new gene expression experiments exceed those limitations and a new approach was developed. (i) As a part of our new GRN simulation framework Jimena we introduce and setup Boolean-tree-based data structures; (ii) corresponding algorithms greatly expedite the calculation of the polynomial interpolation in almost all cases, thereby expanding the range of networks which can be simulated by this model in reasonable time. (iii) Stable states for discrete models are efficiently counted and identified using binary decision diagrams. As application example, we show how system states can now be sampled efficiently in small up to large scale hormone disease networks (Arabidopsis thaliana development and immunity, pathogen Pseudomonas syringae and modulation by cytokinins and plant hormones). Jimena simulates currently available GRNs about 10-100 times faster than the previous implementation of the polynomial interpolation model and even greater gains are achieved for large scale-free networks. This speed-up also facilitates a much more thorough sampling of continuous state spaces which may lead to the identification of new stable states. Mutants of large networks can be constructed and analyzed very quickly enabling new insights into network robustness and behavior.
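For readers unfamiliar with interpolating a Boolean rule between ON and OFF, the sketch below evaluates the multilinear (polynomial) extension of a small Boolean update function; it is a generic illustration of the interpolation idea, not Jimena's Boolean-tree implementation, which exists precisely to avoid enumerating all 2^n input corners for high in-degree nodes:

from itertools import product

# Multilinear (polynomial) interpolation of a Boolean update rule between
# OFF (0) and ON (1); a generic sketch, not Jimena's Boolean-tree implementation.
def interpolate(boolean_rule, x):
    total = 0.0
    for corner in product((0, 1), repeat=len(x)):
        weight = 1.0
        for xi, bi in zip(x, corner):
            weight *= xi if bi else 1.0 - xi
        total += weight * boolean_rule(corner)
    return total

# Example node activated by A and inhibited by B: A AND (NOT B).
rule = lambda s: int(s[0] and not s[1])
print(interpolate(rule, (0.8, 0.3)))    # 0.8 * (1 - 0.3) = 0.56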
Liao, Chi-Cheng; Chang, Chi-Ru; Hsu, Meng-Ting; Poo, Wak-Kim
2014-08-01
Sustainable harvest of natural products that meets the needs of local people has been viewed by many as an important means for sustaining conservation projects. Although plants often respond to tissue damage through compensatory growth, it may not secure long-term sustainability of the populations because many plants enhance individual well-being at the expense of propagation. Sustainability may further be threatened by infrequent, large-scale events, especially ill-documented ones. We studied the impacts of sprout harvesting on sprout growth in a dwarf bamboo (Pseudosasa usawai) population that has seemingly recovered from an infrequent, large-scale masting event. Experimental results suggest that although a single sprout harvest did not significantly alter the subsequent abundance and structure of sprouts, culm damage that accompanied sprout harvesting resulted in shorter, thinner, and fewer sprouts. Weaker recovery was found in windward, continually harvested, and more severely damaged sites. These findings suggest that sprout growth of damaged dwarf bamboos is likely non-compensatory, but is instead supported through physiological integration whose strength is determined by the well-being of the supplying ramets. Healthy culms closer to the damage also provided more resources than those farther away. Sustainable harvesting of sprouts could benefit from organized community efforts to limit the magnitude of culm damage, provide adequate spacing between harvested sites, and ensure sufficient time interval between harvests. Vegetation boundaries relatively resilient to infrequent, large-scale events are likely maintained by climatic factors and may be sensitive to climate change. Continual monitoring is, therefore, integral to the sustainability of harvesting projects.
Phase Transitions and Scaling in Systems Far from Equilibrium
NASA Astrophysics Data System (ADS)
Täuber, Uwe C.
2017-03-01
Scaling ideas and renormalization group approaches proved crucial for a deep understanding and classification of critical phenomena in thermal equilibrium. Over the past decades, these powerful conceptual and mathematical tools were extended to continuous phase transitions separating distinct nonequilibrium stationary states in driven classical and quantum systems. In concordance with detailed numerical simulations and laboratory experiments, several prominent dynamical universality classes have emerged that govern large-scale, long-time scaling properties both near and far from thermal equilibrium. These pertain to genuine specific critical points as well as entire parameter space regions for steady states that display generic scale invariance. The exploration of nonstationary relaxation properties and associated physical aging scaling constitutes a complementary potent means to characterize cooperative dynamics in complex out-of-equilibrium systems. This review describes dynamic scaling features through paradigmatic examples that include near-equilibrium critical dynamics, driven lattice gases and growing interfaces, correlation-dominated reaction-diffusion systems, and basic epidemic models.
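The dynamic scaling referred to here is conventionally summarized by the critical slowing-down relations below (a standard textbook form, not specific to any single universality class treated in the review):

\[
  \xi \sim |\tau|^{-\nu}, \qquad t_{c} \sim \xi^{z}, \qquad
  C(r,t) \simeq r^{-(d-2+\eta)}\,\widehat{C}\!\left(r/\xi,\; t/\xi^{z}\right),
\]

where τ measures the distance from the critical point, ν and η are static exponents, and the dynamic exponent z is one of the quantities that distinguishes the dynamical universality classes.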
Cleary, Daniel F R
2003-04-01
The impact of disturbance on species diversity may be related to the spatial scales over which it occurs. Here I assess the impact of logging and ENSO (El Niño Southern Oscillation) -induced burning and forest isolation on the species richness (477 species out of more than 28,000 individuals) and community composition of butterflies and butterfly guilds using small (0.9 ha) plots nested within large (450 ha) landscapes. The landscapes were located in three habitat classes: (1) continuous, unburned forest; (2) unburned isolates surrounded by burned forest; and (3) burned forest. Plots with different logging histories were sampled within the two unburned habitat classes, allowing for independent assessment of the two disturbance factors (logging and burning). Disturbance within habitat classes (logging) had a very different impact on butterfly diversity than disturbance among habitat classes (due to ENSO-induced burning and isolation). Logging increased species richness, increased evenness, and lowered dominance. Among guilds based on larval food plants, the species richness of tree and herb specialists was higher in logged areas but their abundance was lower. Both generalist species richness and abundance was higher in logged areas. Among habitat classes, species richness was lower in burned forest and isolates than continuous forest but there was no overall difference in evenness or dominance. Among guilds, generalist species richness was significantly lower in burned forest and isolates than continuous forest. Generalist abundance was also very low in the isolates. There was no difference among disturbance classes in herb specialist species richness but abundance was significantly higher in the isolates and burned forest than in continuous forest. Tree specialist species richness was lower in burned forest than continuous forest but did not differ between continuous forest and isolates. The scale of assessment proved important in estimating the impact of disturbance on species richness. Within disturbance classes, the difference in species richness between primary and logged forest was more pronounced at the smaller spatial scale. Among disturbance classes, the difference in species richness between continuous forest and isolates or burned forest was more pronounced at the larger spatial scale. The lower levels of species richness in ENSO-affected areas and at the larger (landscape) spatial scale indicate that future severe ENSO events may prove one of the most serious threats to extant biodiversity.
He, Qiaoning; Yang, Haijian; Hu, Chunxiang
2016-10-01
Cultivation modes of autotrophic microalgae for biodiesel production utilizing open raceway ponds (ORPs) were analyzed in this study. Five previously screened promising microalgae were re-tested for lipid productivity and biodiesel quality in an outdoor 1000 L ORP. Then, Chlorella sp. L1 and Monoraphidium dybowskii Y2 were selected due to their stronger environmental adaptability, higher lipid productivity and better biodiesel properties. Further scale-up cultivation of the two species with batch and semi-continuous culture was conducted. In a 40,000 L ORP, higher lipid productivity (5.15 versus 4.06 g m⁻² d⁻¹ for Chlorella sp. L1, 5.35 versus 3.00 g m⁻² d⁻¹ for M. dybowskii Y2) was achieved in semi-continuous mode. Moreover, the financial costs of $14.18 gal⁻¹ and $13.31 gal⁻¹ for crude biodiesel from the two microalgae in semi-continuous mode were more economically feasible for commercial production at large scale outdoors. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witzke, B.J.
1993-03-01
Four large-scale (2-8 Ma) T-R sedimentary sequences of M. Ord. age (late Chaz.-Sherm.) were delimited by Witzke & Kolata (1980) in the Iowa area, each bounded by local to regional unconformity/disconformity surfaces. These encompass both siliciclastic and carbonate intervals, in ascending order: (1) St. Peter-Glenwood fms., (2) Platteville Fm., (3) Decorah Fm., (4) Dunleith/upper Decorah fms. Finer-scale resolution of depth-related depositional features has led to regional recognition of smaller-scale shallowing-upward cyclicity contained within each large-scale sequence. Such smaller-scale cyclicity encompasses stratigraphic intervals of 1-10 m thickness, with estimated durations of 0.5-1.5 Ma. The St. Peter Sandst. has long been regarded as a classic transgressive sheet sand. However, four discrete shallowing-upward packages characterize the St. Peter-Glenwood interval regionally (IA, MN, NB, KS), including western facies displaying coarsening-upward sandstone packages with condensed conodont-rich brown shale and phosphatic sediments in their lower part (local oolitic ironstone), commonly above pyritic hardgrounds. Regional continuity of small-scale cyclic patterns in M. Ord. strata of the Iowa area may suggest eustatic controls; this can be tested through inter-regional comparisons.
Jia, Zhiyan; Hu, Wentao; Xiang, Jianyong; Wen, Fusheng; Nie, Anmin; Mu, Congpu; Zhao, Zhisheng; Xu, Bo; Tian, Yongjun; Liu, Zhongyuan
2018-06-22
Centimeter-scale continuous monolayer WS2 film with large tensile strain has been successfully grown on oxidized silicon substrate by chemical vapor deposition, in which monolayer grains can be more than 200 μm in size. Monolayer WS2 grains are observed to merge together via not only traditional grain boundaries but also non-traditional ones, which are named grain walls (GWs) due to their nanometer-scale widths. The GWs are revealed to consist of two or three layers. Though not a monolayer, the GWs exhibit significantly enhanced fluorescence and photoluminescence. This enhancement may be attributed to abundant structural defects such as stacking faults and partial dislocations in the GWs, which are clearly observable in atomically resolved high resolution transmission electron microscopy and scanning transmission electron microscopy images. Moreover, a GW-based phototransistor is found to deliver higher photocurrent than one based on the monolayer film. These features of GWs provide a clue to microstructure engineering of monolayer WS2 for specific applications in (opto)electronics.
NASA Astrophysics Data System (ADS)
Johnson, Ian D.; Blagovidova, Ekaterina; Dingwall, Paul A.; Brett, Dan J. L.; Shearing, Paul R.; Darr, Jawwad A.
2016-09-01
High-power, phase-pure Nb-doped LiFePO4 (LFP) nanoparticles are synthesised using a pilot-scale continuous hydrothermal flow synthesis process (production rate of 6 kg per day) in the range 0.01-2.00 at% Nb with respect to total transition metal content. EDS analysis suggests that Nb is homogeneously distributed throughout the structure. The addition of fructose as a reagent in the hydrothermal flow process, followed by a post-synthesis heat treatment, affords a continuous graphitic carbon coating on the particle surfaces. Electrochemical testing reveals that cycling performance improves with increasing dopant concentration, up to a maximum of 1.0 at% Nb, at which point a specific capacity of 110 mAh g⁻¹ is obtained at 10 C (6 min for the charge or discharge). This is an excellent result for a high-power LFP-based cathode material, particularly when considering the synthesis was performed on a large pilot-scale apparatus.
Nassar, H; Lebée, A; Monasse, L
2017-01-01
Origami tessellations are particular textured morphing shell structures. Their unique folding and unfolding mechanisms on a local scale aggregate and bring on large changes in shape, curvature and elongation on a global scale. The existence of these global deformation modes allows for origami tessellations to fit non-trivial surfaces thus inspiring applications across a wide range of domains including structural engineering, architectural design and aerospace engineering. The present paper suggests a homogenization-type two-scale asymptotic method which, combined with standard tools from differential geometry of surfaces, yields a macroscopic continuous characterization of the global deformation modes of origami tessellations and other similar periodic pin-jointed trusses. The outcome of the method is a set of nonlinear differential equations governing the parametrization, metric and curvature of surfaces that the initially discrete structure can fit. The theory is presented through a case study of a fairly generic example: the eggbox pattern. The proposed continuous model predicts correctly the existence of various fittings that are subsequently constructed and illustrated.
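The homogenization-type two-scale asymptotic method mentioned above conventionally starts from an ansatz that separates a slow variable from a fast, cell-periodic one; the generic form below is a sketch of that standard starting point, not the paper's specific equations for the eggbox pattern:

\[
  \mathbf{u}^{\epsilon}(x) = \mathbf{u}_{0}\!\left(x,\tfrac{x}{\epsilon}\right)
  + \epsilon\,\mathbf{u}_{1}\!\left(x,\tfrac{x}{\epsilon}\right)
  + \epsilon^{2}\,\mathbf{u}_{2}\!\left(x,\tfrac{x}{\epsilon}\right) + \cdots,
\]

where ε is the ratio of the unit-cell size to the structure size and each u_i(x, y) is periodic in the fast variable y = x/ε. Collecting powers of ε yields cell problems on the unit cell and effective macroscopic equations for the slow variable, which in the origami setting govern the metric and curvature of the fitted surface.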
Ranganathan, Panneerselvam; Savithri, Sivaraman
2018-06-01
Computational Fluid Dynamics (CFD) technique is used in this work to simulate the hydrothermal liquefaction of Nannochloropsis sp. microalgae in a lab-scale continuous plug-flow reactor to understand the fluid dynamics, heat transfer, and reaction kinetics in a HTL reactor under hydrothermal condition. The temperature profile in the reactor and the yield of HTL products from the present simulation are obtained and they are validated with the experimental data available in the literature. Furthermore, the parametric study is carried out to study the effect of slurry flow rate, reactor temperature, and external heat transfer coefficient on the yield of products. Though the model predictions are satisfactory in comparison with the experimental results, it still needs to be improved for better prediction of the product yields. This improved model will be considered as a baseline for design and scale-up of large-scale HTL reactor. Copyright © 2018 Elsevier Ltd. All rights reserved.
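As a point of orientation for the plug-flow configuration described above, the following is a deliberately reduced sketch of a 1-D plug-flow reactor in which the slurry heats toward a wall temperature and a single lumped first-order Arrhenius reaction converts biomass to a "crude" product; it is not the authors' CFD model, and every parameter value is an illustrative assumption:

import math

# Reduced 1-D plug-flow sketch (not the authors' CFD model): the slurry heats
# toward the wall temperature while biomass converts to a lumped "crude"
# product with first-order Arrhenius kinetics. All parameters are illustrative.
length, n_cells, velocity = 2.0, 400, 0.01      # m, -, m/s
T_wall, T = 620.0, 300.0                        # wall and inlet temperature, K
heat_rate = 0.05                                # lumped heating rate constant, 1/s
k0, Ea, R = 1.0e8, 8.0e4, 8.314                 # Arrhenius pre-factor (1/s), J/mol, J/mol/K

biomass, crude = 1.0, 0.0                       # mass fractions
dt = (length / n_cells) / velocity              # residence time per axial slice, s
for _ in range(n_cells):
    T = T_wall + (T - T_wall) * math.exp(-heat_rate * dt)   # heating toward the wall
    k = k0 * math.exp(-Ea / (R * T))                        # rate constant, 1/s
    converted = biomass * (1.0 - math.exp(-k * dt))         # first-order conversion
    biomass -= converted
    crude += converted
print(f"outlet: T = {T:.0f} K, unreacted biomass = {biomass:.2f}, crude yield = {crude:.2f}")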
NASA Technical Reports Server (NTRS)
Jeong, Su-Jong; Schimel, David; Frankenberg, Christian; Drewry, Darren T.; Fisher, Joshua B.; Verma, Manish; Berry, Joseph A.; Lee, Jung-Eun; Joiner, Joanna
2016-01-01
This study evaluates the large-scale seasonal phenology and physiology of vegetation over northern high latitude forests (40°-55°N) during spring and fall by using remote sensing of solar-induced chlorophyll fluorescence (SIF), normalized difference vegetation index (NDVI) and observation-based estimates of gross primary productivity (GPP) from 2009 to 2011. Based on phenology estimated from GPP, the growing season determined by the SIF time series is shorter than the growing season determined solely using NDVI. This is mainly due to the extended period of high NDVI values, as compared to SIF, by about 46 days (±11 days), indicating a large-scale seasonal decoupling of physiological activity and changes in greenness in the fall. In addition to phenological timing, mean seasonal NDVI and SIF have different responses to temperature changes throughout the growing season. We observed that both NDVI and SIF linearly increased with temperature increases throughout the spring. However, in the fall, although NDVI linearly responded to temperature increases, SIF and GPP did not linearly increase with temperature increases, implying a seasonal hysteresis of SIF and GPP in response to temperature changes across boreal ecosystems throughout their growing season. Seasonal hysteresis of vegetation at large scales is consistent with the known phenomenon that light limits boreal forest ecosystem productivity in the fall. Our results suggest that continuing measurements from satellite remote sensing of both SIF and NDVI can help to understand the differences between, and information carried by, seasonal variations in vegetation structure and greenness and in physiology at large scales across the critical boreal regions.
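To illustrate how a growing-season length can be read off a seasonal time series (and why a broader NDVI curve yields a longer season than a narrower SIF curve), the sketch below applies a simple half-amplitude threshold to two synthetic curves; neither the data nor the threshold rule is taken from the study:

import numpy as np

# Half-amplitude threshold estimate of growing-season start/end and length,
# applied to two synthetic seasonal curves standing in for NDVI and SIF
# (illustrative only; not the study's data or phenology algorithm).
doy = np.arange(1, 366)

def season(signal):
    threshold = signal.min() + 0.5 * (signal.max() - signal.min())
    active = doy[signal > threshold]
    return active.min(), active.max(), active.max() - active.min()

ndvi = 0.30 + 0.40 * np.exp(-((doy - 200) / 80.0) ** 2)   # broad greenness curve
sif = 0.10 + 0.60 * np.exp(-((doy - 190) / 55.0) ** 2)    # narrower physiology curve
for name, s in (("NDVI", ndvi), ("SIF", sif)):
    start, end, length = season(s)
    print(f"{name}: start DOY {start}, end DOY {end}, length {length} days")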
Effects of a Data-Driven District-Level Reform Model
ERIC Educational Resources Information Center
Slavin, Robert E.; Holmes, GwenCarol; Madden, Nancy A.; Chamberlain, Anne; Cheung, Alan
2010-01-01
Despite a quarter-century of reform, US schools serving students in poverty continue to lag far behind other schools. There are proven programs, but these are not widely used. This large-scale experiment evaluated a district-level reform model created by the Center for Data-Driven Reform in Education (CDDRE). The CDDRE model provided consultation…
Food or Fuel: New Competition for the World's Cropland. Worldwatch Paper 35.
ERIC Educational Resources Information Center
Brown, Lester R.
The paper explores how continuously expanding world demand for food, feed, and fuel is generating pressure to restructure agricultural land use. In addition, problems related to transfer of agricultural crop land to energy crops are discussed. The technology of energy crops has developed to the point where large-scale commercial production of…
Technical Assessment: Integrated Photonics
2015-10-01
in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this...communication networks. This has enabled services like the internet, high performance computing, and power-efficient large-scale data centers. The...signal processing, quantum information science, and optics for free space applications. However, major obstacles challenge the implementation of
Be careful what you wish for: The legacy of Smokey Bear
Geoffrey H. Donovan; Thomas C. Brown
2007-01-01
A century of wildfire suppression in the United States has led to increased fuel loading and large-scale ecological change across some of the nation's forests. Land management agencies have responded by increasing the use of prescribed fire and thinning. However, given the continued emphasis on fire suppression, current levels of funding for such fuel management...
Education Researchers as Bricoleurs in the Creation of Sustainable Learning Environments
ERIC Educational Resources Information Center
Mahlomaholo, Sechaba
2014-01-01
Higher education has, to date, been unable to provide effective and lasting solutions to challenges of education, because large sections thereof continue to search for knowledge for its own sake. At best, they conduct responsive research, but on a small scale they reduce the complexity that is education to a neat unilinear process which can be…
ERIC Educational Resources Information Center
McGann, Sean T.; Frost, Raymond D.; Matta, Vic; Huang, Wayne
2007-01-01
Information Systems (IS) departments are facing challenging times as enrollments decline and the field evolves, thus necessitating large-scale curriculum changes. Our experience shows that many IS departments are in such a predicament as they have not evolved content quickly enough to keep it relevant, they do a poor job coordinating curriculum…
An imputed forest composition map for New England screened by species range boundaries
Matthew J. Duveneck; Jonathan R. Thompson; B. Tyler Wilson
2015-01-01
Initializing forest landscape models (FLMs) to simulate changes in tree species composition requires accurate fine-scale forest attribute information mapped continuously over large areas. Nearest-neighbor imputation maps, maps developed from multivariate imputation of field plots, have high potential for use as the initial condition within FLMs, but the tendency for...
Historical open forest ecosystems in the Missouri Ozarks: reconstruction and restoration targets
Brice B. Hanberry; D. Todd Jones-Farrand; John M. Kabrick
2014-01-01
Current forests no longer resemble historical open forest ecosystems in the eastern United States. In the absence of representative forest ecosystems under a continuous surface fire regime at a large scale, reconstruction of historical landscapes can provide a reference for restoration efforts. For initial expert-assigned vegetation phases ranging from prairie to...
ERIC Educational Resources Information Center
Wolhuter, C. C.
2011-01-01
While South African higher education has, in many respects, made remarkable achievements since 1994, a series of serious problems continues to beset the system, including low internal efficiency (a high attrition rate) and low external efficiency (poor alignment with the employment market). There are also the problems of large-scale youth unemployment and the policy…
Restoring bottomland hardwood forests: A comparison of four techniques
John A. Stanturf; Emile S. Cardiner; James P. Shepard; Callie J. Schweitzer; C. Jeffrey Portwood; Lamar Dorris
2004-01-01
Large-scale afforestation of former agricultural lands in the Lower Mississippi Alluvial Valley (LMAV) is one of the largest forest restoration efforts in the world and continues to attract interest from landowners, policy makers, scientists, and managers. The decision by many landowners to afforest these lands has been aided in part by the increased availability of...
Incentives and Test-Based Accountability in Education
ERIC Educational Resources Information Center
Hout, Michael, Ed.; Elliott, Stuart W., Ed.
2011-01-01
In recent years there have been increasing efforts to use accountability systems based on large-scale tests of students as a mechanism for improving student achievement. The federal No Child Left Behind Act (NCLB) is a prominent example of such an effort, but it is only the continuation of a steady trend toward greater test-based accountability in…
Scott, Jan; Geoffroy, Pierre Alexis; Sportiche, Sarah; Brichant-Petit-Jean, Clara; Gard, Sebastien; Kahn, Jean-Pierre; Azorin, Jean-Michel; Henry, Chantal; Etain, Bruno; Bellivier, Frank
2017-01-15
It is increasingly recognised that reliable and valid assessments of lithium response are needed in order to target more efficiently the use of this medication in bipolar disorders (BD) and to identify genotypes, endophenotypes and biomarkers of response. In a large, multi-centre, clinically representative sample of 300 cases of BD, we assess external clinical validators of lithium response phenotypes as defined using three different recommended approaches to scoring the Alda lithium response scale. The scale comprises an A scale (rating lithium response) and a B scale (assessing confounders). Analysis of the two continuous scoring methods (A scale score minus the B scale score, or A scale score in those with a low B scale score) demonstrated that 21-23% of the explained variance in lithium response was accounted for by a positive family history of BD I and the early introduction of lithium. Categorical definitions of response suggest poor response is also associated with a positive history of alcohol and/or substance use comorbidities. High B scale scores were significantly associated with longer duration of illness prior to receiving lithium and the presence of psychotic symptoms. The original sample was not recruited specifically to study lithium response. The Alda scale is designed to assess response retrospectively. This cross-validation study identifies different clinical phenotypes of lithium response when defined by continuous or categorical measures. Future clinical, genetic and biomarker studies should report both the findings and the method employed to assess lithium response according to the Alda scale. Copyright © 2016 Elsevier B.V. All rights reserved.
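The three ways of scoring the Alda scale that the abstract compares can be written out explicitly, as in the sketch below; the "low B" and responder cut-offs used here are illustrative assumptions, since the abstract does not state the thresholds:

# The three scoring approaches for the Alda lithium-response scale described
# above; the A scale rates response (0-10) and the B scale sums confounder
# items. The "low B" and responder cut-offs are illustrative assumptions.
def alda_scores(a_total, b_total, low_b=3, responder_cutoff=7):
    a_minus_b = a_total - b_total                        # continuous method 1
    a_if_low_b = a_total if b_total <= low_b else None   # continuous method 2 (low-B cases only)
    responder = a_minus_b >= responder_cutoff            # categorical definition
    return a_minus_b, a_if_low_b, responder

print(alda_scores(a_total=9, b_total=2))   # (7, 9, True)
print(alda_scores(a_total=8, b_total=6))   # (2, None, False)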
Fallon, Barbara; Trocmé, Nico; MacLaurin, Bruce
2011-04-01
To examine evidence available in large-scale North American datasets on child abuse and neglect that can assist in understanding the complexities of child protection case classifications. A review of child abuse and neglect data from large North American epidemiological studies including the Canadian Incidence Study of Reported Child Abuse and Neglect (CIS), the National Child Abuse and Neglect Data System (NCANDS), and the National Incidence Studies of Reported Child Abuse and Neglect (NIS). The authors of this paper argue that recent evidence from large North American epidemiological studies examining the incidence of child abuse and neglect demonstrates that children and families identified as being at risk of maltreatment present with as many household and caregiver concerns as investigations that are substantiated. In order to continue to develop appropriate services and policies for vulnerable children, the authors urge continued definitional clarity in child maltreatment research that considers the exemplars or indicators of categories, in tandem with parental and child characteristics, which can provide one source of evidence for meaningful child protection case classifications. Continued monitoring, refined by the dilemmas faced in practice, is critical for a continued public health investment in children's well-being, predicated upon upholding children's rights. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brander, K. M.; Dickson, R. R.; Edwards, M.
2003-08-01
The Continuous Plankton Recorder (CPR) survey was conceived from the outset as a programme of applied research designed to assist the fishing industry. Its survival and continuing vigour after 70 years is a testament to its utility, which has been achieved in spite of great changes in our understanding of the marine environment and in our concerns over how to manage it. The CPR has been superseded in several respects by other technologies, such as acoustics and remote sensing, but it continues to provide unrivalled seasonal and geographic information about a wide range of zooplankton and phytoplankton taxa. The value of this coverage increases with time and provides the basis for placing recent observations into the context of long-term, large-scale variability and thus suggesting what the causes are likely to be. Information from the CPR is used extensively in judging environmental impacts and producing quality status reports (QSR); it has shown the distributions of fish stocks, which had not previously been exploited; it has pointed to the extent of ungrazed phytoplankton production in the North Atlantic, which was a vital element in establishing the importance of carbon sequestration by phytoplankton. The CPR continues to be the principal source of large-scale, long-term information about the plankton ecosystem of the North Atlantic. It has recently provided extensive information about the biodiversity of the plankton and about the distribution of introduced species. It serves as a valuable example for the design of future monitoring of the marine environment and it has been essential to the design and implementation of most North Atlantic plankton research.
Biogeography and change among regional coral communities across the Western Indian Ocean.
McClanahan, Timothy R; Ateweberhan, Mebrahtu; Darling, Emily S; Graham, Nicholas A J; Muthiga, Nyawira A
2014-01-01
Coral reefs are biodiverse ecosystems structured by abiotic and biotic factors operating across many spatial scales. Regional-scale interactions between climate change, biogeography and fisheries management remain poorly understood. Here, we evaluated large-scale patterns of coral communities in the western Indian Ocean after a major coral bleaching event in 1998. We surveyed 291 coral reef sites in 11 countries and over 30° of latitude between 2004 and 2011 to evaluate variations in coral communities post 1998 across gradients in latitude, mainland-island geography and fisheries management. We used linear mixed-effect hierarchical models to assess total coral cover, the abundance of four major coral families (acroporids, faviids, pocilloporids and poritiids), coral genus richness and diversity, and the bleaching susceptibility of the coral communities. We found strong latitudinal and geographic gradients in coral community structure and composition that supports the presence of a high coral cover and diversity area that harbours temperature-sensitive taxa in the northern Mozambique Channel between Tanzania, northern Mozambique and northern Madagascar. Coral communities in the more northern latitudes of Kenya, Seychelles and the Maldives were generally composed of fewer bleaching-tolerant coral taxa and with reduced richness and diversity. There was also evidence for continued declines in the abundance of temperature-sensitive taxa and community change after 2004. While there are limitations of our regional dataset in terms of spatial and temporal replication, these patterns suggest that large-scale interactions between biogeographic factors and strong temperature anomalies influence coral communities while smaller-scale factors, such as the effect of fisheries closures, were weak. The northern Mozambique Channel, while not immune to temperature disturbances, shows continued signs of resistance to climate disturbances and remains a priority for future regional conservation and management actions.
Thompson, A J; Marks, L H; Goudie, M J; Rojas-Pena, A; Handa, H; Potkay, J A
2017-03-01
Artificial lungs have been used in the clinic for multiple decades to supplement patient pulmonary function. Recently, small-scale microfluidic artificial lungs (μAL) have been demonstrated with large surface area to blood volume ratios, biomimetic blood flow paths, and pressure drops compatible with pumpless operation. Initial small-scale microfluidic devices with blood flow rates in the μl/min to ml/min range have exhibited excellent gas transfer efficiencies; however, current manufacturing techniques may not be suitable for scaling up to human applications. Here, we present a new manufacturing technology for a microfluidic artificial lung in which the structure is assembled via a continuous "rolling" and bonding procedure from a single, patterned layer of polydimethylsiloxane (PDMS). This method is demonstrated in a small-scale four-layer device, but is expected to easily scale to larger area devices. The presented devices have a biomimetic branching blood flow network, 10 μm tall artificial capillaries, and a 66 μm thick gas transfer membrane. Gas transfer efficiency in blood was evaluated over a range of blood flow rates (0.1-1.25 ml/min) for two different sweep gases (pure O2, atmospheric air). The achieved gas transfer data closely follow predicted theoretical values for oxygenation and CO2 removal, while pressure drop is marginally higher than predicted. This work is the first step in developing a scalable method for creating large area microfluidic artificial lungs. Although designed for microfluidic artificial lungs, the presented technique is expected to result in the first manufacturing method capable of simply and easily creating large area microfluidic devices from PDMS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crater, Jason; Galleher, Connor; Lievense, Jeff
NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.
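The "compartment model with an integrated black-box kinetic model" recommendation can be pictured with a deliberately small sketch: two well-mixed compartments of a bubble column with different oxygen transfer coefficients, liquid exchange between them, and Monod-type oxygen uptake by the production microbe. All parameter values and the two-compartment split are illustrative assumptions, not NREL's or Genomatica's model:

# Two-compartment bubble-column sketch with a black-box Monod-type oxygen
# uptake by the production microbe; all values are illustrative assumptions.
kla = [120.0, 40.0]            # O2 transfer coefficients, 1/h (bottom, top)
o2_sat, q_exchange = 7.0, 5.0  # O2 solubility (mg/L), liquid exchange rate (1/h)
qo2_max, k_o2 = 100.0, 0.2     # max specific uptake (mg O2/gDW/h), Monod constant (mg/L)
biomass = 10.0                 # cell density, gDW/L

o2 = [o2_sat, o2_sat]          # dissolved O2 in each compartment, mg/L
dt, t_end = 1e-4, 0.5          # explicit Euler step and simulated time, h
for _ in range(int(t_end / dt)):
    uptake = [qo2_max * c / (k_o2 + c) * biomass for c in o2]
    transfer = [kla[i] * (o2_sat - o2[i]) for i in range(2)]
    mixing = q_exchange * (o2[1] - o2[0])
    o2[0] += dt * (transfer[0] - uptake[0] + mixing)
    o2[1] += dt * (transfer[1] - uptake[1] - mixing)
print(f"quasi-steady dissolved O2: bottom ~ {o2[0]:.2f} mg/L, top ~ {o2[1]:.2f} mg/L")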
Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus
NASA Technical Reports Server (NTRS)
Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle
1999-01-01
This paper describes an experiment in which a large-scale scientific application developed for tightly coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.
Los Alamos Explosives Performance Key to Stockpile Stewardship
Dattelbaum, Dana
2018-02-14
As the U.S. Nuclear Deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives that are used in most weapons. At Los Alamos National Laboratory explosives research includes a wide variety of both large- and small-scale experiments that include small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and at the Nevada Nuclear Security Site, underground sub-critical experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
NASA Astrophysics Data System (ADS)
Baker, E. T.; Feely, R. A.; Mottl, M. J.; Sansone, F. T.; Wheat, C. G.; Resing, J. A.; Lupton, J. E.
1994-11-01
The interactions between hydrothermal circulation and large-scale geological and geophysical characteristics of the mid-ocean ridge cannot be ascertained without large-scale views of the pattern of hydrothermal venting. Such multi-ridge-segment surveys of venting are accomplished most efficiently by mapping the distribution and intensity of hydrothermal plumes. In November 1991, we mapped hydrothermal temperature (Delta(theta)) and light attenuation (Delta(c)) anomalies above the East Pacific Rise (EPR) continuously from 8 deg 40 min to 11 deg 50 min N, a fast-spreading ridge crest portion bisected by the Clipperton Transform Fault. Plume distributions show a precise correlation with the distribution of active vents where video coverage of the axial caldera is exhaustive. Elsewhere in the study area, the sketchy knowledge of vent locations gleaned from scattered camera tows predicts the large-scale hydrothermal pattern revealed by our plume studies only poorly. Plumes were most intense between 9 deg 42 min and 9 deg 54 min N, directly over a March/April 1991 seafloor eruption. These plumes had exceptionally high Delta(c)/Delta(theta) ratios compared to the rest of the study area; we suggest that the phase-separated, gas-rich vent fluids discharging here fertilize an abundant population of bacteria. Hydrothermal plume distributions define three categories: intense and continuous, weak and discontinuous, and negligible. The location of each category is virtually congruent with areas that are, respectively, magmatically robust, magmatically weak and magmatically starved, as inferred from previous measurements of axial bathymetric undulations, cross-axis inflation and magma chamber depth and continuity. This congruency implies a fine-scale spatial and temporal connection between magmatic fluctuations and hydrothermal venting. We thus speculate that, at least along this fast-spreading section of the EPR, cyclic replenishment, eruption and freezing of the thin axial melt lens exerts greater control over hydrothermal venting than the more enduring zones of crystal mush and hot rock. We found intense and continuous plumes along 33% of the surveyed ridge crest, an observation implying that any point on the ridge is, on average, hydrothermally active one-third of the time. Combining this result with the 20% plume coverage found along the medium-rate Juan de Fuca Ridge suggests that ridges with superfast (approximately 150 mm/yr) spreading rates should support vigorous venting along approximately 50% of their length, if spreading rate and along-axis plume coverage are linearly related.
A Rotating Bioreactor for Scalable Culture and Differentiation of Respiratory Epithelium
Raredon, Micha Sam Brickman; Ghaedi, Mahboobe; Calle, Elizabeth A.; Niklason, Laura E.
2015-01-01
Respiratory epithelium is difficult to grow in vitro, as it requires a well-maintained polarizing air–liquid interface (ALI) to maintain differentiation. Traditional methods rely on permeable membrane culture inserts, which are difficult to work with and are ill-suited for the production of large numbers of cells, such as the quantities required for cell-based clinical therapies. Herein, we investigate an alternative form of culture in which the cells are placed on a porous substrate that is continuously rolled, such that the monolayer of cells is alternately submerged in media or apically exposed to air. Our prototype bioreactor is reliable for up to 21 days of continuous culture and is designed for scale-up for large-scale cell culture with continuous medium and gas exchange. Normal human bronchial epithelial (NHBE) cells were cultured on an absorbent substrate in the reactor for periods of 7, 14, and 21 days and were compared to static controls that were submerged in media. Quantification by immunohistochemistry and quantitative PCR of markers specific to differentiated respiratory epithelium indicated increased cilia, mucous production, and tight junction formation in the rolled cultures, compared to static. Together with scanning electron microscopy and paraffin histology, the data indicate that the intermittent ALI provided by the rolling bioreactor promotes a polarized epithelial phenotype over a period of 21 days. PMID:26858899
NASA Astrophysics Data System (ADS)
Hood, Alan W.; Hughes, David W.
2011-08-01
This review provides an introduction to the generation and evolution of the Sun's magnetic field, summarising both observational evidence and theoretical models. The eleven year solar cycle, which is well known from a variety of observed quantities, strongly supports the idea of a large-scale solar dynamo. Current theoretical ideas on the location and mechanism of this dynamo are presented. The solar cycle influences the behaviour of the global coronal magnetic field and it is the eruptions of this field that can impact on the Earth's environment. These global coronal variations can be modelled to a surprising degree of accuracy. Recent high resolution observations of the Sun's magnetic field in quiet regions, away from sunspots, show that there is a continual evolution of a small-scale magnetic field, presumably produced by small-scale dynamo action in the solar interior. Sunspots, a natural consequence of the large-scale dynamo, emerge, evolve and disperse over a period of several days. Numerical simulations can help to determine the physical processes governing the emergence of sunspots. We discuss the interaction of these emerging fields with the pre-existing coronal field, resulting in a variety of dynamic phenomena.
The Universe at Moderate Redshift
NASA Technical Reports Server (NTRS)
Cen, Renyue; Ostriker, Jeremiah P.
1997-01-01
The report covers the work done in the past year across a wide range of fields, including properties of clusters of galaxies; topological properties of galaxy distributions in terms of galaxy types; patterns of the gravitational nonlinear clustering process; development of a ray tracing algorithm to study the gravitational lensing phenomenon by galaxies, clusters and large-scale structure, one application of which is the effect of weak gravitational lensing by large-scale structure on the determination of q(0); the origin of magnetic fields on the galactic and cluster scales; the topological properties of Ly(alpha) clouds and the Ly(alpha) optical depth distribution; clustering properties of Ly(alpha) clouds; and a determination (lower bound) of Omega(b) based on the observed Ly(alpha) forest flux distribution. In the coming year, we plan to continue the investigation of Ly(alpha) clouds using larger dynamic range (about a factor of two) and better simulations (with more input physics included) than what we have now. We will study the properties of galaxies on 1 - 100h(sup -1) Mpc scales using our state-of-the-art large-scale galaxy formation simulations of various cosmological models, which will have a resolution about a factor of 5 (in each dimension) better than our current best simulations. We also plan to study the properties of X-ray clusters using unprecedented, very high dynamic range (20,000) simulations, which will enable us to resolve the cores of clusters while keeping the simulation volume sufficiently large to ensure a statistically fair sample of the objects of interest. The details of the past year's work are described below.
Hu, Ya; Peng, Kui-Qing; Liu, Lin; Qiao, Zhen; Huang, Xing; Wu, Xiao-Ling; Meng, Xiang-Min; Lee, Shuit-Tong
2014-01-13
Silicon nanowires (SiNWs) are attracting growing interest due to their unique properties and promising applications in photovoltaic devices, thermoelectric devices, lithium-ion batteries, and biotechnology. Low-cost mass production of SiNWs is essential for SiNWs-based nanotechnology commercialization. However, economic, controlled large-scale production of SiNWs remains challenging and rarely attainable. Here, we demonstrate a facile strategy capable of low-cost, continuous-flow mass production of SiNWs on an industrial scale. The strategy relies on substrate-enhanced metal-catalyzed electroless etching (MCEE) of silicon using dissolved oxygen in aqueous hydrofluoric acid (HF) solution as an oxidant. The distinct advantages of this novel MCEE approach, such as simplicity, scalability and flexibility, make it an attractive alternative to conventional MCEE methods.
Investigation of aquifer-estuary interaction using wavelet analysis of fiber-optic temperature data
Henderson, R.D.; Day-Lewis, Frederick D.; Harvey, Charles F.
2009-01-01
Fiber-optic distributed temperature sensing (FODTS) provides sub-minute temporal and meter-scale spatial resolution over kilometer-long cables. Compared to conventional thermistor or thermocouple-based technologies, which measure temperature at discrete (and commonly sparse) locations, FODTS offers nearly continuous spatial coverage, thus providing hydrologic information at spatiotemporal scales previously impossible. Large and information-rich FODTS datasets, however, pose challenges for data exploration and analysis. To date, FODTS analyses have focused on time-series variance as the means to discriminate between hydrologic phenomena. Here, we demonstrate the continuous wavelet transform (CWT) and cross-wavelet transform (XWT) to analyze FODTS in the context of related hydrologic time series. We apply the CWT and XWT to data from Waquoit Bay, Massachusetts to identify the location and timing of tidal pumping of submarine groundwater.
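To illustrate the kind of time-frequency analysis described here, the sketch below applies a continuous wavelet transform to a synthetic fiber-optic temperature series and composites power in the semidiurnal tidal band. The sampling interval, wavelet choice ('morl'), and the synthetic signal are all hypothetical; this is not the authors' workflow.

```python
import numpy as np
import pywt

# Hypothetical 10-minute FODTS samples over two weeks, with a semidiurnal (12.42 h) signal.
dt_hours = 1.0 / 6.0
t = np.arange(0, 14 * 24, dt_hours)                      # time in hours
temp = 18 + 0.8 * np.sin(2 * np.pi * t / 12.42) + 0.1 * np.random.default_rng(0).normal(size=t.size)

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(temp, scales, 'morl', sampling_period=dt_hours)
power = np.abs(coeffs) ** 2                               # wavelet power, shape (n_scales, n_times)

periods = 1.0 / freqs                                     # hours per cycle at each scale
tidal = (periods > 11) & (periods < 14)                   # semidiurnal band around 12.42 h
print("mean wavelet power in the semidiurnal band:", power[tidal].mean())
```

A cross-wavelet transform against a tide-gauge series would follow the same pattern, multiplying one transform by the complex conjugate of the other.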
Formulating a subgrid-scale breakup model for microbubble generation from interfacial collisions
NASA Astrophysics Data System (ADS)
Chan, Wai Hong Ronald; Mirjalili, Shahab; Urzay, Javier; Mani, Ali; Moin, Parviz
2017-11-01
Multiphase flows often involve impact events that engender important effects like the generation of a myriad of tiny bubbles that are subsequently transported in large liquid bodies. These impact events are created by large-scale phenomena like breaking waves on ocean surfaces, and often involve the relative approach of liquid surfaces. This relative motion generates continuously shrinking length scales as the entrapped gas layer thins and eventually breaks up into microbubbles. The treatment of this disparity in length scales is computationally challenging. In this presentation, a framework is presented that addresses a subgrid-scale (SGS) model aimed at capturing the process of microbubble generation. This work sets up the components in an overarching volume-of-fluid (VoF) toolset and investigates the analytical foundations of an SGS model for describing the breakup of a thin air film trapped between two approaching water bodies in a physical regime corresponding to Mesler entrainment. Constituents of the SGS model, such as the identification of impact events and the accurate computation of the local characteristic curvature in a VoF-based architecture, and the treatment of the air layer breakup, are discussed and illustrated in simplified scenarios. Supported by Office of Naval Research (ONR)/A*STAR (Singapore).
Large-scale water projects in the developing world: Revisiting the past and looking to the future
NASA Astrophysics Data System (ADS)
Sivakumar, Bellie; Chen, Ji
2014-05-01
During the past half-century or so, the developing world has been witnessing a significant increase in freshwater demands due to a combination of factors, including population growth, increased food demand, improved living standards, and water quality degradation. Since there exists significant variability in rainfall and river flow in both space and time, large-scale storage and distribution of water has become a key means to meet these increasing demands. In this regard, large dams and water transfer schemes (including river-linking schemes and virtual water trades) have been playing a key role. While the benefits of such large-scale projects in supplying water for domestic, irrigation, industrial, hydropower, recreational, and other uses both in the countries of their development and in other countries are undeniable, concerns about their negative impacts, such as high initial costs and damages to our ecosystems (e.g. river environment and species) and socio-economic fabric (e.g. relocation and socio-economic changes of affected people), have also been increasing in recent years. These have led to serious debates on the role of large-scale water projects in the developing world and on their future, but the often one-sided nature of such debates has so far prevented fruitful outcomes. The present study aims to offer a far more balanced perspective on this issue. First, it recognizes and emphasizes the need for still additional large-scale water structures in the developing world in the future, due to the continuing increase in water demands, inefficiency in water use (especially in the agricultural sector), and absence of equivalent and reliable alternatives. Next, it reviews a few important success and failure stories of large-scale water projects in the developing world (and in the developed world), in an effort to arrive at a balanced view on the future role of such projects. Then, it discusses some major challenges in future water planning and management, with proper consideration of potential technological developments and new options. Finally, it highlights the urgent need for a broader framework that integrates the physical science-related aspects ("hard sciences") and the human science-related aspects ("soft sciences").
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
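As a rough sketch of the two ingredients named here, decomposition into weighted subproblems and simulated-annealing acceptance, the snippet below shows a weighted-sum scalarization and a standard SA acceptance rule. It is not the EMOSA algorithm itself, and the bi-objective knapsack evaluation function is hypothetical.

```python
import math
import random

def weighted_sum(objectives, weights):
    # Scalarize one subproblem: a weight vector turns a multi-objective point into a scalar.
    return sum(w * f for w, f in zip(weights, objectives))

def sa_accept(current_value, candidate_value, temperature):
    # Simulated-annealing acceptance for maximization: always take improvements,
    # accept deteriorations with a temperature-dependent probability.
    if candidate_value >= current_value:
        return True
    return random.random() < math.exp((candidate_value - current_value) / temperature)

def evaluate_knapsack(solution, profits_a, profits_b):
    # Hypothetical bi-objective knapsack profits for a 0/1 solution vector.
    return (sum(p for p, x in zip(profits_a, solution) if x),
            sum(p for p, x in zip(profits_b, solution) if x))
```

In an adaptive scheme of this kind, each subproblem would keep its own incumbent, and the weight vectors themselves would be perturbed as the temperature drops; the sketch only shows the scalarization and acceptance building blocks.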
Optimality and stability of symmetric evolutionary games with applications in genetic selection.
Huang, Yuanyuan; Hao, Yiping; Wang, Min; Zhou, Wen; Wu, Zhijun
2015-06-01
Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model for example the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first and second order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games including some in the study of genetic disorders.
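For readers unfamiliar with the relationship mentioned here, the LaTeX sketch below states the standard quadratic program over the simplex associated with a symmetric fitness matrix and the first-order conditions usually checked at a candidate strategy; the notation is assumed rather than copied from the paper.

```latex
% Generalized knapsack form of a symmetric evolutionary game (notation assumed):
\max_{x \in \mathbb{R}^n} \; x^{\mathsf T} A x
\quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = 1, \;\; x_i \ge 0,
\qquad A = A^{\mathsf T}.
% First-order (KKT) conditions at a candidate strategy x^*:
(Ax^*)_i = (x^*)^{\mathsf T} A x^* \;\; \text{if } x^*_i > 0,
\qquad
(Ax^*)_i \le (x^*)^{\mathsf T} A x^* \;\; \text{if } x^*_i = 0.
```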
Hoechst and Wacker plan joint venture in PVC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, I.
1992-12-02
Restructuring of Europe's petrochemical industry has taken a further step with the announcement that Hoechst (Frankfurt) and Wacker Chemie (Munich) are planning a joint venture in polyvinyl chloride (PVC). The venture would include production, R&D, sales and marketing, plus both companies' PVC recycling activities. However, their vinyl chloride monomer (VCM) plants, and Hoechst's Kalle PVC film business, have been left out. Erich Schnitzler, head of Hoechst's PVC business unit, does not anticipate problems with the European Community's competition directorate: "We are both among the middle-sized European PVC producers, and together we would have a 9%-10% market share. Our joint venture would not limit competition." Both partners are hoping for approval from Brussels in first-quarter 1993. Hoechst has 255,000 m.t./year of PVC capacity at Gendorf and Knapsack, while Wacker has 365,000 m.t./year at Burghausen and Cologne. All the units, except Wacker's Cologne plant, are back integrated to VCM. The joint venture would buy VCM from the two parent companies and on the merchant market.
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
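As a hedged sketch only: the snippet below shows a standard ACS-style global pheromone update plus a separate, hypothetical "Physarum-like" reinforcement term that nudges frequently used edges upward, echoing the abstract's idea that critical tubes are reserved during network evolution. It does not reproduce the paper's actual update equations.

```python
import numpy as np

def acs_global_update(pheromone, best_tour, best_length, rho=0.1):
    # Standard ACS global update: only edges on the best-so-far tour are reinforced.
    deposit = 1.0 / best_length
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        pheromone[i, j] = (1 - rho) * pheromone[i, j] + rho * deposit
        pheromone[j, i] = pheromone[i, j]
    return pheromone

def physarum_like_boost(pheromone, edge_flux, alpha=0.05):
    # Hypothetical stand-in: edges carrying more "flux" (e.g. appearing in many good
    # tours) receive an extra multiplicative boost, mimicking reserved critical tubes.
    return pheromone * (1.0 + alpha * edge_flux / (edge_flux.max() + 1e-12))
```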
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
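The abstract names control charts and time-series techniques without detail; the sketch below uses an EWMA chart, one common quality-control choice, to flag abrupt shifts in administration-level mean scores. The baseline mean and standard deviation are estimated from the series itself for simplicity, which a production implementation would replace with a historical reference.

```python
import numpy as np

def ewma_chart(scores, lam=0.2, L=3.0):
    # Exponentially weighted moving average chart over successive administrations.
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std(ddof=1)       # simplistic baseline
    z = np.empty_like(scores)
    z[0] = mu
    flagged = []
    for t in range(1, scores.size):
        z[t] = lam * scores[t] + (1 - lam) * z[t - 1]
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z[t] - mu) > width:
            flagged.append(t)                           # possible scale drift or abrupt shift
    return z, flagged
```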
Janke, Leandro; Leite, Athaydes F; Nikolausz, Marcell; Radetski, Claudemir M; Nelles, Michael; Stinner, Walter
2016-02-01
The anaerobic digestion of sugarcane filter cake and the option of co-digestion with bagasse were investigated in a semi-continuous feeding regime to assess the main parameters used for large-scale process design. Moreover, fresh cattle manure was considered as an alternative inoculum for the start-up of biogas reactors in cases where digestate from a biogas plant would not be available in remote rural areas. Experiments were carried out in 6 lab-scale semi-continuous stirred-tank reactors under mesophilic conditions (38±1°C) while the main anaerobic digestion process parameters were monitored. Fresh cattle manure proved to be appropriate for the start-up process; however, an acclimation period was required due to the high initial volatile fatty acids concentration (8.5 g L(-1)). Mono-digestion of filter cake presented a 50% higher biogas yield (480 mL gVS(-1)) than co-digestion with bagasse (320 mL gVS(-1)) under steady-state conditions. Nevertheless, a large-scale co-digestion system would produce 58% more biogas (1008 m(3) h(-1)) than mono-digestion of filter cake (634 m(3) h(-1)) due to its higher biomass availability for biogas conversion. Since the biogas production rate was the technical parameter that displayed the most relevant differences between the analyzed substrate options (0.99-1.45 m(3) biogas m(-3) d(-1)), the decision of which substrate option should be implemented in practice would be mainly driven by the available construction techniques, because economically efficient tanks could compensate for the lower biogas production rate of the co-digestion option. Copyright © 2015 Elsevier Ltd. All rights reserved.
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
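TINGe itself uses B-spline mutual-information estimators and parallel permutation testing; as a much simpler stand-in, the sketch below computes a histogram-based MI between two expression profiles and a naive permutation p-value. The bin count and permutation number are illustrative choices only.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    # Equal-width histogram estimate of MI (in nats) between two expression profiles.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def permutation_pvalue(x, y, n_perm=200, seed=0):
    # Naive permutation test: how often does shuffling y give an MI at least as large?
    rng = np.random.default_rng(seed)
    observed = mutual_information(x, y)
    null = [mutual_information(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(m >= observed for m in null)) / (1 + n_perm)
```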
Dispersal scaling from the world's rivers
Warrick, J.A.; Fong, D.A.
2004-01-01
Although rivers provide important biogeochemical inputs to oceans, there are currently no descriptive or predictive relationships of the spatial scales of these river influences. Our combined satellite, laboratory, field and modeling results show that the coastal dispersal areas of small, mountainous rivers exhibit remarkable self-similar scaling relationships over many orders of magnitude. River plume areas scale with source drainage area to a power significantly less than one (average = 0.65), and this power relationship decreases significantly with distance offshore of the river mouth. Observations of plumes from large rivers reveal that this scaling continues over six orders of magnitude of river drainage basin areas. This suggests that the cumulative area of coastal influence for many of the smallest rivers of the world is greater than that of single rivers of equal watershed size. Copyright 2004 by the American Geophysical Union.
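The reported power-law scaling (exponent around 0.65) can be recovered from paired observations with a log-log regression; the sketch below uses hypothetical numbers chosen only to illustrate the fit, not the authors' data.

```python
import numpy as np

# Hypothetical paired observations: drainage basin area and plume dispersal area (km^2).
drainage = np.array([12.0, 85.0, 430.0, 2100.0, 15000.0, 120000.0])
plume = np.array([3.1, 11.0, 32.0, 95.0, 310.0, 1250.0])

# Fit plume_area = k * drainage_area ** b in log-log space.
b, log_k = np.polyfit(np.log10(drainage), np.log10(plume), 1)
print(f"scaling exponent b = {b:.2f}, prefactor k = {10 ** log_k:.2f}")
```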
On the characteristics of aerosol indirect effect based on dynamic regimes in global climate models
Zhang, Shipeng; Wang, Minghuai; Ghan, Steven J.; ...
2016-03-04
Aerosol–cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω(500)), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω(500) < −25 hPa day(-1)) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is close to that in subsidence regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (>0.1 mm day(-1)) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes compared to the uncertainty in its global mean values, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
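The regime-based analysis amounts to sorting model grid cells by a dynamical indicator and compositing a diagnostic within each bin; the sketch below does this for ω(500), with hedged regime boundaries (only the −25 hPa/day threshold comes from the abstract; the other edges and the diagnostic name are placeholders).

```python
import numpy as np

def composite_by_regime(omega500, lwp_sensitivity, edges=(-100.0, -25.0, 0.0, 25.0)):
    # omega500: monthly-mean 500 hPa vertical pressure velocity per grid cell (hPa/day).
    # lwp_sensitivity: a per-cell diagnostic, e.g. d ln(LWP) / d ln(CCN) (hypothetical name).
    # edges: regime boundaries; only -25 hPa/day is taken from the abstract.
    regime = np.digitize(omega500, edges)
    return {int(k): float(np.nanmean(lwp_sensitivity[regime == k]))
            for k in np.unique(regime)}
```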
Robinson, Stacie J.; Samuel, Michael D.; Lopez, Davin L.; Shelton, Paul
2012-01-01
One of the pervasive challenges in landscape genetics is detecting gene flow patterns within continuous populations of highly mobile wildlife. Understanding population genetic structure within a continuous population can give insights into social structure, movement across the landscape and contact between populations, which influence ecological interactions, reproductive dynamics or pathogen transmission. We investigated the genetic structure of a large population of deer spanning the area of Wisconsin and Illinois, USA, affected by chronic wasting disease. We combined multiscale investigation, landscape genetic techniques and spatial statistical modelling to address the complex questions of landscape factors influencing population structure. We sampled over 2000 deer and used spatial autocorrelation and a spatial principal components analysis to describe the population genetic structure. We evaluated landscape effects on this pattern using a spatial autoregressive model within a model selection framework to test alternative hypotheses about gene flow. We found high levels of genetic connectivity, with gradients of variation across the large continuous population of white-tailed deer. At the fine scale, spatial clustering of related animals was correlated with the amount and arrangement of forested habitat. At the broader scale, impediments to dispersal were important to shaping genetic connectivity within the population. We found significant barrier effects of individual state and interstate highways and rivers. Our results offer an important understanding of deer biology and movement that will help inform the management of this species in an area where overabundance and disease spread are primary concerns.
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people in the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects from losses of molecular diffusion, small scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room. Adding furniture and occupants can increase this spatial variation by another factor of 3.
Lee, Won-June; Park, Won-Tae; Park, Sungjun; Sung, Sujin; Noh, Yong-Young; Yoon, Myung-Han
2015-09-09
Ultrathin and dense metal oxide gate dielectric layers are reported via simple printing of AlOx and HfOx sol-gel precursors. Large-area printed indium gallium zinc oxide (IGZO) thin-film transistor arrays, which exhibit mobilities >5 cm(2) V(-1) s(-1) and gate leakage current of 10(-9) A cm(-2) at a very low operation voltage of 2 V, are demonstrated by a continuous, simple bar-coating process. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2017-10-01
Hybrid RANS/LES and Unsteady RANS Predictions of Separated Flow for a Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels. The facility is a large-scale cascade that allows detailed flow field surveys and blade surface measurements; it has a continuous run ... Structured grids were used at two flow conditions, cruise and takeoff, of the VSPT blade, and computations were run in parallel on a Department of Defense ...
Self-similar solutions of stationary Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Shi, Zuoshunhua
2018-02-01
In this paper, we mainly study the existence of self-similar solutions of stationary Navier-Stokes equations for dimension n = 3, 4. For n = 3, if the external force is axisymmetric, scaling invariant, $C^{1,\alpha}$ continuous away from the origin and small enough on the sphere $S^2$, we shall prove that there exists a family of axisymmetric self-similar solutions which can be arbitrarily large in the class $C^{3,\alpha}_{\mathrm{loc}}(\mathbb{R}^3 \setminus \{0\})$. Moreover, for axisymmetric external forces without swirl, corresponding to this family, the momentum flux of the flow along the symmetry axis can take any real number. However, there are no regular ($U \in C^{3,\alpha}_{\mathrm{loc}}(\mathbb{R}^3 \setminus \{0\})$) axisymmetric self-similar solutions provided that the external force is a large multiple of some scaling invariant axisymmetric $F$ which cannot be driven by a potential. In the case of dimension 4, there always exists at least one self-similar solution to the stationary Navier-Stokes equations with any scaling invariant external force in $L^{4/3,\infty}(\mathbb{R}^4)$.
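For context, the LaTeX sketch below records the standard self-similar scaling for the stationary Navier-Stokes system that the abstract's function classes refer to; the normalisation is the conventional one and is assumed rather than quoted from the paper.

```latex
% Scaling invariance for stationary Navier-Stokes (conventional normalisation, assumed):
% a velocity/pressure/force triple is self-similar if, for every \lambda > 0,
u(x) = \lambda\, u(\lambda x), \qquad
p(x) = \lambda^{2}\, p(\lambda x), \qquad
F(x) = \lambda^{3}\, F(\lambda x),
% so that every term of  -\Delta u + (u \cdot \nabla) u + \nabla p = F, \quad \nabla \cdot u = 0
% rescales by the common factor \lambda^{3}.
```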
Analysis of five years of continuous GPS recording at Piton de La Fournaise (R
NASA Astrophysics Data System (ADS)
Peltier, A.; Staudacher, T.; Boissier, P.; Lauret, F.; Kowalski, P.
2009-04-01
A network of twelve permanent GPS stations has been implemented since 2004 at Piton de La Fournaise (hot spot basaltic volcano of La Réunion Island, Indian Ocean) to follow the ground deformation associated with its high eruptive activity. During the period covered by the continuous GPS recording, 12 eruptions occurred. The compilation of the data recorded between 2004 and 2008 allows us to define two time scales of ground deformation systematically associated with this eruptive activity: (1) large short-term displacements, reaching up to 14 mm/min, monitored a few minutes to hours prior to each eruption during magma injections toward the surface (co-eruptive deformation); and (2) small long-term ground displacements recorded during inter-eruptive periods. Between 2 weeks and 5 months before each eruption, a slight summit inflation occurs (0.4-0.7 mm/day), whereas a post-eruptive summit deflation lasting 1 to 3 months is recorded only after the largest distal eruptions (0.3-1.3 mm/day). These two time scales of ground deformation precursors allowed us to forecast all eruptions up to five months in advance, and real-time follow-up of the large short-term displacements allowed us to evaluate the approximate location of the eruptive fissure a few minutes to hours before its opening (i.e. inside the summit crater, on the northern flank or on the southern flank). The large short-term ground displacements have been attributed to dyke propagation toward the surface, whereas the long-term ground displacements, which have also been recorded by the extensometer network since 2000, have been attributed to a continuous over-pressurization of the shallow magma reservoir located at about 2300 m depth. This continuous over-pressurization of the shallow magma reservoir would explain the high eruptive activity observed since 1998: 27 eruptions in 10 years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ruo-Yu; Rieger, F. M.; Aharonian, F. A., E-mail: ruoyu@mpi-hd.mpg.de, E-mail: frank.rieger@mpi-hd.mpg.de, E-mail: aharon@mpi-hd.mpg.de
The origin of the extended X-ray emission in the large-scale jets of active galactic nuclei (AGNs) poses challenges to conventional models of acceleration and emission. Although electron synchrotron radiation is considered the most feasible radiation mechanism, the formation of the continuous large-scale X-ray structure remains an open issue. As astrophysical jets are expected to exhibit some turbulence and shearing motion, we here investigate the potential of shearing flows to facilitate an extended acceleration of particles and evaluate its impact on the resultant particle distribution. Our treatment incorporates systematic shear and stochastic second-order Fermi effects. We show that for typical parameters applicable to large-scale AGN jets, stochastic second-order Fermi acceleration, which always accompanies shear particle acceleration, can play an important role in facilitating the whole process of particle energization. We study the time-dependent evolution of the resultant particle distribution in the presence of second-order Fermi acceleration, shear acceleration, and synchrotron losses using a simple Fokker–Planck approach and provide illustrations for the possible emergence of a complex (multicomponent) particle energy distribution with different spectral branches. We present examples for typical parameters applicable to large-scale AGN jets, indicating the relevance of the underlying processes for understanding the extended X-ray emission and the origin of ultrahigh-energy cosmic rays.
From Pleistocene to Holocene: the prehistory of southwest Asia in evolutionary context.
Watkins, Trevor
2017-08-14
In this paper I seek to show how cultural niche construction theory offers the potential to extend the human evolutionary story beyond the Pleistocene, through the Neolithic, towards the kind of very large-scale societies in which we live today. The study of the human past has been compartmentalised, each compartment using different analytical vocabularies, so that their accounts are written in mutually incompatible languages. In recent years social, cognitive and cultural evolutionary theories, building on a growing body of archaeological evidence, have made substantial sense of the social and cultural evolution of the genus Homo. However, specialists in this field of studies have found it difficult to extend their kind of analysis into the Holocene human world. Within southwest Asia the three or four millennia of the Neolithic period at the beginning of the Holocene represents a pivotal point, which saw the transformation of human society in the emergence of the first large-scale, permanent communities, the domestication of plants and animals, and the establishment of effective farming economies. Following the Neolithic, the pace of human social, economic and cultural evolution continued to increase. By 5000 years ago, in parts of southwest Asia and northeast Africa there were very large-scale urban societies, and the first large-scale states (kingdoms). An extension of cultural niche construction theory enables us to extend the evolutionary narrative of the Pleistocene into the Holocene, opening the way to developing a single, long-term, evolutionary account of human history.
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing linear features. Experimental results show that this method is sensitive to scale variations and can be used for the continuous scale transformation of vector map features. The efficiency of this model is linearly related to the number of points on the shape boundary and the interceptive value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
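The three steps of the morphing model map naturally onto FFT operations on a complex boundary signal; the sketch below is a minimal illustration under that reading. Linear blending of truncated spectra is one plausible combination rule and is not claimed to be the paper's exact formulation; function names are hypothetical.

```python
import numpy as np

def boundary_spectrum(points, keep=32):
    # Encode a closed (N, 2) boundary as a complex signal and keep only the
    # low-frequency Fourier coefficients (|frequency| <= keep).
    z = points[:, 0] + 1j * points[:, 1]
    Z = np.fft.fft(z)
    freqs = np.fft.fftfreq(z.size, d=1.0 / z.size)   # integer frequencies
    Z[np.abs(freqs) > keep] = 0.0
    return Z

def spectrum_to_boundary(Z):
    # Reverse conversion: inverse FFT back to an (N, 2) polyline.
    z = np.fft.ifft(Z)
    return np.column_stack([z.real, z.imag])

def morph(Z_large_scale, Z_small_scale, t):
    # Intermediate shape between the large-scale (t=0) and small-scale (t=1) versions.
    # Assumes both boundaries were resampled to the same number of points.
    return (1 - t) * Z_large_scale + t * Z_small_scale
```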
1996-04-01
Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real-time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three...
Appropriate experimental ecosystem warming methods by ecosystem, objective, and practicality
E.L. Aronson; S.G. McNulty
2009-01-01
The temperature of the Earth is rising, and is highly likely to continue to do so for the foreseeable future. The study of the effects of sustained heating on the ecosystems of the world is necessary so that we might predict and respond to coming changes on both large and small spatial scales. To this end, ecosystem warming studies have...
Manifesting Destiny: A Land Education Analysis of Settler Colonialism in Jamestown, Virginia, USA
ERIC Educational Resources Information Center
McCoy, Kate
2014-01-01
Globally, colonization has been and continues to be enacted in the take-over of Indigenous land and the subsequent conversion of agriculture from diverse food and useful crops to large-scale monoculture and cash crops. This article uses a land education analysis to map the rise of the ideology and practices of Manifest Destiny in Virginia.…
Martin P. Schilling; Paul G. Wolf; Aaron M. Duffy; Hardeep S. Rai; Carol A. Rowe; Bryce A. Richardson; Karen E. Mock
2014-01-01
Continuing advances in nucleotide sequencing technology are inspiring a suite of genomic approaches in studies of natural populations. Researchers are faced with data management and analytical scales that are increasing by orders of magnitude. With such dramatic advances comes a need to understand biases and error rates, which can be propagated and magnified in large-...
Events Management Education through CD-ROM Simulation at Victoria University of Technology.
ERIC Educational Resources Information Center
Perry, Marcia; And Others
There has been rapid growth in the events industry in Victoria and Australia over the past five years, with an increase in large-scale events resulting in substantial economic impact. The growth in events in Australia is projected to continue beyond 2001. The Department of Management at Victoria University of Technology (VU) received a…
Teach For America Teachers: How Long Do They Teach? Why Do They Leave?
ERIC Educational Resources Information Center
Donaldson, Morgaen L.; Johnson, Susan Moore
2011-01-01
A large-scale, nationwide analysis of Teach For America teacher turnover presents a deeper picture of which TFAers stay, which ones leave the profession and some suggestions about why they leave. The authors learned that nearly two-thirds (60.5%) of TFA teachers continue as public school teachers beyond their two-year commitment; more than half…
Cannabis cultivation: Methodological issues for obtaining medical-grade product.
Chandra, Suman; Lata, Hemant; ElSohly, Mahmoud A; Walker, Larry A; Potter, David
2017-05-01
As studies continue to reveal favorable findings for the use of cannabidiol in the management of childhood epilepsy syndromes and other disorders, best practices for the large-scale production of Cannabis are needed for timely product development and research purposes. The processes of two institutions with extensive experience in producing large-scale cannabidiol chemotype Cannabis crops, GW Pharmaceuticals and the University of Mississippi, are described, including breeding, indoor and outdoor growing, harvesting, and extraction methods. Such practices have yielded desirable outcomes in Cannabis breeding and production: GW Pharmaceuticals has a collection of chemotypes dominant in any one of eight cannabinoids, two of which (cannabidiol and cannabidivarin) are supporting epilepsy clinical trial research, whereas, in addition to a germplasm bank of high-THC, high-CBD, and intermediate type cannabis varieties, the team at the University of Mississippi has established an in vitro propagation protocol for cannabis with no detectable variations in morphologic, physiologic, biochemical, and genetic profiles as compared to the mother plants. Improvements in phytocannabinoid yields and growing efficiency are expected as research continues at these institutions. This article is part of a Special Issue entitled "Cannabinoids and Epilepsy". Copyright © 2016. Published by Elsevier Inc.
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
NASA Astrophysics Data System (ADS)
Wang, D.; Naouar, N.; Vidal-Salle, E.; Boisse, P.
2018-05-01
In meso-scale finite element modeling, the yarns of the reinforcement are considered to be solids made of a continuous material in contact with their neighbors. The present paper considers the mechanical behavior of these yarns under longitudinal compression, which can occur for some loadings of the reinforcement. The yarns present a specific mechanical behavior under longitudinal compression because they are made up of a large number of fibers: local buckling of the fibers causes the compressive stiffness of the continuous material representing the yarn to be much weaker than under tension. In addition, longitudinal compression causes an important transverse expansion. It is shown that the transverse expansion can be depicted by a Poisson ratio that remained roughly constant when the yarn length and the compression strain varied. Buckling of the fibers significantly increases the transverse dimensions of the yarn, which leads to a large Poisson ratio (up to 12 for a yarn analyzed in the present study). Meso-scale finite element simulations of reinforcements with binder yarns submitted to longitudinal compression showed that these improvements led to results in good agreement with micro-CT analyses.
Demonstration of a Large-Scale Tank Assembly via Circumferential Friction Stir Welds
NASA Technical Reports Server (NTRS)
Jones, Clyde S.; Adams, Glynn; Colligan, Kevin
2000-01-01
A collaborative effort between NASA/Marshall Space Flight Center and the Michoud Unit of Lockheed Martin Space Systems Company was undertaken to demonstrate assembly of a large-scale aluminum tank using circumferential friction stir welds. The hardware used to complete this demonstration was fabricated as a study of near-net-shape technologies. The tooling used to complete this demonstration was originally designed for assembly of a tank using fusion weld processes. This presentation describes the modifications and additions that were made to the existing fusion welding tools required to accommodate circumferential friction stir welding, as well as the process used to assemble the tank. The tooling modifications include design, fabrication and installation of several components. The most significant components include a friction stir weld unit with adjustable pin length capabilities, a continuous internal anvil for 'open' circumferential welds, a continuous closeout anvil, clamping systems, an external reaction system and the control system required to conduct the friction stir welds and integrate the operation of the tool. The demonstration was intended as a development task. The experience gained during each circumferential weld was applied to improve subsequent welds. Both constant and tapered thickness 14-foot diameter circumferential welds were successfully demonstrated.
Hammerschmidt, Nikolaus; Tscheliessnig, Anne; Sommer, Ralf; Helk, Bernhard; Jungbauer, Alois
2014-06-01
Standard industry processes for recombinant antibody production employ protein A affinity chromatography in combination with other chromatography steps and ultra-/diafiltration. This study compares a generic antibody production process with a recently developed purification process based on a series of selective precipitation steps. The new process makes two of the usual three chromatographic steps obsolete and can be performed in a continuous fashion. Cost of Goods (CoGs) analyses were done for: (i) a generic chromatography-based antibody standard purification; (ii) the continuous precipitation-based purification process coupled to a continuous perfusion production system; and (iii) a hybrid process, coupling the continuous purification process to an upstream batch process. The results of this economic analysis show that the precipitation-based process offers cost reductions at all stages of the life cycle of a therapeutic antibody, (i.e. clinical phase I, II and III, as well as full commercial production). The savings in clinical phase production are largely attributed to the fact that expensive chromatographic resins are omitted. These economic analyses will help to determine the strategies that are best suited for small-scale production in parallel fashion, which is of importance for antibody production in non-privileged countries and for personalized medicine. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Smith, F; Singleton, A; Hilton, S
1998-10-01
The accreditation and provision of continuing education for general practitioners (GPs) is set to change with new proposals from the General Medical Council, the Government, and the Chief Medical Officer. To review the theories, policies, strategies, and effectiveness in GP continuing education in the past 10 years. A systematic review of the literature by computerized and manual searches of relevant journals and books. Educational theory suggests that continuing education (CE) should be work-based and use the learner's experiences. Audit can play an important role in determining performance and needs assessment, but at present is largely a separate activity. Educational and professional support, such as through mentors or co-tutors, has been successfully piloted but awaits larger scale evaluation. Most accredited educational events are still the postgraduate centre lecture, and GP Tutors have a variable role in CE management and provision. Controlled trials of CE strategies suggest effectiveness is enhanced by personal feedback and work prompts. Qualitative studies have demonstrated that education plays only a small part in influencing doctors' behavior. Maintaining good clinical practice is on many stakeholders' agendas. A variety of methods may be effective in CE, and larger scale trials or evaluations are needed.
NASA Astrophysics Data System (ADS)
Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas
2017-06-01
The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, simplicity of approach, and divisibility of tasks open up the possibility of citizen science, in which crowdsourcing could be applied to large grid-based mapping areas.
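The "tick box" bookkeeping described here can be held as one boolean raster per landform, one cell per grid square; the short sketch below shows that layout. The grid dimensions, landform field names and the example observation are hypothetical.

```python
import numpy as np

LANDFORMS = ["viscous_flow_features", "latitude_dependent_mantle", "polygonised_ground"]

def make_grid(n_rows, n_cols):
    # One boolean presence/absence raster per landform type.
    return {name: np.zeros((n_rows, n_cols), dtype=bool) for name in LANDFORMS}

def record_square(grid, row, col, observed):
    # 'observed' is the set of landform names ticked while inspecting this square
    # at full resolution; absence is simply the cell remaining False.
    for name in observed:
        grid[name][row, col] = True

# Usage: a mapper inspects square (3, 7) and ticks two of the three boxes.
grid = make_grid(n_rows=50, n_cols=80)
record_square(grid, 3, 7, {"viscous_flow_features", "polygonised_ground"})
coverage = {name: float(layer.mean()) for name, layer in grid.items()}  # fraction of squares with each landform
```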
An effective online data monitoring and saving strategy for large-scale climate simulations
Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin; ...
2018-01-22
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
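One way to realise "record only the most informative extremes under a memory budget" is a bounded min-heap per grid cell, as sketched below; the class name, the magnitude-based notion of "informative", and the sample stream are assumptions for illustration, not the paper's algorithm.

```python
import heapq

class ExtremeRecorder:
    """Keep the k largest-magnitude samples seen so far for one grid cell,
    using O(k) memory instead of fixed-rate snapshots."""

    def __init__(self, k=100):
        self.k = k
        self._heap = []                       # min-heap of (|value|, timestep, value)

    def offer(self, timestep, value):
        item = (abs(value), timestep, value)
        if len(self._heap) < self.k:
            heapq.heappush(self._heap, item)
        elif item > self._heap[0]:
            heapq.heapreplace(self._heap, item)

    def saved(self):
        # Chronologically ordered (timestep, value) pairs worth writing to disk.
        return sorted((t, v) for _, t, v in self._heap)

# Usage with a hypothetical stream of simulated anomalies:
rec = ExtremeRecorder(k=10)
for step, anomaly in enumerate([0.3, 2.9, -0.8, 4.2, 0.1] * 50):
    rec.offer(step, anomaly)
```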
Gopinathan, Unni; Lewin, Simon; Glenton, Claire
2014-12-01
To identify factors affecting the implementation of large-scale programmes to optimise the health workforce in low- and middle-income countries. We conducted a multicountry case study synthesis. Eligible programmes were identified through consultation with experts and using Internet searches. Programmes were selected purposively to match the inclusion criteria. Programme documents were gathered via Google Scholar and PubMed and from key informants. The SURE Framework - a comprehensive list of factors that may influence the implementation of health system interventions - was used to organise the data. Thematic analysis was used to identify the key issues that emerged from the case studies. Programmes from Brazil, Ethiopia, India, Iran, Malawi, Venezuela and Zimbabwe were selected. Key system-level factors affecting the implementation of the programmes were related to health worker training and continuing education, management and programme support structures, the organisation and delivery of services, community participation, and the sociopolitical environment. Existing weaknesses in health systems may undermine the implementation of large-scale programmes to optimise the health workforce. Changes in the roles and responsibilities of cadres may also, in turn, have impacts throughout the health system. © 2014 John Wiley & Sons Ltd.
Controlling high-throughput manufacturing at the nano-scale
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2013-09-01
Interest in nano-scale manufacturing research and development is growing. The aim is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications—materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
Carr, T.R.; Merriam, D.F.; Bartley, J.D.
2005-01-01
Large-scale relational databases and geographic information system tools are used to integrate temperature, pressure, and water geochemistry data from numerous wells to better understand regional-scale geothermal and hydrogeological regimes of the lower Paleozoic aquifer systems in the mid-continent and to evaluate their potential for geologic CO2 sequestration. The lower Paleozoic (Cambrian to Mississippian) aquifer systems in Kansas, Missouri, and Oklahoma comprise one of the largest regional-scale saline aquifer systems in North America. Understanding hydrologic conditions and processes of these regional-scale aquifer systems provides insight to the evolution of the various sedimentary basins, migration of hydrocarbons out of the Anadarko and Arkoma basins, and the distribution of Arbuckle petroleum reservoirs across Kansas and provides a basis to evaluate CO2 sequestration potential. The Cambrian and Ordovician stratigraphic units form a saline aquifer that is in hydrologic continuity with the freshwater recharge from the Ozark plateau and along the Nemaha anticline. The hydrologic continuity with areas of freshwater recharge provides an explanation for the apparent underpressure in the Arbuckle Group. Copyright © 2005. The American Association of Petroleum Geologists. All rights reserved.
Measuring water fluxes in forests: The need for integrative platforms of analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Eric J.
2016-08-09
To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers, as well as the minutia of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build complexity into their models that match this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. In addition, this creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.
NASA Technical Reports Server (NTRS)
Vincent, Dayton G.; Robertson, Franklin
1993-01-01
The research sponsored by this grant is a continuation and an extension of the work conducted under a previous contract, 'South Pacific Convergence Zone and Global-Scale Circulations'. In the prior work, we conducted a detailed investigation of the South Pacific convergence zone (SPCZ), and documented many of its significant features and characteristics. We also conducted studies of its interaction with global-scale circulation features through the use of both observational and modeling studies. The latter was accomplished toward the end of the contract when Dr. James Hurrell, then a Ph.D. candidate, successfully ported the NASA GLA general circulation model (GCM) to Purdue University. In our present grant, we have expanded our previous research to include studies of other convectively-driven circulation systems in the tropics besides the SPCZ. Furthermore, we have continued to examine the relationship between these convective systems and global-scale circulation patterns. Our recent research efforts have focused on three objectives: (1) determining the periodicity of large-scale bands of organized convection in the tropics, primarily synoptic to intraseasonal time scales in the Southern Hemisphere; (2) examining the relative importance of tropical versus mid-latitude forcing for Southern Hemisphere summertime subtropical jets, particularly over the Pacific Ocean; and (3) estimating tropical precipitation, especially over oceans, using observational and budget methods. A summary list of our most significant accomplishments in the past year is given.
NASA Astrophysics Data System (ADS)
Goodman, K. J.; Lunch, C. K.; Baxter, C.; Hall, R.; Holtgrieve, G. W.; Roberts, B. J.; Marcarelli, A. M.; Tank, J. L.
2013-12-01
Recent advances in dissolved oxygen sensing and modeling have made continuous measurements of whole-stream metabolism relatively easy to make, allowing ecologists to quantify and evaluate stream ecosystem health at expanded temporal and spatial scales. Long-term monitoring of continuous stream metabolism will enable a better understanding of the integrated and complex effects of anthropogenic change (e.g., land-use, climate, atmospheric deposition, invasive species, etc.) on stream ecosystem function. In addition to their value in the particular streams measured, information derived from long-term data will improve the ability to extrapolate from shorter-term data. With the need to better understand drivers and responses of whole-stream metabolism come difficulties in interpreting the results. Long-term trends will encompass physical changes in stream morphology and flow regime (e.g., variable flow conditions and changes in channel structure) combined with changes in biota. Additionally long-term data sets will require an organized database structure, careful quantification of errors and uncertainties, as well as propagation of error as a result of the calculation of metabolism metrics. Parsing of continuous data and the choice of modeling approaches can also have a large influence on results and on error estimation. The two main modeling challenges include 1) obtaining unbiased, low-error daily estimates of gross primary production (GPP) and ecosystem respiration (ER), and 2) interpreting GPP and ER measurements over extended time periods. The National Ecological Observatory Network (NEON), in partnership with academic and government scientists, has begun to tackle several of these challenges as it prepares for the collection and calculation of 30 years of continuous whole-stream metabolism data. NEON is a national-scale research platform that will use consistent procedures and protocols to standardize measurements across the United States, providing long-term, high-quality, open-access data from a connected network to address large-scale change. NEON infrastructure will support 36 aquatic sites across 19 ecoclimatic domains. Sites include core sites, which remain for 30 years, and relocatable sites, which move to capture regional gradients. NEON will measure continuous whole-stream metabolism in conjunction with aquatic, terrestrial and airborne observations, allowing researchers to link stream ecosystem function with landscape and climatic drivers encompassing short to long time periods (i.e., decades).
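The abstract does not state which metabolism model NEON will fit, but daily GPP and ER are commonly inferred from a single-station dissolved-oxygen mass balance of the form dO/dt = GPP(t)/z − ER/z + K(Osat − O). A hedged forward-model sketch of that textbook form, with placeholder parameter values, is:

```python
import math

def simulate_do(o2_init, gpp_daily, er_daily, k_gas, o2_sat, depth_m,
                dt_hours=0.25, hours=24):
    """Forward-model dissolved oxygen (mg/L) with a generic single-station model:
        dO/dt = GPP(t)/z - ER/z + K * (Osat - O)
    GPP is spread over daylight with a half-sine; ER and K are constant.
    This is a textbook sketch, not the specific NEON algorithm."""
    o2, series = o2_init, []
    for i in range(int(hours / dt_hours)):
        hour = i * dt_hours
        light = max(0.0, math.sin(math.pi * (hour - 6) / 12))   # daylight 06-18 h
        gpp_rate = gpp_daily * light * math.pi / 24              # g O2 m^-2 h^-1
        er_rate = er_daily / 24                                  # g O2 m^-2 h^-1
        do_dt = (gpp_rate - er_rate) / depth_m + k_gas * (o2_sat - o2)
        o2 += do_dt * dt_hours
        series.append(o2)
    return series

print(simulate_do(8.0, gpp_daily=5.0, er_daily=6.0, k_gas=0.3,
                  o2_sat=9.0, depth_m=0.5)[-1])
```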
Evaluation of Criteria for the Detection of Fires in Underground Conveyor Belt Haulageways.
Litton, Charles D; Perera, Inoka Eranda
2012-07-01
Large-scale experiments were conducted in an above-ground gallery to simulate typical fires that develop along conveyor belt transport systems within underground coal mines. In the experiments, electrical strip heaters, imbedded ~5 cm below the top surface of a large mass of coal rubble, were used to ignite the coal, producing an open flame. The flaming coal mass subsequently ignited 1.83-meter-wide conveyor belts located approximately 0.30 m above the coal surface. Gas samples were drawn through an averaging probe located approximately 20 m downstream of the coal for continuous measurement of CO, CO2, and O2 as the fire progressed through the stages of smoldering coal, flaming coal, and flaming conveyor belt. Also located approximately 20 m from the fire origin and approximately 0.5 m below the roof of the gallery were two commercially available smoke detectors, a light obscuration meter, and a sampling probe for measurement of total mass concentration of smoke particles. Located upstream of the fire origin and also along the wall of the gallery at approximately 14 m and 5 m upstream were two video cameras capable of both smoke and flame detection. During the experiments, alarm times of the smoke detectors and video cameras were measured while the smoke obscuration and total smoke mass were continually measured. Twelve large-scale experiments were conducted using three different types of fire-resistant conveyor belts and four air velocities for each belt. The air velocities spanned the range from 1.0 m/s to 6.9 m/s. The results of these experiments are compared to previous large-scale results obtained using a smaller fire gallery and much narrower (1.07-m) conveyor belts to determine if the fire detection criteria previously developed (1) remained valid for the wider conveyor belts. Although some differences between these and the previous experiments did occur, the results, in general, compare very favorably. Differences are duly noted and their impact on fire detection discussed.
Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W.
2011-01-01
The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg–Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers. PMID:21731106
Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service
NASA Astrophysics Data System (ADS)
Rai, Sudhendu
This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems on a large distributed scale that exhibit high variety and variability. The methodology is exemplified through its application within the printing services industry where it has been successfully deployed by Xerox Corporation across small, mid-sized and large print shops generating over 250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement those concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and, lastly, monitoring and deployment of the financial and operational metrics for estimating return on investment and continually renewing the offering.
Tapia, Felipe; Jordan, Ingo; Genzel, Yvonne; Reichl, Udo
2017-01-01
One important aim in cell culture-based viral vaccine and vector production is the implementation of continuous processes. Such a development has the potential to reduce costs of vaccine manufacturing as volumetric productivity is increased and the manufacturing footprint is reduced. In this work, continuous production of Modified Vaccinia Ankara (MVA) virus was investigated. First, a semi-continuous two-stage cultivation system consisting of two shaker flasks in series was established as a small-scale approach. Cultures of the avian AGE1.CR.pIX cell line were expanded in the first shaker, and MVA virus was propagated and harvested in the second shaker over a period of 8-15 days. A total of nine small-scale cultivations were performed to investigate the impact of process parameters on virus yields. Harvest volumes of 0.7-1 L with maximum TCID50 titers of up to 1.0×10⁹ virions/mL were obtained. Genetic analysis of control experiments using a recombinant MVA virus containing green fluorescent protein suggested that the virus was stable over at least 16 d of cultivation. In addition, a decrease or fluctuation of infectious units that may indicate an excessive accumulation of defective interfering particles was not observed. The process was automated in a two-stage continuous system comprising two connected 1 L stirred tank bioreactors. Stable MVA virus titers and a total production volume of 7.1 L, with an average TCID50 titer of 9×10⁷ virions/mL, were achieved. Because titers were at the lower range of the shake-flask cultivations, potential for further process optimization at large scale is discussed. Overall, MVA virus was efficiently produced in continuous and semi-continuous cultivations, making two-stage stirred tank bioreactor systems a promising platform for industrial production of MVA-derived recombinant vaccines and viral vectors.
2013 Progress Report -- DOE Joint Genome Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-11-01
In October 2012, we introduced a 10-Year Strategic Vision [http://bit.ly/JGI-Vision] for the Institute. A central focus of this Strategic Vision is to bridge the gap between sequenced genomes and an understanding of biological functions at the organism and ecosystem level. This involves the continued massive-scale generation of sequence data, complemented by orthogonal new capabilities to functionally annotate these large sequence data sets. Our Strategic Vision lays out a path to guide our decisions and ensure that the evolving set of experimental and computational capabilities available to DOE JGI users will continue to enable groundbreaking science.
Modelling of pollen dispersion in the atmosphere: evaluation with a continuous 1β+1δ lidar
NASA Astrophysics Data System (ADS)
Sicard, Michaël; Izquierdo, Rebeca; Jorba, Oriol; Alarcón, Marta; Belmonte, Jordina; Comerón, Adolfo; De Linares, Concepción; Baldasano, José Maria
2018-04-01
Pollen allergenicity plays an important role in human health and wellness. It is thus of great public interest to increase our knowledge of pollen grain behavior in the atmosphere (source, emission, processes involved during their transport, etc.) at fine temporal and spatial scales. First simulations with the Barcelona Supercomputing Center NMMB/BSC-CTM model of Platanus and Pinus dispersion in the atmosphere were performed during a 5-day pollination event observed in Barcelona, Spain, between 27 and 31 March 2015. The simulations are compared to vertical profiles measured with the continuous Barcelona Micro Pulse Lidar system. First results show that the vertical distribution is well reproduced by the model in shape, but not in intensity, the model largely underestimating in the afternoon. Guidelines are proposed to improve the representation of airborne pollen dispersion in numerical prediction models.
Waszczuk, M A; Zavos, H M S; Gregory, A M; Eley, T C
2016-01-01
Depression and anxiety persist within and across diagnostic boundaries. The manner in which common v. disorder-specific genetic and environmental influences operate across development to maintain internalizing disorders and their co-morbidity is unclear. This paper investigates the stability and change of etiological influences on depression, panic, generalized, separation and social anxiety symptoms, and their co-occurrence, across adolescence and young adulthood. A total of 2619 twins/siblings prospectively reported symptoms of depression and anxiety at mean ages 15, 17 and 20 years. Each symptom scale showed a similar pattern of moderate continuity across development, largely underpinned by genetic stability. New genetic influences contributing to change in the developmental course of the symptoms emerged at each time point. All symptom scales correlated moderately with one another over time. Genetic influences, both stable and time-specific, overlapped considerably between the scales. Non-shared environmental influences were largely time- and symptom-specific, but some contributed moderately to the stability of depression and anxiety symptom scales. These stable, longitudinal environmental influences were highly correlated between the symptoms. The results highlight both stable and dynamic etiology of depression and anxiety symptom scales. They provide preliminary evidence that stable as well as newly emerging genes contribute to the co-morbidity between depression and anxiety across adolescence and young adulthood. Conversely, environmental influences are largely time-specific and contribute to change in symptoms over time. The results inform molecular genetics research and transdiagnostic treatment and prevention approaches.
Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.
2008-07-30
As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user’s familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data-intensive large-scale simulation data.
Translational bioinformatics in the cloud: an affordable alternative
2010-01-01
With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073
Modeling veterans healthcare administration disclosure processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyeler, Walter E; DeMenno, Mercy B.; Finley, Patrick D.
As with other large healthcare organizations, medical adverse events at the Department of Veterans Affairs (VA) facilities can expose patients to unforeseen negative risks. VHA leadership recognizes that properly handled disclosure of adverse events can minimize potential harm to patients and negative consequences for the effective functioning of the organization. The work documented here seeks to help improve the disclosure process by situating it within the broader theoretical framework of issues management, and to identify opportunities for process improvement through modeling disclosure and reactions to disclosure. The computational model will allow a variety of disclosure actions to be tested across a range of incident scenarios. Our conceptual model will be refined in collaboration with domain experts, especially by continuing to draw on insights from VA Study of the Communication of Adverse Large-Scale Events (SCALE) project researchers.
Los Alamos Explosives Performance Key to Stockpile Stewardship
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dattelbaum, Dana
2014-11-03
As the U.S. Nuclear Deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives that are used in most weapons. At Los Alamos National Laboratory, explosives research includes a wide variety of both large- and small-scale experiments that include small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and, at the Nevada National Security Site, underground sub-critical experiments.
The Crotone Megalandslide, southern Italy: Architecture, timing and tectonic control.
Zecchin, Massimo; Accaino, Flavio; Ceramicola, Silvia; Civile, Dario; Critelli, Salvatore; Da Lio, Cristina; Mangano, Giacomo; Prosser, Giacomo; Teatini, Pietro; Tosi, Luigi
2018-05-17
Large-scale submarine gravitational movements, which can involve sedimentary successions more than 1,000 m thick, are known as megalandslides. We prove the existence of large-scale gravitational phenomena off the Crotone Basin, a forearc basin located on the Ionian side of Calabria (southern Italy), by seismic, morpho-bathymetric and well data. Our study reveals that the Crotone Megalandslide started moving between the Late Zanclean and Early Piacenzian and was triggered by a contractional tectonic event leading to basin inversion. Seaward gliding of the megalandslide continued until roughly the Late Gelasian, and then resumed at a modest rate from the Middle Pleistocene. Interestingly, the onshore part of the basin does not show a gravity-driven deformation comparable to that observed in the marine area, and this peculiar evidence allows some speculation on the origin of the megalandslide.
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition.
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
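The queued state machine referenced above is a general control pattern (commands are queued by producers and executed one at a time by a consumer loop). The Python sketch below illustrates the control flow only; the state names, handlers and default-state behaviour are invented for illustration, and the actual controller runs in LabVIEW and talks to the MEG sensor microprocessors.

```python
from queue import Queue, Empty

class QueuedStateMachine:
    """Minimal queued state machine: requests are queued asynchronously, but the
    consumer loop executes exactly one state handler per dequeued command, so
    acquisition control stays deterministic. Names here are illustrative only."""
    def __init__(self):
        self.queue = Queue()
        self.handlers = {
            "configure": lambda: print("configuring DAQ channels"),
            "acquire":   lambda: print("streaming one buffer of samples"),
            "flush":     lambda: print("sending buffered data over TCP/IP"),
        }

    def submit(self, state):
        self.queue.put(state)

    def run(self):
        while True:
            try:
                state = self.queue.get(timeout=1.0)
            except Empty:
                self.queue.put("acquire")   # default state keeps acquisition running
                continue
            if state == "stop":
                break
            self.handlers[state]()

qsm = QueuedStateMachine()
for cmd in ["configure", "acquire", "flush", "stop"]:
    qsm.submit(cmd)
qsm.run()
```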
Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch
2011-11-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
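As a concrete illustration, one common reading of the Floor scheme quantizes the signal magnitude into bins of width Δ (here rescaled by Δ to keep the original units); the exact conventions of the three methods in the paper may differ, and Δ below is illustrative.

```python
import numpy as np

def floor_coarse_grain(x, delta):
    """Floor-style coarse-graining: replace each value by the lower edge of its
    magnitude bin of width delta (a sketch of one of the three schemes compared)."""
    return np.floor(x / delta) * delta

rng = np.random.default_rng(0)
signal = rng.standard_normal(10_000)          # stand-in for a correlated series
print(floor_coarse_grain(signal, delta=1.0)[:5])
```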
A Single-use Strategy to Enable Manufacturing of Affordable Biologics.
Jacquemart, Renaud; Vandersluis, Melissa; Zhao, Mochao; Sukhija, Karan; Sidhu, Navneet; Stout, Jim
2016-01-01
The current processing paradigm of large manufacturing facilities dedicated to single product production is no longer an effective approach for best manufacturing practices. Increasing competition for new indications and the launch of biosimilars for the monoclonal antibody market have put pressure on manufacturers to produce at lower cost. Single-use technologies and continuous upstream processes have proven to be cost-efficient options to increase biomass production but as of today the adoption has been only minimal for the purification operations, partly due to concerns related to cost and scale-up. This review summarizes how a single-use holistic process and facility strategy can overcome scale limitations and enable cost-efficient manufacturing to support the growing demand for affordable biologics. Technologies enabling high productivity, right-sized, small footprint, continuous, and automated upstream and downstream operations are evaluated in order to propose a concept for the flexible facility of the future.
Gradient perception of children's productions of /s/ and /θ/: A comparative study of rating methods.
Schellinger, Sarah K; Munson, Benjamin; Edwards, Jan
2017-01-01
Past studies have shown incontrovertible evidence for the existence of covert contrasts in children's speech, i.e. differences between target productions that are nonetheless transcribed with the same phonetic symbol. Moreover, there is evidence that these are relevant to forming prognoses and tracking progress in children with speech sound disorder. A challenge remains to determine the most efficient and reliable methods for assessing covert contrasts. This study investigates how readily listeners can identify covert contrasts in children's speech when using a continuous rating scale in the form of a visual analogue scale (VAS) to denote children's productions. Individual listeners' VAS responses were found to correlate statistically significantly with a variety of continuous measures of children's production accuracy, including judgements of binary accuracy pooled over a large set of listeners. These findings reinforce the growing body of evidence that VAS judgements are potentially useful clinical measures of covert contrast.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolinger, Mark; Seel, Joachim
2015-09-01
Other than the nine Solar Energy Generation Systems (“SEGS”) parabolic trough projects built in the 1980s, virtually no large-scale or “utility-scale” solar projects – defined here to include any ground-mounted photovoltaic (“PV”), concentrating photovoltaic (“CPV”), or concentrating solar thermal power (“CSP”) project larger than 5 MW AC – existed in the United States prior to 2007. By 2012 – just five years later – utility-scale had become the largest sector of the overall PV market in the United States, a distinction that was repeated in both 2013 and 2014 and that is expected to continue for at least the next few years. Over this same short period, CSP also experienced a bit of a renaissance in the United States, with a number of large new parabolic trough and power tower systems – some including thermal storage – achieving commercial operation. With this critical mass of new utility-scale projects now online and in some cases having operated for a number of years (generating not only electricity, but also empirical data that can be mined), the rapidly growing utility-scale sector is ripe for analysis. This report, the third edition in an ongoing annual series, meets this need through in-depth, annually updated, data-driven analysis of not just installed project costs or prices – i.e., the traditional realm of solar economics analyses – but also operating costs, capacity factors, and power purchase agreement (“PPA”) prices from a large sample of utility-scale solar projects in the United States. Given its current dominance in the market, utility-scale PV also dominates much of this report, though data from CPV and CSP projects are presented where appropriate.
Universality from disorder in the random-bond Blume-Capel model
NASA Astrophysics Data System (ADS)
Fytas, N. G.; Zierenberg, J.; Theodorakis, P. E.; Weigel, M.; Janke, W.; Malakis, A.
2018-04-01
Using high-precision Monte Carlo simulations and finite-size scaling we study the effect of quenched disorder in the exchange couplings on the Blume-Capel model on the square lattice. The first-order transition for large crystal-field coupling is softened to become continuous, with a divergent correlation length. An analysis of the scaling of the correlation length as well as the susceptibility and specific heat reveals that it belongs to the universality class of the Ising model with additional logarithmic corrections which is also observed for the Ising model itself if coupled to weak disorder. While the leading scaling behavior of the disordered system is therefore identical between the second-order and first-order segments of the phase diagram of the pure model, the finite-size scaling in the ex-first-order regime is affected by strong transient effects with a crossover length scale L*≈32 for the chosen parameters.
Operation of an aquatic worm reactor suitable for sludge reduction at large scale.
Hendrickx, Tim L G; Elissen, Hellen H J; Temmink, Hardy; Buisman, Cees J N
2011-10-15
Treatment of domestic waste water results in the production of waste sludge, which requires costly further processing. A biological method to reduce the amount of waste sludge and its volume is treatment in an aquatic worm reactor. The potential of such a worm reactor with the oligochaete Lumbriculus variegatus has been shown at small scale. For scaling up purposes, a new configuration of the reactor was designed, in which the worms were positioned horizontally in the carrier material. This was tested in a continuous experiment of 8 weeks where it treated all the waste sludge from a lab-scale activated sludge process. The results showed a higher worm growth rate compared to previous experiments with the old configuration, whilst nutrient release was similar. The new configuration has a low footprint and allows for easy aeration and faeces collection, thereby making it suitable for full scale application. Copyright © 2011 Elsevier Ltd. All rights reserved.
Supernova explosions in magnetized, primordial dark matter haloes
NASA Astrophysics Data System (ADS)
Seifried, D.; Banerjee, R.; Schleicher, D.
2014-05-01
The first supernova explosions are potentially relevant sources for the production of the first large-scale magnetic fields. For this reason, we present a set of high-resolution simulations studying the effect of supernova explosions on magnetized, primordial haloes. We focus on the evolution of an initially small-scale magnetic field formed during the collapse of the halo. We vary the degree of magnetization, the halo mass, and the amount of explosion energy in order to account for expected variations as well as to infer systematic dependences of the results on initial conditions. Our simulations suggest that core collapse supernovae with an explosion energy of 10⁵¹ erg and more violent pair instability supernovae with 10⁵³ erg are able to disrupt haloes with masses up to about 10⁶ and 10⁷ M⊙, respectively. The peak of the magnetic field spectra shows a continuous shift towards smaller k-values, i.e. larger length scales, over time reaching values as low as k = 4. On small scales, the magnetic energy decreases at the cost of the energy on large scales, resulting in a well-ordered magnetic field with a strength up to ~10⁻⁸ G depending on the initial conditions. The coherence length of the magnetic field inferred from the spectra reaches values up to 250 pc in agreement with those obtained from autocorrelation functions. We find the coherence length to be as large as 50 per cent of the radius of the supernova bubble. Extrapolating this relation to later stages, we suggest that significantly strong magnetic fields with coherence lengths as large as 1.5 kpc could be created. We discuss possible implications of our results on processes like recollapse of the halo, first galaxy formation, and the magnetization of the intergalactic medium.
Slob, Wout
2017-04-01
A general theory on effect size for continuous data predicts a relationship between maximum response and within-group variation of biological parameters, which is empirically confirmed by results from dose-response analyses of 27 different biological parameters. The theory shows how effect sizes observed in distinct biological parameters can be compared and provides a basis for a generic definition of small, intermediate and large effects. While the theory is useful for experimental science in general, it has specific consequences for risk assessment: it solves the current debate on the appropriate metric for the Benchmark response in continuous data. The theory shows that scaling the BMR expressed as a percent change in means to the maximum response (in the way specified) automatically takes "natural variability" into account. Thus, the theory supports the underlying rationale of the BMR 1 SD. For various reasons, it is, however, recommended to use a BMR in terms of a percent change that is scaled to maximum response and/or within group variation (averaged over studies), as a single harmonized approach.
NASA Astrophysics Data System (ADS)
Durand, Olivier; Soulard, Laurent; Jaouen, Stephane; Heuze, Olivier; Colombet, Laurent; Cieren, Emmanuel
2017-06-01
We compare, at similar scales, the processes of microjetting and ejecta production from shocked roughened metal surfaces by using atomistic and continuous approaches. The atomistic approach is based on very large scale molecular dynamics (MD) simulations. The continuous approach is based on Eulerian hydrodynamics simulations with adaptive mesh refinement; the simulations take into account the effects of viscosity and surface tension, and they use an equation of state calculated from the MD simulations. The microjetting is generated by shock-loading a three-dimensional tin crystal above its fusion point; the crystal has an initial sinusoidal free-surface perturbation and is set in contact with a vacuum. Several samples with homothetic wavelengths and amplitudes of defect are simulated in order to investigate the influence of the viscosity and surface tension of the metal. The simulations show that the hydrodynamic code reproduces, in very good agreement, the distributions of ejected mass and velocity along the jet calculated from the MD simulations. Both codes also exhibit a similar phenomenology of fragmentation of the ejected metallic liquid sheets.
Cruz-Motta, Juan José; Miloslavich, Patricia; Palomo, Gabriela; Iken, Katrin; Konar, Brenda; Pohle, Gerhard; Trott, Tom; Benedetti-Cecchi, Lisandro; Herrera, César; Hernández, Alejandra; Sardi, Adriana; Bueno, Andrea; Castillo, Julio; Klein, Eduardo; Guerra-Castro, Edlin; Gobin, Judith; Gómez, Diana Isabel; Riosmena-Rodríguez, Rafael; Mead, Angela; Bigatti, Gregorio; Knowlton, Ann; Shirayama, Yoshihisa
2010-01-01
Assemblages associated with intertidal rocky shores were examined for large scale distribution patterns with specific emphasis on identifying latitudinal trends of species richness and taxonomic distinctiveness. Seventy-two sites distributed around the globe were evaluated following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). There were no clear patterns of standardized estimators of species richness along latitudinal gradients or among Large Marine Ecosystems (LMEs); however, a strong latitudinal gradient in taxonomic composition (i.e., proportion of different taxonomic groups in a given sample) was observed. Environmental variables related to natural influences were strongly related to the distribution patterns of the assemblages on the LME scale, particularly photoperiod, sea surface temperature (SST) and rainfall. In contrast, no environmental variables directly associated with human influences (with the exception of the inorganic pollution index) were related to assemblage patterns among LMEs. Correlations of the natural assemblages with either latitudinal gradients or environmental variables were equally strong suggesting that neither neutral models nor models based solely on environmental variables sufficiently explain spatial variation of these assemblages at a global scale. Despite the data shortcomings in this study (e.g., unbalanced sample distribution), we show the importance of generating biological global databases for the use in large-scale diversity comparisons of rocky intertidal assemblages to stimulate continued sampling and analyses. PMID:21179546
Modelling utility-scale wind power plants. Part 1: Economics
NASA Astrophysics Data System (ADS)
Milligan, Michael R.
1999-10-01
As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.
Descriptor Fingerprints and Their Application to White Wine Clustering and Discrimination.
NASA Astrophysics Data System (ADS)
Bangov, I. P.; Moskovkina, M.; Stojanov, B. P.
2018-03-01
This study continues the attempt to apply statistical processing to large-scale analytical data. A group of 3898 white wines, each described by 11 analytical laboratory benchmarks, was analyzed by a fingerprint similarity search in order to group the wines into separate clusters. The quality of the wines in each individual cluster was then characterized according to the individual laboratory parameters.
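The abstract gives no implementation details, but a fingerprint-style similarity search over the 11 laboratory parameters can be sketched as follows (standardize the features, then rank wines by similarity to a query); the random data and all names below are hypothetical stand-ins.

```python
import numpy as np

def standardize(X):
    """Z-score each analytical parameter so all 11 benchmarks weigh equally."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def most_similar(X, query_idx, k=5):
    """Rank wines by cosine similarity of their standardized descriptor vectors."""
    Z = standardize(X)
    q = Z[query_idx]
    sims = Z @ q / (np.linalg.norm(Z, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:k]

rng = np.random.default_rng(1)
wines = rng.normal(size=(3898, 11))      # stand-in for the 11 lab benchmarks
print(most_similar(wines, query_idx=0))
```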
Tornado Recovery Ongoing at NASA’s Michoud Assembly Facility, New Orleans LA
2017-02-07
Teams at NASA’s Michoud Assembly Facility in New Orleans are continuing with recovery efforts following a tornado strike at the facility Tuesday, Feb. 7. Michoud remains closed to all but security and emergency operations crews. For more than half a century, Michoud has been the space agency’s premiere site for manufacturing and assembly of large-scale space structures and systems.
Stephanie K. Moore; Nathan J. Mantua; Jonathan P. Kellogg; Jan A. Newton
2008-01-01
The influence of climate on Puget Sound oceanographic properties is investigated on seasonal to interannual timescales using continuous profile data at 16 stations from 1993 to 2002 and records of sea surface temperature (SST) and sea surface salinity (SSS) from 1951 to 2002. Principal components analyses of profile data identify indices representing 42%, 58%, and 56%...
Christopher M. Oswalt; Andrew J. Hartsell
2012-01-01
The Cumberland Plateau and Mountains (CPM) are a significant component of the eastern deciduous forest with biological and cultural resources strongly connected to and dependent upon the forest resources of the region. As a result, continuous inventory and monitoring is critical. The USDA Forest Service Forest Inventory and Analysis (FIA) program has been collecting...
ERIC Educational Resources Information Center
Xu, Hongjiang; Rondeau, Patrick J.; Mahenthiran, Sakthi
2011-01-01
Enterprise Resource Planning (ERP) system implementation projects are notoriously risky. While large-scale ERP cases continue to be developed, relatively few new ERP cases have been published that further ERP implementation education in small to medium size firms. This case details the implementation of a new ERP system in a medium sized…
Apparatus for the production of boron nitride nanotubes
Smith, Michael W; Jordan, Kevin
2014-06-17
An apparatus for the large-scale production of boron nitride nanotubes comprising: a pressure chamber containing a continuously fed boron-containing target; a source of thermal energy, preferably a focused laser beam; a cooled condenser; a source of pressurized nitrogen gas; and a mechanism for extracting from the pressure chamber the boron nitride nanotubes that are condensed on or in the area of the cooled condenser.
U.S. Regional Aquifer Analysis Program
NASA Astrophysics Data System (ADS)
Johnson, Ivan
As a result of the severe 1976-1978 drought, Congress in 1978 requested that the U.S. Geological Survey (USGS) initiate studies of the nation's aquifers on a regional scale. This continuing USGS project, the Regional Aquifer System Analysis (RASA) Program, consists of systematic studies of the quality and quantity of water in the regional groundwater systems that supply a large part of the nation's water.
PHYSICS OF OUR DAYS: Dark energy: myths and reality
NASA Astrophysics Data System (ADS)
Lukash, V. N.; Rubakov, V. A.
2008-03-01
We discuss the questions related to dark energy in the Universe. We note that in spite of the effect of dark energy, large-scale structure is still being generated in the Universe and this will continue for about ten billion years. We also comment on some statements in the paper "Dark energy and universal antigravitation" by A D Chernin, Physics Uspekhi 51 (3) (2008).
Interactive Exploration for Continuously Expanding Neuron Databases.
Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting
2017-02-15
This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, the continuously expanding neuron databases provide a rich source of information to associate neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when adding new neurons. To solve this problem, we extend binary coding with online updating schemes, which consider only the newly added neurons and update the model on the fly, without accessing the whole neuron databases. At the fine-grained level, we introduce domain experts/users into the framework, who can give relevance feedback on the binary coding based retrieval results. This interactive strategy can improve the retrieval performance by re-ranking the coarse results, where we design a new similarity measure and take the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons. Copyright © 2017 Elsevier Inc. All rights reserved.
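A minimal sketch of the coarse retrieval stage, assuming random-hyperplane hashing as the binary coding model (the paper's actual learned coding and online-update schemes are more elaborate), is:

```python
import numpy as np

class BinaryCoder:
    """Compress morphological feature vectors into short binary codes and search
    them by Hamming distance. Random hyperplane projections stand in here for
    the learned coding model described in the paper."""
    def __init__(self, dim, n_bits=32, seed=0):
        self.planes = np.random.default_rng(seed).standard_normal((n_bits, dim))

    def encode(self, X):
        return (X @ self.planes.T) > 0          # boolean matrix: one code per neuron

    @staticmethod
    def hamming(query_code, codes):
        return np.count_nonzero(codes != query_code, axis=1)

features = np.random.default_rng(2).normal(size=(17000, 64))  # toy morphology features
coder = BinaryCoder(dim=64)
codes = coder.encode(features)

query = coder.encode(features[:1])[0]
dist = BinaryCoder.hamming(query, codes)
print(np.argsort(dist)[:10])      # coarse candidates for fine-grained re-ranking
```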
NASA Astrophysics Data System (ADS)
Gabellani, S.; Silvestro, F.; Rudari, R.; Boni, G.
2008-12-01
Flood forecasting is in constant evolution, placing ever greater demands on the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models are now clear, and the importance of continuous distributed modeling is emerging. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, though reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination of the Soil Conservation Service Curve Number method (SCS-CN) and a method derived from the Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable over large areas, but has structural limitations. Horton-like methods involve parameters that, though measurable to a point, are difficult to estimate reliably at the catchment scale. The objective of this work is to overcome these limits by proposing a calibration procedure which maintains the broad applicability of the SCS-CN method as well as the continuous description of the infiltration process given by a suitably modified Horton equation. The estimation of the parameters of the modified Horton method is carried out using a formal analogy with the SCS-CN method under specific conditions. Some applications, at catchment scale within a distributed model, are presented.
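For reference, the two ingredients being combined are the SCS-CN runoff relation and the Horton infiltration decay; the sketch below states both in their textbook form (the paper's modified Horton equation and the calibration-by-analogy procedure are not reproduced here).

```python
import math

def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS-CN direct runoff Q from event rainfall P (both in mm):
       S = 25400/CN - 254,  Ia = ia_ratio * S,  Q = (P - Ia)^2 / (P - Ia + S)."""
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

def horton_infiltration(t_hours, f0, fc, k):
    """Classical Horton infiltration capacity f(t) = fc + (f0 - fc) * exp(-k t)."""
    return fc + (f0 - fc) * math.exp(-k * t_hours)

print(scs_cn_runoff(p_mm=60.0, cn=75))                    # runoff for a 60 mm storm, CN = 75
print(horton_infiltration(2.0, f0=40.0, fc=5.0, k=1.5))   # capacity after 2 h, in mm/h
```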
Detecting TLEs using a massive all-sky camera network
NASA Astrophysics Data System (ADS)
Garnung, M. B.; Celestin, S. J.
2017-12-01
Transient Luminous Events (TLEs) are large-scale optical events occurring in the upper atmosphere, from the top of thunderclouds up to the ionosphere. TLEs may have important effects at local, regional, and global scales, and many features of TLEs are not fully understood yet [e.g., Pasko, JGR, 115, A00E35, 2010]. Moreover, meteor events have been suggested to play a role in sprite initiation by producing ionospheric irregularities [e.g., Qin et al., Nat. Commun., 5, 3740, 2014]. The French Fireball Recovery and InterPlanetary Observation Network (FRIPON, https://www.fripon.org/?lang=en) is a national all-sky 30 fps camera network designed to continuously detect meteor events. We seek to make use of this network to observe TLEs over unprecedented space and time scales (approximately 1000×1000 km, with continuous acquisition). To do so, we had to significantly modify FRIPON's triggering software Freeture (https://github.com/fripon/freeture) while leaving the meteor detection capability uncompromised. FRIPON has great potential for the study of TLEs. Not only could it produce new results on the spatial and temporal distributions of TLEs over a very large area, it could also be used to validate and complement observations from future space missions such as ASIM (ESA) and TARANIS (CNES). In this work, we present an original image processing algorithm that can detect sprites using all-sky cameras while strongly limiting the frequency of false positives, and our ongoing work on sprite triangulation using the FRIPON network.
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2012-12-01
Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades, helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year-long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, forms the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA-based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. On a global scale, we are currently building the computational framework for double-difference processing of the combined parametric and waveform archives of the ISC, NEIC, and IRIS with over three million recorded earthquakes worldwide. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of such archives may become a routine procedure to continuously improve resolution in existing global earthquake catalogs. Results from subduction zones and aftershock sequences of recent great earthquakes demonstrate the considerable social and economic impact that high-resolution images of active faults, when available in real time, will have in the prompt evaluation and mitigation of seismic hazards. These results also highlight the need for consistent long-term seismic monitoring and archiving of records.
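As a schematic of the two detection approaches compared above, the snippet below computes a classical STA/LTA ratio and a sliding normalized cross-correlation of a template against continuous data; window lengths are illustrative, and this is not the network's operational code.

    import numpy as np

    def sta_lta(x, nsta, nlta):
        # Ratio of short-term to long-term average signal energy.
        energy = np.asarray(x, dtype=float) ** 2
        sta = np.convolve(energy, np.ones(nsta) / nsta, mode="same")
        lta = np.convolve(energy, np.ones(nlta) / nlta, mode="same")
        return sta / np.maximum(lta, 1e-12)

    def normalized_xcorr(data, template):
        # Sliding normalized cross-correlation of a template; values lie in [-1, 1].
        n = len(template)
        t = (template - template.mean()) / (template.std() * n)
        out = np.empty(len(data) - n + 1)
        for i in range(len(out)):
            w = data[i:i + n]
            out[i] = np.sum(t * (w - w.mean())) / max(w.std(), 1e-12)
        return out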
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiuchi, Shunsaku, E-mail: horiuchi@vt.edu
2016-06-21
The cold dark matter paradigm has been extremely successful in explaining the large-scale structure of the Universe. However, it continues to face issues when confronted by observations on sub-Galactic scales. A major caveat, now being addressed, has been the incomplete treatment of baryon physics. We first summarize the small-scale issues surrounding cold dark matter and discuss the solutions explored by modern state-of-the-art numerical simulations including the treatment of baryonic physics. We identify the too-big-to-fail problem in field galaxies as among the best targets for studying modifications to dark matter, and discuss the particular connection with sterile neutrino warm dark matter. We also discuss how the recently detected anomalous 3.55 keV X-ray lines, when interpreted as sterile neutrino dark matter decay, provide a very good description of small-scale observations of the Local Group.
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. The model involves three main steps: conversion of the vector data to Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that the method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and the truncation order n of the Fourier expansion. The morphing produced by Fourier transformation is visually plausible and the efficiency of the algorithm is acceptable. PMID:29351344
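A minimal sketch of the idea, under the assumption that the two closed boundaries are resampled to the same number of points and represented as complex coordinates; the blend weight alpha plays the role of the intermediate (scale) parameter, and the truncation order is an illustrative choice.

    import numpy as np

    def truncated_fourier(boundary_xy, n_coeffs):
        # boundary_xy: (N, 2) closed boundary resampled to N points.
        z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]
        coeffs = np.fft.fft(z)
        coeffs[n_coeffs:-n_coeffs] = 0.0   # keep only the low-order terms
        return coeffs

    def morph(boundary_a, boundary_b, alpha, n_coeffs=16):
        # Linear blend of the two truncated Fourier series, then inverse transform.
        ca = truncated_fourier(boundary_a, n_coeffs)
        cb = truncated_fourier(boundary_b, n_coeffs)
        z = np.fft.ifft((1.0 - alpha) * ca + alpha * cb)
        return np.column_stack([z.real, z.imag])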
Arun, C; Sivashanmugam, P
2015-10-01
Reuse and management of organic solid waste reduce the environmental impact on human health and improve economic returns by generating valuable products for current and novel applications. Garbage enzyme is one such product, produced by fermentation of organic solid waste, and it can be used as a liquid fertilizer, as an antimicrobial agent, and for the treatment of domestic wastewater and of municipal and industrial sludge. Semi-continuous production of garbage enzyme in large quantities, within a minimal time and at lower cost, is needed to treat increasing quantities of industrial waste activated sludge. This calls for a parameter for monitoring and controlling the scale-up of the current process on a semi-continuous basis. In the present study an RP-HPLC (Reversed Phase-High Performance Liquid Chromatography) method is used for the quantification of standard organic acids under optimized conditions: 30°C column oven temperature, pH 2.7, and a 0.7 ml/min flow rate of the mobile phase (50 mM potassium dihydrogen phosphate in water). Garbage enzyme solutions collected at 15, 30, 45, 60, 75 and 90 days were used as samples to determine the organic acid concentrations. Among these, the 90th-day sample showed the maximum acetic acid concentration of 78.14 g/l, whereas the concentrations of the other organic acids decreased compared with the 15th-day sample. This result confirms that matured garbage enzyme contains a higher concentration of acetic acid, which can therefore be used as a monitoring parameter for semi-continuous, large-scale production of garbage enzyme. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automation of large scale transient protein expression in mammalian cells
Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.
2011-01-01
Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making man-power a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot allowing the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074
Soini, Jaakko; Ukkonen, Kaisa; Neubauer, Peter
2008-01-01
Background For the cultivation of Escherichia coli in bioreactors, trace element solutions are generally designed for optimal growth under aerobic conditions. They normally do not contain selenium and nickel, and molybdenum is contained in only a few of them. These elements are part of the formate hydrogen lyase (FHL) complex which is induced under anaerobic conditions. As it is generally known that oxygen limitation appears in shake flask cultures and locally in large-scale bioreactors, the function of the FHL complex may influence the process behaviour. Formate has been described to accumulate in large-scale cultures and may have toxic effects on E. coli. Although the anaerobic metabolism of E. coli is well studied, reference data estimating the impact of the FHL complex on oxygen-limited E. coli bioprocesses have so far not been published, but are important for a better process understanding. Results Two sets of fed-batch cultures with conditions triggering oxygen limitation and formate accumulation were performed. Permanent oxygen limitation, which is typical for shake flask cultures, was caused in a bioreactor by reduction of the agitation rate. Transient oxygen limitation, which has been described to occur in the feed zone of large-scale bioreactors, was mimicked in a two-compartment scale-down bioreactor consisting of a stirred tank reactor and a plug flow reactor (PFR) with continuous glucose feeding into the PFR. In both models formate accumulated up to about 20 mM in the culture medium without addition of selenium, molybdenum and nickel. By addition of these trace elements the formate accumulation decreased below the level observed in well-mixed laboratory-scale cultures. Interestingly, addition of the extra trace elements caused accumulation of large amounts of lactate and reduced biomass yield in the simulator with permanent oxygen limitation, but not in the scale-down two-compartment bioreactor. Conclusion The accumulation of formate in oxygen-limited cultivations of E. coli can be fully prevented by addition of the trace elements selenium, nickel and molybdenum, which are necessary for the function of the FHL complex. For large-scale cultivations in which glucose gradients are likely, the results from the two-compartment scale-down bioreactor indicate that addition of the extra trace elements is beneficial. No negative effects on the biomass yield or on any other bioprocess parameters were observed in cultures with the extra trace elements when the cells were repeatedly exposed to transient oxygen limitation. PMID:18687130
Application of stochastic processes in random growth and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Oikonomou, Panagiotis
We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. Finally, we will consider the changes to the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.
(New hosts and vectors for genome cloning)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Supinski, B R; Miller, B P; Liblit, B
2011-09-13
Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques are applied. We have extended CBI to large-scale systems and developed new compiler-based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application scientist productivity by reducing the time to determine which package or module contains the root cause of a problem that arises at all scales of our high-end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. While STAT provides scalable debugging assistance for incorrect application runs, we could apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently as we expect systems with millions of threads in the next year. We are exploring new CBI techniques that can assess the likelihood that execution deviations from past behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress on these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
A Discretization Algorithm for Meteorological Data and its Parallelization Based on Hadoop
NASA Astrophysics Data System (ADS)
Liu, Chao; Jin, Wen; Yu, Yuting; Qiu, Taorong; Bai, Xiaoming; Zou, Shuilong
2017-10-01
Meteorological observation data are large in volume, have many attributes whose values are continuous, and exhibit correlations between elements that matter for meteorological applications. This paper is devoted to the problem of how to better discretize large meteorological data sets so that the knowledge hidden in them can be mined more effectively, and to improving discretization algorithms for large-scale data. In order to provide a sound basis for subsequent knowledge extraction from discretized large meteorological data, a discretization algorithm based on information entropy and the inconsistency of meteorological attributes is proposed, and the algorithm is parallelized on the Hadoop platform. Finally, comparison tests validate the effectiveness of the proposed algorithm for discretization of large meteorological data.
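A rough sketch of one ingredient of such an approach: choosing a cut point for a single continuous attribute by minimizing the class-information entropy of the induced split. The names and the single-attribute scope are illustrative; the algorithm in the paper additionally uses an inconsistency measure and is parallelized with Hadoop.

    import numpy as np
    from collections import Counter

    def entropy(labels):
        # Shannon entropy of a label sample, in bits.
        counts = np.array(list(Counter(labels).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def best_cut(values, labels):
        # Scan boundaries between sorted values; keep the cut with minimal weighted entropy.
        order = np.argsort(values)
        v, y = np.asarray(values)[order], np.asarray(labels)[order]
        best_cut_point, best_h = None, np.inf
        for i in range(1, len(v)):
            if v[i] == v[i - 1]:
                continue
            h = (i * entropy(y[:i]) + (len(v) - i) * entropy(y[i:])) / len(v)
            if h < best_h:
                best_cut_point, best_h = (v[i] + v[i - 1]) / 2.0, h
        return best_cut_point, best_h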
10th Anniversary Review: a changing climate for coral reefs.
Lough, Janice M
2008-01-01
Tropical coral reefs are charismatic ecosystems that house a significant proportion of the world's marine biodiversity. Their valuable goods and services are fundamental to the livelihood of large coastal populations in the tropics. The health of many of the world's coral reefs, and the goods and services they provide, have already been severely compromised, largely due to over-exploitation by a range of human activities. These local-scale impacts, with the appropriate government instruments, support and management actions, can potentially be controlled and even ameliorated. Unfortunately, other human actions (largely in countries outside of the tropics), by changing global climate, have added global-scale threats to the continued survival of present-day coral reefs. Moderate warming of the tropical oceans has already resulted in an increase in mass coral bleaching events, affecting nearly all of the world's coral reef regions. The frequency of these events will only increase as global temperatures continue to rise. Weakening of coral reef structures will be a more insidious effect of changing ocean chemistry, as the oceans absorb part of the excess atmospheric carbon dioxide. More intense tropical cyclones and changed atmospheric and ocean circulation patterns will all affect coral reef ecosystems and the many associated plants and animals. Coral reefs will not disappear but their appearance, structure and community make-up will radically change. Drastic greenhouse gas mitigation strategies are necessary to prevent the full consequences of human activities causing such alterations to coral reef ecosystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayeh, M. A.; Fuselier, S. A.; Funsten, H. O.
We present remote, continuous observations from the Interstellar Boundary Explorer of the terrestrial plasma sheet location back to -16 Earth radii (R_E) in the magnetospheric tail using energetic neutral atom emissions. The time period studied includes two orbits near the winter and summer solstices, thus associated with large negative and positive dipole tilt, respectively. Continuous side-view images reveal a complex shape that is dominated mainly by large-scale warping due to the diurnal motion of the dipole axis. Superposed on the global warped geometry are short-time fluctuations in plasma sheet location that appear to be consistent with plasma sheet flapping and possibly twisting due to changes in the interplanetary conditions. We conclude that the plasma sheet warping due to the diurnal motion dominates the average shape of the plasma sheet. Over short times, the position of the plasma sheet can be dominated by twisting and flapping.
NASA Technical Reports Server (NTRS)
Anyamba, Assaf; Linthicum, Kenneth J.; Small, Jennifer; Britch, S. C.; Tucker, C. J.
2012-01-01
Remotely sensed vegetation measurements for the last 30 years combined with other climate data sets such as rainfall and sea surface temperatures have come to play an important role in the study of the ecology of arthropod-borne diseases. We show that epidemics and epizootics of previously unpredictable Rift Valley fever are directly influenced by large-scale flooding associated with the El Niño/Southern Oscillation. This flooding affects the ecology of disease-transmitting arthropod vectors through vegetation development and other bioclimatic factors. This information is now utilized to monitor, model, and map areas of potential Rift Valley fever outbreaks and is used as an early warning system for risk reduction of outbreaks to human and animal health, trade, and associated economic impacts. The continuation of such satellite measurements is critical to anticipating, preventing, and managing disease epidemics and epizootics and other climate-related disasters.
Analytic study of the effect of dark energy-dark matter interaction on the growth of structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcondes, Rafael J.F.; Landim, Ricardo C.G.; Costa, André A.
2016-12-01
Large-scale structure has been shown as a promising cosmic probe for distinguishing and constraining dark energy models. Using the growth index parametrization, we obtain an analytic formula for the growth rate of structures in a coupled dark energy model in which the exchange of energy-momentum is proportional to the dark energy density. We find that the evolution of fσ_8 can be determined analytically once we know the coupling, the dark energy equation of state, the present value of the dark energy density parameter and the current mean amplitude of dark matter fluctuations. After correcting the growth function for the correspondence with the velocity field through the continuity equation in the interacting model, we use our analytic result to compare the model's predictions with large-scale structure observations.
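For orientation, the standard (uncoupled) growth-index relations that this kind of analysis generalizes can be written, in common notation, as follows; this is only a reminder of the baseline parametrization, since the article's formula additionally carries the coupling and the dark energy equation of state:

    f(a) \equiv \frac{\mathrm{d}\ln\delta_m}{\mathrm{d}\ln a} \simeq \left[\Omega_m(a)\right]^{\gamma},
    \qquad
    f\sigma_8(a) = f(a)\,\sigma_8\,\frac{\delta_m(a)}{\delta_m(a=1)},

with gamma approximately 0.55 for standard ΛCDM.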
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
A class of hybrid finite element methods for electromagnetics: A review
NASA Technical Reports Server (NTRS)
Volakis, J. L.; Chatterjee, A.; Gong, J.
1993-01-01
Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.
200-W single frequency laser based on short active double clad tapered fiber
NASA Astrophysics Data System (ADS)
Pierre, Christophe; Guiraud, Germain; Yehouessi, Jean-Paul; Santarelli, Giorgio; Boullet, Johan; Traynor, Nicholas; Vincont, Cyril
2018-02-01
High-power single-frequency lasers are very attractive for a wide range of applications such as nonlinear conversion, gravitational wave sensing or atom trapping. Power scaling in the single-frequency regime is a challenging domain of research; nonlinear effects such as stimulated Brillouin scattering (SBS) are the primary power limitation in single-frequency amplifiers. To mitigate SBS, several well-known techniques have been developed, allowing the generation of several hundred watts [1]. Large mode area (LMA) fibers, transverse acoustically tailored fibers [2], coherent beam combining and tapered fibers [3] are serious candidates for continuing the power scaling. We have demonstrated stable 200 W output power with a nearly diffraction-limited beam and narrow linewidth (Δν < 30 kHz) using a tapered Yb-doped fiber, which allows an adiabatic transition from a small, purely single-mode input to a large-core output.
A large-scale computer facility for computational aerodynamics
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Ballhaus, W. F., Jr.
1985-01-01
As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sintonen, Sakari, E-mail: sakari.sintonen@aalto.fi; Suihkonen, Sami; Jussila, Henri
2014-08-28
The crystal quality of bulk GaN crystals is continuously improving due to advances in GaN growth techniques. Defect characterization of the GaN substrates by conventional methods is impeded by the very low dislocation density and a large-scale defect analysis method is needed. White beam synchrotron radiation x-ray topography (SR-XRT) is a rapid and non-destructive technique for dislocation analysis on a large scale. In this study, the defect structure of an ammonothermal c-plane GaN substrate was recorded using SR-XRT and the image contrast caused by the dislocation-induced microstrain was simulated. The simulations and experimental observations agree excellently, and the SR-XRT image contrasts of mixed and screw dislocations were determined. Apart from a few exceptions, defect selective etching measurements were shown to correspond one to one with the SR-XRT results.
NASA Astrophysics Data System (ADS)
Stieglitz, T. C.; Burnett, W. C.; Rapaglia, J.
2008-12-01
Submarine groundwater discharge (SGD) is now increasingly recognized as an important component in the water balance, water quality and ecology of the coastal zone. A multitude of methods are currently employed to study SGD, ranging from point flux measurements with seepage meters to methods integrating over various spatial and temporal scales such as hydrological models, geophysical techniques or surface water tracer approaches. From studies in a large variety of hydrogeological settings, researchers in this field have come to expect that SGD is rarely uniformly distributed. Here we discuss the application of (a) the mapping of subsurface electrical conductivity in a discharge zone on a beach and (b) the large-scale mapping of radon in coastal surface water to improving our understanding of SGD and its spatial variability. On a beach scale, as part of intercomparison studies of a UNESCO/IAEA working group, mapping of subsurface electrical conductivity in a beach face has elucidated the non-uniform distribution of SGD associated with rock fractures, volcanic settings and man-made structures (e.g., piers, jetties). Variations in direct point measurements of SGD flux with seepage meters were linked to the subsurface conductivity distribution. We demonstrate how the combination of these two techniques may complement one another to better constrain SGD measurements. On kilometer to hundred-kilometer scales, the spatial distribution and regional importance of SGD can be investigated by mapping relevant tracers in the coastal ocean. The radon isotope Rn-222 is a commonly used tracer for SGD investigations due to its significant enrichment in groundwater, and continuous mapping of this tracer, in combination with ocean water salinity, can be used to efficiently infer locations of SGD along a coastline on large scales. We use a surface-towed, continuously recording multi-detector setup installed on a moving vessel. This tool was used in various coastal environments, e.g. in Florida, Brazil, Mauritius and Australia's Great Barrier Reef lagoon. From shore-parallel transects along the Central Great Barrier Reef coastline, numerous processes and locations of SGD were identified, including terrestrially derived fresh SGD and the recirculation of seawater in mangrove forests, as well as riverine sources. From variations in the inverse relationship of the two tracers radon and salinity, some aspects of regional freshwater input into the lagoon during the tropical wet season could be assessed. Such surveys on coastal scales can be a useful tool to obtain an overview of locations and processes of SGD on an unknown coastline.
Computational Prediction of Protein-Protein Interactions
Ehrenberger, Tobias; Cantley, Lewis C.; Yaffe, Michael B.
2015-01-01
The prediction of protein-protein interactions and kinase-specific phosphorylation sites on individual proteins is critical for correctly placing proteins within signaling pathways and networks. The importance of this type of annotation continues to increase with the continued explosion of genomic and proteomic data, particularly with emerging data categorizing posttranslational modifications on a large scale. A variety of computational tools are available for this purpose. In this chapter, we review the general methodologies for these types of computational predictions and present a detailed user-focused tutorial of one such method and computational tool, Scansite, which is freely available to the entire scientific community over the Internet. PMID:25859943
Epitaxial Growth of Aligned and Continuous Carbon Nanofibers from Carbon Nanotubes.
Lin, Xiaoyang; Zhao, Wei; Zhou, Wenbin; Liu, Peng; Luo, Shu; Wei, Haoming; Yang, Guangzhi; Yang, Junhe; Cui, Jie; Yu, Richeng; Zhang, Lina; Wang, Jiaping; Li, Qunqing; Zhou, Weiya; Zhao, Weisheng; Fan, Shoushan; Jiang, Kaili
2017-02-28
Exploiting the superior properties of nanomaterials at macroscopic scale is a key issue of nanoscience. Different from the integration strategy, "additive synthesis" of macroscopic structures from nanomaterial templates may be a promising choice. In this paper, we report the epitaxial growth of aligned, continuous, and catalyst-free carbon nanofiber thin films from carbon nanotube films. The fabrication process includes thickening of continuous carbon nanotube films by gas-phase pyrolytic carbon deposition and further graphitization of the carbon layer by high-temperature treatment. As-fabricated nanofibers in the film have an "annual ring" cross-section, with a carbon nanotube core and a graphitic periphery, indicating the templated growth mechanism. The absence of a distinct interface between the carbon nanotube template and the graphitic periphery further implies the epitaxial growth mechanism of the fiber. The mechanically robust thin film with tunable fiber diameters from tens of nanometers to several micrometers possesses low density, high electrical conductivity, and high thermal conductivity. Further extension of this fabrication method to enhance carbon nanotube yarns is also demonstrated, resulting in yarns with ∼4-fold increased tensile strength and ∼10-fold increased Young's modulus. The aligned and continuous features of the films together with their outstanding physical and chemical properties would certainly promote the large-scale applications of carbon nanofibers.
Smith, F; Singleton, A; Hilton, S
1998-01-01
BACKGROUND: The accreditation and provision of continuing education for general practitioners (GPs) is set to change with new proposals from the General Medical Council, the Government, and the Chief Medical Officer. AIM: To review the theories, policies, strategies, and effectiveness in GP continuing education in the past 10 years. METHOD: A systematic review of the literature by computerized and manual searches of relevant journals and books. RESULTS: Educational theory suggests that continuing education (CE) should be work-based and use the learner's experiences. Audit can play an important role in determining performance and needs assessment, but at present is largely a separate activity. Educational and professional support, such as through mentors or co-tutors, has been successfully piloted but awaits larger scale evaluation. Most accredited educational events are still the postgraduate centre lecture, and GP Tutors have a variable role in CE management and provision. Controlled trials of CE strategies suggest effectiveness is enhanced by personal feedback and work prompts. Qualitative studies have demonstrated that education plays only a small part in influencing doctors' behavior. CONCLUSION: Maintaining good clinical practice is on many stakeholders' agendas. A variety of methods may be effective in CE, and larger scale trials or evaluations are needed. PMID:10071406
NASA Astrophysics Data System (ADS)
Durand, O.; Jaouen, S.; Soulard, L.; Heuzé, O.; Colombet, L.
2017-10-01
We compare, at similar scales, the processes of microjetting and ejecta production from shocked roughened metal surfaces by using atomistic and continuous approaches. The atomistic approach is based on very large scale molecular dynamics (MD) simulations with systems containing up to 700 × 10^6 atoms. The continuous approach is based on Eulerian hydrodynamics simulations with adaptive mesh refinement; the simulations take into account the effects of viscosity and surface tension, and the equation of state is calculated from the MD simulations. The microjetting is generated by shock-loading above its fusion point a three-dimensional tin crystal with an initial sinusoidal free surface perturbation, the crystal being set in contact with a vacuum. Several samples with homothetic wavelengths and amplitudes of defect are simulated in order to investigate the influence of viscosity and surface tension of the metal. The simulations show that the hydrodynamic code reproduces with very good agreement the profiles, calculated from the MD simulations, of the ejected mass and velocity along the jet. Both codes also exhibit a similar fragmentation phenomenology of the metallic liquid sheets ejected, although the fragmentation seed is different. We show, in particular, that it depends on the mesh size in the continuous approach.
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive ... developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are ... several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunningham, Kevin; Odette, G Robert; Fields, Kirk A.
2015-09-23
A promising approach to increasing the fracture toughness of W-alloys is ductile-phase toughening (DPT). A ductile phase reinforcement in a brittle matrix increases toughness primarily by crack bridging. A W-Cu laminate was fabricated and the properties of the constituent metals were characterized along with those for the composite. Development of a design model for large-scale crack bridging continued.
Continuing Evolution of Burkholderia mallei Through Genome Reduction and Large-Scale Rearrangements
2010-01-22
Applications and Methods for Continuous Monitoring of Physiological Chemistry
2016-02-04
product and test platform to verify the performance characteristics of the enzymes when used in diagnostic device fabrication. 1.3 Results This ... project had three primary objectives: 1. Engineer a cortisol oxidase enzyme suitable for use in diagnostic devices 2. Large scale production and ... for both animal and human use, and for direct sale to other entities to manufacture biosensors and other products for human monitoring. The enzymes
NASA Technical Reports Server (NTRS)
Parkinson, Claire L.
1999-01-01
Satellite data have revealed overall decreases in the Arctic sea ice cover since the late 1970s, although with substantial interannual variability. The ice reductions are likely tied to an overall warming in the Arctic region over the same time period, although both the warming and the ice reductions could be connected to large-scale oscillations within the system. Should the ice reductions continue, consequences to the Arctic ecosystems and climate could be considerable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, W.
2013-01-01
Final technical progress report of SunShot Incubator Solaflect Energy. The project succeeded in demonstrating that the Solaflect Suspension Heliostat design is viable for large-scale CSP installations. Canting accuracy is acceptable and is continually improving as Solaflect improves its understanding of this design. Cost reduction initiatives were successful, and there are still many opportunities for further development and further cost reduction.
JPRS Report, Science & Technology, USSR: Life Sciences.
1990-10-24
genesis was used to produce two mutant rhodopsins with amino acid substitutions in the C-terminal domain. The substitution of Cys-316->Ser does not ... proteins, and also as a system for large scale synthesis of protein for practical use with a continuous supply of energy sources and amino acids into the ... better than traditional neuroleptic analgesic in analysis of central peripheral hemodynamics, oxygen-transport, gas composition, and acid-base
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training sets, which are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions and extract iso-surfaces from the resulting scalar fields, the continuous occupancy maps, using marching cubes. In this way, we are able to build two types of map representation within a single Gaussian process framework. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
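A condensed sketch of the partition-then-regress idea, using k-means and scikit-learn Gaussian process regression as stand-ins for the paper's coarse-to-fine clustering and Gaussian process classification (occupancy is regressed in [0, 1] here rather than classified); cluster counts and kernel settings are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def fit_local_gps(X, y, n_clusters=20):
        # Partition the training points, then fit one small GP per cluster.
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        gps = []
        for c in range(n_clusters):
            mask = km.labels_ == c
            gps.append(GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X[mask], y[mask]))
        return km, gps

    def predict_occupancy(km, gps, Xq):
        # Route each query point to its nearest cluster's local GP.
        labels = km.predict(Xq)
        occ = np.empty(len(Xq))
        for c in np.unique(labels):
            occ[labels == c] = gps[c].predict(Xq[labels == c])
        return np.clip(occ, 0.0, 1.0)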
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.
Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated, for the first time, an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.
Akerele, David; Ljolje, Dragan; Talundzic, Eldin; Udhayakumar, Venkatachalam; Lucchi, Naomi W
2017-01-01
Accurate diagnosis of malaria infections continues to be challenging and elusive, especially in the detection of submicroscopic infections. Developing new malaria diagnostic tools that are sensitive enough to detect low-level infections, user-friendly, cost-effective and capable of performing large-scale diagnosis remains critical. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. ovale by real-time PCR. In our study, a total of 173 clinical samples, consisting of different malaria species, were utilized to test this novel PET-PCR primer. The sensitivity and specificity were calculated using nested PCR as the reference test. The novel primer set demonstrated a sensitivity of 97.5% and a specificity of 99.2% (95% CI 85.2-99.8% and 95.2-99.9%, respectively). Furthermore, the limit of detection for P. ovale was found to be 1 parasite/μl. The PET-PCR assay is a new molecular diagnostic tool with comparable performance to other commonly used PCR methods. It is relatively easy to perform and amenable to large-scale malaria surveillance studies and malaria control and elimination programs. Further field validation of this novel primer will be helpful to ascertain its utility for large-scale malaria screening programs.
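For readers who want to reproduce this kind of diagnostic summary, the sketch below computes sensitivity, specificity and Wilson 95% confidence intervals from a 2x2 comparison against a reference test; the interval method is an assumption and may differ from the one used in the study.

    import math

    def wilson_ci(successes, n, z=1.96):
        # Wilson score interval for a binomial proportion.
        p = successes / n
        denom = 1.0 + z ** 2 / n
        centre = (p + z ** 2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
        return centre - half, centre + half

    def diagnostic_performance(tp, fp, tn, fn):
        # Sensitivity and specificity versus the reference test, each with a 95% CI.
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, wilson_ci(tp, tp + fn), specificity, wilson_ci(tn, tn + fp)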
Partially acoustic dark matter, interacting dark radiation, and large scale structure
NASA Astrophysics Data System (ADS)
Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsin
2016-12-01
The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H 0 and the matter density perturbation σ 8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ 8 problem, while the presence of tightly coupled dark radiation ameliorates the H 0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one that may not initially be represented by the analytic reference model. To overcome the interference between subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this reference model, the paper develops a digital decentralized adaptive tracker using optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at specified sampling instants, iterative learning control (ILC) is applied to refine the control input through repeated learning trials. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also achieves good tracking performance in both the transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
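For readers unfamiliar with the learning step, a minimal P-type ILC update of the form u_{k+1}(t) = u_k(t) + γ e_k(t+1) is sketched below on a scalar first-order plant; the plant, gain, and reference are hypothetical choices for illustration and are not the decentralized adaptive tracker described in the paper.

```python
import numpy as np

# Minimal P-type iterative learning control on a scalar discrete plant
# x[t+1] = a*x[t] + b*u[t], y[t] = x[t]; the learning gain gamma is a hypothetical choice.
a, b, gamma = 0.9, 0.5, 1.2
T = 50
r = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # reference trajectory
u = np.zeros(T)                                # control input, refined each trial

for trial in range(30):
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    e = r - x                                  # tracking error over the trial
    u = u + gamma * e[1:]                      # P-type ILC update using the shifted error
    print(f"trial {trial:2d}: max |error| = {np.max(np.abs(e[1:])):.4f}")
```

With |1 - gamma*b| < 1 the error contracts from trial to trial, which is the basic mechanism the abstract's learning gain search is tuning.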
Relativistic jets without large-scale magnetic fields
NASA Astrophysics Data System (ADS)
Parfrey, K.; Giannios, D.; Beloborodov, A.
2014-07-01
The canonical model of relativistic jets from black holes requires a large-scale ordered magnetic field to provide a significant magnetic flux through the ergosphere--in the Blandford-Znajek process, the jet power scales with the square of the magnetic flux. In many jet systems the presence of the required flux in the environment of the central engine is questionable. I will describe an alternative scenario, in which jets are produced by the continuous sequential accretion of small magnetic loops. The magnetic energy stored in these coronal flux systems is amplified by the differential rotation of the accretion disc and by the rotating spacetime of the black hole, leading to runaway field line inflation, magnetic reconnection in thin current layers, and the ejection of discrete bubbles of Poynting-flux-dominated plasma. For illustration I will show the results of general-relativistic force-free electrodynamic simulations of rotating black hole coronae, performed using a new resistivity model. The dissipation of magnetic energy by coronal reconnection events, as demonstrated in these simulations, is a potential source of the observed high-energy emission from accreting compact objects.
A comprehensive surface-groundwater flow model
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert
1993-02-01
In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described, and is validated on a 471 km² watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.
A continuum theory of grain size evolution and damage
NASA Astrophysics Data System (ADS)
Ricard, Y.; Bercovici, D.
2009-01-01
Lithospheric shear localization, as occurs in the formation of tectonic plate boundaries, is often associated with diminished grain size (e.g., mylonites). Grain size reduction is typically attributed to dynamic recrystallization; however, theoretical models of shear localization arising from this hypothesis are problematic because (1) they require the simultaneous action of two creep mechanisms (diffusion and dislocation creep) that occur in different deformation regimes (i.e., in grain size stress space) and (2) the grain growth ("healing") laws employed by these models are derived from normal grain growth or coarsening theory, which are valid in the absence of deformation, although the shear localization setting itself requires deformation. Here we present a new first principles grained-continuum theory, which accounts for both coarsening and damage-induced grain size reduction in a monomineralic assemblage undergoing irrecoverable deformation. Damage per se is the generic process for generation of microcracks, defects, dislocations (including recrystallization), subgrains, nuclei, and cataclastic breakdown of grains. The theory contains coupled macroscopic continuum mechanical and grain-scale statistical components. The continuum level of the theory considers standard mass, momentum, and energy conservation, as well as entropy production, on a statistically averaged grained continuum. The grain-scale element of the theory describes both the evolution of the grain size distribution and mechanisms for both continuous grain growth and discontinuous grain fracture and coalescence. The continuous and discontinuous processes of grain size variation are prescribed by nonequilibrium thermodynamics (in particular, the treatment of entropy production provides the phenomenological laws for grain growth and reduction); grain size evolution thus incorporates the free energy differences between grains, including both grain boundary surface energy (which controls coarsening) and the contribution of deformational work to these free energies (which controls damage). In the absence of deformation, only two mechanisms that increase the average grain size are allowed by the second law of thermodynamics. One mechanism, involving continuous diffusive mass transport from small to large grains, captures the essential components of normal grain growth theories of Lifshitz-Slyosov and Hillert. The second mechanism involves the aggregation of grains and is described using a Smoluchovski formalism. With the inclusion of deformational work and damage, the theory predicts two mechanisms for which the thermodynamic requirement of entropy positivity always forces large grains to shrink and small ones to grow. The first such damage-driven mechanism involving continuous mass transfer from large to small grains tends to homogenize the distribution of grain size toward its initial mean grain size. The second damage mechanism favors the creation of small grains by discontinuous division of larger grains and reduces the mean grain size with time. When considered separately, most of these mechanisms allow for self-similar grain size distributions whose scales (i.e., statistical moments such as the mean, variance, and skewness) can all be described by a single grain scale, such as the mean or maximum. 
However, the combination of mechanisms, e.g., one that captures the competition between continuous coarsening and mean grain size reduction by breakage, does not generally permit a self-similar solution for the grain size distribution, which contradicts the classic assumption that grain growth laws allowing for both coarsening and recrystallization can be treated with a single grain scale such as the mean size.
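A heavily simplified way to picture the competition described above is a mean-grain-size evolution law that combines surface-tension-driven coarsening with deformational-work-driven reduction; the form and symbols below are an illustrative schematic only, not the paper's full statistical, distribution-level theory.

```latex
% Illustrative mean-grain-size evolution (schematic only; the paper tracks the full
% grain size distribution, not just a mean R):
%   R       mean grain size            G, p    coarsening rate constant and exponent
%   \Psi    rate of deformational work f       fraction of that work driving damage
%   \gamma  grain-boundary surface energy      c       geometric constant
\frac{\mathrm{d}R}{\mathrm{d}t}
  \;=\; \underbrace{\frac{G}{p\,R^{\,p-1}}}_{\text{surface-tension-driven coarsening}}
  \;-\; \underbrace{\frac{f\,\Psi}{c\,\gamma}\,R^{2}}_{\text{damage-driven size reduction}}
```

The first term dominates for small grains and the second for large grains, so the two mechanisms drive the mean size toward a deformation-dependent balance rather than toward a single self-similar distribution.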
Hirsch, Kimberly D; Strawser, Bryan E
Business continuity practitioners routinely determine which teams in their companies are critical and undertake extensive and rigorous planning processes. But what happens when a business is faced with an unanticipated long-term disruption that primarily affects non-critical teams? How can a company use the essential principles of business continuity and crisis management in order to respond? This paper explores a 2013 business disruption experienced by Target Corporation at one of its headquarters locations caused by a leak in the water line for an ice machine. Challenges encountered and reviewed include supporting non-critical teams, leadership of a multi-week business disruption and how remote work technologies have changed traditional continuity alternative workspace solution planning. Lessons learned from this activation are presented with implications for business continuity and emergency management planning that are applicable to any industry.
U.S. Geological Survey continuous monitoring workshop—Workshop summary report
Sullivan, Daniel J.; Joiner, John K.; Caslow, Kerry A.; Landers, Mark N.; Pellerin, Brian A.; Rasmussen, Patrick P.; Sheets, Rodney A.
2018-04-20
Executive SummaryThe collection of high-frequency (in other words, “continuous”) water data has been made easier over the years because of advances in technologies to measure, transmit, store, and query large, temporally dense datasets. Commercially available, in-situ sensors and data-collection platforms—together with new techniques for data analysis—provide an opportunity to monitor water quantity and quality at time scales during which meaningful changes occur. The U.S. Geological Survey (USGS) Continuous Monitoring Workshop was held to build stronger collaboration within the Water Mission Area on the collection, interpretation, and application of continuous monitoring data; share technical approaches for the collection and management of continuous data that improves consistency and efficiency across the USGS; and explore techniques and tools for the interpretation of continuous monitoring data, which increases the value to cooperators and the public. The workshop was organized into three major themes: Collecting Continuous Data, Understanding and Using Continuous Data, and Observing and Delivering Continuous Data in the Future. Presentations each day covered a variety of related topics, with a special session at the end of each day designed to bring discussion and problem solving to the forefront.The workshop brought together more than 70 USGS scientists and managers from across the Water Mission Area and Water Science Centers. Tools to manage, assure, control quality, and explore large streams of continuous water data are being developed by the USGS and other organizations and will be critical to making full use of these high-frequency data for research and monitoring. Disseminating continuous monitoring data and findings relevant to critical cooperator and societal issues is central to advancing the USGS networks and mission. Several important outcomes emerged from the presentations and breakout sessions.
Human and biophysical influences on fire occurrence in the United States
Hawbaker, Todd J.; Radeloff, Volker C.; Stewart, Susan I.; Hammer, Roger B.; Keuler, Nicholas S.; Clayton, Murray K.
2013-01-01
National-scale analyses of fire occurrence are needed to prioritize fire policy and management activities across the United States. However, the drivers of national-scale patterns of fire occurrence are not well understood, and how the relative importance of human or biophysical factors varies across the country is unclear. Our research goal was to model the drivers of fire occurrence within ecoregions across the conterminous United States. We used generalized linear models to compare the relative influence of human, vegetation, climate, and topographic variables on fire occurrence in the United States, as measured by MODIS active fire detections collected between 2000 and 2006. We constructed models for all fires and for large fires only and generated predictive maps to quantify fire occurrence probabilities. Areas with high fire occurrence probabilities were widespread in the Southeast, and localized in the Mountain West, particularly in southern California, Arizona, and New Mexico. Probabilities for large-fire occurrence were generally lower, but hot spots existed in the western and south-central United States. The probability of fire occurrence is a critical component of fire risk assessments, in addition to vegetation type, fire behavior, and the values at risk. Many of the hot spots we identified have extensive development in the wildland–urban interface and are near large metropolitan areas. Our results demonstrated that human variables were important predictors of both all fires and large fires and frequently exhibited nonlinear relationships. However, vegetation, climate, and topography were also significant variables in most ecoregions. If recent housing growth trends and fire occurrence patterns continue, these areas will continue to challenge policies and management efforts seeking to balance the risks generated by wildfires with the ecological benefits of fire.
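A minimal sketch of the kind of generalized linear model described above is given below, fitting binomial fire occurrence against a few human and biophysical covariates; the data frame and variable names are hypothetical placeholders, not the MODIS/ecoregion dataset used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical grid-cell data; columns stand in for the kinds of covariates the
# abstract discusses (human, climate, vegetation, topography), not the study's data.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "fire": rng.integers(0, 2, n),              # fire occurrence (0/1)
    "housing_density": rng.lognormal(1, 1, n),  # human covariate
    "summer_temp": rng.normal(25, 4, n),        # climate covariate
    "ndvi": rng.uniform(0.1, 0.9, n),           # vegetation covariate
    "slope": rng.uniform(0, 30, n),             # topographic covariate
})

# Binomial GLM (logit link); nonlinear responses could be added via polynomial terms.
model = smf.glm("fire ~ np.log1p(housing_density) + summer_temp + ndvi + slope",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())
print("predicted occurrence probabilities:", model.predict(df.head()).values)
```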
NASA Astrophysics Data System (ADS)
Storck, Pascal; Lettenmaier, Dennis P.; Bolton, Susan M.
2002-11-01
The results of a 3-year field study to observe the processes controlling snow interception by forest canopies and under-canopy snow accumulation and ablation in mountain maritime climates are reported. The field study was further intended to provide data to develop and test models of forest canopy effects on beneath-canopy snowpack accumulation and melt at the plot and stand scales. Weighing lysimeters, cut-tree experiments, and manual snow surveys were deployed at a site in the Umpqua National Forest, Oregon (elevation 1200 m). A unique design for a weighing lysimeter was employed that allowed continuous measurements of snowpack evolution beneath a forest canopy to be taken at a scale unaffected by variability in canopy throughfall. Continuous observations of snowpack evolution in large clearings were made coincidentally with the canopy measurements. Large differences in snow accumulation and ablation were observed at sites beneath the forest canopy and in large clearings. These differences were not well described by simple relationships between the sites. Over the study period, approximately 60% of snowfall was intercepted by the canopy (up to a maximum of about 40 mm water equivalent). Instantaneous sublimation rates exceeded 0.5 mm per hour for short periods. However, apparent average sublimation from the intercepted snow was less than 1 mm per day and totaled approximately 100 mm per winter season. Approximately 72 and 28% of the remaining intercepted snow was removed as meltwater drip and large snow masses, respectively. Observed differences in snow interception rate and maximum snow interception capacity between Douglas fir (Pseudotsuga menziesii), white fir (Abies concolor), ponderosa pine (Pinus ponderosa), and lodgepole pine (Pinus contorta) were minimal.
Characterizing steady states of genome-scale metabolic networks in continuous cell cultures.
Fernandez-de-Cossio-Diaz, Jorge; Leon, Kalet; Mulet, Roberto
2017-11-01
In the continuous mode of cell culture, a constant flow carrying fresh media replaces culture fluid, cells, nutrients and secreted metabolites. Here we present a model for continuous cell culture coupling intra-cellular metabolism to extracellular variables describing the state of the bioreactor, taking into account the growth capacity of the cell and the impact of toxic byproduct accumulation. We provide a method to determine the steady states of this system that is tractable for metabolic networks of arbitrary complexity. We demonstrate our approach in a toy model first, and then in a genome-scale metabolic network of the Chinese hamster ovary cell line, obtaining results that are in qualitative agreement with experimental observations. We derive a number of consequences from the model that are independent of parameter values. The ratio between cell density and dilution rate is an ideal control parameter to fix a steady state with desired metabolic properties. This conclusion is robust even in the presence of multi-stability, which is explained in our model by a negative feedback loop due to toxic byproduct accumulation. A complex landscape of steady states emerges from our simulations, including multiple metabolic switches, which also explain why cell-line and media benchmarks carried out in batch culture cannot be extrapolated to perfusion. On the other hand, we predict invariance laws between continuous cell cultures with different parameters. A practical consequence is that the chemostat is an ideal experimental model for large-scale high-density perfusion cultures, where the complex landscape of metabolic transitions is faithfully reproduced.
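A toy chemostat, far simpler than the genome-scale model in the paper, still shows how the steady state is pinned down by the dilution rate and how a toxic byproduct feeds back on growth; the kinetic form and all parameter values below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

# Toy continuous-culture (chemostat) model: biomass X, substrate S, toxic byproduct T.
# All values are illustrative assumptions, not parameters of the genome-scale CHO model.
D, S_in = 0.05, 20.0                      # dilution rate (1/h), feed substrate (mM)
mu_max, Ks, Y, k_tox, Ki = 0.4, 0.5, 0.5, 0.3, 5.0

def mu(S, T):
    """Monod growth in substrate S, inhibited by the toxic byproduct T."""
    return mu_max * S / (Ks + S) * Ki / (Ki + T)

def residual(S):
    # At a non-washout steady state mu = D, so the balances give X and T directly:
    X = Y * (S_in - S)                    # substrate balance: D*(S_in - S) = mu*X/Y
    T = k_tox * X                         # byproduct balance: k_tox*mu*X = D*T
    return mu(S, T) - D

S_ss = brentq(residual, 1e-9, S_in - 1e-9)
X_ss, T_ss = Y * (S_in - S_ss), k_tox * Y * (S_in - S_ss)
print(f"steady state: S = {S_ss:.3f}, X = {X_ss:.2f}, T = {T_ss:.2f}")
```

Raising the feed concentration or lowering D shifts the balance between growth and byproduct inhibition, which is the sense in which the cell-density-to-dilution-rate ratio acts as a control parameter.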
Particle swarm optimization algorithm for optimizing assignment of blood in blood banking system.
Olusanya, Micheal O; Arasomwan, Martins A; Adewumi, Aderemi O
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in the literature. We propose queue and multiple knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for the BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The results can therefore serve as a benchmark and basis for decision support tools for real-life deployment.
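For readers unfamiliar with the underlying metaheuristic, the sketch below is a bare-bones continuous PSO minimizing a toy objective; the paper applies a discrete, knapsack-constrained variant to blood assignment, so this shows only the generic velocity/position update, with coefficients chosen as common defaults rather than the authors' settings.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Bare-bones particle swarm optimization (continuous, unconstrained)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for a blood-assignment cost (simple sphere function).
best_x, best_val = pso_minimize(lambda z: float(np.sum(z**2)), dim=5)
print("best value found:", best_val)
```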
Husnawati, Husnawati; Afendi, Farit Mochamad; Darusman, Latifah K.; Altaf-Ul-Amin, Md.; Sato, Tetsuo; Ono, Naoaki; Sugiura, Tadao; Kanaya, Shigehiko
2014-01-01
Indonesia has the largest number of medicinal plant species in the world, and these plants are used as Jamu medicines. Jamu medicines are popular traditional medicines from Indonesia, and there is a need to systematize their formulation and develop the basic scientific principles of Jamu to meet the requirements of the Indonesian Healthcare System. We propose a new approach to predict the relation between plant and disease using network analysis and supervised clustering. As a preliminary step, we assigned 3138 Jamu formulas to 116 diseases of the International Classification of Diseases (ver. 10), which belong to 18 classes of disease from the National Center for Biotechnology Information. The correlation measures between Jamu pairs were determined based on their ingredient similarity. Networks were constructed and analyzed by selecting highly correlated Jamu pairs. Clusters were then generated by using the network clustering algorithm DPClusO. Using the matching score of a cluster, the dominant disease and the high-frequency plant associated with the cluster were determined. The plant-to-disease relations predicted by our method were evaluated in the context of previously published results and were found to produce around 90% successful predictions. PMID:24804251
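The ingredient-similarity step can be pictured with a simple set-overlap measure such as the Jaccard index; the actual correlation measure used in the study is not specified here, so this is an assumed, illustrative choice, and the formula names and ingredients are made up.

```python
def jaccard(a, b):
    """Set-overlap similarity between two ingredient lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical Jamu formulas (names and ingredients are placeholders, not real data).
formulas = {
    "jamu_A": ["ginger", "turmeric", "tamarind"],
    "jamu_B": ["ginger", "turmeric", "lemongrass"],
    "jamu_C": ["clove", "nutmeg"],
}

# Keep only highly similar pairs, as in a thresholded network construction.
threshold = 0.4
edges = [(i, j, jaccard(fi, fj))
         for i, fi in formulas.items() for j, fj in formulas.items()
         if i < j and jaccard(fi, fj) >= threshold]
print(edges)   # e.g. [('jamu_A', 'jamu_B', 0.5)]
```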
Zhou, Xi; Xu, Huihua; Cheng, Jiyi; Zhao, Ni; Chen, Shih-Chi
2015-01-01
A continuous roll-to-roll microcontact printing (MCP) platform promises large-area nanoscale patterning with significantly improved throughput and a great variety of applications, e.g. precision patterning of metals, bio-molecules, colloidal nanocrystals, etc. Compared with nanoimprint lithography, MCP does not require a thermal imprinting step (which limits the speed and material choices), but it instead demands extreme precision in multi-axis positioning and misalignment correction for large-area adaptation. In this work, we exploit a flexure-based mechanism that enables continuous MCP with 500 nm precision and 0.05 N force control. The fully automated roll-to-roll platform is coupled with a new backfilling MCP chemistry optimized for high-speed patterning of gold and silver. Gratings of 300, 400, and 600 nm line-width at various locations on a 4-inch plastic substrate are fabricated at a speed of 60 cm/min. Our work represents the first example of roll-to-roll MCP with high reproducibility and wafer-scale production capability at nanometer resolution. The precision roll-to-roll platform can be readily applied to other material systems. PMID:26037147
NASA Technical Reports Server (NTRS)
1995-01-01
This report contains the 1995 annual progress reports of the Research Fellows and students of the Center for Turbulence Research (CTR). In 1995 CTR continued its concentration on the development and application of large-eddy simulation to complex flows, development of novel modeling concepts for engineering computations in the Reynolds averaged framework, and turbulent combustion. In large-eddy simulation, a number of numerical and experimental issues have surfaced which are being addressed. The first group of reports in this volume are on large-eddy simulation. A key finding in this area was the revelation of possibly significant numerical errors that may overwhelm the effects of the subgrid-scale model. We also commissioned a new experiment to support the LES validation studies. The remaining articles in this report are concerned with Reynolds averaged modeling, studies of turbulence physics and flow generated sound, combustion, and simulation techniques. Fundamental studies of turbulent combustion using direct numerical simulations which started at CTR will continue to be emphasized. These studies and their counterparts carried out during the summer programs have had a noticeable impact on combustion research world wide.
NASA Astrophysics Data System (ADS)
Melnichenko, O.; Hacker, P. W.; Wentz, F. J.; Meissner, T.; Maximenko, N. A.; Potemra, J. T.
2016-12-01
To address the need for a consistent, continuous, long-term, high-resolution sea surface salinity (SSS) dataset for ocean research and applications, a trial SSS analysis is produced in the eastern tropical Pacific from multi-satellite observations. The new SSS data record is a synergy of data from two satellite missions. The beginning segment, covering the period from September 2011 to June 2015, utilizes Aquarius SSS data and is based on the optimum interpolation analysis developed at the University of Hawaii. The analysis is produced on a 0.25-degree grid and uses a dedicated bias-correction algorithm to correct the satellite retrievals for large-scale biases with respect to in-situ data. The time series is continued with the Soil Moisture Active Passive (SMAP) satellite-based SSS data provided by Remote Sensing Systems (RSS). To ensure consistency and continuity in the data record, SMAP SSS fields are adjusted using a set of optimally designed spatial filters and in-situ, primarily Argo, data to: (i) remove large-scale satellite biases, and (ii) reduce small-scale noise, while preserving the high spatial and temporal resolution of the data set. The consistency between the two sub-sets of the data record is evaluated during their overlapping period in April-June 2015. Verification studies show that SMAP SSS has a very good agreement with the Aquarius SSS, noting that SMAP SSS can provide better spatial resolution. The 5-yr long time series of SSS in the SPURS-2 domain (125°W, 10°N) shows fresher than normal SSS during the last year's El Niño event. The year-mean difference is about 0.5 psu. The annual cycle during the El Niño year also appears to be much weaker than in a normal year.
NASA Astrophysics Data System (ADS)
Long, Marianna M.; Bishop, John Bradford; Delucas, Lawrence J.; Nagabhushan, Tattanhalli L.; Reichert, Paul; Smith, G. David
1997-01-01
The Protein Crystal Growth Facility (PCF) is space-flight hardware that accommodates large scale protein crystal growth experiments using temperature change as the inductive step. Recent modifications include specialized instrumentation for monitoring crystal nucleation with laser light scattering. This paper reviews results from its first seven flights on the Space Shuttle, the last with laser light scattering instrumentation in place. The PCF's objective is twofold: (1) the production of high quality protein crystals for x-ray analysis and subsequent structure-based drug design and (2) preparation of a large quantity of relatively contaminant-free crystals for use as time-release protein pharmaceuticals. The first three Shuttle flights with bovine insulin constituted the PCF's proof of concept, demonstrating that the space-grown crystals were larger and diffracted to higher resolution than their earth-grown counterparts. The later four PCF missions were used to grow recombinant human insulin crystals for x-ray analysis and to continue production trials aimed at the development of a processing facility for crystalline recombinant α-interferon.
Measurement of surface water runoff from plots of two different sizes
NASA Astrophysics Data System (ADS)
Joel, Abraham; Messing, Ingmar; Seguel, Oscar; Casanova, Manuel
2002-05-01
Intensities and amounts of water infiltration and runoff on sloping land are governed by the rainfall pattern and soil hydraulic conductivity, as well as by the microtopography and soil surface conditions. These components are closely interrelated and occur simultaneously, and their particular contribution may change during a rainfall event, or their effects may vary at different field scales. The scale effect on the process of infiltration/runoff was studied under natural field and rainfall conditions for two plot sizes: small plots of 0.25 m² and large plots of 50 m². The measurements were carried out in the central region of Chile in a piedmont most recently used as natural pastureland. Three blocks, each having one large plot and five small plots, were established. Cumulative rainfall and runoff quantities were sampled every 5 min. Significant variations in runoff responses to rainfall rates were found for the two plot sizes. On average, large plots yielded only 40% of the runoff quantities produced on small plots per unit area. This difference between plot sizes was observed even during periods of continuous runoff.
Plate motions and deformations from geologic and geodetic data
NASA Technical Reports Server (NTRS)
Jordan, Thomas H.
1990-01-01
An analysis of geodetic data in the vicinity of the Crustal Dynamics Program (CDP) site at Vandenberg Air Force Base (VNDN) is presented. The utility of space-geodetic data in the monitoring of transient strains associated with earthquakes in tectonically active areas like California is investigated. Particular interest is in the possibility that space-geodetic methods may be able to provide critical new data on deformations precursory to large seismic events. Although earthquake precursory phenomena are not well understood, the monitoring of small strains in the vicinity of active faults is a promising technique for studying the mechanisms that nucleate large earthquakes and, ultimately, for earthquake prediction. Space-geodetic techniques are now capable of measuring baselines of tens to hundreds of kilometers with a precision of a few parts in 10⁸. Within the next few years, it will be possible to record and analyze large-scale strain variations with this precision continuously in real time. Thus, space-geodetic techniques may become tools for earthquake prediction. In anticipation of this capability, several questions related to the temporal and spatial scales associated with subseismic deformation transients are examined.
NASA Astrophysics Data System (ADS)
Kempf, P.; Moernaut, J.; Vandoorne, W.; Van Daele, M. E.; Pino, M.; Urrutia, R.; De Batist, M. A. O.
2014-12-01
After the last decade of extreme tsunami events with catastrophic damage to infrastructure and a horrendous amount of casualties, it is clear that more and better paleotsunami records are needed to improve our understanding of the recurrence intervals and intensities of large-scale tsunamis. Coastal lakes (e.g. Bradley Lake, Cascadia; Kelsey et al., 2005) have the potential to contain long and continuous sedimentary records, which is an important asset in view of the centennial- to millennial-scale recurrence times of great tsunami-triggering earthquakes. Lake Huelde on Chiloé Island (42.5°S), Chile, is a coastal lake located in the middle of the Valdivia segment, which is known for having produced the strongest ever instrumentally recorded earthquake in 1960 AD (MW: 9.5), and other large earthquakes prior to that: i.e. 1837 AD, 1737 AD (no report of a tsunami) and 1575 AD (Lomnitz, 1970, 2004, Cisternas et al., 2005). We present a new 5400 yr-long paleotsunami record with a Bayesian age-depth model based on 23 radiocarbon dates that exceeds all previous paleotsunami records from the Valdivia segment, both in terms of length and of continuity. 18 events are described and a semi-quantitative measure of the event intensity at the study area is given, revealing at least two predecessors of the 1960 AD event in the mid to late Holocene that are equal in intensity. The resulting implications from the age-depth model and from the semi-quantitative intensity reconstruction are discussed in this contribution.
Development of the Electromagnetic Continuous Casting Technology for of Magnesium Alloys
NASA Astrophysics Data System (ADS)
Park, Joon-Pyo; Kim, Myoung-Gyun; Kim, Jong-Ho; Lee, Gyu-Chang
Currently, magnesium billets produced by ingot casting or the direct chill casting process have low-quality surfaces and low productivity. Continuous casting technology addresses these problems, yielding billets with high-quality surfaces and a fine-grained, homogeneous microstructure while also reducing cost. The latent heat of fusion per unit mass (J/g) of magnesium is similar to that of other metals; however, when the heat released to the mold surface in the meniscus region during continuous casting is considered per unit volume, magnesium solidifies rapidly in the mold, which induces subsequent surface defect formation. In this study, electromagnetic casting and stirring (EMC and EMS) techniques are proposed to conveniently control the solidification process by compensating for the low volumetric latent heat of solidification and to fabricate magnesium billets with high-quality surfaces. The technique was extended to large-scale billets up to 300 mm in diameter, and continuous casting was successfully conducted. The resulting magnesium billets were then used to fabricate a prototype automobile pulley.
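To make the per-volume argument concrete, a rough back-of-the-envelope comparison with aluminium is sketched below; the latent heats and densities are approximate handbook values given only to illustrate why magnesium solidifies comparatively quickly in the mold, not figures from the paper.

```latex
% Approximate handbook values, for illustration only (not figures from the paper):
%   magnesium: L \approx 350\ \mathrm{J/g},\quad \rho \approx 1.74\ \mathrm{g/cm^3}
%   aluminium: L \approx 400\ \mathrm{J/g},\quad \rho \approx 2.70\ \mathrm{g/cm^3}
L_{V,\mathrm{Mg}} = L\,\rho \approx 350 \times 1.74 \approx 6.1\times 10^{2}\ \mathrm{J\,cm^{-3}},
\qquad
L_{V,\mathrm{Al}} \approx 400 \times 2.70 \approx 1.1\times 10^{3}\ \mathrm{J\,cm^{-3}}
```

On these rough numbers a magnesium billet stores only about half the solidification heat per unit volume that an aluminium billet does, so the solidification front advances quickly in the mold unless heat extraction is moderated, which is the motivation for the electromagnetic approach described above.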
NASA Astrophysics Data System (ADS)
Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song
2018-01-01
Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring should be strengthened.
White, Mark; Wells, John S G; Butterworth, Tony
2014-09-01
To examine the literature related to a large-scale quality improvement initiative, the 'Productive Ward: Releasing Time to Care', providing a bibliometric profile that tracks the level of interest and scale of roll-out and adoption, discussing the implications for sustainability. Productive Ward: Releasing Time to Care (aka Productive Ward) is probably one of the most ambitious quality improvement efforts engaged by the UK-NHS. Politically and financially supported, its main driver was the NHS Institute for Innovation and Improvement. The NHS institute closed in early 2013 leaving a void of resources, knowledge and expertise. UK roll-out of the initiative is well established and has arguably peaked. International interest in the initiative however continues to develop. A comprehensive literature review was undertaken to identify the literature related to the Productive Ward and its implementation (January 2006-June 2013). A bibliometric analysis examined/reviewed the trends and identified/measured interest, spread and uptake. Overall distribution patterns identify a declining trend of interest, with reduced numbers of grey literature and evaluation publications. However, detailed examination of the data shows no reduction in peer-reviewed outputs. There is some evidence that international uptake of the initiative continues to generate publications and create interest. Sustaining this initiative in the UK will require re-energising, a new focus and financing. The transition period created by the closure of its creator may well contribute to further reduced levels of interest and publication outputs in the UK. However, international implementation, evaluation and associated publications could serve to attract professional/academic interest in this well-established, positively reported, quality improvement initiative. This paper provides nurses and ward teams involved in quality improvement programmes with a detailed, current-state, examination and analysis of the Productive Ward literature, highlighting the bibliometric patterns of this large-scale, international, quality improvement programme. It serves to disseminate updated publication information to those in clinical practice who are involved in Productive Ward or a similar quality improvement initiative. © 2014 John Wiley & Sons Ltd.
Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2018-07-01
This work continues development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain eastward jet extension of the western boundary currents and its adjacent recirculation zones via eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood, and in this paper we, first, study it and, then, propose and test its novel parameterization. We start by decomposing the reference eddy-resolving flow solution into the large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies, and in the transient rectified eddy component, which consists of highly anisotropic ribbons of the opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and being responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulating the key eddy parameterization hypothesis: in an eddy-permitting model at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework implemented here in terms of local, deterministic flow roughening controlled by single parameter. We test the parameterization skills in an hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate, that indeed it can be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but also is dynamically transparent, therefore, it provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
NASA Astrophysics Data System (ADS)
Murray, K. D.; Lohman, R.
2017-12-01
Areas of large-scale subsidence are observed over much of the San Joaquin Valley of California due to the extraction of groundwater and hydrocarbons from the subsurface. These signals span regions with spatial extents of up to 100 km and have rates of up to 45 cm/yr or more. InSAR and GPS are complementary methods commonly used to measure such ground displacements and can provide important constraints on crustal deformation models, support groundwater studies, and inform water resource management efforts. However, current standard methods for processing these data sets and creating displacement time series are suboptimal for the deformation observed in areas like the San Joaquin Valley because (1) the ground surface properties are constantly changing due largely to agricultural activity, resulting in low coherence in half or more of a SAR frame, and (2) the deformation signals are distributed throughout the SAR frames, and are comparable to the size of the frames themselves. Therefore, referencing areas of deformation to non-deforming areas and correcting for long wavelength signals (e.g. atmospheric delays, orbital errors) is particularly difficult. We address these challenges by exploiting pixels that are stable in space and time, and use them for weighted spatial averaging and selective filtering before unwrapping. We then compare a range of methods for both long wavelength corrections and referencing via automatic partitioning of non-deforming areas, then benchmark results against continuous GPS measurements. Our final time series consist of nearly 15 years of displacement measurements from continuous GPS data, and Envisat, ALOS-1, Sentinel SAR data, and show significant temporal and spatial variations. We find that the choice of reference and long wavelength corrections can significantly bias long-term rate and seasonal amplitude estimates, causing variations of as much as 100% of the mean estimate. As we enter an era with free and open data access and regular observation plans from missions such as NISAR and the Sentinel constellation, our approach will help users evaluate the significance of observed deformation at a range of spatial scales and in areas with challenging surface properties.
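One of the long-wavelength corrections mentioned above can be illustrated with a simple planar-ramp removal: fit a plane (plus offset) to the unwrapped displacement field using only pixels flagged as stable, then subtract it everywhere. This is a generic sketch with synthetic data, not the specific correction or reference-selection scheme developed in the study.

```python
import numpy as np

def remove_ramp(displacement, stable_mask):
    """Fit d = a*x + b*y + c over stable (assumed non-deforming) pixels, subtract everywhere.

    displacement : 2-D array of unwrapped displacements (NaN where incoherent)
    stable_mask  : boolean 2-D array marking stable reference pixels
    """
    rows, cols = np.indices(displacement.shape)
    ok = stable_mask & np.isfinite(displacement)
    G = np.column_stack([cols[ok], rows[ok], np.ones(ok.sum())])
    coeffs, *_ = np.linalg.lstsq(G, displacement[ok], rcond=None)
    ramp = coeffs[0] * cols + coeffs[1] * rows + coeffs[2]
    return displacement - ramp

# Synthetic example: a subsidence bowl plus an orbital/atmospheric-like ramp.
y, x = np.mgrid[0:200, 0:200]
bowl = -0.05 * np.exp(-(((x - 100) ** 2 + (y - 100) ** 2) / 1500.0))
ramp = 1e-4 * x + 5e-5 * y
data = bowl + ramp
stable = ((x - 100) ** 2 + (y - 100) ** 2) > 80 ** 2   # treat far-field pixels as stable
corrected = remove_ramp(data, stable)
print("max residual in assumed-stable area:", np.abs(corrected[stable]).max())
```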
NASA Astrophysics Data System (ADS)
Uchida, Taro; Sakurai, Wataru; Iuchi, Takuma; Izumiyama, Hiroaki; Borgatti, Lisa; Marcato, Gianluca; Pasuto, Alessandro
2018-04-01
Monitoring of sediment transport from hillslopes to channel networks as a consequence of floods with suspended and bedload transport, hyperconcentrated flows, debris and mud flows is essential not only for scientific issues, but also for prevention and mitigation of natural disasters, i.e. for hazard assessment, land use planning and design of torrent control interventions. In steep, potentially unstable terrains, ground-based continuous monitoring of hillslope and hydrological processes is still highly localized and expensive, especially in terms of manpower. In recent years, new seismic and acoustic methods have been developed for continuous bedload monitoring in mountain rivers. Since downstream bedload transport rate is controlled by upstream sediment supply from tributary channels and bed-external sources, continuous bedload monitoring might be an effective tool for detecting the sediments mobilized by debris flow processes in the upper catchment and thus represent an indirect method to monitor slope instability processes at the catchment scale. However, there is poor information about the effects of episodic sediment supply from upstream bed-external sources on downstream bedload transport rate at a single flood time scale. We have examined the effects of sediment supply due to upstream debris flow events on downstream bedload transport rate along the Yotagiri River, central Japan. To do this, we have conducted continuous bedload observations using a hydrophone (Japanese pipe microphone) located 6.4 km downstream the lower end of a tributary affected by debris flows. Two debris flows occurred during the two-years-long observation period. As expected, bedload transport rate for a given flow depth showed to be larger after storms triggering debris flows. That is, although the magnitude of sediment supply from debris flows is not large, their effect on bedload is propagating >6 km downstream at a single flood time scale. This indicates that continuous bedload observations could be effective for detecting sediment supply as a consequence of debris flow events.
Conceptual Research of Lunar-based Earth Observation for Polar Glacier Motion
NASA Astrophysics Data System (ADS)
Ruan, Zhixing; Liu, Guang; Ding, Yixing
2016-07-01
The ice flow velocity of glaciers is important for estimating the polar ice sheet mass balance, and it is of great significance for studies of rising sea level under the background of global warming. However, long-term, global measurements of these macro-scale motion processes of the polar glaciers have so far hardly been achieved by Earth Observation (EO) techniques from the ground, from aircraft or from satellites in space. Facing the demand for space technology for large-scale global environmental change observation, especially changes in polar glaciers, this paper proposes a new concept: setting up sensors on the lunar surface and using the Moon as a platform for Earth observation, transmitting the data back to Earth. Lunar-based Earth observation, which enables the Earth's large-scale, continuous, long-term dynamic motions to be measured, is expected to provide a new solution to the problems mentioned above. According to the pattern and characteristics of polar glacier motion, we propose a comprehensive investigation of Lunar-based Earth observation with synthetic aperture radar (SAR). Via theoretical modeling and simulation-based inversion experiments, intensive studies of Lunar-based Earth observation of glacier motion in the polar regions will be carried out, including basic InSAR theory, InSAR observation modes and optimization methods for their key parameters. This will help to creatively expand the EO technique system from space. In addition, it will contribute to establishing the theoretical foundation for realizing global, long-term and continuous observation of glacier motion in the Antarctic and the Arctic.
Reconsidering earthquake scaling
Gomberg, Joan S.; Wech, Aaron G.; Creager, Kenneth; Obara, K.; Agnew, Duncan
2016-01-01
The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.
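A compact way to state the change in scaling described above, under the usual constant-stress-drop, constant-rupture-speed assumptions (a standard schematic argument consistent with the abstract, not necessarily the authors' exact formulation), is that area grows as the square of duration while the slip patch is unbounded, and only linearly once it fills the width W of the bounded slip zone:

```latex
% Schematic moment-duration scaling with rupture speed v, shear modulus \mu,
% average slip \bar{D}, rupture area A, and bounded-zone width W.
M_0 = \mu\,\bar{D}\,A \;\propto\;
\begin{cases}
  T^{3}, & \text{unbounded 2-D growth: } A \propto (vT)^{2},\ \bar{D} \propto vT,\\[4pt]
  W^{2}\,T, & \text{bounded 1-D growth: } A \propto W\,vT,\ \bar{D} \propto W.
\end{cases}
```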
NASA Astrophysics Data System (ADS)
McKay, N.
2017-12-01
As timescale increases from years to centuries, the spatial scale of covariability in the climate system is hypothesized to increase as well. Covarying spatial scales are larger for temperature than for hydroclimate, however, both aspects of the climate system show systematic changes on large-spatial scales on orbital to tectonic timescales. The extent to which this phenomenon is evident in temperature and hydroclimate at centennial timescales is largely unknown. Recent syntheses of multidecadal to century-scale variability in hydroclimate during the past 2k in the Arctic, North America, and Australasia show little spatial covariability in hydroclimate during the Common Era. To determine 1) the evidence for systematic relationships between the spatial scale of climate covariability as a function of timescale, and 2) whether century-scale hydroclimate variability deviates from the relationship between spatial covariability and timescale, we quantify this phenomenon during the Common Era by calculating the e-folding distance in large instrumental and paleoclimate datasets. We calculate this metric of spatial covariability, at different timescales (1, 10 and 100-yr), for a large network of temperature and precipitation observations from the Global Historical Climatology Network (n=2447), from v2.0.0 of the PAGES2k temperature database (n=692), and from moisture-sensitive paleoclimate records North America, the Arctic, and the Iso2k project (n = 328). Initial results support the hypothesis that the spatial scale of covariability is larger for temperature, than for precipitation or paleoclimate hydroclimate indicators. Spatially, e-folding distances for temperature are largest at low latitudes and over the ocean. Both instrumental and proxy temperature data show clear evidence for increasing spatial extent as a function of timescale, but this phenomenon is very weak in the hydroclimate data analyzed here. In the proxy hydroclimate data, which are predominantly indicators of effective moisture, e-folding distance increases from annual to decadal timescales, but does not continue to increase to centennial timescales. Future work includes examining additional instrumental and proxy datasets of moisture variability, and extending the analysis to millennial timescales of variability.
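The e-folding metric itself is straightforward to compute: correlate each site pair at the timescale of interest, pair the correlations with separation distances, and fit an exponential decay. The sketch below assumes station-style series and a simple great-circle distance, which may differ in detail from the analysis described above; the synthetic data have a built-in decorrelation length of 800 km.

```python
import numpy as np
from scipy.optimize import curve_fit

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine)."""
    p1, p2 = np.radians([lat1, lat2])
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def e_folding_distance(series, lats, lons):
    """Fit r(d) = exp(-d / L) to pairwise correlations; series is (n_sites, n_times)."""
    n = series.shape[0]
    dists, corrs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(great_circle_km(lats[i], lons[i], lats[j], lons[j]))
            corrs.append(np.corrcoef(series[i], series[j])[0, 1])
    (L,), _ = curve_fit(lambda d, L: np.exp(-d / L),
                        np.array(dists), np.array(corrs), p0=[1000.0])
    return L

# Synthetic example: 20 sites whose covariance decays with an 800 km e-folding scale.
rng = np.random.default_rng(1)
lats, lons = rng.uniform(30, 60, 20), rng.uniform(-120, -70, 20)
D = np.array([[great_circle_km(lats[i], lons[i], lats[j], lons[j]) for j in range(20)]
              for i in range(20)])
series = rng.multivariate_normal(np.zeros(20), np.exp(-D / 800.0), size=100).T
print("estimated e-folding distance (km):", e_folding_distance(series, lats, lons))
```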
Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking
NASA Astrophysics Data System (ADS)
Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.
2008-12-01
Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment The authors are grateful to SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
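A minimal sketch of the time domain particle tracking idea described above: each particle's arrival time is the sum of segment residence times drawn from per-segment distributions, and the arrival times are then kernel-smoothed into a breakthrough curve. The lognormal segment distributions and all parameters below are placeholders, far simpler than the advection-dispersion-retention distributions used in the actual simulations.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Hypothetical pathway of 10 segments; each segment gets its own residence-time
# distribution (lognormal placeholders standing in for advection/dispersion/retention).
n_segments, n_particles = 10, 5000
seg_mean_logt = rng.uniform(1.0, 2.0, n_segments)   # log-space mean per segment
seg_sigma = 0.4

# Each particle's total travel time is the sum of sampled segment residence times.
segment_times = rng.lognormal(mean=seg_mean_logt, sigma=seg_sigma,
                              size=(n_particles, n_segments))
arrival_times = segment_times.sum(axis=1)

# Kernel post-processing of arrival times into a breakthrough curve (discharge vs time).
kde = gaussian_kde(arrival_times)
t_grid = np.linspace(0, arrival_times.max(), 200)
breakthrough = kde(t_grid)        # normalized; scale by released mass if needed
print("median arrival time:", np.median(arrival_times))
```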
Impact of large-scale dynamics on the microphysical properties of midlatitude cirrus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhlbauer, Andreas; Ackerman, Thomas P.; Comstock, Jennifer M.
2014-04-16
In situ microphysical observations of mid-latitude cirrus collected during the Department of Energy Small Particles in Cirrus (SPARTICUS) field campaign are combined with an atmospheric state classification for the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site to understand statistical relationships between cirrus microphysics and the large-scale meteorology. The atmospheric state classification is informed about the large-scale meteorology and state of cloudiness at the ARM SGP site by combining ECMWF ERA-Interim reanalysis data with 14 years of continuous observations from the millimeter-wavelength cloud radar. Almost half of the cirrus cloud occurrences in the vicinity of the ARM SGP site during SPARTICUS can be explained by three distinct synoptic conditions, namely upper-level ridges, mid-latitude cyclones with frontal systems and subtropical flows. Probability density functions (PDFs) of cirrus microphysical properties such as particle size distributions (PSDs), ice number concentrations and ice water content (IWC) are examined and exhibit striking differences among the different synoptic regimes. Generally, narrower PSDs with lower IWC but higher ice number concentrations are found in cirrus sampled in upper-level ridges whereas cirrus sampled in subtropical flows, fronts and aged anvils show broader PSDs with considerably lower ice number concentrations but higher IWC. Despite striking contrasts in the cirrus microphysics for different large-scale environments, the PDFs of vertical velocity are not different, suggesting that vertical velocity PDFs are a poor predictor for explaining the microphysical variability in cirrus. Instead, cirrus microphysical contrasts may be driven by differences in ice supersaturations or aerosols.
NASA Astrophysics Data System (ADS)
Flores, A. N.; Lakshmi, V.; Al-Barakat, R.; Maksimowicz, M.
2017-12-01
Land grabbing, the acquisition of large areas of land by external entities, results from interactions of complex global economic, social, and political processes. These transactions are controversial because they can result in large-scale disruptions to historical land uses, including increased intensity of agricultural practices and significant conversions in land cover. These large-scale disruptions have the potential to impact surface water and energy balance because vegetation controls the partitioning of incoming energy into latent and sensible heat fluxes and precipitation into runoff and infiltration. Because large-scale land acquisitions can impact local ecosystem services, it is important to document changes in terrestrial vegetation associated with these acquisitions to support the assessment of associated impacts on regional surface water and energy balance, spatiotemporal scales of those changes, and interactions and feedbacks with other processes, particularly in the atmosphere. We use remote sensing data from multiple satellite platforms to diagnose and characterize changes in terrestrial vegetation and ecohydrology in Mozambique during periods that bracket those associated with significant land acquisitions. The Advanced Very High Resolution Radiometer (AVHRR) sensor provides long-term continuous data that can document historical seasonal cycles of vegetation greenness. These data are augmented with analyses from Landsat multispectral data, which provide significantly higher spatial resolution. Here we quantify how spatiotemporal changes in vegetation are associated with periods of significant land acquisitions in Mozambique. This analysis complements a suite of land-atmosphere modeling experiments designed to deduce potential changes in land surface water and energy budgets associated with these acquisitions. This work advances understanding of the telecouplings between global economic and political forcings and regional hydrology and climate.
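A rough sketch of the kind of greenness analysis described above (illustrative only; the reflectance values, monthly baseline, and function names are assumptions, not the authors' workflow): compute a vegetation index such as NDVI per observation and compare it against a long-term monthly baseline to flag changes around acquisition periods.

    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        # Normalized Difference Vegetation Index from near-infrared and red reflectance.
        nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    def seasonal_anomaly(values, months, monthly_baseline):
        # Departure of each observation from its long-term mean for that calendar month.
        return np.asarray([v - monthly_baseline[m] for v, m in zip(values, months)])

    # Hypothetical reflectances for two dates and a placeholder monthly climatology.
    greenness = ndvi(nir=[0.45, 0.30], red=[0.12, 0.15])
    baseline = {m: 0.50 for m in range(1, 13)}
    anomalies = seasonal_anomaly(greenness, months=[1, 7], monthly_baseline=baseline)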
Hakkenberg, C R; Zhu, K; Peet, R K; Song, C
2018-02-01
The central role of floristic diversity in maintaining habitat integrity and ecosystem function has propelled efforts to map and monitor its distribution across forest landscapes. While biodiversity studies have traditionally relied largely on ground-based observations, the immensity of the task of generating accurate, repeatable, and spatially-continuous data on biodiversity patterns at large scales has stimulated the development of remote-sensing methods for scaling up from field plot measurements. One such approach is through integrated LiDAR and hyperspectral remote-sensing. However, despite their efficiencies in cost and effort, LiDAR-hyperspectral sensors are still highly constrained in structurally- and taxonomically-heterogeneous forests - especially when species' cover is smaller than the image resolution, intertwined with neighboring taxa, or otherwise obscured by overlapping canopy strata. In light of these challenges, this study goes beyond the remote characterization of upper canopy diversity to instead model total vascular plant species richness in a continuous-cover North Carolina Piedmont forest landscape. We focus on two related, but parallel, tasks. First, we demonstrate an application of predictive biodiversity mapping, using nonparametric models trained with spatially-nested field plots and aerial LiDAR-hyperspectral data, to predict spatially-explicit landscape patterns in floristic diversity across seven spatial scales from 0.01 to 900 m². Second, we employ bivariate parametric models to test the significance of individual, remotely-sensed predictors of plant richness to determine how parameter estimates vary with scale. Cross-validated results indicate that predictive models were able to account for 15-70% of variance in plant richness, with LiDAR-derived estimates of topography and forest structural complexity, as well as spectral variance in hyperspectral imagery, explaining the largest portion of variance in diversity levels. Importantly, bivariate tests provide evidence of scale-dependence among predictors, such that remotely-sensed variables significantly predict plant richness only at spatial scales that sufficiently subsume geolocational imprecision between remotely-sensed and field data, and best align with stand components including plant size and density, as well as canopy gaps and understory growth patterns. Beyond their insights into the scale-dependent patterns and drivers of plant diversity in Piedmont forests, these results highlight the potential of remotely-sensible essential biodiversity variables for mapping and monitoring landscape floristic diversity from air- and space-borne platforms. © 2017 by the Ecological Society of America.
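A generic sketch of the predictive-mapping idea (plainly not the authors' model: the synthetic plot data, the predictors, and the random-forest choice are stand-ins for "nonparametric models trained with field plots and LiDAR-hyperspectral data"): fit a nonparametric regressor to plot-level richness and report cross-validated explained variance.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_plots = 200
    # Synthetic plot-level predictors standing in for LiDAR topography,
    # LiDAR structural complexity, and hyperspectral (spectral) variance.
    X = np.column_stack([
        rng.normal(150.0, 40.0, n_plots),   # elevation (m)
        rng.gamma(2.0, 3.0, n_plots),       # canopy height variance (m^2)
        rng.normal(0.15, 0.05, n_plots),    # mean spectral variance (unitless)
    ])
    # Synthetic richness loosely tied to the predictors plus noise.
    y = 5.0 + 0.05 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0.0, 3.0, n_plots)

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")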
Paetkau, D; Waits, L P; Clarkson, P L; Craighead, L; Strobeck, C
1997-12-01
A large microsatellite data set from three species of bear (Ursidae) was used to empirically test the performance of six genetic distance measures in resolving relationships at a variety of scales ranging from adjacent areas in a continuous distribution to species that diverged several million years ago. At the finest scale, while some distance measures performed extremely well, statistics developed specifically to accommodate the mutational processes of microsatellites performed relatively poorly, presumably because of the relatively higher variance of these statistics. At the other extreme, no statistic was able to resolve the close sister relationship of polar bears and brown bears from more distantly related pairs of species. This failure is most likely due to constraints on allele distributions at microsatellite loci. At intermediate scales, both within continuous distributions and in comparisons to insular populations of late Pleistocene origin, it was not possible to define the point where linearity was lost for each of the statistics, except that it is clearly lost after relatively short periods of independent evolution. All of the statistics were affected by the amount of genetic diversity within the populations being compared, significantly complicating the interpretation of genetic distance data.
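For illustration of the two families of statistics contrasted above (a minimal sketch assuming per-locus allele-frequency arrays; not the exact set of six measures used in the study), Nei's standard distance depends only on allele identity, whereas (delta mu)^2 is built around mean allele (repeat) size under a stepwise mutation model for microsatellites.

    import numpy as np

    def nei_standard_distance(freqs_a, freqs_b):
        # Nei's standard genetic distance D_S from lists of per-locus allele-frequency arrays.
        jxy = np.mean([np.sum(a * b) for a, b in zip(freqs_a, freqs_b)])
        jx = np.mean([np.sum(a * a) for a in freqs_a])
        jy = np.mean([np.sum(b * b) for b in freqs_b])
        return -np.log(jxy / np.sqrt(jx * jy))

    def delta_mu_squared(sizes, freqs_a, freqs_b):
        # (delta mu)^2: squared difference in mean allele size, averaged over loci.
        return np.mean([(np.sum(s * a) - np.sum(s * b)) ** 2
                        for s, a, b in zip(sizes, freqs_a, freqs_b)])

    # One hypothetical locus with three alleles of repeat sizes 10, 11 and 12.
    sizes = [np.array([10.0, 11.0, 12.0])]
    freqs_a = [np.array([0.6, 0.3, 0.1])]
    freqs_b = [np.array([0.2, 0.5, 0.3])]
    print(nei_standard_distance(freqs_a, freqs_b), delta_mu_squared(sizes, freqs_a, freqs_b))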
Fixed-bed bioreactor system for the microbial solubilization of coal
Scott, C.D.; Strandberg, G.W.
1987-09-14
A fixed-bed bioreactor system for the conversion of coal into microbially solubilized coal products. The fixed-bed bioreactor continuously or periodically receives coal and bioreactants and provides for the large-scale production of microbially solubilized coal products in an economical and efficient manner. An oxidation pretreatment process for rendering coal uniformly and more readily susceptible to microbial solubilization may be employed with the fixed-bed bioreactor. 1 fig., 1 tab.
The U.S. Army in Asia, 2030-2040
2014-01-01
...in general. However, to be fully effective in a war with China, AirSea Battle would likely require early (if not pre-...) ... There is a potential downside ... to establish a regional sphere of influence, potentially espousing ideologies that are in conflict with core values of the international system ... China may wish to deploy large-scale ground forces punitively, over a set of limited objectives ... differentiating between a "Systemic Continuity" ...