Application of a Dynamic Programming Algorithm for Weapon Target Assignment
2016-02-01
25] A. Turan, "Techniques for the Allocation of Resources Under Uncertainty," Middle East Technical University, Ankara, Turkey, 2012. [26] K...UNCLASSIFIED Application of a Dynamic Programming Algorithm for Weapon Target Assignment. Lloyd Hammond, Weapons and...optimisation techniques to support the decision-making process. This report documents the methodology used to identify, develop and assess a
Quantum algorithm for support matrix machines
NASA Astrophysics Data System (ADS)
Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan
2017-09-01
We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data points and p×q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.
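The classical baseline that the quantum subroutines accelerate is, at its core, the solution of one regularized linear system. The sketch below is a hypothetical illustration of that least-squares reformulation with a linear kernel; all data, sizes, and the regularization parameter are invented, and it is not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: the least-squares reformulation reduces training to
# solving (K + I/gamma) alpha = y. HHL inverts this system in time
# polylogarithmic in n on a quantum computer; the classical solve below
# costs O(n^3). All names and data here are invented for illustration.
rng = np.random.default_rng(0)
n, p, q = 40, 4, 4                    # n training images of size p x q
X = rng.normal(size=(n, p * q))       # flattened feature matrices
y = np.sign(rng.normal(size=n))       # +/-1 class labels

K = X @ X.T                           # linear kernel matrix
gamma = 1.0
alpha = np.linalg.solve(K + np.eye(n) / gamma, y)   # classical inversion step

predictions = np.sign(K @ alpha)      # fitted labels on the training set
train_accuracy = np.mean(predictions == y)
```

The quantum advantage claimed in the abstract comes entirely from replacing the `np.linalg.solve` step (and the singular value thresholding, not shown) with their quantum counterparts.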
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... Lloyds Metals & Engineers Ltd. and Lloyds Steel Industries Ltd.) (collectively, Lloyds) and Ushdev... and all affiliates, Lloyds Metals & Engineers Ltd., and Lloyds Steel Industries Ltd. Because we... DEPARTMENT OF COMMERCE International Trade Administration [A-533-502] Certain Welded Carbon Steel...
Estimating the Resources for Quantum Computation with the QuRE Toolbox
2013-05-31
quantum computing. Quantum Info. Comput., 9(7):666–682, July 2009. [13] M. Saffman, T. G. Walker, and K. Mølmer. Quantum information with Rydberg atoms...109(5):735–750, 2011. [24] Aram Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for solving linear systems of equations. Phys. Rev
Clustering, Dimensionality Reduction, and Side Information
2006-01-01
Steinhaus in 1955 [243], Lloyd in 1957 [174], and MacQueen in 1967 [178]. The ISODATA algorithm by Ball and Hall in 1965 [8] can be regarded as an adaptive...1958. [242] R.R. Sokal and P.H.A. Sneath. Principles of Numerical Taxonomy. San Francisco, W. H. Freeman, 1963. [243] H. Steinhaus. Sur la division
Busciolano, Ronald J.
2002-01-01
The three main water-bearing units on Long Island, New York--the upper glacial aquifer (water table) and the underlying Magothy and Lloyd aquifers--are the sole source of water supply for more than 3 million people. Water-table and potentiometric-surface altitudes were contoured from water-level measurements made at 394 observation, public-supply, and industrial-supply wells during March-April 2000. In general, water-level altitudes in the upper glacial, Magothy, and Lloyd aquifers were lower throughout most parts of Long Island than those measured during March-April 1997. Changes in altitude during this period ranged from an increase of about 6 feet in the Magothy aquifer in southwestern Nassau County to a decrease of more than 8 feet in the upper glacial aquifer in eastern Suffolk County.
Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance
NASA Astrophysics Data System (ADS)
Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi
2017-11-01
The k-nearest neighbors (KNN) algorithm is a common classification algorithm and also a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between the testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, along with high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
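A minimal classical analogue of the threshold-based neighbor selection described above can be sketched as follows; the training vectors, labels, and threshold value are all invented for illustration.

```python
import numpy as np

# Classical sketch of the QKNN selection rule (invented toy data): the
# neighbours of a test sample are the training vectors whose Hamming
# distance falls below a threshold t, and a majority vote decides the class.
train = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]])
labels = np.array([0, 0, 1, 1])
test = np.array([0, 0, 1, 0])

hamming = np.sum(train != test, axis=1)   # Hamming distance to each vector
t = 2                                     # distance threshold selecting neighbours
neighbours = hamming < t
predicted = np.bincount(labels[neighbours]).argmax()   # majority vote
```

The quantum version computes all the Hamming distances in superposition with a dedicated circuit; the thresholding step plays the role of picking the k nearest neighbors.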
Hydrogeology of the Lloyd Aquifer on Long Island, New York: A Brief Summary of USGS Investigations
Chu, Anthony
2006-01-01
The four counties of Long Island (fig. 1) are underlain by a wedge-shaped sequence of unconsolidated deposits of Late Cretaceous and Pleistocene age that lie unconformably on crystalline bedrock (fig. 2). A saprolitic (weathered bedrock) zone 20 to 100 ft thick overlies the bedrock in most areas. The sequence of unconsolidated deposits thickens to the south and southeast by about 65 to 100 feet per mile and contains Long Island's fresh ground water. Long Island's ground-water system consists of four main aquifers: the upper glacial, the Jameco, the Magothy, and the Lloyd. The Lloyd aquifer underlies nearly all of Long Island (fig. 3), but pumpage from the Lloyd has been limited to the northern and southern coastal areas of the island by the New York State Department of Environmental Conservation since about 1955 (Garber, 1986). Coastal areas are exempt where the Lloyd is the only source of potable water. The former Jamaica Water Supply Corporation (now owned by New York City) is a noted exception, withdrawing as much as 6 million gallons per day (Mgal/d) since the mid-1930s from the Lloyd in central Queens County. This paper: (1) provides a brief history of U.S. Geological Survey (USGS) studies that provided significant data on the Lloyd, (2) summarizes the hydraulic characteristics of the Lloyd as reported in those studies, and (3) describes present-day monitoring of the Lloyd by the USGS.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider an iterative design of independently operating local quantizers at nodes that must cooperate, without interaction, to achieve application objectives for distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the average logarithmic quantized posterior distribution, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and we argue that the algorithm converges to a global optimum, owing to the convexity of the cost function, and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. Finally, extensive experiments demonstrate a clear advantage in estimation performance over typical designs and over novel design techniques previously published.
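The cyclic generalized Lloyd framework on which this design builds alternates a partition step and a centroid step. The sketch below shows that iteration with ordinary squared-error distortion for a scalar quantizer; the paper replaces this metric with the KL divergence between posterior distributions, and the data and level count here are invented.

```python
import numpy as np

# Sketch of the cyclic (generalized) Lloyd iteration, shown with squared
# error for simplicity; the paper's contribution is to swap in a KL-
# divergence cost. Samples and initial levels are invented.
rng = np.random.default_rng(1)
samples = rng.normal(size=10_000)           # measurements seen by one node
levels = np.array([-1.5, -0.5, 0.5, 1.5])   # initial reproduction points

for _ in range(50):
    # Step 1: nearest-neighbour partition of the samples given the levels.
    cells = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    # Step 2: centroid update of each level given its partition cell.
    levels = np.array([samples[cells == k].mean() for k in range(len(levels))])

cells = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
distortion = np.mean((samples - levels[cells]) ** 2)
```

Each of the two steps can only decrease the distortion, which is why the cyclic design converges; the paper's convexity argument plays the analogous role for the KL cost.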
Geohydrology of the Lloyd Aquifer, Long Island, New York
Garber, M.S.
1986-01-01
The Lloyd aquifer contains only about 9% of the water stored in Long Island's groundwater system but is the only source of potable water for several communities near the north and south shores. The Lloyd aquifer is virtually untapped throughout most of central Long Island because current legal restrictions permit its use only in coastal areas. The upper surface of the Lloyd aquifer ranges in depth from 100 ft below land surface on the north shore to more than 1,500 ft on the south shore. Aquifer thickness increases southward from 50 ft to about 500 ft. Transmissivity ranges from 1,500 to 19,000 sq ft/day. All recharge (35 to 40 mil gal/day) and nearly all discharge is through the overlying confining unit. Nearly all of the pumpage (approximately 20 mil gal/day) is in Queens and along the north and south shores of Nassau County. Potable water can be obtained on most of Long Island in larger quantities and at shallower depths from other aquifers than from the Lloyd. Local contamination of these other aquifers, however, may require at least temporary withdrawals from the Lloyd in noncoastal areas. Significant withdrawals from the Lloyd aquifer may lower the potentiometric surface and thereby induce landward movement of sea water into the aquifer in coastal areas. (Author's abstract)
The Remarkable Journey of Lloyd Alexander
ERIC Educational Resources Information Center
Tunnel, Michael O.; Jacobs, James S.
2007-01-01
This article features Lloyd Alexander, an author who has produced some of the most elegant and powerful prose in the history of modern children's literature. Lloyd began writing seriously in high school, and though he wrote and submitted many poems and short stories, his only success was being named a finalist in the "Writer's Digest" Short Story…
The Emergence of Esther Lloyd-Jones
ERIC Educational Resources Information Center
Certis, Hannah
2014-01-01
Esther Lloyd-Jones made significant contributions to the field of student affairs during her career, and she is known most commonly for her influence on the 1937 "Student Personnel Point of View" (SPPV). Much of her work can be applied to modern student affairs philosophy and practice. This study explores the early life of Lloyd-Jones,…
5. Historic American Buildings Survey, Hans Padelt, Photographer Winter 1968 ...
5. Historic American Buildings Survey, Hans Padelt, Photographer Winter 1968 (2 1/4' x 2 3/4' negative), FIRST FLOOR, GENERAL VIEW OF DINING ROOM WITH FURNITURE DESIGNED BY FRANK LLOYD WRIGHT. - E. E. Boynton House, 16 East Boulevard, Rochester, Monroe County, NY
Prezioso, S; De Marco, P; Zuppella, P; Santucci, S; Ottaviano, L
2010-04-01
A prototype low-cost table-top extreme ultraviolet (EUV) laser source (1.5 ns pulse duration, λ = 46.9 nm) was successfully employed as a laboratory-scale interference nanolithography (INL) tool. Interference patterns were obtained with a simple Lloyd's mirror setup. Periodic structures on polymethylmethacrylate/Si substrates were produced over large areas (8 mm²) with resolutions from 400 down to 22.5 nm half pitch (the smallest resolution achieved so far with table-top EUV laser sources). The mechanical vibrations affecting both the laser source and the Lloyd's setup were studied to determine whether and how they affect the lateral resolution of the lithographic system. The vibration dynamics was described by a statistical model based on the assumption that the instantaneous position of the vibrating mechanical parts follows a normal distribution. An algorithm was developed to simulate the process of sample irradiation under different vibrations. The comparison between simulations and experiments allowed us to estimate the characteristic amplitude of the vibrations, which was deduced to be lower than 50 nm. The same algorithm was used to reproduce the expected pattern profiles at the λ/4 half-pitch physical resolution limit. In that limit, a nonzero pattern modulation amplitude was obtained from the simulations, comparable to the peak-to-valley height (2-3 nm) measured for the 45 nm spaced fringes, indicating that the mechanical vibrations affecting the INL tool do not represent a limit in scaling down the resolution.
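The vibration model described above can be sketched numerically: accumulate the exposure of a sinusoidal fringe pattern whose position jitters with a normal distribution, and compare the surviving modulation depth with the analytic attenuation factor. All numbers below (period, rms amplitude, shot count) are invented, not the paper's values.

```python
import numpy as np

# Sketch of irradiation under Gaussian positional jitter (invented numbers):
# the recorded dose is the shot average of a shifted fringe pattern, and the
# averaged modulation is attenuated by exp(-2 * (pi * sigma / period)**2).
rng = np.random.default_rng(2)
period = 45.0                      # fringe period, nm
sigma = 10.0                       # rms vibration amplitude, nm
x = np.linspace(0.0, 4 * period, 1000)

dose = np.zeros_like(x)
shots = 2000
for _ in range(shots):             # accumulate exposure over many pulses
    shift = rng.normal(scale=sigma)
    dose += 0.5 * (1 + np.cos(2 * np.pi * (x + shift) / period))
dose /= shots

modulation = (dose.max() - dose.min()) / (dose.max() + dose.min())
expected = np.exp(-2 * (np.pi * sigma / period) ** 2)   # analytic attenuation
```

Comparing the simulated `modulation` against the measured fringe contrast is essentially how the paper bounds the characteristic vibration amplitude.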
46 CFR 161.002-4 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... protected spaces. To meet this end, the basic requirements of the fire-protective systems are reliability... systems. (3) All parts of the system must pass the environmental tests for control and monitoring... tests of Lloyd's Register Type Approval System, Test Specification Number 1, as appropriate. (4) Those...
Modularity: An Application of General Systems Theory to Military Force Development
2005-01-01
1999). Context, modularity, and the cultural constitution of development. In P. Lloyd & C. Fernyhough (Eds.), Lev Vygotsky: Critical assessments...MODULARITY: AN APPLICATION OF GENERAL SYSTEMS THEORY TO MILITARY FORCE DEVELOPMENT
Tchapet Njafa, J-P; Nana Engo, S G
2018-01-01
This paper presents QAMDiagnos, a model of Quantum Associative Memory (QAM) that can be a helpful tool for medical staff without experience or laboratory facilities for the diagnosis of four tropical diseases (malaria, typhoid fever, yellow fever and dengue) that have several similar signs and symptoms. The memory can distinguish a single infection from a polyinfection. Our model is a combination of improved versions of the original linear quantum retrieving algorithm proposed by Ventura and the non-linear quantum search algorithm of Abrams and Lloyd. Simulation results show that recognition efficiency is good when particular signs and symptoms of a disease are entered, given that the linear algorithm is the main algorithm. The non-linear algorithm helps confirm or correct the diagnosis, or gives some advice to the medical staff for the treatment. Thus our QAMDiagnos, which has a friendly graphical user interface for desktop and smartphone, is a sensitive and low-cost diagnostic tool that enables rapid and accurate diagnosis of four tropical diseases. Copyright © 2017 Elsevier Ltd. All rights reserved.
Design of order statistics filters using feedforward neural networks
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2016-08-01
In recent years, significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics; the widely used median filter is the best-known order statistic filter. A generalized form of these filters can be built on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach to the synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used to select the initial weights of the neural network. The adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with an asymmetric distribution function. Several examples demonstrate the properties and performance of the presented approach.
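An order-statistics (L-) filter of the kind generalized above outputs a weighted sum of the sorted samples inside a sliding window; a one-hot weight on the middle rank recovers the median filter. The sketch below uses an invented signal and weight vector, and is not the paper's neural-network design.

```python
import numpy as np

# Sketch of an L-filter: the output is a weighted sum of the order statistics
# (sorted samples) of each window. The weights are what the neural-network
# approach would learn; here they are hand-picked to give the median filter.
def l_filter(signal, weights):
    k = len(weights)
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        window = np.sort(padded[i:i + k])   # order statistics of the window
        out[i] = window @ weights           # L-estimate: weighted order stats
    return out

signal = np.array([1.0, 1.0, 9.0, 1.0, 1.0])   # impulsive outlier at index 2
median_weights = np.array([0.0, 1.0, 0.0])     # one-hot weight -> median filter
filtered = l_filter(signal, median_weights)
```

The outlier at index 2 is removed entirely, illustrating the robustness to impulsive noise that the abstract mentions; a mean filter (uniform weights) would instead smear it across neighboring samples.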
46 CFR 116.300 - Structural design.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Structure § 116.300 Structural design. Except as otherwise allowed by this subpart, a vessel must comply... the vessel. (a) Steel hull vessels: (1) Rules and Regulations for the Classification of Yachts and Small Craft, Lloyd's Register of Shipping (Lloyd's); or (2) Rules for Building and Classing Steel...
46 CFR 116.300 - Structural design.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Structure § 116.300 Structural design. Except as otherwise allowed by this subpart, a vessel must comply... the vessel. (a) Steel hull vessels: (1) Rules and Regulations for the Classification of Yachts and Small Craft, Lloyd's Register of Shipping (Lloyd's); or (2) Rules for Building and Classing Steel...
46 CFR 116.300 - Structural design.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Structure § 116.300 Structural design. Except as otherwise allowed by this subpart, a vessel must comply... the vessel. (a) Steel hull vessels: (1) Rules and Regulations for the Classification of Yachts and Small Craft, Lloyd's Register of Shipping (Lloyd's); or (2) Rules for Building and Classing Steel...
46 CFR 116.300 - Structural design.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Structure § 116.300 Structural design. Except as otherwise allowed by this subpart, a vessel must comply... the vessel. (a) Steel hull vessels: (1) Rules and Regulations for the Classification of Yachts and Small Craft, Lloyd's Register of Shipping (Lloyd's); or (2) Rules for Building and Classing Steel...
46 CFR 116.300 - Structural design.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Structure § 116.300 Structural design. Except as otherwise allowed by this subpart, a vessel must comply... the vessel. (a) Steel hull vessels: (1) Rules and Regulations for the Classification of Yachts and Small Craft, Lloyd's Register of Shipping (Lloyd's); or (2) Rules for Building and Classing Steel...
The Xylaria names proposed by C. G. Lloyd
USDA-ARS?s Scientific Manuscript database
Seventy-one new Xylaria names that C. G. Lloyd proposed are annotated herein. Type and/or authentic materials of these names, when available, were studied. Twenty-four of these, including X. beccarii, X. brasiliensis, X. chordaeformis, X. cuneata, X. divisa, X. fimbriata, X. humosa, X. kedahae, X. l...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-31
..., LLC dba CODA Forwarding, Great American Alliance Insurance Company, Avalon Risk Management, HAPAG... Logistics, LLC dba Coda Forwarding (Dacon); Great American Alliance Insurance Company; Avalon Risk Management; Hapag Lloyd America, Inc. (Hapag Lloyd); and Mitsui OSK Lines (Mitsui), hereinafter ``Respondents...
NASA Astrophysics Data System (ADS)
Cottrell, William; Montero, Miguel
2018-02-01
In this note we investigate the role of Lloyd's computational bound in holographic complexity. Our goal is to translate the assumptions behind Lloyd's proof into the bulk language. In particular, we discuss the distinction between orthogonalizing and `simple' gates and argue that these notions are useful for diagnosing holographic complexity. We show that large black holes constructed from series circuits necessarily employ simple gates, and thus do not satisfy Lloyd's assumptions. We also estimate the degree of parallel processing required in this case for elementary gates to orthogonalize. Finally, we show that for small black holes at fixed chemical potential, the orthogonalization condition is satisfied near the phase transition, supporting a possible argument for the Weak Gravity Conjecture first advocated in [1].
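Lloyd's computational bound invoked above limits the rate of orthogonalizing operations by the system's average energy, via the Margolus-Levitin time. A quick numerical illustration (using Lloyd's standard example of one kilogram of mass-energy, the "ultimate laptop"; the choice of example is ours):

```python
import math

# Illustration of Lloyd's bound: a system with average energy E performs at
# most 2E / (pi * hbar) orthogonalizing operations per second
# (Margolus-Levitin). Example energy: rest energy of 1 kg.
hbar = 1.054571817e-34        # reduced Planck constant, J s
c = 299792458.0               # speed of light, m/s
E = 1.0 * c**2                # rest energy of 1 kg, in joules

ops_per_second = 2 * E / (math.pi * hbar)   # about 5.4e50 ops/s
```

The holographic-complexity discussion above turns on whether the gates building the black hole interior actually orthogonalize, i.e. whether they are the kind of operations this bound counts.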
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... FEDERAL MARITIME COMMISSION [Docket No. 13-07] Global Link Logistics, Inc., v. Hapag-Lloyd AG; Notice of Filing of Complaint and Assignment Notice is given that a complaint has been filed with the Federal Maritime Commission (Commission) by Global Link Logistics, Inc. (``Global Link''), hereinafter...
Wm. Lloyd Stackhouse & Robert E. Kinsman: A tale of two chiropractors
Brown, Douglas M.
2013-01-01
This paper reviews the story of two childhood friends, Dr. Wm. Lloyd Stackhouse and Dr. Robert E. Kinsman, who attended the Canadian Memorial Chiropractic College (CMCC) together, graduated in 1953 to form an enduring partnership that included their immediate relatives, and to this day persists as a supportive tribe. PMID:23997249
46 CFR 161.002-1 - Incorporation by reference.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Amendments to SOLAS 74, and 1994 Amendments to SOLAS 74), 1992—161.002-4(b). National Fire Protection Association (NFPA) National Fire Protection Association, 1 Batterymarch Park, Quincy, MA 02269. NFPA 72, National Fire Alarm Code, 1993—161.002-4(b). Lloyd's Register of Shipping (LR) Lloyd's Register of Shipping...
46 CFR 161.002-1 - Incorporation by reference.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Amendments to SOLAS 74, and 1994 Amendments to SOLAS 74), 1992—161.002-4(b). National Fire Protection Association (NFPA) National Fire Protection Association, 1 Batterymarch Park, Quincy, MA 02269. NFPA 72, National Fire Alarm Code, 1993—161.002-4(b). Lloyd's Register of Shipping (LR) Lloyd's Register of Shipping...
John Lloyd Stephens and Frederick Catherwood: Mayan Explorers.
ERIC Educational Resources Information Center
McDermott, Michael
This mini-unit focuses on the lives and accomplishments of John Lloyd Stephens and Frederick Catherwood and their contacts with the Maya. This project deals specifically with how Stephens' published accounts and Catherwood's drawings became the basis from which all further Mayan research developed. These two explorers were the first to describe…
Entanglement-Based Machine Learning on a Quantum Computer
NASA Astrophysics Data System (ADS)
Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.
2015-03-01
Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, and is ubiquitous in various fields such as computer science, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv:1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors into different clusters using a small-scale photonic quantum computer; these classifications are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
Enhanced Preliminary Assessment Report: Rocky Point Army Housing Units, Rocky Point, New York
1979-11-01
underlying Magothy aquifer; no water is obtained from the Lloyd (deep) aquifer in this area. (Withdrawal from the Lloyd aquifer is restricted to use by the...the Magothy aquifer. Pumpage from private wells used for farm and golf-course irrigation is unknown, but is estimated to be less than 0.5 million gal/d
Acoustic wave propagation and intensity fluctuations in shallow water 2006 experiment
NASA Astrophysics Data System (ADS)
Luo, Jing
Fluctuations of low-frequency sound propagation in the presence of nonlinear internal waves during the Shallow Water 2006 experiment are analyzed. Acoustic waves and environmental data, including on-board ship radar images, were collected simultaneously before, during, and after a strong internal solitary wave packet passed through a source-receiver acoustic track. Analysis of the acoustic signals shows temporal intensity fluctuations. These fluctuations are affected by the passing internal wave and agree well with the theory of horizontal refraction of acoustic waves propagating in shallow water. The intensity focusing and defocusing that occur in a fixed source-receiver configuration while the internal wave packet approaches and passes the acoustic track are addressed in this thesis. Acoustic ray-mode theory is used to explain the modal evolution of broadband acoustic waves propagating in a shallow-water waveguide in the presence of internal waves. Acoustic modal behavior is obtained from the data through modal decomposition algorithms applied to data collected by a vertical line array of hydrophones. Strong interference patterns are observed in the acoustic data, whose main cause is identified as horizontal refraction, referred to as the horizontal Lloyd mirror effect. To analyze this interference pattern, a combined parabolic-equation model and a vertical-mode, horizontal-ray model are utilized. A semi-analytic formula for estimating the horizontal Lloyd mirror effect is developed.
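The "horizontal Lloyd mirror" is named after the classic optical arrangement in which a source interferes with its own reflected image, with a π phase change on reflection putting a null at zero offset. A minimal sketch of that classic pattern, with invented acoustic geometry (not the experiment's values):

```python
import numpy as np

# Sketch of the classic Lloyd mirror interference underlying the effect's
# name (invented geometry): source + reflected image interfere, and the pi
# phase flip on reflection gives sin^2 fringes with a null at the boundary.
wavelength = 1.5      # metres (low-frequency sound)
h = 10.0              # source height above the reflecting boundary, m
L = 1000.0            # range to the receiver line, m
y = np.linspace(0.0, 40.0, 400)   # receiver offsets from the boundary, m

path_difference = 2 * h * y / L   # small-angle approximation
intensity = 4 * np.sin(np.pi * path_difference / wavelength) ** 2

fringe_spacing = wavelength * L / (2 * h)   # spacing of intensity maxima, m
```

In the experiment the analogous interference happens in the horizontal plane, with the internal wave front playing the role of the mirror.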
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
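The classical Metropolis algorithm whose quantum analogue is demonstrated above can be sketched in a few lines for a tiny 1-D Ising ring; the couplings, temperature, and sweep count below are invented for illustration.

```python
import math
import random

# Sketch of the classical Metropolis algorithm on a 1-D Ising ring (invented
# parameters): propose a single spin flip, accept with prob min(1, exp(-dE/T)).
random.seed(0)
N, T, J = 20, 0.5, 1.0
spins = [1] * N                       # start from the all-up configuration

def energy_change(s, i):
    left, right = s[(i - 1) % N], s[(i + 1) % N]
    return 2 * J * s[i] * (left + right)   # energy cost of flipping spin i

for _ in range(20_000):               # Metropolis updates
    i = random.randrange(N)
    dE = energy_change(spins, i)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]          # accept the proposed flip

magnetisation = abs(sum(spins)) / N   # typically near 1 at low temperature
```

The quantum version replaces the spin-flip proposals and acceptance test with phase estimation and measurement steps, which is what lets it sample Hamiltonian eigenstates directly and evade the sign problem.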
Looking and Learning: The Solomon R. Guggenheim Museum, Frank Lloyd Wright
ERIC Educational Resources Information Center
Vatsky, Sharon
2007-01-01
Frank Lloyd Wright was born and raised on the farmlands of Wisconsin. His mother had a vision that her son would become a great architect. Wright was raised with strong guiding principles, a love of nature, a belief in the unity of all things, and a respect for discipline and hard work. He created the philosophy of "organic architecture," which…
Aiming at a Moving Target: Pilot Testing Ebook Readers in an Urban Academic Library
ERIC Educational Resources Information Center
Kiriakova, Maria; Okamoto, Karen S.; Zubarev, Mark; Gross, Gretchen
2010-01-01
Since the early 1990s, the Lloyd Sealy Library, where all four of this article's librarian-authors work in various capacities, has been providing the students and faculty of John Jay College of Criminal Justice with extensive electronic access to books and journals. In early 2009, the librarians at Lloyd Sealy decided to test a recent electronic…
2008-07-01
Photo-Optical Instrumentation Engineers (SPIE) Conference 6272, July 2006. 2. F. P. Wildi, G. Brusa, A. Riccardi, M. Lloyd-Hart, H. M. Martin, and L. M...2003. 3. G. Brusa, A. Riccardi, P. Salinari, F. P. Wildi, M. Lloyd-Hart, H. M. Martin, R. Allen, D. Fisher, D. L. Miller, R. Biasi, D. Gallieni, and F
Biological Degradation of Chinese Fir with Trametes Versicolor (L.) Lloyd
Chen, Meiling; Wang, Chuangui; Fei, Benhua; Ma, Xinxin; Zhang, Bo; Zhang, Shuangyan; Huang, Anmin
2017-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) has been an important afforestation species in northeast China. It has obvious defects of buckling and cracking easily, which are caused by its chemical components. Trametes versicolor (L.) Lloyd, a white-rot fungus, can decompose the cellulose, hemicellulose, and lignin in the wood. White-rot fungus was used to biologically degrade Chinese fir wood. The effects of different degradation time on the Chinese fir wood’s mechanical properties, micromorphology, chemical components, and crystallinity were studied. The results showed that the heartwood of Chinese fir was more durable than the sapwood and the durability class of Chinese fir was III. Trametes versicolor (L.) Lloyd had a greater influence on the mechanical properties (especially with respect to the modulus of elasticity (MOE)) for the sapwood. Trametes versicolor (L.) Lloyd degraded Chinese fir and colonized the lumen of various wood cell types in Chinese fir, penetrated cell walls via pits, caused erosion troughs and bore holes, and removed all cell layers. The ability of white-rot fungus to change the chemical composition mass fraction for Chinese fir was: hemicellulose > lignin > cellulose. The durability of the chemical compositions was: lignin > cellulose > hemicellulose. The crystallinity of the cellulose decreased and the mean size of the ordered (crystalline) domains increased after being treated by white-rot fungus. PMID:28773191
1987-03-25
by Lloyd (1952) using generalized least squares instead of ordinary least squares, and by Wilk, Gnanadesikan, and Freeny (1963) using a maximum...plot. The half-normal distribution is a special case of the gamma distribution proposed by Wilk, Gnanadesikan, and Huyett (1962). VARIATIONS ON THE...Gnanadesikan, R. Probability plotting methods for the analysis of data. Biometrika, 1968, 55, 1-17. This paper describes and discusses graphical techniques
Vector and axial-vector decomposition of Einstein's gravitational action
NASA Astrophysics Data System (ADS)
Soh, Kwang S.
1991-08-01
Vector and axial-vector gravitational fields are introduced to express the Einstein action in the manner of electromagnetism. Their conformal scaling properties are examined, and the resemblance between the general coordinate and electromagnetic gauge transformations is elucidated. The chiral formulation of the gravitational action is constructed. I am deeply grateful to Professor S. Hawking and Professor G. Lloyd for warm hospitality at DAMTP and Darwin College, University of Cambridge, respectively. I also appreciate much help received from Dr. Q.-H. Park.
The Trust: The Classic Example of Soviet Manipulation.
1985-09-01
Poland, returned to Paris. Visited Prague for a while; received no help. Late summer, 1921 (T) Yavk'hev stops in Reval, Estonia and explains the...Paris, since he was asked to leave Poland at the end of the Russo-Polish War in 1921. He was still trying to drum up support from Western...The New York...Lloyd George and Savinkov at Lloyd George's private home. When Savinkov entered, Lloyd George and his family were singing. They continued to sing and
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
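The context-merging idea above can be sketched classically: merging two contexts loses no mutual information only when their conditional bit distributions coincide, so a greedy scheme merges the pair whose merger costs the least. The context counts below are invented, and this greedy pass is only a sketch of one step of the modified generalized Lloyd search, not the paper's full algorithm.

```python
import math

# Sketch of mutual-information-guided context merging (invented counts).
# counts[c] = [zeros, ones] observed under context c.
counts = {0: [90, 10], 1: [85, 15], 2: [20, 80], 3: [50, 50]}

def mutual_information(counts):
    total = sum(sum(v) for v in counts.values())
    p_bit = [sum(v[b] for v in counts.values()) / total for b in (0, 1)]
    mi = 0.0
    for v in counts.values():
        pc = sum(v) / total
        for b in (0, 1):
            p = v[b] / sum(v)
            if p > 0:
                mi += pc * p * math.log2(p / p_bit[b])   # I(context; bit)
    return mi

def merge_best_pair(counts):
    # Merge the pair of contexts whose merger retains the most MI.
    keys = list(counts)
    best = None
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            trial = {k: v for k, v in counts.items() if k not in (a, b)}
            trial[(a, b)] = [counts[a][0] + counts[b][0],
                             counts[a][1] + counts[b][1]]
            mi = mutual_information(trial)
            if best is None or mi > best[0]:
                best = (mi, trial, (a, b))
    return best

mi_before = mutual_information(counts)
mi_after, merged, pair = merge_best_pair(counts)
```

Here contexts 0 and 1 have nearly identical conditional distributions, so the greedy step merges them and the mutual information barely drops; this is the decrease-unless-identical property the paper proves.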
Effect of Nano-Aluminum and Fumed Silica Particles on Deflagration and Detonation of Nitromethane
2009-01-01
Justin L. Sabourin, Richard A. Yetter, Blaine W. Asay, Joseph M. Lloyd, Victor E. Sanders, Grant A. Risha, Steven F. Son. The...Fumed Silica Particles on Deflagration and Detonation of Nitromethane. Justin L. Sabourin, Richard A. Yetter, The Pennsylvania State University...containing nAl, which was also found by other workers. J. L. Sabourin, R. A. Yetter, B. W. Asay, J. M. Lloyd, V. E. Sanders, G. A. Risha, S. F. Son
A physics-motivated Centroidal Voronoi Particle domain decomposition method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
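The Lloyd algorithm that the CVT construction above relies on is a simple fixed-point iteration: assign every sample point to its nearest generator, then move each generator to the centroid of its Voronoi cell; each sweep weakly decreases the CVT energy. A minimal sketch on a Monte Carlo sampling of the domain (illustrative only, not the authors' CVP implementation):

```python
import numpy as np

def cvt_energy(gen, samples):
    """CVT energy: sum of squared distances to the nearest generator."""
    d = np.linalg.norm(samples[:, None, :] - gen[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def lloyd_cvt(gen, samples, iters=20):
    """Plain Lloyd iteration on a sampled domain: assign each sample to
    its nearest generator, then move each generator to its cell centroid."""
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - gen[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(len(gen)):
            cell = assign == k
            if cell.any():                 # leave empty-cell generators fixed
                gen[k] = samples[cell].mean(axis=0)
    return gen, assign
```

Because the assignment step and the centroid step each minimize the energy with the other held fixed, the iteration converges to a centroidal Voronoi tessellation, which is what gives the compact, small-aspect-ratio subdomains described above.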
Lloyd Morgan's theory of instinct: from Darwinism to neo-Darwinism.
Richards, R J
1977-01-01
Darwin's proposal of two sources of instinct--natural selection and inherited habit--fostered among late nineteenth century evolutionists a variety of conflicting notions concerning the mechanisms of evolution. The British comparative psychologist C. Lloyd Morgan was a cardinal figure in restructuring the orthodox Darwinian conception to relieve the confusion besetting it and to meet the demands of the new biology of Weismann. This paper traces the development of Morgan's ideas about instinct against the background of his philosophic assumptions and the views of instinct theorists from Darwin and Romanes to McDougall and Lorenz.
1987-07-01
Raritan Formation and the Magothy Formation. The Raritan Formation, which rests on the bedrock, is subdivided into the Lloyd Sand Member and the clay...member, which is the uppermost part. The Raritan Formation is below sea level. The Magothy Formation outcrops at only a few locations on Long Island...the Magothy , and the Lloyd Sand member of the Raritan Formation. These aquifers are made up of sand and gravel and small amounts of silt and clay
2003-04-24
KENNEDY SPACE CENTER, FLA. - Jim Lloyd (left) and MER ATLO Logistics Manager Tom Shain shake hands after placing on the Mars Exploration Rover 1 (MER-1) a computer chip with about 35,000 laser-engraved signatures of visitors to the rovers at the Jet Propulsion Laboratory. The signatures include those of senators, artists, and John Glenn. The handshake also represents the passing of the "flame" of logistics job responsibilities at JPL to Lloyd who will be replacing Shain after his retirement. The identical Mars rovers are scheduled to launch June 5 and June 25 from Cape Canaveral Air Force Station.
Designing broad phononic band gaps for in-plane modes
NASA Astrophysics Data System (ADS)
Li, Yang Fan; Meng, Fei; Li, Shuo; Jia, Baohua; Zhou, Shiwei; Huang, Xiaodong
2018-03-01
Phononic crystals are known as artificial materials that can manipulate the propagation of elastic waves, and one essential feature of phononic crystals is the existence of forbidden frequency ranges of traveling waves called band gaps. In this paper, we have proposed an easy way to design phononic crystals with large in-plane band gaps. We demonstrated that the gap between two arbitrarily appointed bands of the in-plane mode can be formed by employing a certain number of solid or hollow circular rods embedded in a matrix material. Topology optimization has been applied to find the best material distributions within the primitive unit cell with maximal band gap width. Our results reveal that the centroids of optimized rods coincide with the point positions generated by Lloyd's algorithm, which deepens our understanding of the formation mechanism of phononic in-plane band gaps.
Patterns and rates of ground-water flow on Long Island, New York
Buxton, Herbert T.; Modica, Edward
1992-01-01
Increased ground-water contamination from human activities on Long Island has prompted studies to define the pattern and rate of ground-water movement. A two-dimensional, fine-mesh, finite-element model consisting of 11,969 nodes and 22,880 elements was constructed to represent ground-water flow along a north-south section through central Long Island. The model represents average hydrologic conditions within a corridor approximately 15 miles wide. The model solves discrete approximations of both the potential and stream functions. The resulting flownet depicts flow paths and defines the vertical distribution of flow within the section. Ground-water flow rates decrease with depth. Sixty-two percent of the water flows no deeper than the upper glacial (water-table) aquifer, 38 percent enters the underlying Magothy aquifer, and only 3.1 percent enters the Lloyd aquifer. The limiting streamlines for flow to the Magothy and Lloyd aquifers indicate that aquifer recharge areas are narrow east-west bands through the center of the island. The recharge area of the Magothy aquifer is only 5.4 miles wide; that of the Lloyd aquifer is less than 0.5 miles. The distribution of ground-water traveltime and a flownet are calculated from model results; both are useful in the investigation of contaminant transport or the chemical evolution of ground water within the flow system. A major discontinuity in traveltime occurs across the streamline which separates the flow subsystems of the two confined aquifers. Water that reaches the Lloyd aquifer attains traveltimes as high as 10,000 years, whereas water that has not penetrated deeper than the Magothy aquifer attains traveltimes of only 2,000 years. The finite-element approach used in this study is particularly suited to ground-water systems that have complex hydrostratigraphy and cross-sectional symmetry.
MTR fuel element inspection at Dounreay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, J.
1997-08-01
To ensure that their production and inspection processes are performed in an acceptable manner, i.e. auditable and traceable, the MTR Fuel Element Fabrication Plant at Dounreay operates to a documented quality system. This quality system, together with the fuel element manufacturing and inspection operations, has been independently certified to ISO9002-1987, EN29002-1987 and BS5750:Pt2:1987 by Lloyd's Register Quality Assurance Limited (LRQA). This certification also provides dual accreditation to the relevant German, Dutch and Australian certification bodies. This paper briefly describes the quality system, together with the various inspection stages involved in the manufacture of MTR fuel elements at Dounreay.
Datasets on hub-height wind speed comparisons for wind farms in California.
Wang, Meina; Ullrich, Paul; Millstein, Dev
2018-08-01
This article includes the description of data information related to the research article entitled "The future of wind energy in California: Future projections with the Variable-Resolution CESM"[1], with reference number RENE_RENE-D-17-03392. Datasets from the Variable-Resolution CESM, Det Norske Veritas Germanischer Lloyd Virtual Met, MERRA-2, CFSR, NARR, ISD surface observations, and upper air sounding observations were used for calculating and comparing hub-height wind speed at multiple major wind farms across California. Information on hub-height wind speed interpolation and power curves at each wind farm sites are also presented. All datasets, except Det Norske Veritas Germanischer Lloyd Virtual Met, are publicly available for future analysis.
Misut, P.E.; Voss, C.I.
2007-01-01
Freshwater storage in deep aquifers of Brooklyn and Queens, New York, USA, is under consideration as an emergency water supply for New York City. The purpose of a New York City storage and recovery system is to provide an emergency water supply during times of drought or other contingencies and would entail longer-term storage phases than a typical annual cycle. There is concern amongst neighboring coastal communities that such a system would adversely impact their local water supplies via increased saltwater intrusion. This analysis uses three-dimensional modeling of variable-density ground-water flow and salt transport to study conditions under which hypothetical aquifer storage and recovery (ASR) may not adversely impact the coastal water supplies. A range of storage, pause, and recovery phase lengths and ASR cycle repetitions were used to test scenarios that emphasize control of potential saltwater intrusion. The USGS SUTRA code was used to simulate movement of the freshwater-saltwater transition zones in a detailed model of the upper glacial, Jameco, Magothy, and Lloyd aquifers of western Long Island, New York. Simulated transition zones in the upper glacial, Jameco, and Magothy aquifers reach a steady state for 1999 stress and recharge conditions within 1 ka; however, saltwater encroachment is ongoing in the Lloyd (deepest) aquifer, for which the effects of the rise in sea level since deglaciation on transition zone equilibration are retarded by many ka due to the thick, overlying Raritan confining unit. Pumping in the 20th century has also caused widening and landward movement of the Lloyd aquifer transition zone. Simulation of scenarios of freshwater storage by injection followed by phases of pause and recovery by extraction indicates that the effect of net storage when less water is recovered than injected is to set up a hydraulic saltwater intrusion barrier in the Lloyd aquifer which may have beneficial effects to coastal water users. © 2007 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Misut, Paul E.; Voss, Clifford I.
2007-04-01
SummaryFreshwater storage in deep aquifers of Brooklyn and Queens, New York, USA, is under consideration as an emergency water supply for New York City. The purpose of a New York City storage and recovery system is to provide an emergency water supply during times of drought or other contingencies and would entail longer-term storage phases than a typical annual cycle. There is concern amongst neighboring coastal communities that such a system would adversely impact their local water supplies via increased saltwater intrusion. This analysis uses three-dimensional modeling of variable-density ground-water flow and salt transport to study conditions under which hypothetical aquifer storage and recovery (ASR) may not adversely impact the coastal water supplies. A range of storage, pause, and recovery phase lengths and ASR cycle repetitions were used to test scenarios that emphasize control of potential saltwater intrusion. The USGS SUTRA code was used to simulate movement of the freshwater-saltwater transition zones in a detailed model of the upper glacial, Jameco, Magothy, and Lloyd aquifers of western Long Island, New York. Simulated transition zones in the upper glacial, Jameco, and Magothy aquifers reach a steady state for 1999 stress and recharge conditions within 1 ka; however, saltwater encroachment is ongoing in the Lloyd (deepest) aquifer, for which the effects of the rise in sea level since deglaciation on transition zone equilibration are retarded by many ka due to the thick, overlying Raritan confining unit. Pumping in the 20th century has also caused widening and landward movement of the Lloyd aquifer transition zone. Simulation of scenarios of freshwater storage by injection followed by phases of pause and recovery by extraction indicates that the effect of net storage when less water is recovered than injected is to set up a hydraulic saltwater intrusion barrier in the Lloyd aquifer which may have beneficial effects to coastal water users.
Smoothing of the spectrum of fibre Bragg gratings in the Lloyd-interferometer recording scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdullina, S R; Vlasov, Aleksandr A; Babin, Sergei A
2010-05-26
The possibility of apodization of fibre Bragg gratings (FBGs) recorded in the region of interference of the two parts of a Gaussian beam in a Lloyd interferometer is considered. The reflection spectra of FBGs are numerically simulated for different parameters of the recording beam and its displacement with respect to the dividing axis in the interferometer. A considerable suppression of sidelobe resonances in the FBG spectrum during the displacement of the beam centre with respect to the dividing axis by half the beam radius is predicted and experimentally demonstrated. It is shown that this is caused by the equating of the mean value of the refractive index in the FBG region.
Digital Documentation of Frank Lloyd Wright's Masterpiece, Fallingwater
NASA Astrophysics Data System (ADS)
Jerome, P.; Emilio, D.
2017-08-01
Since 1988, the professional staff of Architectural Preservation Studio (APS) has been involved with the conservation of Frank Lloyd Wright's Fallingwater in Mill Run, PA. Designed and erected from 1935 to 1939 as a weekend home for the Kaufmann family, the complex consists of the main house and guest house. After five years of reports and prototype repairs, we produced a two-volume master plan. Using original Frank Lloyd Wright drawings from Avery Library as background drawings, we measured every surface and reproduced the drawings in CAD, also developing elevations of every room. Stone-by-stone drawings of every flagstone floor and terrace scheduled to be lifted were also created using overlapping film photography that was assembled into a photo mosaic. By 2005, we designed, administered and completed a four-phase exterior restoration, with the paint-stripping and repainting of interior rooms being performed on an ongoing basis during the brief winter period when the building is closed to the public. In 2016, we were invited back to the site to review conditions and advise on routine maintenance. At that time we proposed to re-document the buildings, this time using laser-scanning. Laser-scanning of the exterior was performed in May of 2016, and of the interior in March 2017, each over the course of four days. This paper will make a comparison between manual and digital techniques in terms of Fallingwater's documentation.
Characterizing conical refraction optical tweezers.
McDonald, C; McDougall, C; Rafailov, E; McGloin, D
2014-12-01
Conical refraction occurs when a beam of light travels through an appropriately cut biaxial crystal. By focusing the conically refracted beam through a high numerical aperture microscope objective, conical refraction optical tweezers can be created, allowing for particle manipulation in both Raman spots, and in the Lloyd/Poggendorff rings. We present a thorough quantification of the trapping properties of such a beam, focusing on the trap stiffness, and how this varies with trap power and trapped particle location. We show that the lower Raman spot can be thought of as a single-beam optical gradient force trap, while radiation pressure dominates in the upper Raman spot, leading to optical levitation rather than trapping. Particles in the Lloyd/Poggendorff rings experience a lower trap stiffness than particles in the lower Raman spot, but benefit from rotational control.
NASA Astrophysics Data System (ADS)
Hollman, J. C.
2007-07-01
The Bleek and Lloyd Manuscripts are an extraordinary resource that comprises some 12 000 pages of |xam Bushman beliefs collected in the 1870s in Cape Town, South Africa. About 17% of the collection concerns beliefs and observations of celestial bodies. This paper summarises |xam knowledge about the origins of the celestial bodies as recorded in the manuscripts and situates this within the larger context of the |xam worldview. The stars and planets originate from a mythological past in which they lived as 'people' who hunted and gathered as the |xam did in the past, but who also had characteristics that were to make them the entities that we recognise today. Certain astronomical bodies have consciousness and supernatural potency. They exert an influence over people's everyday lives.
Flexible particle manipulation techniques with conical refraction-based optical tweezers
NASA Astrophysics Data System (ADS)
McDougall, C.; Henderson, Robert; Carnegie, David J.; Sokolovskii, Grigorii S.; Rafailov, Edik U.; McGloin, David
2012-10-01
We present an optimized optical tweezers system based upon the conical refraction of circularly polarized light in a biaxial crystal. The described optical arrangement avoids distortions to the Lloyd plane rings that become apparent when working with circularly polarized light in conventional optical tweezers. We demonstrate that the intensity distribution of the conically diffracted light permits optical manipulation of high and low refractive index particles simultaneously. Such trapping is in three dimensions and not limited to the Lloyd plane rings. By removal of a quarter waveplate the system also permits the study of linearly polarized conical refraction. We show that particle position in the Raman plane is determined by beam power, and indicates that true optical tweezing is not taking place in this part of the beam.
Characterizing conical refraction optical tweezers
NASA Astrophysics Data System (ADS)
McDonald, C.; McDougall, C.; Rafailov, E.; McGloin, D.
2014-12-01
Conical refraction occurs when a beam of light travels through an appropriately cut biaxial crystal. By focussing the conically refracted beam through a high numerical aperture microscope objective, conical refraction optical tweezers can be created, allowing for particle manipulation in both Raman spots and in the Lloyd/Poggendorff rings. We present a thorough quantification of the trapping properties of such a beam, focussing on the trap stiffness and how this varies with trap power and trapped particle location. We show that the lower Raman spot can be thought of as a single-beam optical gradient force trap, while radiation pressure dominates in the upper Raman spot, leading to optical levitation rather than trapping. Particles in the Lloyd/Poggendorff rings experience a lower trap stiffness than particles in the lower Raman spot but benefit from rotational control.
Obituary: Lloyd V. Wallace (1927 - 2015)
NASA Astrophysics Data System (ADS)
Born in 1927 in Detroit, Michigan, in humble circumstances, Lloyd developed an early interest in solar and planetary astronomy and was a protégé of Ralph Nichols, a physics professor at the University of Western Ontario. Later he moved back to the United States and obtained his Ph.D. in Astronomy at the University of Michigan in 1957 under Leo Goldberg. It was while he was at the University of Michigan that he met and married his wife, Ruth. At various times in his early career, and as the result of a complex series of events, he held Canadian, British, and United States citizenships and even found time to become an expert professional electrician. On acquiring his degree he obtained a position with Joe Chamberlain at the Yerkes Observatory and began a lifetime association with Chamberlain and Don Hunten (then a visitor to Yerkes) in atmospheric and spectroscopic research. In 1962 they moved to Tucson where Chamberlain became the head of the Space Division at the Kitt Peak National Observatory, a unit set up by the first director, Aden Meinel, to apply advances in technology to astronomical research. Lloyd was hired as the principal experimenter in the observatory's sounding rocket program, which was set up by the National Science Foundation to provide staff and visitor access to the upper atmosphere for research purposes. With this program he supervised a series of 39 Aerobee rocket flights from the White Sands Missile Range to investigate upper atmosphere emissions, aeronomic processes, and make astronomical observations over a period of about 10 years. He was also involved in the first attempts to establish a remotely controlled 50″ telescope on Kitt Peak and efforts within the Division to create an Earth orbiting astronomical telescope. In parallel with these activities Lloyd conducted research which was largely focused on spectroscopic investigations.
In the early days these included measurement of upper atmospheric emissions, particularly the visual dayglow, the discovery of Raman lines in the spectrum of Uranus, the lightning spectrum, and auroral emissions. During this time he also pursued theoretical studies of resonant line transfer and some of the first modelling of the thermal structure of outer planet atmospheres. With the conclusion of the rocket program he turned his attention to high-resolution studies of the sun and cool stars and to long-term study of the variability of atmospheric pollutants (HCl, HF, CO2) over Kitt Peak. His solar and cool star studies led to the production of several high-resolution digital atlases extending from the UV to the thermal IR, and in addition, studies of line variability and the molecular content of sunspots. Lloyd was a very private and genuine person, but with a very sharp wit. He was highly productive, with 135 published papers bearing his name.
Monti, Jack; Busciolano, Ronald J.
2009-01-01
The U.S. Geological Survey (USGS), in cooperation with State and local agencies, systematically collects ground-water data at varying measurement frequencies to monitor the hydrologic situation on Long Island, New York. Each year during March and April, the USGS conducts a synoptic survey of hydrologic conditions to define the spatial distribution of the water table and potentiometric surfaces within the three main water-bearing units underlying Long Island - the upper glacial, Magothy, and Lloyd aquifers. These data and the maps constructed from them are commonly used in studies of Long Island's hydrology, and by water managers and suppliers for aquifer management and planning purposes. Water-level measurements made in 502 wells across Long Island during March-April 2006, were used to prepare the maps in this report. Measurements were made by the wetted-tape method to the nearest hundredth of a foot. Water-table and potentiometric-surface altitudes in these aquifers were contoured using these measurements. The water-table contours were interpreted using water-level data collected from 341 wells screened in the upper glacial aquifer and (or) shallow Magothy aquifer; the Magothy aquifer's potentiometric-surface contours were interpreted from measurements at 102 wells screened in the middle to deep Magothy aquifer and (or) contiguous and hydraulically connected Jameco aquifer; and the Lloyd aquifer's potentiometric-surface contours were interpreted from measurements at 59 wells screened in the Lloyd aquifer or contiguous and hydraulically connected North Shore aquifer. Many of the supply wells are in continuous operation and, therefore, were turned off for a minimum of 24 hours before measurements were made so that the water levels in the wells could recover to the level of the potentiometric head in the surrounding aquifer. 
Full recovery time at some of these supply wells can exceed 24 hours; therefore, water levels measured at these wells are assumed to be less accurate than those measured at observation wells, which are not pumped. In this report, all water-level altitudes are referenced to the National Geodetic Vertical Datum of 1929 (NGVD 29).
NASA Astrophysics Data System (ADS)
Mohseni, Masoud; Omar, Yasser; Engel, Gregory S.; Plenio, Martin B.
2014-08-01
List of contributors; Preface; Part I. Introduction: 1. Quantum biology: introduction Graham R. Fleming and Gregory D. Scholes; 2. Open quantum system approaches to biological systems Alireza Shabani, Masoud Mohseni, Seogjoo Jang, Akihito Ishizaki, Martin Plenio, Patrick Rebentrost, Alàn Aspuru-Guzik, Jianshu Cao, Seth Lloyd and Robert Silbey; 3. Generalized Förster resonance energy transfer Seogjoo Jang, Hoda Hossein-Nejad and Gregory D. Scholes; 4. Multidimensional electronic spectroscopy Tomáš Mančal; Part II. Quantum Effects in Bacterial Photosynthetic Energy Transfer: 5. Structure, function, and quantum dynamics of pigment protein complexes Ioan Kosztin and Klaus Schulten; 6. Direct observation of quantum coherence Gregory S. Engel; 7. Environment-assisted quantum transport Masoud Mohseni, Alàn Aspuru-Guzik, Patrick Rebentrost, Alireza Shabani, Seth Lloyd, Susana F. Huelga and Martin B. Plenio; Part III. Quantum Effects in Higher Organisms and Applications: 8. Excitation energy transfer in higher plants Elisabet Romero, Vladimir I. Novoderezhkin and Rienk van Grondelle; 9. Electron transfer in proteins Spiros S. Skourtis; 10. A chemical compass for bird navigation Ilia A. Solov'yov, Thorsten Ritz, Klaus Schulten and Peter J. Hore; 11. Quantum biology of retinal Klaus Schulten and Shigehiko Hayashi; 12. Quantum vibrational effects on sense of smell A. M. Stoneham, L. Turin, J. C. Brookes and A. P. Horsfield; 13. A perspective on possible manifestations of entanglement in biological systems Hans J. Briegel and Sandu Popescu; 14. Design and applications of bio-inspired quantum materials Mohan Sarovar, Dörthe M. Eisele and K. Birgitta Whaley; 15. Coherent excitons in carbon nanotubes Leonas Valkunas and Darius Abramavicius; Glossary; References; Index.
Le bilingue, surhomme ou infirme? (The Bilingual, Superman or Cripple?)
ERIC Educational Resources Information Center
Tournier, Michel
1975-01-01
Discusses good translation techniques, "true bilingualism," and the effect of bilingualism on cognitive development. (Text is in French.) Available from Lloyds Bank Chambers, 91 Newington Causeway, London SE1 6BN, England. (AM)
Present-day and future global bottom-up ship emission inventories including polar routes.
Paxian, Andreas; Eyring, Veronika; Beer, Winfried; Sausen, Robert; Wright, Claire
2010-02-15
We present a global bottom-up ship emission algorithm that calculates fuel consumption, emissions, and vessel traffic densities for present-day (2006) and two future scenarios (2050) considering the opening of Arctic polar routes due to projected sea ice decline. Ship movements and actual ship engine power per individual ship from Lloyd's Marine Intelligence Unit (LMIU) ship statistics for six months in 2006 and further mean engine data from literature serve as input. The developed SeaKLIM algorithm automatically finds the most probable shipping route for each combination of start and destination port of a certain ship movement by calculating the shortest path on a predefined model grid while considering land masses, sea ice, shipping canal sizes, and climatological mean wave heights. The resulting present-day ship activity agrees well with observations. The global fuel consumption of 221 Mt in 2006 lies in the range of previously published inventories when undercounting of ship numbers in the LMIU movement database (40,055 vessels) is considered. Extrapolated to 2007 and ship numbers per ship type of the recent International Maritime Organization (IMO) estimate (100,214 vessels), a fuel consumption of 349 Mt is calculated which is in good agreement with the IMO total of 333 Mt. The future scenarios show Arctic polar routes with regional fuel consumption on the Northeast and Northwest Passage increasing by factors of up to 9 and 13 until 2050, respectively.
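The route-finding step described above (the most probable shipping route as the shortest path on a predefined grid, avoiding land and accounting for ice and wave conditions) is, in essence, a weighted shortest-path search. A minimal sketch assuming Dijkstra's algorithm on a 4-connected grid; the cost model and function names here are hypothetical, not taken from SeaKLIM:

```python
import heapq

def shortest_route(grid, start, goal):
    """Dijkstra over a grid of per-cell traversal costs.
    grid[r][c] is a cost (e.g. higher in sea ice or high waves),
    or None for land. Assumes `goal` is reachable from `start`."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk predecessors back from the goal to reconstruct the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

Opening a polar route in such a model amounts to switching a band of cells from None (or a prohibitive ice cost) to a finite cost, after which the same search naturally re-routes traffic through it.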
Integrated Lloyd's mirror on planar waveguide facet as a spectrometer.
Morand, Alain; Benech, Pierre; Gri, Martine
2017-12-10
A low-cost and simple Fourier transform spectrometer based on the Lloyd's mirror configuration is proposed in order to have a very stable interferogram. A planar waveguide coupled to a fiber injection is used to spatially disperse the optical beam. A second beam superposed on the previous one is obtained by total reflection of the incident beam on a vertical glass face integrated in the chip by dicing with a specific circular precision saw. The interferogram at the waveguide output is imaged on a near-infrared camera with an objective lens. The contrast and the fringe period thus depend on the fiber type and position and can be optimized to the pixel size and length of the camera. A spectral resolution close to λ/Δλ=80 is reached with a camera with 320 pixels of 25 μm width in a wavelength range from the O to L bands.
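A back-of-envelope check of where a resolving power near λ/Δλ = 80 can come from: for a spatial Fourier transform spectrometer, the resolving power is roughly the number of fringes recorded across the detector. Assuming a sampling of about four pixels per fringe (an assumption for illustration; only the 320-pixel camera figure comes from the abstract):

```python
# Resolving power of a spatial FT spectrometer ~ number of recorded fringes.
n_pixels = 320           # camera width, from the abstract
pixels_per_fringe = 4    # assumed sampling; must be >= 2 (Nyquist)
n_fringes = n_pixels / pixels_per_fringe
resolving_power = n_fringes          # R = lambda / d_lambda
print(resolving_power)               # 80.0
```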
Correlating regional natural hazards for global reinsurance risk assessment
NASA Astrophysics Data System (ADS)
Steptoe, Hamish; Maynard, Trevor; Economou, Theo; Fox, Helen; Wallace, Emily; Maisey, Paul
2016-04-01
Concurrent natural hazards represent an uncertainty in assessing exposure for the insurance industry. The recently implemented Solvency II Directive requires EU insurance companies to fully understand and justify their capital reserving and portfolio decisions. Lloyd's, the London insurance and reinsurance market, commissioned the Met Office to investigate the dependencies between different global extreme weather events (known to the industry as perils), and the mechanisms for these dependencies, with the aim of helping them assess their compound exposure to multiple simultaneous hazards. In this work, we base the analysis of hazard-to-hazard dependency on the interaction of different modes of global and regional climate variability. Lloyd's defined 16 key hazard regions, including Australian wildfires, flooding in China and EU windstorms, and we investigate the impact of 10 key climate modes on these areas. We develop a statistical model that facilitates rapid risk assessment whilst allowing for both temporal auto-correlation and, crucially, interdependencies between drivers. The simulator itself is built conditionally, using an autoregressive regression model for each driver conditional on the others. Whilst the baseline assumption within the (re)insurance industry is that different natural hazards are independent of each other, the assumption of independence of meteorological risks requires greater justification. Although our results suggest that most of the 120 hazard-hazard connections considered are likely to be independent of each other, 13 have significant dependence arising from one or more global modes of climate variability. This allows us to create a matrix of linkages describing the hazard dependency structure that Lloyd's can use to inform their understanding of risk.
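One plausible reading of such a simulator (illustrative only; the Met Office model is built conditionally, driver by driver, rather than jointly as here) is a set of AR(1) driver indices whose innovations are coupled through a Cholesky factor of a correlation matrix:

```python
import numpy as np

def simulate_drivers(phi, corr, n_steps, seed=0):
    """Correlated AR(1) driver indices: x_t = phi * x_{t-1} + eps_t,
    where the innovations eps_t have correlation matrix `corr`.
    phi: per-driver lag-1 autocorrelation (1-D array)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)          # couples the drivers' innovations
    x = np.zeros(len(phi))
    out = np.empty((n_steps, len(phi)))
    for t in range(n_steps):
        x = phi * x + L @ rng.standard_normal(len(phi))
        out[t] = x
    return out
```

This captures the two ingredients the abstract emphasizes, temporal auto-correlation (phi) and interdependency between drivers (corr), so that simulated hazard years can exhibit compound events rather than treating perils as independent.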
NASA Astrophysics Data System (ADS)
Inanç, Arda; Kösoğlu, Gülşen; Yüksel, Heba; Naci Inci, Mehmet
2018-06-01
A new fibre optic Lloyd's mirror method is developed for extracting 3-D height distribution of various objects at the micron scale with a resolution of 4 μm. The fibre optic assembly is elegantly integrated to an optical microscope and a CCD camera. It is demonstrated that the proposed technique is quite suitable and practical to produce an interference pattern with an adjustable frequency. By increasing the distance between the fibre and the mirror with a micrometre stage in the Lloyd's mirror assembly, the separation between the two bright fringes is lowered down to the micron scale without using any additional elements as part of the optical projection unit. A fibre optic cable, whose polymer jacket is partially stripped, and a microfluidic channel are used as test objects to extract their surface topographies. Point by point sensitivity of the method is found to be around 8 μm, changing a couple of microns depending on the fringe frequency and the measured height. A straightforward calibration procedure for the phase to height conversion is also introduced by making use of the vertical moving stage of the optical microscope. The phase analysis of the acquired image is carried out by One Dimensional Continuous Wavelet Transform for which the chosen wavelet is the Morlet wavelet and the carrier removal of the projected fringe patterns is achieved by reference subtraction. Furthermore, flexible multi-frequency property of the proposed method allows measuring discontinuous heights where there are phase ambiguities like 2π by lowering the fringe frequency and eliminating the phase ambiguity.
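The phase-analysis step can be illustrated with a short sketch: a complex Morlet wavelet at the carrier scale extracts the fringe phase, and subtracting a reference pattern removes the carrier. This is not the authors' implementation; the carrier frequency, wavelet width, and synthetic "height" bump below are all invented for the demonstration.

```python
import numpy as np

def morlet(t, f0, sigma):
    """Complex Morlet: a carrier at f0 under a Gaussian envelope."""
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2.0 * sigma**2))

def fringe_phase(signal, f0, sigma=None, dt=1.0):
    """Phase of the wavelet response at the carrier scale f0."""
    if sigma is None:
        sigma = 1.0 / f0                      # a few carrier periods wide
    t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    w = morlet(t, f0, sigma)
    # Convolving with the reversed conjugate wavelet = correlating with it.
    return np.angle(np.convolve(signal, np.conj(w)[::-1], mode="same"))

x = np.arange(512)
f0 = 0.05                                     # carrier: one fringe per 20 px
height_phase = 1.5 * np.exp(-((x - 256) / 100.0) ** 2)   # fake height bump
deformed = np.cos(2 * np.pi * f0 * x + height_phase)
reference = np.cos(2 * np.pi * f0 * x)

# Reference subtraction: the carrier term 2*pi*f0*x cancels, leaving the
# height-induced phase (wrapped into (-pi, pi]).
dphi = np.angle(np.exp(1j * (fringe_phase(deformed, f0)
                             - fringe_phase(reference, f0))))
```

The recovered `dphi` tracks the synthetic bump up to mild smoothing by the wavelet envelope; in the scheme described above, this phase map would then be converted to height via the microscope-stage calibration.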
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... Survey facility encumbers 45 acres for the Barrow Magnetic Observatory.'' is hereby corrected to read... Magnetic Observatory.'' Robert L. Lloyd, Supervisor, Lands, Realty and Title Transfer Program, Division of...
14. Historic American Buildings Survey, Plate # 54, 'Wohnhaus Martin, ...
14. Historic American Buildings Survey, Plate # 54, 'Wohnhaus Martin, Buffalo, N. Y.' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), MR. MARTIN'S SISTER'S HOUSE. - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
8. Historic American Buildings Survey, Plate # 52, 'Wohnhaus Martin, ...
8. Historic American Buildings Survey, Plate # 52, 'Wohnhaus Martin, Buffalo, N. Y. Westseite und Einzelheit der Westseite' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911). - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
Insights in Architecture. Resources in Education.
ERIC Educational Resources Information Center
Skena, K. George
1996-01-01
Focuses on new technological advances in the design process and especially the use of new materials to construct architectural marvels. Discusses the innovations of Le Corbusier, Mies van der Rohe, and Frank Lloyd Wright. Includes a student quiz and possible student outcomes. (JOW)
Optimization and Development of a Human Scent Collection Method
2007-06-04
19. Schoon, G. A. A., Scent Identification Lineups by Dogs (Canis familiaris): Experimental Design and Forensic Application. Applied Animal...Parker, Lloyd R., Morgan, Stephen L., Deming, Stanley N., Sequential Simplex Optimization. Chemometrics Series, ed. S.D. Brown. 1991, Boca Raton
2. WEST FRONT ENTRANCE, WITH OWNERS MR. & MRS. ISAAC ...
2. WEST FRONT ENTRANCE, WITH OWNERS MR. & MRS. ISAAC N. HAGAN (WHO CONTRACTED WITH FRANK LLOYD WRIGHT FOR THE DESIGN OF THIS HOUSE) - Isaac N. Hagan House, Kentuck Knob, U.S. Route 40 vicinity (Stewart Township), Chalkhill, Fayette County, PA
Florida Public Scoping Meeting | NOAA Gulf Spill Restoration
Doors open at 6:30 p.m., and the meeting will begin at 7:30 p.m. Bayview Community Center, 2001 Lloyd Street
Stumm, Frederick
2001-01-01
Great Neck, a peninsula, in the northwestern part of Nassau County, N.Y., is underlain by unconsolidated deposits that form a sequence of aquifers and confining units. Seven public-supply wells have been affected by the intrusion of saltwater from the surrounding embayments (Little Neck Bay, Long Island Sound, Manhasset Bay). Fifteen observation wells were drilled in 1991–96 for the collection of hydrogeologic, geochemical, and geophysical data to delineate the subsurface geology and extent of saltwater intrusion within the peninsula. Continuous high-resolution seismic-reflection surveys in the embayments surrounding the Great Neck peninsula and the Manhasset Neck peninsula to the east were completed in 1993 and 1994. Two hydrogeologic units are newly proposed herein: the North Shore aquifer and the North Shore confining unit. The new drill-core data collected in 1991–96 indicate that the Lloyd aquifer, the Raritan confining unit, and the Magothy aquifer have been completely removed from the northern part of the peninsula by extensive glacial erosion. Water levels at selected observation wells were measured quarterly throughout the study. The results from two studies of the effects of tides on ground-water levels in 1992 and 1993 indicate that water levels at wells screened within the North Shore and Lloyd aquifers respond to tides and pumping effects, but those in the overlying upper glacial aquifer (where the water table is located) do not. Data from quarterly water-level measurements and the tidal-effect studies indicate the North Shore and Lloyd aquifers to be hydraulically connected. Offshore seismic-reflection surveys in the surrounding embayments indicate at least two glacially eroded buried valleys with subhorizontal, parallel reflectors indicative of draped bedding that is interpreted as infilling by silt and clay.
The buried valleys (1) truncate the surrounding coarse-grained deposits, (2) are asymmetrical and steep sided, (3) trend northwest-southeast, (4) are 2-4 miles long and about 1 mile wide, and (5) extend to more than 200 feet below sea level. Water from six public-supply wells screened in the Magothy and upper glacial aquifers contained volatile organic compounds in concentrations above the New York State Department of Health Drinking Water Maximum Contaminant Levels, as did water from one public-supply well screened in the Lloyd aquifer, and from three observation wells screened in the upper glacial and Magothy aquifers. Four distinct wedge-shaped areas of saltwater intrusion have been delineated within the aquifers in Great Neck; three areas extend into the Lloyd and North Shore aquifers, and the fourth area extends into the upper glacial aquifer. Three other areas of saltwater intrusion also have been detected. Borehole-geophysical-logging data indicate that four of these saltwater wedges range from 20 to 125 feet in thickness and have sharp freshwater-saltwater interfaces, and that maximum chloride concentrations in 1996 ranged from 141 to 13,750 milligrams per liter. Seven public-supply wells have either been shut down or are currently being affected by saltwater intrusion.
NASA Astrophysics Data System (ADS)
Jones, Nicola
2017-01-01
Robots, DNA and electricity bask in the limelight, as Blade Runner reboots, Kazakhstan gets energetic and a 'space tapestry' rolls out. It's quite a year -- and key anniversaries hit, too, for Canada, the anthropology dynamo the Peabody Museum and architect Frank Lloyd Wright. Nicola Jones reports.
Literature, Creativity and Imagination.
ERIC Educational Resources Information Center
Association for Childhood Education International, Washington, DC.
Three speeches given by prominent authors at regional workshops sponsored by the Joint Liaison Committee of the Association for Childhood Education International and the Children's Book Council are printed here. The authors--Lloyd Alexander, Myra Cohn Livingston, and Virginia Hamilton--addressed the subject of "Literature, Creativity and…
77 FR 63296 - Marine Mammals; File No. 17115
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-16
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration RIN 0648-XC100 Marine Mammals; File No. 17115 AGENCY: National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric... permit has been issued to James Lloyd-Smith, Department of Ecology and Evolutionary Biology, University...
Being a Professional or Practising Professionally
ERIC Educational Resources Information Center
Dyer, Mary A.
2018-01-01
Research (Lloyd, E., and E. Hallet. [2010]. "Professionalising the Early Childhood Workforce in England: Work in Progress or Missed Opportunity?" "Contemporary Issues in Early Childhood" 11 (1): 75-88; Saks, M. [2012]. "Defining a Profession: The Role of Knowledge and Expertise." "Professions and…
10. Historic American Buildings Survey, Plate # 50, 'Wohnhaus Martin, ...
10. Historic American Buildings Survey, Plate # 50, 'Wohnhaus Martin, Buffalo, N. Y. Kamin in Wohnzimmer' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), FIREPLACE IN LIVING ROOM, WEST SIDE. - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
9. Historic American Buildings Survey, Plate # 44, 'Wohnhaus Martin, ...
9. Historic American Buildings Survey, Plate # 44, 'Wohnhaus Martin, Buffalo, N. Y. Grundriss des Hauptgeschosses' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), PLAN OF THE MAIN FLOORS. - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
75 FR 150 - Notice of Agreements Filed
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-04
...; Evergreen Line Joint Service Agreement; Hanjin Shipping Co., Ltd.; Hapag-Lloyd AG; Hyundai Merchant Marine... Overseas Container Line Limited; Yangming Marine Transport Corp.; and Zim Integrated Shipping Services, Ltd... initiatives and the reduction of air and water pollution. Agreement No.: 012008-004. Title: The 360 Quality...
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Radiophysicist and astronomer, born Ararat, Victoria, Australia, pioneered the use of a Lloyd's mirror arrangement for radio interferometry at Dover Heights in Australia, and located the source of solar radio noise within the disc of the Sun. As John Hey had suggested, the radio noise came from sunspots....
ERIC Educational Resources Information Center
Lybarger, Scott; Smith, Craig R.
1996-01-01
Reconstructs Lloyd Bitzer's situational model to serve as a guide for the generation of multiperspectival critical assessments of rhetorical discourse. Uses two of President Bush's speeches on the drug crisis to illustrate how the reconstructed model can account for such modern problems as multiple audiences, perceptions, and exigencies. (PA)
Spatial Modernist Architectural Artistic Concepts
NASA Astrophysics Data System (ADS)
Gudkova, T. V.; Gudkov, A. A.
2017-11-01
The development of a single spatial modernist conception had continued until the middle of the twentieth century. The first authors who proposed the new conceptual solutions of an architectural space that had the greatest impact on the further development of architecture were Le Corbusier, Frank Lloyd Wright, and Mies van der Rohe. They embodied different approaches within the common modernist spatial concept using the language of morphological, symbolic and phenomenological descriptions of space. The concept was based on the simplification of functional links, integration of internal architectural space with the environment due to the vanishing of boundaries between them and expansion of their interrelation. Le Corbusier proposed a spatio-temporal concept based on the movement and tempo-rhythmics of the space “from inside to outside.” Frank Lloyd Wright proposed the concept of integral space where inner and outer spaces were the parts of a whole. Mies van der Rohe was the author of the universal space concept in which the idea of the “dissolution” of the inner space in the outer space was embodied.
The Air Force and the Cold War
2005-09-01
March 2001. An Air Force Association Special Report. CANAN, James. War in Space. Harper & Row, 1982...Press, 1989. GARDNER, Lloyd C. Spheres of Influence: The Great Powers Partition Europe, From Munich to Yalta. Ivan R. Dee Publisher, 1993. GARTHOFF
78 FR 32378 - Endangered and Threatened Species; Take of Anadromous Fish
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-30
... proposed adult management program associated with spring Chinook salmon hatchery plans for major... be sent to Allyson Purcell, National Marine Fisheries Services, Salmon Management Division, 1201 N.E... should be directed to the National Marine Fisheries Services, Salmon Management Division, 1201 N.E. Lloyd...
ERIC Educational Resources Information Center
Greenwell, Raymond N.; Seabold, Daniel E.
2014-01-01
The Gale-Shapley stable marriage theorem is a fascinating piece of twentieth-century mathematics that has many practical applications--from labor markets to school admissions--yet is accessible to secondary school mathematics students. David Gale and Lloyd Shapley were both mathematicians and economists who published their work on the Stable…
ERIC Educational Resources Information Center
McClure, Connie
2010-01-01
This article describes how the author teaches a fourth- and fifth-grade unit on architecture called the Art and Science of Planning Buildings. Rockville, Indiana has fine examples of architecture ranging from log cabins, classic Greek columns, Victorian houses, a mission-style theater, and Frank Lloyd Wright prairie-style homes. After reading…
75 FR 78245 - Notice of Agreements Filed
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
... Discussion Agreement. Parties: American President Lines, Ltd.; American Roll-On Roll-Off Carrier; Crowley... American Presidents Lines, Ltd. and Hapag-Lloyd USA, LLC. Agreement No.: 012112. Title: Evergreen/Maersk... Lines (Hong Kong) Co. Ltd., and Compania Sud-Americana de Vapores S.A. Filing Party: Tara L. Leiter, Esq...
Residential Group Care Quarterly. Volume 5, Number 3, Winter 2005
ERIC Educational Resources Information Center
Michael, Jennifer, Ed.
2005-01-01
This issue of "Residential Group Care Quarterly" contains the following articles: (1) "Promising Practices for Adequately Funding and Reimbursing Residential Services" (Lloyd Bullard); (2) "Closing the Gender Gap" (Erin Andersen); (3) "Residential Child Care: Guidelines for Physical Techniques, Crisis Prevention, and Management" (Kurk Lalemand);…
This case study defines well integrity by the prevention of vertical migration of fluids to protect drinking water resources. A generic shale development well is presented, including design, construction, operational phase, and its plug and abandonment.
Herlihy's thesis revisited: some notes on investment in children in Medieval Muslim societies.
Giladi, Avner
2011-01-01
David Herlihy proposed "that we seek to evaluate, and on occasion even to measure, the psychological and economic investment which families and societies in the past were willing to make in their children" and suggested an alternative to both the "theory of discovered childhood [in Europe]," as introduced by Philippe Ariès and the notion of Lloyd DeMause that the historical evolution of child-parent relations in general formed a continuous and irreversible process of progress. This article shows that although we lack some of the archival sources that are essential for reconstructing the real lives of children in the premodern Mediterranean Muslim world, we are still able, with the "investment" criterion in mind, to assess attitudes toward children, at least in some defined periods of time and geographical regions.
Tocqueville, Garrison, and the Perfection of Journalism.
ERIC Educational Resources Information Center
Nord, David Paul
The 1830s marked a lush first flowering of democratic journalism in America--participatory journalism of the sort that Alexis de Tocqueville heralded. But contrary to standard journalism history, this democratic press had nothing to do with the rise of the penny press; in fact, William Lloyd Garrison's abolitionist paper, "The…
The Fabrication of Arrays of Single Ions in Silicon via Ion Implantation
2014-02-01
Requirement of optical nonlinearity for photon counting. Physical Review A, 65:042304, 2002. [108] Seth Lloyd and Samuel L. Braunstein. Quantum...defects in metals. Journal of Physics F: Metal Physics, 3(2):295, 1973. [361] George D. Watkins. Intrinsic defects in silicon. Materials Science in Semicon
Residential Group Care Quarterly. Volume 5, Number 1, Summer 2004
ERIC Educational Resources Information Center
Kirkwood, Scott, Ed.
2004-01-01
This issue of "Residential Group Care Quarterly" contains the following articles: (1) "National Definitions and Data Collection for Residential Care Facilities' Use of Restraint and Seclusion" (Lloyd Bullard); (2) "CWLA Publishes Best Practices in Behavior Support and Intervention Assessment Instrument" (Nupur Gupta); (3) "Initial Findings of an…
Chronicle of Higher Education. Volume 50, Number 21, January 30, 2004
ERIC Educational Resources Information Center
Chronicle of Higher Education, 2004
2004-01-01
"Chronicle of Higher Education" presents an abundant source of news and information for college and university faculty members and administrators. This January 30, 2004 issue of "Chronicle for Higher Education" includes the following articles: (1) "Fighting to Get out of a Rio Slum" (Lloyd, Marion); (2) "Against…
Using Multiple Representations to Teach Composition of Functions
ERIC Educational Resources Information Center
Steketee, Scott; Scher, Daniel
2012-01-01
Composition of functions is one of the five big ideas identified in NCTM's "Developing Essential Understanding of Functions, Grades 9-12" (Cooney, Beckmann, and Lloyd 2010). Through multiple representations (another big idea) and the use of The Geometer's Sketchpad[R] (GSP), students can directly manipulate variables and thus see dynamic visual…
The Treatment of Composition in Secondary and Early Collegiate Mathematics Curricula
ERIC Educational Resources Information Center
Horvath, Aladar Karoly
2012-01-01
Composition has been described as essential for understanding functions (Carlson, Oehrtman, & Engelke, 2010; Cooney, Beckmann, & Lloyd, 2010). Studies of students' understanding of function composition have shown that students use multiplication and other operations in place of composition (Carlson et al., 2010; Horvath, 2010). While…
Hazardous Waste Cleanup: Central Hudson Gas & Electric Corporation in Highland, New York
This site is located about five miles west of the Hudson River in Town of Lloyd, New York, Ulster County. It has operated as a vehicle and equipment storage and repair facility for an electric power transmission company since the 1950's. Both current
Mentoring Matters: Mentoring in Community
ERIC Educational Resources Information Center
Lloyd, Rachel Malchow
2013-01-01
Rachel Malchow Lloyd describes the wonderful and quite natural potential for a team of teachers, through their professional conversations and routine planning, to model for less experienced teachers how to prepare for instruction, how to assess its effect, and how to assume a professional stance consistently in the daily execution of…
78 FR 35270 - Notice of Agreements Filed
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
...: American President Lines, Ltd. and APL Co. PTE Ltd. (operating as a single carrier); A.P. Moller-Maersk A/S... Container Lines Company, Ltd; Evergreen Line Joint Service Agreement, FMC No. 011982; Hanjin Shipping Co., Ltd.; Hapag-Lloyd A.G.; Hyundai Merchant Marine Co., Ltd.; Kawasaki Kisen Kaisha, Ltd.; Mediterranean...
DOT National Transportation Integrated Search
1995-02-01
World wide merchant vessel fire and explosion data were analyzed to determine the contribution of these casualties to the marine pollution problem. The source of information is the Lloyd's Casualty Information System Data Base. The major findings of ...
Adolescent Attitudes toward Psychiatric Medication: The Utility of the Drug Attitude Inventory
ERIC Educational Resources Information Center
Townsend, Lisa; Floersch, Jerry; Findling, Robert L.
2009-01-01
Background: Despite the effectiveness of psychotropic treatment for alleviating symptoms of psychiatric disorders, youth adherence to psychotropic medication regimens is low. Adolescent adherence rates range from 10-80% (Swanson, 2003; Cromer & Tarnowski, 1989; Lloyd et al., 1998; Brown, Borden, and Clingerman, 1985; Sleator, 1985) depending on…
Community College Humanities Review, 1995.
ERIC Educational Resources Information Center
Seabrook, John H., Ed.
1995-01-01
This annual volume of the Community College Humanities Review (CCHR) presents a wide range of articles dealing with humanities--from Lloyd Kaplan's attempts to set the record straight (by presenting a more accurate appraisal and a truer perspective of Dave Brubeck's outstanding contribution to the course of jazz) to Walter Krieglstein's…
Why Stereotypes Don't Even Make Good Defaults
ERIC Educational Resources Information Center
Connolly, Andrew C.; Fodor, Jerry A.; Gleitman, Lila R.; Gleitman, Henry
2007-01-01
Many concepts have stereotypes. This leaves open the question of whether concepts "are" stereotypes. It has been argued elsewhere that theories that identify concepts with their stereotypes or with stereotypical properties of their instances (e.g., Rosch, E. (1978). "Principles of categorization." In E. Rosch & B. B. Lloyd (Ed.), "Cognition and…
76 FR 14395 - Notice of Agreements Filed
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-16
...: Crowley Latin America Services, LLC; Dole Ocean Cargo Express; King Ocean Services de Venezuela/King Ocean.... Agreement No.: 012108-002. Title: The World Liner Data Agreement. Parties: ANL Container Line Pty Ltd.; A.P... Americana de Vapores S.A. ; Hamburg-Sud; Hapag-Lloyd AG; Independent Container Line Ltd.; Mediterranean...
46 CFR 177.300 - Structural design.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) CONSTRUCTION AND ARRANGEMENT Hull Structure § 177.300 Structural design. Except as otherwise allowed by this... (incorporated by reference, see 46 CFR 175.600); (b) Steel hull vessels: (1) Lloyd's Yachts and Small Craft; or (2) ABS Steel Vessel Rules (< 61 Meters)(incorporated by reference, see 46 CFR 175.600); (c) Fiber...
46 CFR 177.300 - Structural design.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) CONSTRUCTION AND ARRANGEMENT Hull Structure § 177.300 Structural design. Except as otherwise allowed by this... (incorporated by reference, see 46 CFR 175.600); (b) Steel hull vessels: (1) Lloyd's Yachts and Small Craft; or (2) ABS Steel Vessel Rules (<61 Meters)(incorporated by reference, see 46 CFR 175.600); (c) Fiber...
46 CFR 177.300 - Structural design.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) CONSTRUCTION AND ARRANGEMENT Hull Structure § 177.300 Structural design. Except as otherwise allowed by this... (incorporated by reference, see 46 CFR 175.600); (b) Steel hull vessels: (1) Lloyd's Yachts and Small Craft; or (2) ABS Steel Vessel Rules (<61 Meters)(incorporated by reference, see 46 CFR 175.600); (c) Fiber...
46 CFR 177.300 - Structural design.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) CONSTRUCTION AND ARRANGEMENT Hull Structure § 177.300 Structural design. Except as otherwise allowed by this... (incorporated by reference, see 46 CFR 175.600); (b) Steel hull vessels: (1) Lloyd's Yachts and Small Craft; or (2) ABS Steel Vessel Rules (< 61 Meters)(incorporated by reference, see 46 CFR 175.600); (c) Fiber...
46 CFR 177.300 - Structural design.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) CONSTRUCTION AND ARRANGEMENT Hull Structure § 177.300 Structural design. Except as otherwise allowed by this... (incorporated by reference, see 46 CFR 175.600); (b) Steel hull vessels: (1) Lloyd's Yachts and Small Craft; or (2) ABS Steel Vessel Rules (< 61 Meters)(incorporated by reference, see 46 CFR 175.600); (c) Fiber...
You're No Jack Kennedy: Bentsen vs. Quayle.
ERIC Educational Resources Information Center
Arnold, Christa L.; Fadely, Dean
The 1988 Vice-Presidential debate between candidates Dan Quayle and Lloyd Bentsen served as a rhetorical forum for airing the policies and behaviors of each candidate. Elements of compliance-gaining and apologia were interrelated and overlapped throughout the debate. Both candidates effectively maneuvered these strategies and avoided direct…
Chronicle of Higher Education. Volume 51, Number 23, February 11, 2005
ERIC Educational Resources Information Center
Chronicle of Higher Education, 2005
2005-01-01
"Chronicle of Higher Education" presents an abundant source of news and information for college and university faculty members and administrators. This February 11, 2005 issue of "Chronicle of Higher Education" includes the following articles: (1) "A Giant Eye on the Stars" (Lloyd, Marion); (2) "Taiwanese…
Lloyd Bitzer's "Rhetorical Situation" and the "Exigencies" of Academic Discourse.
ERIC Educational Resources Information Center
Walzer, Arthur E.
Academic discourse, which takes its definitive characteristics from the papers written by professors to those in a particular discipline for the purpose of solving problems or furthering knowledge, is sustained by disciplinary rhetorical exigencies that prompt, shape, and convene an audience for such writing. The phrase "rhetorical…
Scenarios that describe cyber attacks on the electric grid consistently predict significant disruptions to the economy and citizens' quality of life...phenomena that deserve further investigation, such as the importance of some individual power plants in influencing the adversary's probability of
13. Historic American Buildings Survey, Plate # 51, 'Wohnhaus Martin, ...
13. Historic American Buildings Survey, Plate # 51, 'Wohnhaus Martin, Buffalo, N. Y. Gesellschaftszimmer und Speisezimmer' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), LEFT: RECEPTION ROOM, FIREPLACE ON NORTH WALL, RIGHT: DINING ROOM, VIEW TOWARD NORTHWEST. - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
15. Historic American Buildings Survey, Plate # 55, 'Wohnhaus Martin, ...
15. Historic American Buildings Survey, Plate # 55, 'Wohnhaus Martin, Buffalo, N. Y. Grundrisse des Haupt und Obergeschosses' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), PLAN OF THE FIRST AND SECOND FLOORS, MR. MARTIN'S SISTER'S HOUSE. - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
A Brief Note on Evidence-Centered Design as a Mechanism for Assessment Development and Evaluation
ERIC Educational Resources Information Center
Bond, Lloyd
2014-01-01
Lloyd Bond comments here on the Focus article in this issue of "Measurement: Interdisciplinary Research and Perspectives". The Focus article is entitled: "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games" (Russell G. Almond, Yoon Jeon Kim, Gertrudes Velasquez, and Valerie J. Shute). Bond…
Spacecraft Surface Charging Handbook
1992-11-01
Charging of Large Space Structures in Polar Orbit," Proceedings of the Air Force Geophysics Laboratory Workshop on Natural Charging of Large Space Structures...3, p. 1433-1440, 1991. Bowman, C., Bogorad, A., Brucker, G., Seehra, S., and Lloyd, T., "ITO-Coated RF Transparent Materials for Antenna Sunscreen
Hispanic Diversity in New York City.
ERIC Educational Resources Information Center
Gurak, Douglas T.; And Others
1980-01-01
This issue of the Hispanic Research Center's journal contains four articles which focus on various aspects of the Hispanic community in New York City. In the first article, Douglas T. Gurak and Lloyd H. Rogler use data from censuses, ethnographic accounts, and public documents to profile New York City's Hispanic population. They review the…
Residential Group Care Quarterly. Volume 7, Number 3, Winter 2007
ERIC Educational Resources Information Center
Shenk, Emily, Ed.
2007-01-01
"Residential Group Care Quarterly" is published four times a year by the Child Welfare League of America (CWLA). This issue contains the following articles: (1) Building a Lasting Agency: The Leadership Institute (Letitia Howland); (2) For Our Safety: Examining High-Risk Interventions for Children and Youth (Michael A. Nunno, Lloyd Bullard, and…
The Identification of Rock Types in an Arid Region by Air Photo Patterns.
1981-06-01
Mountain, Texas. AAPG Bull. vol. 44, pp. 1785-1792. Harbour, R. L. 1972. Geology of the northern Franklin Mountains, Texas and New Mexico. USGS Bull...Lloyd C. 1953. Upper Ordovician and Silurian Stratigraphy of Sacramento Mountains, Otero County, New Mexico. AAPG Bull. vol. 37, pp. 1894-1918. Pray
Translational research: putting the right price on innovation.
Reeve-Johnson, Lloyd
2015-03-21
The animal health sector makes a substantial contribution to innovation in the field of human health, but this is undervalued, says Lloyd Reeve-Johnson. He argues that this contribution needs to be recognised and quantified if the benefits of One Health, and the potential of the veterinary profession, are to be realised. British Veterinary Association.
Disorders of Bilirubin Metabolism
1966-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California Medical Center, San Francisco. They are prepared from transcriptions by Drs. Martin J. Cline and Hibbard E. Williams, Assistant Professors of Medicine, under the direction of Dr. Lloyd H. Smith, Professor of Medicine and Chairman of the Department of Medicine. PMID:5909869
11. Historic American Buildings Survey, Plate # 53, 'Wohnhaus Martin, ...
11. Historic American Buildings Survey, Plate # 53, 'Wohnhaus Martin, Buffalo, N. Y. Wohnzimmer mit Heizkorper und Beleuchtung' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), LIVING ROOM NORTH SIDE, LOOKING TOWARDS DINING ROOM, DOORS TO PORCH AT RIGHT. - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
12. Historic American Buildings Survey, Plate # 47, 'Wohnhaus Martin, ...
12. Historic American Buildings Survey, Plate # 47, 'Wohnhaus Martin, Buffalo, N. Y. Gewachshaus und Heizkorper mit Beleuchtung' in Frank Lloyd Wright Ausgefuhrte Bauten (Berlin: Ernst Wasmuth A. G., 1911), LEFT: BOOKCASE IN LIBRARY OR LIVING ROOM WITH LIGHTS, RIGHT: CONSERVATORY (DEMOLISHED). - Darwin D. Martin House, 125 Jewett Parkway, Buffalo, Erie County, NY
1492--the medical consequences.
Camargo, C A
1994-06-01
This discussion was selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from a transcription, it has been edited by Nathan M. Bass, MD, PhD, Associate Professor of Medicine, under the direction of Lloyd H. Smith Jr, MD, Professor of Medicine and Associate Dean in the School of Medicine.
The Paradigm Shift: Leadership Challenges in the Public Sector Schools in Pakistan
ERIC Educational Resources Information Center
Mansoor, Zahida
2015-01-01
Previous research has established that school heads as leaders are vital to the successful implementation of educational reforms (Derek, 2009; Robinson, Lloyd, & Rowe, 2008). Education system in Pakistan is going through a paradigm shift from teacher centered to learner centered classrooms using English as the instructional language. The…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... container in Respondents' custody to Complainants, and subsequently shipped the damaged container; failed to... liquidated three of five containers. Through these actions, Complainants allege that Respondent Int'l INC engaged in practice as an ocean transportation intermediary without a license and accepted cargo for an...
77 FR 41168 - Endangered and Threatened Species; Take of Anadromous Fish
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-12
... Assessment (EA) under the National Environmental Policy Act (NEPA) of the potential effects of two direct... EA for public review, comment, and submission of written data, views, arguments or other relevant.... Lloyd Boulevard, Suite 1100, Portland, OR 97232. Comments may also be submitted by email to: SnakeFallEA...
Flexible working motivates all staff.
2001-04-01
A recent survey has demolished the myth that work-life balance is only of interest to women with children. The survey, commissioned by Lloyds TSB on behalf of the Employers for Work Life Balance organisation, shows that young workers and men are equally interested in flexible working arrangements that allow them to pursue interests outside of work.
Using an Architectural Metaphor for Information Design in Hypertext.
ERIC Educational Resources Information Center
Deboard, Donn R.; Lee, Doris
2001-01-01
Uses Frank Lloyd Wright's (1867-1959) organic architecture as a metaphor to define the relationship between a part and a whole, whether the focus is on a building and its surroundings or information delivered via hypertext. Reviews effective strategies for designing text information via hypertext and incorporates three levels of information…
Contagious Ideas: Vulnerability, Epistemic Injustice and Counter-Terrorism in Education
ERIC Educational Resources Information Center
O'Donnell, Aislinn
2018-01-01
The article addresses the implications of Prevent and Channel for epistemic justice. The first section outlines the background of Prevent. It draws upon Moira Gatens and Genevieve Lloyd's concept of the collective imaginary, alongside Lorraine Code's concept of epistemologies of mastery, in order to outline some of the images and imaginaries that…
ERIC Educational Resources Information Center
McNeece, Molly
2009-01-01
Two fourth-grade teachers presented the idea of using the author's art class to inspire the students to write creatively. The theme of scary stories needed an art project to match. The author immediately had a favorite lesson in mind. By putting a small twist on one of her standard "Frank Lloyd Wright House" projects, scary plans began to take…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giorda, Paolo; Zanardi, Paolo; Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
We analyze the dynamical-algebraic approach to universal quantum control introduced in P. Zanardi and S. Lloyd, e-print quant-ph/0305013. The quantum state space H encoding information decomposes into irreducible sectors and subsystems associated with the group of available evolutions. If this group coincides with the unitary part of the group algebra CK of some group K then universal control is achievable over the K-irreducible components of H. This general strategy is applied to different kinds of bosonic systems. We first consider massive bosons in a double well and show how to achieve universal control over all finite-dimensional Fock sectors. We then discuss a multimode massless case giving the conditions for generating the whole infinite-dimensional multimode Heisenberg-Weyl enveloping algebra. Finally we show how to use an auxiliary bosonic mode coupled to finite-dimensional systems to generate high-order nonlinearities needed for universal control.
Monti, Jack; Como, Michael D.; Busciolano, Ronald J.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with State and local agencies, systematically collects groundwater data at varying measurement frequencies to monitor the hydrologic conditions on Long Island, New York. Each year during April and May, the USGS conducts a synoptic survey of water levels to define the spatial distribution of the water table and potentiometric surfaces within the three main water-bearing units underlying Long Island—the upper glacial, Magothy, and Lloyd aquifers (Smolensky and others, 1989)—and the hydraulically connected Jameco (Soren, 1971) and North Shore aquifers (Stumm, 2001). These data and the maps constructed from them are commonly used in studies of Long Island’s hydrology and are used by water managers and suppliers for aquifer management and planning purposes. Water-level measurements made in 503 monitoring wells, a network of observation and supply wells, and 16 streamgage locations across Long Island during April–May 2010 were used to prepare the maps in this report. Measurements were made by the wetted-tape method to the nearest hundredth of a foot. Water-table and potentiometric-surface altitudes in these aquifers were contoured by using these measurements. The water-table contours were interpreted by using water-level data collected from 16 streamgages, 349 observation wells, and 1 supply well screened in the upper glacial aquifer and (or) shallow Magothy aquifer; the Magothy aquifer’s potentiometric-surface contours were interpreted from measurements at 67 observation wells and 27 supply wells screened in the middle to deep Magothy aquifer and (or) the contiguous and hydraulically connected Jameco aquifer. The Lloyd aquifer’s potentiometric-surface contours were interpreted from measurements at 55 observation wells and 4 supply wells screened in the Lloyd aquifer or the contiguous and hydraulically connected North Shore aquifer. 
Many of the supply wells are in continuous operation and, therefore, were turned off for a minimum of 24 hours before measurements were made so that the water levels in the wells could recover to the level of the potentiometric head in the surrounding aquifer. Full recovery time at some of these supply wells can exceed 24 hours; therefore, water levels measured at these wells are assumed to be less accurate than those measured at observation wells, which are not pumped (Busciolano, 2002). In this report, all water-level altitudes are referenced to the National Geodetic Vertical Datum of 1929 (NGVD 29). Hydrographs are included on these maps for selected wells that are instrumented with recording equipment. These hydrographs are representative of the 2010 water year to show the changes that have occurred throughout that period. The synoptic survey water level measured at the well is included on each hydrograph.
ERIC Educational Resources Information Center
Rock, Maxine
A conference was held for college presidents, student work supervisors, and students who are involved in student-work programs. Attending the conference were representatives of the following colleges: Alice Lloyd College, Pippa Passes, Kentucky; Berea College, Berea, Kentucky; Berry College, Mount Berry, Georgia; Bethune-Cookman College, Daytona…
76 FR 70212 - Qualification of Drivers; Exemption Applications; Vision
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
.... Hershberger Patrick J. Hogan, Jr. Todd A. McBrian Amilton T. Monteiro Harold W. Mumford John W. Myre David G. Oakley Charles D. Oestreich John S. Olsen Thomas J. Prusik Brent L. Seaux Glen W. Sterling The exemptions.... Bequeaith Lloyd K. Brown Larry Chinn Kecia D. Clark-Welch Tommy R. Crouse Ben W. Davis Charles A. DeKnikker...
75 FR 36774 - Qualification of Drivers; Exemption Renewals; Vision
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
.... Mark, Douglas A. Mendoza, Michael R. Moore, Richard W. Neyens, John P. Rodrigues and Charles W. Towner... exemptions for Roy L. Allen, Lyle H. Banser, Lloyd J. Botsford, Walter M. Brown, Charley J. Davis, Derek T... with the goals and objectives of 49 U.S.C. 31136 and 31315. Issued on: June 21, 2010. Larry W. Minor...
Educational Technology for the Global Village: Worldwide Innovation and Best Practices
ERIC Educational Resources Information Center
Lloyd, Les, Ed.; Barreneche, Gabriel I., Ed.
2014-01-01
With this timely book, editors Les Lloyd and Gabriel Barreneche present an eye-opening look at projects that are innovating with technology to improve education and, indeed, the very quality of people's lives around the world. From collaborative learning communities and social networks to Web 2.0 tools, MOOCs, and mobiles, experts discuss an array…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-08
... Green Drive, Swedesboro, NJ 08085. October 16, 2013, 7:00 PM *....... Wyndham Garden Exton Valley Forge... project. Illustrations of these alternatives are provided in Appendix 1A. Blakely Road-Rock Raymond... Highway 30 to Rock Raymond Road where it would turn south to rejoin the Line 1278 pipeline route. Lloyd...
Nuclear Tubulin: A Novel Target for Breast Cancer Chemotherapy
2001-05-01
A. Castillo, R.F. Luduena, and I. Meza, Departamentos de Biologia Celular and Biomedicina Molecular, CINVESTAV del IPN, México, D.F...resistance. J Biol Chem 270: 31269-31275. Hyams JS, Lloyd CW. 1994. The role of multiple tubulin isoforms in cellular microtubule function. In: Raff E editor
Becker, Charles E.
1969-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California Medical Center, San Francisco. Taken from transcriptions, they are prepared by Drs. Martin J. Cline and Hibbard E. Williams, Associate Professors of Medicine, under the direction of Dr. Lloyd H. Smith, Jr., Professor of Medicine and Chairman of the Department of Medicine. PMID:5762466
A Christian College Devotes a Chapel to Many Faiths
ERIC Educational Resources Information Center
Supiano, Becky
2009-01-01
Chapman University had a perfectly good chapel, but officials wanted a new one. What they came up with shares more with a Frank Lloyd Wright house than with a typical church. Though Chapman is affiliated with the Disciples of Christ, a mainline Protestant denomination, college leaders wanted a place of worship free of the trappings of…
Concentrating Solar Power Projects - Lake Cargelligo
Solar Storage Receiver, set out in a multi-tower solar array. The project consists of eight SSRs, each mounted on its own tower. This graphite receiver acts as receiver, boiler and storage system. Manufacturer: Lloyd Energy Systems Pty Ltd. Receiver type: graphite solar storage receiver. Heat-transfer fluid
Congestive Heart Failure After Acute Myocardial Infarction
Scheinman, Melvin M.
1971-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from transcriptions, they are prepared by Drs. Sydney E. Salmon and Robert W. Schrier, Assistant Professors of Medicine, under the direction of Dr. Lloyd H. Smith, Jr., Professor of Medicine and Chairman of the Department of Medicine. PMID:5100504
Camargo, Carlos A.
1994-01-01
This discussion was selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from a transcription, it has been edited by Nathan M. Bass, MD, PhD, Associate Professor of Medicine, under the direction of Lloyd H. Smith Jr, MD, Professor of Medicine and Associate Dean in the School of Medicine. PMID:7519808
How We Got to Sesame Street; Art on Screen
ERIC Educational Resources Information Center
Goldstein, Evan R.
2009-01-01
In 1966 a group of friends gathered for a dinner party in Manhattan. As the evening was winding down, one of the guests, Lloyd N. Morrisett, a vice president at the Carnegie Corporation, turned to his host, a television executive named Joan Ganz Cooney, and asked a seemingly innocuous question: Can television educate young children? Unknown to…
ERIC Educational Resources Information Center
Adams, Catherine; Lloyd, Julian
2007-01-01
In this article, Catherine Adams, clinical senior lecturer in speech and language therapy at the University of Manchester, and Julian Lloyd, senior lecturer in psychology at Newman College, Birmingham, describe the implementation and effects of an intensive programme of speech and language therapy for children who have pragmatic language…
Lincoln Advanced Science and Engineering Reinforcement
1989-01-01
Chamblee, Physics, Lincoln University; Kelvin Clark, Physics, Lincoln University; Dwayne Cole, Mechanical Engineering, Howard University; Francis Countiss, Physics...Mathematics, Lincoln University; Spencer Lane, Mechanical Engineering, Howard University; Edward Lawerence, Physics, Lincoln University; Cyd Hall, Actuarial Science...Pittsburgh; Lloyd Hammond, Ph.D., Bio-Chemistry, Purdue University; Timothy Moore, M.S., Psychology, Howard University. * completed. During 1988, three (3
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images, and extends the Vector Quantization technique to the Address Vector Quantization method. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; only the index is sent to the channel. Reconstruction of the image is done by a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. To overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about one-half to one-third that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 that review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
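The codebook design step described above (K-means / generalized Lloyd) can be sketched as follows. This is a minimal illustrative implementation of the generic algorithm, not code from the thesis; all function names are invented for this sketch.

```python
import numpy as np

def generalized_lloyd(vectors, codebook_size, iters=20, seed=0):
    """Train a VQ codebook with the generalized Lloyd algorithm.

    Alternates two steps:
      1. Nearest-neighbour: assign each training vector to its closest codeword.
      2. Centroid: replace each codeword with the mean of its assigned vectors.
    """
    rng = np.random.default_rng(seed)
    # Initialise the codebook from randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Squared-error distance from every vector to every codeword.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):                # empty cells keep their old codeword
                codebook[k] = members.mean(axis=0)
    return codebook, labels

def encode(vectors, codebook):
    """Replace each vector by the index (address) of its best-matching codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def decode(indices, codebook):
    """Table-lookup reconstruction: each index addresses the codebook."""
    return codebook[indices]
```

Encoding then transmits only the indices; `decode` is the table-lookup reconstruction the abstract describes.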
ERIC Educational Resources Information Center
Hall, Lincoln H.
Freshmen of lower and middle socioeconomic status identified by the W. Lloyd Warner Index of Status Characteristics were studied to determine: (1) differences in the motives, values, attitudes, goals, aspirations, self-concepts, and interests of these two groups; (2) if any of these factors distinguish between nonachievers (GPA below 2.0) of…
Como, Michael D.; Noll, Michael L.; Finkelstein, Jason S.; Monti, Jack; Busciolano, Ronald J.
2015-01-01
Hydrographs are included on these maps for selected wells that have digital recording equipment. These hydrographs are representative of the 2013 water year to show the changes that have occurred throughout that period. The synoptic survey water level measured at the well is included on each hydrograph.
Colorectal Cancer—A New Look at an Old Problem
Toribara, Neil W.
1994-01-01
This discussion was selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from a transcription, it has been edited by Nathan M. Bass, MD, PhD, Associate Professor of Medicine, under the direction of Lloyd H. Smith Jr. MD, Professor of Medicine and Associate Dean in the School of Medicine. PMID:7810127
Swee'Pea and Other Playground Legends: Tales of Drugs, Violence and Basketball.
ERIC Educational Resources Information Center
Valenti, John; Naclerio, Ron
This biography chronicles the life of Lloyd "Swee'pea" Daniels, considered one of the finest basketball players ever to come out of New York City. The book also serves as a sociological expose of the dark side of collegiate and professional sports, in its description of a youngster chasing the dream of playing basketball, but finally…
Hornby's principles of fire control planning
H. T. Gisborne
1939-01-01
On August 27, 1937, Lloyd G. Hornby died of heart failure on the Toboggan Creek forest fire in the Clearwater National Forest. Few if any men in or out of the U.S. Forest Service have made a greater contribution to fire control planning than did he. In the following article, H. T. Gisborne outlines the principles of fire control planning developed by Mr. Hornby,...
ERIC Educational Resources Information Center
Diaz, Joseph O. Prewitt
1988-01-01
Responds to Dunn's paper on Hispanic-Anglo differences in IQ scores. Comments on Dunn's translation of Peabody Picture Vocabulary Test-Revised into Castilian Spanish, and concludes this version is inappropriate for mainland Puerto Rican and Mexican-American children due to improper translation and validation methods. Contains 27 references.…
2003-04-24
KENNEDY SPACE CENTER, FLA. - Jim Lloyd, with the Mars Exploration Rover (MER) program, places on MER-1 a computer chip with about 35,000 laser-engraved signatures of visitors to the rovers at the Jet Propulsion Laboratory. The signatures include those of senators, artists, and John Glenn. The identical Mars rovers are scheduled to launch June 5 and June 25 from Cape Canaveral Air Force Station.
ERIC Educational Resources Information Center
George Washington Univ. Medical Center, Washington, DC. Rehabilitation Research and Training Center.
This document contains 12 papers presented to medical and vocational rehabilitation professionals on the topic of vocational rehabilitation and End Stage Renal Disease (ESRD) at a Denver conference in 1979. The following papers are contained in this report: "Rehabilitation and ESRD: Services with a New Thrust" by Kathleen E. Lloyd;…
46 CFR 282.22 - Maintenance (upkeep) and repairs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Hong Kong 15 50.35 7.55 U.S 5 0 0 100 22.75 45.5 10.35 U.S.-Foreign Cost Differential 27.39 (c... Electrical Repairs 11,868 11,117 Exterior Painting 5,456 7,974 Interior Painting 681 1,162 Estimate Totals... partially available—data, published by the classification societies and Lloyd's Voyage Record, reporting the...
Action growth of charged black holes with a single horizon
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Sasaki, Misao; Wang, Shao-Jiang
2017-06-01
According to the conjecture "complexity equals action," the complexity of a holographic state is equal to the action of a Wheeler-DeWitt (WDW) patch of black holes in anti-de Sitter space. In this paper we calculate the action growth of charged black holes with a single horizon, paying attention to the contribution from a spacelike singularity inside the horizon. We consider two kinds of such charged black holes: one is a charged dilaton black hole, and the other is a Born-Infeld black hole with β2Q2<1 /4 . In both cases, although an electric charge appears in the black hole solutions, the inner horizon is absent; instead a spacelike singularity appears inside the horizon. We find that the action growth of the WDW patch of the charged black hole is finite and satisfies the Lloyd bound. As a check, we also calculate the action growth of a charged black hole with a phantom Maxwell field. In this case, although the contributions from the bulk integral and the spacelike singularity are individually divergent, these two divergences just cancel each other and a finite action growth is obtained. But in this case, the Lloyd bound is violated as expected.
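For reference, the "Lloyd bound" invoked here is Lloyd's limit on the computation rate of a system of energy E; in the complexity = action setting it is usually stated as follows (a standard form from the complexity = action literature; prefactor conventions vary):

```latex
% Complexity = action conjecture:
C \;=\; \frac{S_{\mathrm{WDW}}}{\pi\hbar},
% Lloyd bound on the growth rate of complexity:
\frac{dC}{dt} \;\le\; \frac{2E}{\pi\hbar}
\quad\Longleftrightarrow\quad
\frac{dS_{\mathrm{WDW}}}{dt} \;\le\; 2E ,
```

with E the total energy (mass M) of the black hole. For charged black holes the commonly quoted refinement replaces 2E by 2[(M − μQ) − (M − μQ)_gs], subtracting the ground-state contribution at fixed charge.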
Time Dependent Dielectric Breakdown in Copper Low-k Interconnects: Mechanisms and Reliability Models
Wong, Terence K.S.
2012-01-01
The time dependent dielectric breakdown phenomenon in copper low-k damascene interconnects for ultra large-scale integration is reviewed. The loss of insulation between neighboring interconnects represents an emerging back end-of-the-line reliability issue that is not fully understood. After describing the main dielectric leakage mechanisms in low-k materials (Poole-Frenkel and Schottky emission), the major dielectric reliability models that have appeared in the literature are discussed, namely the Lloyd model, the 1/E model, the thermochemical E model, the E^1/2 models, the E^2 model and the Haase model. These models can be broadly categorized into those that consider only intrinsic breakdown (Lloyd, 1/E, E and Haase) and those that take into account copper migration in low-k materials (E^1/2, E^2). For each model, the physical assumptions and the proposed breakdown mechanism are discussed, together with the quantitative relationship predicting the time to breakdown and supporting experimental data. Experimental attempts at validating dielectric reliability models using data obtained from low-field stressing are briefly discussed. The phenomenon of soft breakdown, which often precedes hard breakdown in porous ultra low-k materials, is highlighted for future research.
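As a rough orientation to how these model families differ, the field-acceleration laws most commonly quoted for the time to breakdown t_BD are (generic textbook forms with prefactors and thermal activation terms omitted, not the review's own notation):

```latex
t_{BD} \;\propto\; \exp(-\gamma E)                      % thermochemical E model
t_{BD} \;\propto\; \exp\!\left(\tfrac{G}{E}\right)      % 1/E model
t_{BD} \;\propto\; \exp(-\gamma \sqrt{E})               % E^{1/2} (Cu-migration) models
```

where E is the electric field and γ, G are field-acceleration parameters; the models disagree most strongly when extrapolating from high stress fields to operating fields.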
Hydrogeology and extent of saltwater intrusion on Manhasset Neck, Nassau County, New York
Stumm, Frederick; Lange, Andrew D.; Candela, J.L.
2002-01-01
Manhasset Neck, a peninsula on the northern shore of Long Island, N.Y., is underlain by unconsolidated deposits that form a sequence of aquifers and confining units. Ground water at several public-supply wells has been affected by the intrusion of saltwater from the surrounding embayments (Manhasset Bay, Long Island Sound, Hempstead Harbor). Twenty-two boreholes were drilled during 1992-96 for the collection of hydrogeologic, geochemical, and geophysical data to delineate the subsurface geology and the extent of saltwater intrusion within the peninsula. A series of continuous high-resolution seismic-reflection surveys was completed in 1993 and 1994 to delineate the character and extent of the hydrogeologic deposits beneath the embayments surrounding Manhasset Neck. The new drill-core data indicate two hydrogeologic units--the North Shore aquifer and the North Shore confining unit--where the Lloyd aquifer, Raritan confining unit, and the Magothy aquifer have been completely removed by glacial erosion. Water levels at selected observation wells were measured quarterly throughout the study. These data, and continuous water-level records, indicate that (1) the upper glacial (water-table) and Magothy aquifers are hydraulically connected and that their water levels do not respond to tidal fluctuations, and (2) the Lloyd and North Shore aquifers also are hydraulically connected, but their water levels do respond to pumping and tidal fluctuations. Offshore seismic-reflection surveys in the surrounding embayments, and drill-core samples, indicate at least four glacially eroded buried valleys with subhorizontal, parallel reflectors indicative of draped bedding that is interpreted as infilling by silt and clay.
The buried valleys (1) truncate the surrounding coarse-grained deposits, (2) are asymmetrical and steep sided, (3) trend northwest-southeast, (4) are 2 to 4 miles long and about 1 mile wide, and (5) extend to more than 400 feet below sea level. Water from 12 public-supply wells screened in the Magothy and upper glacial aquifers contained volatile organic compounds in concentrations above the New York State Department of Health Drinking Water maximum contaminant levels, as did water from one public-supply well screened in the Lloyd aquifer and from two observation wells screened in the upper glacial aquifer. Five distinct areas of saltwater intrusion have been delineated in Manhasset Neck; three extend into the Lloyd and North Shore aquifers, and two extend into the upper glacial and Magothy aquifers. Borehole-geophysical-logging data indicate that several of these saltwater wedges range from a few feet to more than 125 feet in thickness and have sharp freshwater-saltwater interfaces, and that chloride concentrations within these wedges in 1997 ranged from 102 to 9,750 milligrams per liter. Several public-supply wells have either been shut down or are currently being affected by these saltwater wedges. Data show active saltwater intrusion in at least two of the wedges.
[Marine Emission Inventory and Its Temporal and Spatial Characteristics in the City of Shenzhen].
Yang, Jing; Yin, Pei-ling; Ye, Si-qi; Wang, Shui-sheng; Zheng, Jun-yu; Ou, Jia-min
2015-04-01
To analyze the characteristics of marine emissions in Shenzhen City, activity-based and fuel-based approaches were utilized to develop a marine emission inventory for the year 2010, using vessel files from Lloyd's Register of Shipping (LR) and vessel track data from the automatic identification system (AIS). The marine emission inventory was temporally (resolution: 1 hour) and spatially (resolution: 1 km x 1 km) allocated based on the vessel track data. Results showed that total emissions of SO2, NOx, CO, PM10, PM2.5 and VOCs from marine vessels in Shenzhen City were about 13.6 x 10³, 23.3 x 10³, 2.2 x 10³, 1.9 x 10³, 1.7 x 10³ and 1. x 10³ t, respectively. Among the various types of marine vessels, emissions from container vessels were the highest; among driving modes, hotelling was found to have the largest emission. Marine emissions were generally higher in the daytime, with vessel-specific peaks. Spatially, marine emissions were zonally distributed, with hot spots in the western port group, Dapeng Bay and the key waterway.
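The activity-based method mentioned above is, in its generic textbook form, emissions = rated engine power × load factor × activity time × emission factor, summed over driving modes. A minimal sketch under that assumption, with hypothetical emission factors (the paper's actual factors and AIS-derived activity data are not reproduced here):

```python
# Illustrative activity-based emission estimate for one vessel call.
# Emission factors below are hypothetical placeholders, in g/kWh.
POLLUTANT_EF_G_PER_KWH = {
    "SO2": 10.5,
    "NOx": 17.0,
    "PM2.5": 1.5,
}

def emissions_kg(rated_kw, load_factor, hours, ef_g_per_kwh):
    """E = P * LF * T * EF, converted from grams to kilograms."""
    return rated_kw * load_factor * hours * ef_g_per_kwh / 1000.0

def vessel_call_inventory(rated_kw, modes):
    """Sum emissions over driving modes (e.g. cruise, manoeuvring, hotelling).

    `modes` maps mode name -> (load_factor, hours). Returns kg per pollutant.
    """
    total = {p: 0.0 for p in POLLUTANT_EF_G_PER_KWH}
    for load_factor, hours in modes.values():
        for p, ef in POLLUTANT_EF_G_PER_KWH.items():
            total[p] += emissions_kg(rated_kw, load_factor, hours, ef)
    return total
```

A real inventory would draw rated power from the LR vessel files and per-mode hours from AIS tracks, then allocate the totals onto the 1 km x 1 km, 1-hour grid.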
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. A binary look-up table is then made to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the look-up table is encoded by Huffman coding. The results show that the LWT gives the best result of the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the normalisation case, the TC are normalised by dividing the TC by ? (where ? is the number of samples) to reduce the range of the TC. The normalised coefficients (NC) are then thresholded; after that the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) for LWT with normalisation is improved compared to that without normalisation.
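The Max-Lloyd quantiser applied to the nonzero coefficients can be sketched as below. This is a generic illustrative implementation of the two Lloyd-Max optimality conditions (decision thresholds midway between adjacent levels; each level at the centroid of its cell), not the article's code:

```python
import numpy as np

def max_lloyd_quantiser(samples, n_levels, iters=50):
    """Design a Max-Lloyd (Lloyd-Max) scalar quantiser from training samples.

    Iterates the two optimality conditions:
      * thresholds lie midway between adjacent reconstruction levels;
      * each reconstruction level is the mean of the samples in its cell.
    Returns (levels, thresholds).
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    # Start from uniformly spaced levels over the sample range.
    levels = np.linspace(samples[0], samples[-1], n_levels)
    for _ in range(iters):
        thresholds = (levels[:-1] + levels[1:]) / 2.0
        cells = np.digitize(samples, thresholds)   # cell index per sample
        for k in range(n_levels):
            cell = samples[cells == k]
            if len(cell):                          # empty cells keep their level
                levels[k] = cell.mean()
    return levels, (levels[:-1] + levels[1:]) / 2.0

def quantise(x, levels, thresholds):
    """Map each sample to its nearest reconstruction level."""
    return levels[np.digitize(np.asarray(x, dtype=float), thresholds)]
```

The quantised level indices would then be entropy-coded (here, arithmetic coding) along with the Huffman-coded position map.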
Indigenous Acoustic Detection.
1982-01-26
RESEARCH Contract #N00014-80-C-0829, Task No. NR 139-004. FINAL REPORT: Indigenous Acoustic Detection, by James J. Whitesell. Secondary Education/Biology...methodology. DTIC. CD-58-PL. Lloyd, J. E. 1981. Personal communication. Nevo, E. and S. A. Blondheim. 1972. Acoustic isolation in the speciation of...6. Walker, T. J. 1981. Personal communication. *Walker, T. J. and J. J. Whitesell. 1981. Singing schedules and sites for a tropical burrowing
Order of Battle of the United States Army World War II
1945-12-01
... 13 Dec 1944 ... Maj Gen Emil F Rheinhardt; Col Lloyd H Gibbons; Brig Gen Robert V Maraist ... Kaiserslautern, Bad Durkheim, Rockenhausen, Mainz, Wiesbaden, Lich; Pfalz
ERIC Educational Resources Information Center
Mulrooney, Sarah
2009-01-01
One of the central themes addressed by this paper is the design of the curriculum for architectural education using three schools of architecture: the Bauhaus in Dessau, Crown Hall in Chicago and the Faculty of Architecture and Urbanism (FAU) in Sao Paulo. It also reflects on the practices in other schools such as Frank Lloyd Wright's Taliesin…
ERIC Educational Resources Information Center
Austin, Gary F., Ed.
1977-01-01
The report contains the papers given and resolution adopted at the 1976 conference of the Professional Rehabilitation Workers with the Adult Deaf. An introduction by the association's president (C. Lloyd) precedes the keynote address and response (by C. Mills and H. Hirschi, respectively) which focused on such issues as mainstreaming and Public…
The Air Staff of Tomorrow: Smarter, Faster, Better
2009-02-12
Lloyd W. Howell Jr., and Dov S. Zakheim, "Military of Millennials," Resilience Report at Strategy+Business, a Booz Allen Hamilton Publication, 10...decisions and take risk. In other words, the cognitive organization must have entrepreneur-minded people throughout the system, particularly on the...and Dov S. Zakheim, "Military of Millennials." 63 Attracting talented people from commercial enterprises may require alternative pay and rank
"We Have a Great Task Ahead of Us!": Child-Hate in Roald Dahl's "The Witches"
ERIC Educational Resources Information Center
Curtis, James M.
2014-01-01
The depictions of cruel witches in Roald Dahl's novel "The Witches" echo the cruel, abusive measures taken by adults in the historical treatment of children. The concept of child-hatred, described by Lloyd Demause and other critics, is an effective lens through which to view the hyperbolized hatred of children described in "The…
Hypermetabolism as a Risk Factor for ALS
2013-09-01
hIBM) and sporadic Inclusion Body Myositis (sIBM). These disorders are characterized clinically by progressive muscle degeneration and weakness that...1186–1189 (2008). 3. Lloyd, T. E. Novel therapeutic approaches for inclusion body myositis. Current Opinion in Rheumatology 22, 658–664 (2010). 4...46, 424–430 (2008). 8. Weihl, C. C. & Pestronk, A. Sporadic inclusion body myositis: possible pathogenesis inferred from biomarkers. Curr Opin
Cognitive and Neural Bases of Skilled Performance.
1987-10-04
advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic...and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing...recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive Lloyd
ERIC Educational Resources Information Center
Association for Education in Journalism and Mass Communication.
The Advertising section of the Proceedings contains the following 13 papers: "Offering a Creative Track in the Advertising Major: A Case History" (Beth E. Barnes and Carla V. Lloyd); "Messages of Individualism in French, Spanish, and American Television Advertising" (Ronald E. Taylor and Joyce Wolburg); "Frequency Levels…
Organization of Workshop on Emerging Technologies for In-Situ Processing
1992-08-31
for Atomic Layer Processing, H. Helvajian, Materials & Mechanics Technology Center, Aerospace Corporation, Los Angeles, California 90009 USA. There exists...Krishna Saraswat (Stanford) $750 - Henry Helvajian (Aerospace Corp.) $500 - Lloyd Harriott (Bell Labs) $500 - Jon Orloff (Oregon Grad. Inst.) $750 - Tom...Ablation Deposition of Thin Films and Surface Analysis by STM/AFM (Coffee) 3:30 PM Henry Helvajian Laser Material Interaction for Atomic Layer
"The Student Personnel Point of View" as a Catalyst for Dialogue: 75 Years and beyond
ERIC Educational Resources Information Center
Roberts, Dennis C.
2012-01-01
"The Student Personnel Point of View" (American Council on Education, 1937) has been a catalyst for dialogue on the purposes of student personnel and student affairs work for 75 years. Through analysis of the writings of Esther Lloyd-Jones and E. G. Williamson, two of the most prolific scholars of the early years of student personnel work, the…
Mice examined in Animal Laboratory of Lunar Receiving Laboratory
NASA Technical Reports Server (NTRS)
1969-01-01
Landrum Young (seated), Brown and Root-Northrup, and Russell Stullken, Manned Spacecraft Center, examine mice in the Animal Laboratory of the Lunar Receiving Laboratory which have been inoculated with lunar sample material. The astronauts will be released from quarantine on August 11, 1969. Donald K. Slayton (right), MSC Director of Flight Crew Operations; and Lloyd Reeder, training coordinator.
Essential Hypertension—Where Are We Going?
Ives, Harlan E.
1990-01-01
This discussion was selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from a transcription, it has been edited by Homer A. Boushey, MD, Professor of Medicine, and Nathan M. Bass, MD, PhD, Associate Professor of Medicine, under the direction of Lloyd H. Smith, Jr, MD, Professor of Medicine and Associate Dean in the School of Medicine. Images PMID:2244377
Reactive Planning in Air Combat Simulation
2003-06-01
International Simulation Technology and Training Conference (SimTecT 96), Melbourne, Australia, 1996. [TID 98] TIDHAR G., HEINZE C., SELVESTREL M...[HEI 98] HEINZE C., SMITH B., CROSS M., "Thinking Quickly: Agents for Modelling Air Warfare", In Proceedings of the Australian Joint Conference on...Artificial Intelligence, AJCAI 98, Brisbane, Australia, 1998. [HEI 01] HEINZE C., GOSS S., JOSEFSSON T., BENNETT K., WAUGH S., LLOYD I., MURRAY G., OLDFIELD
Warnock, Jonathan P.; Eberhart, Shawn L.; Clawson, Steven R.; Noto, Christopher R.
2017-01-01
The Cleveland-Lloyd Dinosaur Quarry (CLDQ) is the densest deposit of Jurassic theropod dinosaurs discovered to date. Unlike typical Jurassic bone deposits, it is dominated by the presence of Allosaurus fragilis. Since excavation began in the 1920s, numerous hypotheses have been put forward to explain the taphonomy of CLDQ, including a predator trap, a drought assemblage, and a poison spring. In an effort to reconcile the various interpretations of the quarry and reach a consensus on the depositional history of CLDQ, new data are required to develop a robust taphonomic framework congruent with all available data. Here we present two new data sets that aid in the development of such a framework for CLDQ. First, X-ray fluorescence of CLDQ sediments indicates elevated barite and sulfide minerals relative to other sediments from the Morrison Formation in the region, suggesting an ephemeral environment dominated by periods of hypereutrophic conditions during bone accumulation. Second, the degree of abrasion and hydraulic equivalency of small bone fragments dispersed throughout the matrix were analyzed from CLDQ. Results of these analyses suggest that bone fragments are autochthonous or parautochthonous and are derived from bones deposited in the assemblage rather than transported. The variability in abrasion exhibited by the fragments is most parsimoniously explained by local periodic re-working and re-deposition during seasonal fluctuations throughout the duration of the quarry assemblage. Collectively, these data support previous interpretations that the CLDQ represents an attritional assemblage in a poorly-drained overbank deposit where vertebrate remains were introduced post-mortem to an ephemeral pond during flood conditions.
Furthermore, while the elevated heavy metals detected at the Cleveland-Lloyd Dinosaur Quarry are not likely the primary driver for the accumulation of carcasses, they are likely the result of multiple sources; some metals may be derived from post-depositional and diagenetic processes, and others are potentially produced from an abundance of decomposing vertebrate carcasses. These new data help to support the inferred depositional environment of the quarry as an ephemeral pond, and represent a significant step in understanding the taphonomy of the bonebed and Late Jurassic paleoecology in this region. PMID:28603668
Recent Developments in MC-ICP-MS for Uranium Isotopic Determination from Small Samples.
NASA Astrophysics Data System (ADS)
Field, P.; Lloyd, N. S.
2016-12-01
V002: Advances in approaches and instruments for isotope studies Session ID#: 12653 Recent Developments in MC-ICP-MS for Uranium Isotopic Determination from Small Samples. M. Paul Field (1) & Nicholas S. Lloyd (2). 1 Elemental Scientific Inc., Omaha, Nebraska, USA. field@icpms.com 2 Thermo Fisher Scientific, Hanna-Kunath-Str. 11, 28199 Bremen, Germany. nicholas.lloyd@thermofisher.com Uranium isotope ratio determination for nuclear, nuclear safeguards, and environmental applications can be challenging because of (1) the large isotopic differences between samples and (2) the low abundance of 234U and 236U. For some applications the total uranium quantity is limited, or it is desirable to run at lower concentrations for radiological protection. Recent developments in inlet systems and detector technologies allow small samples to be analyzed at higher precision using MC-ICP-MS. Here we evaluate the combination of the Elemental Scientific apex omega desolvation system and microFAST-MC dual-loop flow-injection system with the Thermo Scientific NEPTUNE Plus MC-ICP-MS. The inlet systems allow automated syringe loading and injection of small sample volumes with efficient desolvation to minimize the hydride interference on 236U. The highest ICP ion-sampling efficiency is realized using the Thermo Scientific Jet Interface. Thermo Scientific 10^13 ohm amplifier technology allows small ion beams to be measured at higher precision, offering the highest signal/noise ratio with a linear and stable response over a wide dynamic range (ca. 1 kcps - 30 Mcps). For nanogram quantities of low-enriched and depleted uranium standards, 235U was measured with 10^13 ohm amplifier technology. The minor isotopes (234U and 236U) were measured by SEM ion counters with RPQ lens filters, which offer the lowest detection limits. For sample amounts of ca. 20 ng, the minor isotopes can be moved onto 10^13 ohm amplifiers and 235U onto a standard 10^11 ohm amplifier.
To illustrate the application, a set of solutions from environmental particles [1] was analyzed; the use of precise three-isotope ratio plots allows source attribution with increased confidence. [1] Lloyd et al. 2009, J. Anal. At. Spectrom., 24(6), 752-758.
Ground-water resources of Kings and Queens Counties, Long Island, New York
Buxton, Herbert T.; Shernoff, Peter K.
1995-01-01
The aquifers beneath Kings and Queens Counties supplied an average of more than 120 Mgal/d (million gallons per day) for industrial and public water supply during 1904-47, but this pumping caused saltwater intrusion and a deterioration of water quality that led to the cessation of pumping for public supply in Kings County in 1947 and in western Queens County in 1974. Since the cessation of pumping in Kings and western Queens Counties, ground-water levels have recovered steadily, and the saltwater has partly dispersed and become diluted. In eastern Queens County, where pumpage for public supply averages 60 Mgal/d, all three major aquifers contain a large cone of depression. The saltwater-freshwater interface in the Jameco-Magothy aquifer already extends inland in southeastern Queens County and is moving toward this cone of depression. The pumping centers' proximity to the north shore also warrants monitoring for saltwater intrusion in the Flushing Bay area. Urbanization and development on western Long Island since before the turn of this century have caused significant changes in the ground-water budget (total inflow and outflow) and patterns of movement. Some of the major causes are: (1) intensive pumping for industrial and public supply; (2) paving of large land-surface areas; (3) installation of a vast network of combined (storm and sanitary) sewers; (4) leakage from a water-supply-line network that carries more than 750 Mgal/d; and (5) burial of stream channels and extensive wetland areas near the shore. Elevated nitrate and chloride concentrations throughout the upper glacial (water-table) aquifer indicate widespread contamination from land surface.
Localized contamination in the underlying Jameco-Magothy aquifer is attributed to downward migration in areas of hydraulic connection between aquifers where the Gardiners Clay is absent. A channel eroded through the Raritan confining unit provides a pathway for migration of surface contaminants to the Lloyd aquifer sooner than anticipated. Although ground water in the Lloyd aquifer is still pristine, present pumping rates and potentiometric levels in the Lloyd indicate that this aquifer is much more sensitive to withdrawals than the other aquifers are and contains an extremely limited water supply.
Water-in-Olivine Magma Ascent Chronometry: Every Crystal is a Clock
NASA Astrophysics Data System (ADS)
Newcombe, M. E.; Asimow, P. D.; Ferriss, E.; Barth, A.; Lloyd, A. S.; Hauri, E.; Plank, T. A.
2017-12-01
The syneruptive decompression rate of basaltic magma in volcanic conduits is thought to be a critical control on eruptive vigor. Recent efforts have constrained decompression rates using models of diffusive water loss from melt embayments (Lloyd et al. 2014; Ferguson et al. 2016), olivine-hosted melt inclusions (Chen et al. 2013; Le Voyer et al. 2014), and clinopyroxene phenocrysts (Lloyd et al. 2016). However, these techniques are difficult to apply because of the rarity of melt embayments and clinopyroxene phenocrysts suitable for analysis and the complexities associated with modeling water loss from melt inclusions. We are developing a new magma ascent chronometer based on syneruptive diffusive water loss from olivine phenocrysts. We have found water zonation in every olivine phenocryst we have measured, from explosive eruptions of Pavlof, Seguam, Fuego, Cerro Negro and Kilauea volcanoes. Phenocrysts were polished to expose a central plane normal to the crystallographic `b' axis and volatile concentration profiles were measured along `a' and `c' axes by SIMS or nanoSIMS. Profiles are compared to 1D and 3D finite-element models of diffusive water loss from olivine, with or without melt inclusions, whose boundaries are in equilibrium with a melt undergoing closed-system degassing. In every case, we observe faster water diffusion along the `a' axis, consistent with the diffusion anisotropy observed by Kohlstedt and Mackwell (1998) for the so-called `proton-polaron' mechanism of H-transport. Water concentration gradients along `a' match the 1D diffusion model with a diffusivity of 10^-10 m^2/s (see Plank et al., this meeting), olivine-melt partition coefficient of 0.0007-0.002 (based on melt inclusion-olivine pairs), and decompression rates equal to the best-fit values from melt embayment studies (Lloyd et al. 2014; Ferguson et al. 2016).
Agreement between the melt embayment and water-in-olivine ascent chronometers at Fuego, Seguam, and Kilauea Iki demonstrates the potential of this new technique, which can be applied to any olivine-bearing mafic-intermediate eruption using common analytical tools (SIMS and FTIR). In theory, each crystal is a clock, with the potential to record variable ascent in the conduit, over the course of an eruption, and between eruptions.
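The 1D diffusion model invoked in this abstract can be sketched with a simple explicit finite-difference scheme. This is an illustrative toy, not the authors' finite-element code: the crystal half-width, initial water content, rim boundary value, and ascent durations below are hypothetical, and only the diffusivity (10^-10 m^2/s) comes from the abstract.

```python
import numpy as np

def water_loss_profile(half_width_um=500.0, D=1e-10, ascent_s=600.0,
                       c0=50.0, c_rim=0.0, nx=101):
    """Explicit finite-difference solution of 1D diffusion, dC/dt = D d2C/dx2,
    for water in an olivine crystal: the rim (x=0) is held at the degassed
    boundary value for the duration of ascent; the core (x=end) is no-flux."""
    x = np.linspace(0.0, half_width_um * 1e-6, nx)   # metres, rim -> core
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / D                             # stability: dt <= dx^2/(2D)
    c = np.full(nx, c0)                              # ppm H2O, uniform start
    t = 0.0
    while t < ascent_s:
        c[0] = c_rim                                 # rim equilibrates with degassed melt
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[-1] = c[-2]                                # symmetry (no-flux) at the core
        t += dt
    return x, c

# Slow vs. fast ascent: the shorter the ascent, the flatter the core plateau.
_, slow = water_loss_profile(ascent_s=3600.0)
_, fast = water_loss_profile(ascent_s=300.0)
```

Comparing the two runs shows the behavior that makes each crystal a usable clock: a faster ascent preserves more water in the crystal core, so the measured profile shape constrains the decompression duration.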
NASA Astrophysics Data System (ADS)
Dai, LongGui; Yang, Fan; Yue, Gen; Jiang, Yang; Jia, Haiqiang; Wang, Wenxin; Chen, Hong
2014-11-01
Generally, nano-scale patterned sapphire substrate (NPSS) has better performance than micro-scale patterned sapphire substrate (MPSS) in improving the light extraction efficiency of LEDs. Laser interference lithography (LIL) is a powerful fabrication method for periodic nanostructures that requires no photo-masks for different designs. However, the Lloyd's mirror LIL system has the disadvantage that fabricated patterns are inevitably distorted, especially for large-area two-dimensional (2D) periodic nanostructures. Herein, we introduce a two-beam LIL system to fabricate consistent large-area NPSS. Quantitative analysis and characterization indicate that high uniformity of the photoresist arrays is achieved. Through a combination of dry- and wet-etching techniques, well-defined NPSS with a period of 460 nm were prepared across the whole sapphire substrate. The deviation in the bottom width of the triangle truncated pyramid arrays is 4.34% over the whole 2-inch sapphire substrate, which is suitable for application in industrial production of NPSS.
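The two-beam geometry above sets the pattern period through Λ = λ/(2 sin θ), where θ is the half-angle between the interfering beams. A minimal sketch, assuming a 325 nm HeCd exposure laser (a common LIL source; the abstract does not state the wavelength):

```python
import math

def fringe_period(wavelength_nm, half_angle_deg):
    """Two-beam interference period: Lambda = lambda / (2 sin(theta))."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

def half_angle_for_period(wavelength_nm, period_nm):
    """Invert the relation to find the beam half-angle for a target period."""
    return math.degrees(math.asin(wavelength_nm / (2.0 * period_nm)))

# Half-angle needed for the 460 nm period reported in the abstract:
theta = half_angle_for_period(325.0, 460.0)
```

At this assumed wavelength the 460 nm period requires a half-angle of roughly 21 degrees; any variation of θ across the exposure field maps directly into period error, which is why beam uniformity matters for wafer-scale NPSS.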
ERIC Educational Resources Information Center
Alford, Betty J., Ed.; Perreault, George, Ed.; Zellner, Luana, Ed.; Ballenger, Julia W., Ed.
2011-01-01
This is the 2011 Yearbook of the National Council of Professors of Educational Administration (NCPEA). This Yearbook contains five parts. Part I, Invited Chapters, includes: (1) NCPEA President's Message, 2011 (Gary W. Kinsey); (2) Shadows and Images II (Lloyd Duvall); and (3) Micropolitics in the School: Teacher Leaders' Use of Political Skill…
2013-06-01
and security, vessel traffic management, accident and disaster response, search and rescue as well as law enforcement are collecting information...piracy threat. Individually, Nigeria, Ghana, Benin, Togo, Cameroon and Senegal have taken practical steps to police their waters but they lack...use their vast natural resources for socio-economic development of their countries. Lloyd's, the leading maritime insurer, has listed Nigeria, Benin
How Reader Girl Got Her Groove Back: One Woman's Heroic Quest to Overcome the Classics
ERIC Educational Resources Information Center
Hale, Shannon
2008-01-01
This author has been a "reader girl" since the third grade, when she first read "Trumpet of the Swan" on her own. Fourth grade brought C. S. Lewis, Lloyd Alexander, and Joan Aiken. Fifth grade was Cynthia Voigt, Anne McCaffrey, and Robin McKinley. And so it continued with Ellen Raskin, Patricia McKillip, and L. M. Montgomery, a veritable battalion…
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Select Committee on Aging.
This document contains witnesses' testimonies and prepared statements from the Congressional hearing held in Chattanooga, Tennessee to gain that state's perspective on catastrophic health insurance. Opening statements are included from Representatives Marilyn Lloyd and Claude Pepper. Two panels of witnesses provide testimony. The first panel,…
Meta-Analytical Online Repository of Gene Expression Profiles of MDS Stem Cells
2015-12-01
Myelodysplastic syndrome, AML: Acute myeloid leukemia, ALL: Acute lymphoblastic leukemia. Numbers in brackets are reference numbers. doi:10.1371/journal.pone...disorders such as acute leukemias and myelodysplastic syndromes would be distinguishable in our analysis. Unsupervised clustering showed that even though...al. Angiogenesis in acute and chronic leukemias and myelodysplastic syndromes. Blood. 2000;96:2240–2245. [PubMed: 10979972] 17. Yoon SY, Li CY, Lloyd
1979-08-01
flagellate, Tritrichomonas foetus. The specific activities for enzymes in the original homogenate, cumulative percentage distributions in the various...with another protozoan, T. foetus (Lloyd, Lindmark and Muller, in press). The lack of latency for this trypanosomal ATPase indicates the enzyme to occupy...flagellate protozoan Tritrichomonas foetus. J. Gen. Microbiol. (in press). Lowry, O. H., Rosebrough, N. J., Farr, A. L. and Randall, R. J. (1951) Protein 9
Handling Massive Models: Representation, Real-Time Display and Interaction
2008-09-16
Published, K. Ward, N. Galoppo, and M. Lin, "Interactive Virtual Hair Salon", Presence, p. , vol. , (2007). Published, K. Ward, F. Bertails, T.-Y...Detection for Deformable Models using Representative-Triangles", Symposium on Interactive 3D Graphics and Games, p. , vol. , (2008). Published...Interactive 3D Graphics and Games (I3D), p. , vol. , (2008). Published, Brandon Lloyd, Naga K. Govindaraju, Cory Quammen, Steven E. Molnar, Dinesh
2003-04-24
KENNEDY SPACE CENTER, FLA. - Jim Lloyd, with the Mars Exploration Rover (MER) program, points to the place on MER-1 where he will place a computer chip with about 35,000 laser-engraved signatures of visitors to the rovers at the Jet Propulsion Laboratory. The signatures include those of senators, artists, and John Glenn. The identical Mars rovers are scheduled to launch June 5 and June 25 from Cape Canaveral Air Force Station.
Law Enforcement Methods for Counterinsurgency Operations
2005-05-26
Report to the Legislature 2003." 52 David Starbuck, former Chief of Kansas City Gang Task Force, Interview by author, 9 November 2004, Kansas City...MO. 29 calculated violence against rival gangs, and synchronization with other Crip franchises, of which there are over 800 in the U.S. These are...54 J.D. Lloyd, Gangs, Contemporary Issues, 26. 55 David Starbuck, Interview by author. 31 leader, several lieutenants, a drug coordinator
Progress in the Treatment of Advanced Breast Cancer
Gordan, Gilbert S.
1969-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California Medical Center, San Francisco. Taken from transcriptions, they are prepared by Drs. Martin J. Cline and Hibbard E. Williams, Associate Professors of Medicine, under the direction of Dr. Lloyd H. Smith, Jr., Professor of Medicine and Chairman of the Department of Medicine. Images: Figure 1, Figure 2, Figure 3. PMID:5798009
Processing Multiyear Procurement (MYP) Submissions - A Handbook for Air Force Program Offices
1985-05-01
Contracting and Manufacturing Policy; "Policy Letter 84-1l - Multiyear Contracting Guidance" Headquarters, United States Air Force, Washington, D.C...supplement to Air Force FAR Sup No. 17.191: as enclosed in Thomas E. Lloyd, Colonel, USAF, Assistant DCS/Contracting and Manufacturing, "DCS...Contracting and Manufacturing Policy Letter 84-16, Multiyear Contracting Guidance." Headquarters, Air Force Systems Command, Andrews Air Force Base
NASA Astrophysics Data System (ADS)
Druart, Guillaume; Rommeluere, Sylvain; Viale, Thibault; Guerineau, Nicolas; Ribet-Mohamed, Isabelle; Crastes, Arnaud; Durand, Alain; Taboury, Jean
2014-05-01
Today, both military and civilian applications require miniaturized and cheap optical systems. One way to follow this trend is to decrease the pixel pitch of focal plane arrays (FPA). In order to evaluate the performance of the overall optical system, it is necessary to measure the modulation transfer function (MTF) of these pixels. However, small pixels lead to higher cut-off frequencies; therefore, original MTF measurement methods able to extract frequencies up to these high cut-off frequencies are needed. In this paper, we present a way to extract the 1D MTF at high frequencies by projecting fringes on the FPA. The device uses a Lloyd mirror placed near and perpendicular to the focal plane array, so that an interference pattern of fringes can be projected on the detector. By varying the angle of incidence of the light beam, we can tune the period of the interference fringes and thus explore a wide range of spatial frequencies, in particular around the cut-off frequency of the pixel, which is one of the most interesting regions. The method is illustrated on a 640×480 microbolometer focal plane array with a pixel pitch of 17 µm in the LWIR spectral region.
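The fringe-projection idea can be made concrete: the fringe spatial frequency is f = 2 sin θ/λ, so sweeping the angle of incidence sweeps f. The sketch below uses the 17 µm pitch from the abstract but assumes a 10 µm illumination wavelength as a representative LWIR value (not stated in the text).

```python
import math

PITCH_UM = 17.0                           # pixel pitch from the abstract
CUTOFF_CYC_PER_MM = 1000.0 / PITCH_UM     # ~58.8 cyc/mm, 1/pitch cutoff

def fringe_frequency(wavelength_um, incidence_deg):
    """Spatial frequency (cycles/mm) of Lloyd-mirror fringes:
    f = 2 sin(theta) / lambda."""
    return 2.0 * math.sin(math.radians(incidence_deg)) * 1000.0 / wavelength_um

def angle_for_frequency(wavelength_um, f_cyc_per_mm):
    """Incidence angle (degrees) that produces a target fringe frequency."""
    return math.degrees(math.asin(f_cyc_per_mm * wavelength_um / 2000.0))

# Angle needed to reach the pixel cutoff frequency at the assumed wavelength:
theta_cutoff = angle_for_frequency(10.0, CUTOFF_CYC_PER_MM)
```

Under these assumptions the pixel cutoff is reached at an incidence angle of roughly 17 degrees, so the full frequency range of interest is accessible with modest angular scanning.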
Hydrologic framework of Long Island, New York
Smolensky, Douglas A.; Buxton, Herbert T.; Shernoff, Peter K.
1990-01-01
Long Island, N.Y., is underlain by a mass of unconsolidated geologic deposits of clay, silt, sand, and gravel that overlie southward-sloping consolidated bedrock. These deposits are thinnest in northern Queens County (northwestern Long Island), where bedrock crops out, and increase to a maximum thickness of 2,000 ft in southeastern Long Island. This sequence of unconsolidated deposits consists of several distinct geologic units ranging in age from late Cretaceous through Pleistocene, with some recent deposits near shores and streams. These units are differentiated by age, depositional environment, and lithology in table 1. Investigations of ground-water availability and flow patterns may require information on the internal geometry of the hydrologic system that geologic correlations and interpretation alone cannot provide; hydrologic interpretations in which deposits are differentiated on the basis of water-transmitting properties are generally needed also. This set of maps and vertical sections depicts the hydrogeologic framework of the unconsolidated deposits that form Long Island's ground-water system. These deposits can be classified into eight major hydrogeologic units (table 1). The hydrogeologic interpretations presented herein are not everywhere consistent with strict geologic interpretation owing to facies changes and local variations in the water-transmitting properties within geologic units. These maps depict the upper-surface altitude of seven of the eight hydrogeologic units, which, in ascending order, are: consolidated bedrock, Lloyd aquifer, Raritan confining unit, Magothy aquifer, Monmouth greensand, Jameco aquifer, and Gardiners Clay. The upper glacial aquifer—the uppermost unit—is at land surface over most of Long Island and is, therefore, not included. 
The nine north-south hydrogeologic sections shown below depict the entire sequence of unconsolidated deposits and, together with the maps, provide a detailed three-dimensional interpretation of Long Island's hydrogeologic framework. The structure-contour map that shows the upper-surface altitude of the Cretaceous deposits is included to illustrate the erosional unconformity between the Cretaceous and overlying Pleistocene deposits. Pleistocene erosion played a major role in determining the shape and extent of the Lloyd aquifer, the Raritan confining unit, and the Magothy aquifer, and thus partly determined their hydrogeologic relation with subsequent (post-Cretaceous) deposits.
An extension of the QZ algorithm for solving the generalized matrix eigenvalue problem
NASA Technical Reports Server (NTRS)
Ward, R. C.
1973-01-01
This algorithm is an extension of Moler and Stewart's QZ algorithm with some added features for saving time and operations. In addition, some properties of the QR algorithm that were not practical to implement in the QZ algorithm can be generalized with the combination-shift QZ algorithm. Numerous test cases are presented to give practical application tests for the algorithm. Based on these results, this algorithm should be preferred over existing algorithms that attempt to solve the class of generalized eigenproblems in which both matrices are singular or nearly singular.
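For readers unfamiliar with the problem class: the generalized eigenproblem solved by the QZ family is A x = λ B x. The sketch below does not reimplement QZ; it builds a pair (A, B) with known generalized eigenvalues and checks them via the naive reduction through B⁻¹A, which is exactly the route that breaks down when B is singular or nearly singular, the case the QZ algorithm is designed to handle.

```python
import numpy as np

# Construct a generalized eigenproblem with a deterministic answer:
# with A = B @ D for diagonal D and invertible B, A x = lambda B x
# reduces to D x = lambda x, so the generalized eigenvalues are diag(D).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well-conditioned, invertible
D = np.diag([1.0, 2.0, 3.0, 4.0])
A = B @ D

# Naive reduction to a standard eigenproblem (valid ONLY for invertible B;
# QZ computes the same eigenvalues without ever forming B^-1 A).
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
```

The check A e_i = d_i B e_i holds column by column, which is the defining relation of the generalized eigenpair; when B is singular the reduction above fails entirely while QZ still returns meaningful (possibly infinite) eigenvalues as alpha/beta ratios.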
Construction of Escherichia coli Strains for Conversion of Nitroacetophenones to ortho-Aminophenols
2003-11-01
curve for quantification by HPLC. NB, 2-nitrotoluene, 3-nitrotoluene, 4-nitrobiphenyl ether, and 1-nitronaphthalene and the aminophenols formed were...pH 8.0, containing 1% glucose. Production of aminophenol or 2AAP from the nitroacetophenone was monitored by HPLC. VOL. 69, 2003 BIOSYNTHESIS OF ortho...ortho-Aminophenols. Venkateswarlu Kadiyala, Lloyd J. Nadeau, and Jim C. Spain* Air Force Research Laboratory, Tyndall Air Force Base, Florida 32403-5323
FAA Rotorcraft Research, Engineering, and Development Bibliography 1962-1989
1990-05-01
Albert G. Delucien) (NTIS: ADA 102 521) FAA/CT-88/10 Digital Systems Validation Handbook - Volume II (R.L. McDowall, Hardy P. Curd, Lloyd N. Popish... Digital Systems in Avionics and Flight Control Applications, Handbook - Volume I, (Ellis F. Hilt, Donald Eldredge, Jeff Webb, Charles Lucius, Michael S...Structure Statistics of Helicopter GPS Navigation with the Magnavox Z-Set (Robert D. Till) FAA/CT-82/115 Handbook - Volume I, Validation of Digital
Background Material on Structural Reform of the Department of Defense
1986-03-01
R D Y, Oklahoma; THOMAS M. FOGLIETTA, Pennsylvania; ROY DYSON, Maryland; DENNIS M. HERTEL, Michigan; MARILYN LLOYD, Tennessee; NORMAN SISISKY, V i r...of the several Services, of defense strategy, of the overall defense program, and of how business gets transacted in the Pentagon. They must de... managing the careers of those officers best qualified for joint duty. Actions are already being addressed by the Joint Chiefs to properly manage
Sensitivity Analysis of the Seakeeping Behavior of Trimaran Ships
2003-12-01
Architects and Marine Engineers; 1967. 827 p. [18] Lloyd ARJM. Seakeeping: Ship Behaviour in Rough Weather. West Yorkshire: Ellis Horwood Ltd; 1989...INCAT Australia Pty Ltd. This design features side hulls with a very low freeboard at their bows and a definite, above-water center bow. Additional...composite ship, uses an Air Cushion Catamaran (ACC) design, which is an advanced variant of SES technology. Most recently, a co-operative design team that
1993-02-01
Dayton (513) 229-3951 Rsch Inst Peter Ahnert NOAA/MWS (703) 471-5302 Frank Schmidlin NASA/GSFC (804) 824-1618 Maurice Friedman Viz Mfg (617) 942-2000...NWS (801) 524-4000 G Mr. Tom Clemmons DPG (801) 831-4674 G Mr. Lloyd Corbett NAWCWPNS China Lake (619) 939-6058 M Ms Laurie Dalton (801) 776-6500 G
Army Groundwater Modeling Use and Needs Workshop
1993-06-01
MA 7th and C Street, Commerce City, CO 80022 (303) 289-0419 ... Appendix C: Survey Form ................................. C1 Appendix D: Panel Objectives, Members, and Topics .............. D1 Appendix E: List of...contributed to the planning and organization of the workshop: Mr. Tony Dardeau, Ms. Cheryl Lloyd, Dr. Paul R. Schroeder, Mr. Mark E. Zappi, Mr
USDA-ARS?s Scientific Manuscript database
In order to provide an informational tool for assessing and prioritizing germplasm needs for ex situ conservation in the U.S. National Plant Germplasm System (NPGS), the USDA Agricultural Research Service in 2008 initiated a project to identify crop wild relatives (CWR) of major and minor crops. Eac...
Computer Software Used in U.S. Army Anthropometric Survey 1987-1988
1988-06-30
necessary and identify by block number) FIELD GROUP SUB-GROUP - HARDWARE, ANTHROPOMETRY, SOFTWARE, EDITING, MEASUREMENT ERROR. ABSTRACT...2. Churchill, Edmund, John T. McConville, Lloyd L. Laubach and Robert M. White. 1971. Anthropometry of U.S. Army Aviators - 1970. Technical Report...72-52-CE (AD 743 528). U.S. Army Laboratories, Natick, Massachusetts. 3. Hertzberg, H.T.E., G.S. Daniels and Edmund Churchill. 1954. Anthropometry of
1983-06-01
large species lists into single numerical expressions. Species diversity is usually defined as a function of the number of species (i.e. species...1958, Lloyd and Ghelardi 1964, Pielou 1969). The primary motivation for calculating species diversity indices based on richness or abundance is...diversity was an intrinsic property in ecological processes and an important factor in defining ecosystem structure and function (MacArthur 1955
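The richness/abundance-based indices alluded to in this snippet (e.g. Shannon's H' and Pielou's evenness, Pielou 1969) reduce a species list to a single number. A minimal sketch with hypothetical count data:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i ln p_i) over species proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def pielou_evenness(counts):
    """Pielou's J' = H' / ln(S): H' scaled by its maximum for S species."""
    s = sum(1 for c in counts if c > 0)
    return shannon_index(counts) / math.log(s) if s > 1 else 0.0

# Hypothetical communities: same richness (4 species), different abundance.
even = [25, 25, 25, 25]     # maximally even; H' = ln(4), J' = 1
skewed = [85, 5, 5, 5]      # one dominant species; lower H' and J'
```

The comparison makes the motivation above concrete: richness alone cannot distinguish these two communities, while H' and J' both penalize the dominance in the skewed one.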
Evaluation of the Material Point Method within CTH to Model 2-Dimensional Plate Impact Problems
2014-09-01
Howard University. 14. ABSTRACT The material point method (MPM) is a mixed Eulerian and Lagrangian computational method that allows for the... University in Washington, DC, as a second-year graduate student within mechanical engineering. I also attended Howard University for my undergraduate...Kevin Rugirello, Dr. Andrew Tonge, Dr. Jeffrey Lloyd, Dr. Mary Jane Graham, and Dr. Gbadebo Owolabi. vi Student Bio I am currently attending Howard
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Finance.
This hearing, the first of three on welfare reform, focuses on "how we can reform welfare programs to conserve the best of the past and give us new latitude to deal with the emerging problems of the future," according to Senator Lloyd Bentsen, the Chairman of the Committee. The following individuals were witnesses: (1) Richard E. Lyng,…
Fibrinogen Recovery in Two Methods of Cryoprecipitate Preparation
1989-08-01
ERNEST A. HAYGOOD, 1st Lt, USAF, Executive Officer, Civilian Institution Programs 17. COSATI CODES 18. SUBJECT TERMS (Continue on reverse if necessary...NAME OF RESPONSIBLE INDIVIDUAL 22b. TELEPHONE (Include Area Code) 22c. OFFICE SYMBOL ERNEST A. HAYGOOD, 1st Lt, USAF (513) 255-2259 AFIT/CI DD Form...iv ACKNOWLEDGEMENTS I would like to extend sincerest appreciation to Dr. Lloyd Lippert, my research advisor. Without his continued guidance
Costa, Klinger Vagner Teixeira da; Ferreira, Sonia Maria Soares; Menezes, Pedro de Lemos
The association between hearing loss and chronic kidney disease and hemodialysis has been well documented. However, the classification used for the degree of loss may underestimate the actual diagnosis due to specific characteristics related to the most affected auditory frequencies. Furthermore, correlations of hearing loss and hemodialysis time with hearing handicap remain unknown in this population. To compare the results of Lloyd's and Kaplan's and The Bureau Internacional d'Audiophonologie classifications in chronic kidney disease patients, and to correlate the averages calculated by their formulas with hemodialysis time and the hearing handicap. This is an analytical, observational and cross-sectional study of 80 patients on hemodialysis. Tympanometry, speech audiometry and pure tone audiometry were performed, and patients with hearing loss were interviewed using the Hearing Handicap Inventory for Adults. Cases were classified according to the degree of loss. The correlations of tone averages with hemodialysis time and with the total scores of the Hearing Handicap Inventory for Adults and its domains were verified. 86 ears (53.75%) in the 48 patients who responded to the Hearing Handicap Inventory for Adults had hearing loss in at least one of the tonal averages. The Bureau Internacional d'Audiophonologie classification identified a greater number of cases (n=52) with some degree of disability compared with Lloyd's and Kaplan's (n=16). In the group with hemodialysis time of at least 2 years, there was a weak but statistically significant correlation of The Bureau Internacional d'Audiophonologie classification average with hemodialysis time (r=0.363). There were moderate correlations of The Bureau Internacional d'Audiophonologie classification average (r=0.510) and tritone 2 (r=0.470) with the total scores of the Hearing Handicap Inventory for Adults and with its social domain.
The Bureau Internacional d'Audiophonologie classification seems to be more appropriate than Lloyd's and Kaplan's for use in this population; its average showed correlations with hearing loss in patients with hemodialysis time≥2 years and it exhibited moderate levels of correlation with the total score of Hearing Handicap Inventory for Adults and its social domain (r=0.557 and r=0.512). Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
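The practical difference between the two classifications comes down to which frequencies enter the pure-tone average. The sketch below is illustrative only: the threshold values are hypothetical, and the degree cutoffs are simplified placeholders rather than the authoritative boundaries of either classification.

```python
def pta_lloyd_kaplan(thr):
    """Lloyd & Kaplan style pure-tone average over 500/1000/2000 Hz."""
    return (thr[500] + thr[1000] + thr[2000]) / 3.0

def pta_biap(thr):
    """BIAP-style average that also weighs 4000 Hz, where losses in this
    population tend to appear first."""
    return (thr[500] + thr[1000] + thr[2000] + thr[4000]) / 4.0

def degree(avg_db):
    # Assumed illustrative cutoffs (dB HL); see the cited classifications
    # for the authoritative boundaries.
    if avg_db <= 20:
        return "normal"
    if avg_db <= 40:
        return "mild"
    if avg_db <= 70:
        return "moderate"
    return "severe or worse"

# Hypothetical high-frequency-sloping audiogram (Hz -> dB HL):
thr = {500: 15, 1000: 15, 2000: 20, 4000: 55}
```

A high-frequency-sloping loss can average out as normal over 500-2000 Hz yet register as a disability once 4000 Hz is included, which is consistent with the BIAP classification flagging 52 cases against 16.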
Verification of IEEE Compliant Subtractive Division Algorithms
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Leathrum, James F., Jr.
1996-01-01
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
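The subtractive family covered by such a parameterized definition includes the classic restoring recurrence: shift the partial remainder, try to subtract the divisor, and emit a quotient bit. The sketch below is an illustrative radix-2 instance in exact integer arithmetic, not the verified PVS specification; rounding per the IEEE standard would then be decided from the final remainder.

```python
def restoring_divide(x, d, n_bits):
    """Radix-2 restoring division. Each step doubles the partial remainder
    and subtracts the divisor; if the result would be negative the subtraction
    is 'restored' (skipped) and the quotient bit is 0.
    Returns (quotient_bits, final_remainder) with the exact invariant
    x = d * Q + r / 2**n_bits, where Q is the emitted binary fraction."""
    assert d > 0 and x >= 0
    q = []
    r = x
    for _ in range(n_bits):
        r *= 2
        if r >= d:
            r -= d
            q.append(1)
        else:
            q.append(0)
    return q, r

def bits_to_fraction(bits):
    """Interpret quotient bits as the binary fraction 0.b1 b2 b3 ..."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

# 1/3 to 8 fraction bits: 0.01010101b
q, r = restoring_divide(1, 3, 8)
```

The exact remainder invariant is essentially what the formal verification establishes for the general algorithm; the IEEE rounding obligation then reduces to inspecting the sign and magnitude of the final remainder.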
2001-05-01
find the best that foreign lands had to offer in constitutional theory. They found separation of powers within a mixed constitution.60 The Greek...many of the Founding Fathers indeed knew Polybius, especially his passages on the Roman Constitution, and the separation of powers."61 The separation...lead 60. Marshall D. Lloyd, "Polybius and the Founding Fathers: the separation of powers." Database on
Network improves rural care. Interview by Donald E. Johnson.
Smith, L V
1991-12-01
Rural institutions may face one of the biggest challenges in providing up-to-date care that keeps patients in the local facility, rather than sending them to regional centers. In the following interview with Health Care Strategic Management publisher, Donald E.L. Johnson, Lloyd V. Smith, president of St. Luke's Hospitals MeritCare, a Fargo, N.D.-based health care network, reveals an educational outreach model for cardiac care that can be emulated by other institutions.
2003-04-24
KENNEDY SPACE CENTER, FLA. - Jim Lloyd, with the Mars Exploration Rover program, holds a computer chip with about 35,000 laser-engraved signatures of visitors to the Jet Propulsion Laboratory. The chip will be placed on the second rover to be launched to Mars (MER-1/MER-B); the first rover already has one. The signatures include those of senators, artists, and John Glenn. The identical Mars rovers are scheduled to launch June 5 and June 25 from Cape Canaveral Air Force Station.
ERIC Educational Resources Information Center
Lloyd, Joel J., Ed.
This collection of papers delivered at the seminar on information needs and utilization in the 1980's comprises the keynote address, "Information in the 1980's" by Donald Fink, and presentations from three panels. The first panel, moderated by Joel J. Lloyd, includes the following: "Aspects and Characteristics of the Future Information Society" by…
Generalization of some hidden subgroup algorithms for input sets of arbitrary size
NASA Astrophysics Data System (ADS)
Poslu, Damla; Say, A. C. Cem
2006-05-01
We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms, we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions over input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and for evenly-balanced functions, retain their one-sided error property. For Simon's algorithm, the success probability of the generalized algorithm on input sets of arbitrary cardinality with equiprobable superpositions is the same as that of the original, since the key property is preserved: when the function is 2-to-1, the measured strings are exactly those whose dot product with the string being sought is zero.
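For intuition, a Deutsch-Jozsa-style test over a domain of arbitrary size N can be simulated classically with the N-point Fourier transform playing the role of the final Hadamard layer. This is an illustrative sketch of the idea, not the Chi-Kim-Lee construction itself: a constant function always yields outcome 0 (the one-sided error property), while a balanced one never does.

```python
import numpy as np

def deutsch_jozsa_zero_prob(f_values):
    """Probability of measuring outcome 0 in a Deutsch-Jozsa-style test
    over Z_N, simulated with state vectors (N need not be a power of two)."""
    N = len(f_values)
    state = np.ones(N, dtype=complex) / np.sqrt(N)   # equal superposition over Z_N
    state *= (-1.0) ** np.asarray(f_values)          # phase oracle for f
    k = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # N-point Fourier transform
    return abs((F @ state)[0]) ** 2

p_const = deutsch_jozsa_zero_prob([0, 0, 0, 0, 0, 0])   # constant f: probability ≈ 1.0
p_bal = deutsch_jozsa_zero_prob([0, 0, 0, 1, 1, 1])     # balanced f: probability ≈ 0.0
```

The outcome-0 amplitude is the average of (-1)^f(x), which is ±1 for constant f and 0 for balanced f regardless of whether N is a power of two.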
Power bases and attribution in three cultures.
Alanazi, Falah M; Rodrigues, Aroldo
2003-06-01
The authors used a Saudi context to verify the cross-cultural generality of findings (A. Rodrigues & K. L. Lloyd, 1998) reported for U.S. and Brazilian samples in which compliant behavior caused by reward, informational, and referent influences was perceived as more controllable and more internal than compliant behavior resulting from legitimate, expert, and coercive influences. This differential attribution led, in turn, to different affective and behavioral responses. In the present study, cognitive and affective reactions of Saudi students were measured with regard to compliant behavior (leading to a good outcome or a bad outcome) caused by each of the 6 bases of power described by B. H. Raven (1965). As expected, power bases had significant effects. However, when the outcome of the compliant behavior was bad, compliant behavior caused by a coercive influence led to the perception of more internality and controllability. Also--and not found in previous studies--the perception of less internality and controllability of compliant behavior was caused by an informational influence. Findings are discussed in the light of related research and Saudi cultural characteristics.
Generalizing Atoms in Constraint Logic
NASA Technical Reports Server (NTRS)
Page, C. David, Jr.; Frisch, Alan M.
1991-01-01
This paper studies the generalization of atomic formulas, or atoms, that are augmented with constraints on or among their terms. The atoms may also be viewed as definite clauses whose antecedents express the constraints. Atoms are generalized relative to a body of background information about the constraints. This paper first examines generalization of atoms with only monadic constraints. The paper develops an algorithm for the generalization task and discusses algorithm complexity. It then extends the algorithm to apply to atoms with constraints of arbitrary arity. The paper also presents semantic properties of the generalizations computed by the algorithms, making the algorithms applicable to such problems as abduction, induction, and knowledge base verification. The paper emphasizes the application to induction and presents a pac-learning result for constrained atoms.
The United States Air Force Summary, FY 1988/1989 (Amended). Fourteenth Edition
1988-05-15
UNCLASSIFIED [map residue: Mariana Isls., Guam, Andersen AFB, Bonin Isls., Kadena AB] ...Leath, Marvin (TX), McCurdy, Dave (OK), Foglietta, T. M. (PA), Dyson, Roy (MD), Hertel, Dennis M. (MI), Lloyd, Marilyn B. (TN), Sisisky, Norman (VA), Ray...(1988) House Research and Development Subcommittee Democrats: Price, Melvin (IL), Chmn., Aspin, Les (WI), Schroeder, Patricia (CO), McCurdy, Dave (OK), Hertel
A Correlational and Descriptive Study of Student Writing in Three Aims of Discourse.
1981-12-01
develop a new set of assumptions about invention, about the writer's purpose, about the relationship between writer, subject, and audience. But just...triangle: subject, speaker/writer, and audience. 12 The number of aims can vary, from two (Britton) to six (Jakobson), but for the purposes of this study... Audience Oriented) Klaus and Lloyd-Jones selected this particular model for its simplicity and usefulness; it was "based on the Purpose (goal
Navy Manager’s Guide for the Test and Evaluation Sections of MIL-H-46855.
1977-06-30
guidance and contributions: CDR Paul R. Chatelier, Naval Air Systems Command; Dr. Lloyd Hitchcock, Naval Air Development Center; Mr. Ed L. Holshouser...Pacific Missile Test Center; LCDR William F. Moroney, Pacific Missile Test Center. Within the Boeing Aerospace Company, the program was...(Reference 11). This standard presents HE design criteria, principles, and practices to be applied in the design of systems, equipment and facilities
An Overview of DoD Policy for and Administration of Independent Research and Development
1975-05-01
Anderson, Captain William J. Lewandowski, Major James C. Roan, and Mr. Lloyd G. Mitchell, to mention a few) kept the author abreast of the latest...DoD Cost Principles (1959), A-1; B. AEC Cost Principles for IR&D, B-1; C. Section 203, Public Law 91-441, C-1; D. DOD IR&D Cost Principles (1971), D-1; E...Armed Services Procurement Regulations; ASRSC Armed Services Research Specialists Committee; BOB Bureau of the Budget; C Comptroller; CITE
The History of MIS-Y: U.S. Strategic Interrogation During World War II
2002-08-01
27. Ian Dear, Escape and Evasion (London, UK: Arms and Armour Press, 1997), 11. 28. Lloyd R. Shoemaker, The Escape Factory (New York: St. Martin's...soldiers are beginning to understand that they are the underdogs carrying the weight of the bureaucracy. 11. Building up the Nazi Gangster Ideal. In...and Evasion: Prisoner of War Breakouts and the Routes to Safety in World War Two. New York: Arms and Armour Press, 1997. DeForest, Orrin, and David
Airborne Missions in the Mediterranean, 1942-1945
1955-09-01
the port of Oran. However, the port, surrounded by cliffs and bristling...by Lt. Gen. Dwight D. Eisenhower through Allied Force Headquart...Some experienced officers called it "harebrained."...Force, an American organization commanded by...Air Marshal William L. Welsh...Maj. Gen. Lloyd...that the troop carriers be conserved for use in the race to Tunis after D-day...Air Force under Brig. Gen. James H. Doolittle...D-day for the
Vesely, Rebecca
2008-02-04
A defeat in California for Gov. Arnold Schwarzenegger's comprehensive healthcare reform plan was not only a blow to advocates of the initiative but also could be a setback for other states that were hoping to follow in the Golden State's footsteps. Lloyd Dean, left, president and CEO of Catholic Healthcare West, said that he is "concerned this lack of progress will result in further deterioration of California's healthcare system".
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Select Committee on Equal Educational Opportunity.
Contents of these hearings include the testimony of the following witnesses, as well as materials appended as pertinent to the hearings: (1) Lloyd Lewis, Jr., Chairman, Dayton City Planning Board and member of the Miami Valley Regional Planning Commission's Housing and Human Resources Advisory Committee; (2) Dale F. Bertsch, Executive Director,…
Artists concept of Apollo 11 Astronaut Neil Armstrong on the moon
NASA Technical Reports Server (NTRS)
1969-01-01
A Grumman Aircraft Engineering Corporation artist's concept depicting mankind's first walk on another celestial body. Here, Astronaut Neil Armstrong, Apollo 11 commander, is making his first step onto the surface of the moon. In the background is the Earth, some 240,000 miles away. ...They are continuing their postflight debriefings. The three astronauts will be released from quarantine on August 11, 1969. Donald K. Slayton (right), MSC Director of Flight Crew Operations; and Lloyd Reeder, training coordinator.
The Clinical Significance of Water Pollution
1988-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from transcriptions, they are prepared by Drs Homer A. Boushey, Professor of Medicine, and David G. Warnock, Associate Professor of Medicine, under the direction of Dr Lloyd H. Smith, Jr, Professor of Medicine and Associate Dean in the School of Medicine. Requests for reprints should be sent to the Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, CA 94143. PMID:3348027
The Coast Artillery Journal. Volume 76, Number 3, May-June 1933
1933-06-01
playthings as "Tommy-Guns." We will now discuss the subject of the Machine Gun in riot service. Although this weapon, fortunately, has never been used...Benjamin Bowering, student, C. & G. S. School, Ft. Leavenworth, to 62d, Ft. Totten, June 30. Captain James D. Brown, student, C. A. School, Ft. Monroe...G. S. School, Ft. Leavenworth, to 62d, Ft. Totten, June 30. Captain Lloyd W. Goeppert, student, C. A. School, Ft. Monroe, to 63d, Ft. MacArthur
Littoral Combat Ship (LCS) Mission Packages: Determining the Best Mix
2008-03-01
to thank CAPT Doug Otte, USN; CDR Doug Burton, USN; Colonel Ed Lesnowicz, USMC (Ret.); Lloyd Brown; LCDR Scott Hattway, USN; and LT John Baggett...clear it of any surface threats. Upon commencement, SUW LCS are following assigned PIM into the channel with an embarked MH-60R airborne. Upon enemy...PIM. Since LCS is a focused mission platform, a SUW LCS will not pursue anything other than a surface threat (i.e., it will not pursue, and cannot
Public-key encryption with chaos.
Kocarev, Ljupco; Sterjev, Marjan; Fekete, Attila; Vattay, Gabor
2004-12-01
We propose public-key encryption algorithms based on chaotic maps, which are generalizations of well-known and commercially used algorithms: Rivest-Shamir-Adleman (RSA), ElGamal, and Rabin. For the case of the generalized RSA algorithm we discuss its software implementation and properties in detail. We show that our algorithm is as secure as the RSA algorithm.
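A hedged illustration: in the authors' related constructions, the chaotic maps are Chebyshev polynomials, and the RSA/ElGamal analogues rest on their semigroup property T_p(T_q(x)) = T_pq(x), so maps with different indices commute. A few lines of Python verify this numerically; treat the use of Chebyshev maps here as an assumption from context, not a claim about this paper's exact scheme.

```python
import math

def cheb(n, x):
    """Chebyshev polynomial T_n via the identity T_n(cos t) = cos(n t), valid for |x| <= 1."""
    return math.cos(n * math.acos(x))

x, p, q = 0.3, 7, 11
lhs = cheb(p, cheb(q, x))     # T_p(T_q(x))
rhs = cheb(q, cheb(p, x))     # T_q(T_p(x)): composition order does not matter
ok = abs(lhs - cheb(p * q, x)) < 1e-9 and abs(lhs - rhs) < 1e-9
```

The semigroup property is what lets one party apply T_q and another apply T_p, with both arriving at T_pq, mirroring modular exponentiation in classical RSA/Diffie-Hellman.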
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-01-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122
van der Kant, Anne; Biro, Szilvia; Levelt, Claartje; Huijbregts, Stephan
2018-04-01
Both social perception and temperament in young infants have been related to social functioning later in life. Previous functional Near-Infrared Spectroscopy (fNIRS) data (Lloyd-Fox et al., 2009) showed larger blood-oxygenation changes for social compared to non-social stimuli in the posterior temporal cortex of five-month-old infants. We sought to replicate and extend these findings by using fNIRS to study the neural basis of social perception in relation to infant temperament (Negative Affect) in 37 five-to-eight-month-old infants. Infants watched short videos displaying either hand and facial movements of female actors (social dynamic condition) or moving toys and machinery (non-social dynamic condition), while fNIRS data were collected over temporal brain regions. Negative Affect was measured using the Infant Behavior Questionnaire. Results showed significantly larger blood-oxygenation changes in the right posterior-temporal region in the social compared to the non-social condition. Furthermore, this differential activation was smaller in infants showing higher Negative Affect. Our results replicate those of Lloyd-Fox et al. and confirmed that five-to-eight-month-old infants show cortical specialization for social perception. Furthermore, the decreased cortical sensitivity to social stimuli in infants showing high Negative Affect may be an early biomarker for later difficulties in social interaction. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Lower limb compartment syndrome following laparoscopic colorectal surgery: a review.
Rao, M M; Jayne, D
2011-05-01
In spite of recent advances in technology and technique, laparoscopic colorectal surgery is associated with increased operating times when compared with open surgery. This increases the risk of acute lower limb compartment syndrome. The aim of this review was to gain a better understanding of postoperative lower limb compartment syndrome following laparoscopic colorectal surgery and to suggest strategies to avoid its occurrence. A MEDLINE search was performed using the keywords 'compartment syndrome', 'laparoscopic surgery' and 'Lloyd-Davies position' between 1970 and 2008. All relevant articles were retrieved and reviewed. A total of 54 articles were retrieved. Of the 30 articles in English, five were reviews, six were original articles and 19 were case reports, of which only one was following laparoscopic colorectal surgery. The remaining 24 were non-English articles. Of these, two were reviews and 22 were case reports, of which only one was following laparoscopic colorectal surgery. The incidence of acute compartment syndrome following laparoscopic colorectal surgery is unknown. The following are believed to be risk factors for acute lower limb compartment syndrome: the Lloyd-Davies operating position with exaggerated Trendelenburg tilt, prolonged operative times and improper patient positioning. Simple strategies are suggested to reduce its occurrence. Simple preventative measures have been identified which may help to reduce the incidence of acute lower limb compartment syndrome. However, if suspected, timely surgical intervention with four-compartment fasciotomy remains the standard of care. © 2011 The Authors. Colorectal Disease © 2011 The Association of Coloproctology of Great Britain and Ireland.
Building a Science of Animal Minds: Lloyd Morgan, Experimentation, and Morgan's Canon.
Fitzpatrick, Simon; Goodrich, Grant
2017-08-01
Conwy Lloyd Morgan (1852-1936) is widely regarded as the father of modern comparative psychology. Yet, Morgan initially had significant doubts about whether a genuine science of comparative psychology was even possible, only later becoming more optimistic about our ability to make reliable inferences about the mental capacities of non-human animals. There has been a fair amount of disagreement amongst scholars of Morgan's work about the nature, timing, and causes of this shift in Morgan's thinking. We argue that Morgan underwent two quite different shifts of attitude towards the proper practice of comparative psychology. The first was a qualified acceptance of the Romanesian approach to comparative psychology that he had initially criticized. The second was a shift away from Romanes' reliance on systematizing anecdotal evidence of animal intelligence towards an experimental approach, focused on studying the development of behaviour. We emphasize the role of Morgan's evolving epistemological views in bringing about the first shift - in particular, his philosophy of science. We emphasize the role of an intriguing but overlooked figure in the history of comparative psychology in explaining the second shift, T. Mann Jones, whose correspondence with Morgan provided an important catalyst for Morgan's experimental turn, particularly the special focus on development. We also shed light on the intended function of Morgan's Canon, the methodological principle for which Morgan is now mostly known. The Canon can only be properly understood by seeing it in the context of Morgan's own unique experimental vision for comparative psychology.
NASA Astrophysics Data System (ADS)
Wingate, L.; Burlett, R.; Bosc, A.; Cross, A.; Devaux, M.; Grace, J.; Loustau, D.; Seibt, U.; Ogée, J.
2007-12-01
Studying the carbon and oxygen stable isotope signals from plants and soils can help us gain insight into the mechanistic processes responsible for the net exchange of CO2 and water cycled between terrestrial ecosystems and the atmosphere. Chamber field measurements of component fluxes and their isotopic composition have been reported for a few ecosystems. These observations have revealed that isotopic signals for carbon and oxygen are dynamic over relatively short time scales (hours and days) for both branches and soils (Seibt et al., 2006a; 2006b; Wingate et al., 2007), and not fully explained by currently available models (Seibt et al., 2006b; Wingate et al., 2007). Ecosystem isotope studies have been limited by flask sampling requirements in the past. To evaluate and refine our models of isotopic fractionation by plants and soil, we need high-resolution continuous isotopic measurements over the growing season for different ecosystems. In this study, we coupled chambers with tunable diode laser spectroscopy techniques in the field to continuously capture the isotopic signals from the most important component fluxes contributing to the net ecosystem exchange of CO2 in a Pinus pinaster forest in south-west France. We obtained profiles of the carbon and oxygen isotope content of CO2 within and above the forest canopy. In addition, we measured branch photosynthetic 13C and 18O discrimination alongside the 13C and 18O isotopic composition of the branch, stem and soil respiration during a 6-month period in 2007. In this talk, we will present the first results from this field campaign. References: Seibt, U., Wingate, L., Berry, J.A. and Lloyd, J. (2006a) Non steady state effects in diurnal 18O discrimination by Picea sitchensis branches in the field. Plant, Cell and Environment, Vol. 29, 928-939. Seibt, U., Wingate, L., Lloyd, J. and Berry, J.A. (2006b) Diurnally variable δ18O signatures of soil CO2 fluxes indicate carbonic anhydrase activity in a forest soil.
JGR-Biogeosciences, Vol. 111, G04005, doi:10.1029/2006JG000177. Seibt, U., Wingate, L. and Berry, J.A. (2007) Nocturnal stomatal conductance effects on the δ18O of foliage gas exchange observed in two forest ecosystems. Tree Physiology, Vol. 27, 585-595. Wingate, L., Seibt, U., Moncrieff, J.B., Jarvis, P.G. and Lloyd, J. (2007) Variations in 13C discrimination during CO2 exchange by Picea sitchensis branches in the field. Plant, Cell and Environment doi: 10.1111/j.1365-3040.2007.01647.
NASA Astrophysics Data System (ADS)
Donnellan, A.; Grant Ludwig, L.; Rundle, J. B.; Parker, J. W.; Granat, R.; Heflin, M. B.; Pierce, M. E.; Wang, J.; Gunson, M.; Lyzenga, G. A.
2017-12-01
The 2010 M7.2 El Mayor - Cucapah earthquake caused extensive triggering of slip on faults proximal to the Salton Trough in southern California. Triggered slip and postseismic motions that have continued for over five years following the earthquake highlight connections between the El Mayor - Cucapah rupture and the network of faults that branch out along the southern Pacific - North American plate boundary. Coseismic triggering follows a network of conjugate faults from the northern end of the rupture to the Coachella segment of the southernmost San Andreas fault. Larger aftershocks and postseismic motions favor connections to the San Jacinto and Elsinore faults further west. The 2012 Brawley Swarm can be considered part of the branching on the Imperial Valley or east side of the plate boundary. Cluster analysis of long-term GPS velocities using Lloyd's algorithm identifies bifurcation of the Pacific - North American plate boundary: the San Jacinto fault joins with the southern San Andreas fault, and the Salton Trough and Coachella segment of the San Andreas fault join with the Eastern California Shear Zone. The clustering analysis does not identify throughgoing deformation connecting the Coachella segment of the San Andreas fault with the rest of the San Andreas fault system through the San Gorgonio Pass. This observation is consistent with triggered slip from both the 1992 Landers and 2010 El Mayor - Cucapah earthquakes that follows the plate boundary bifurcation and with paleoseismic evidence of smaller earthquakes in the San Gorgonio Pass.
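Lloyd's algorithm, as used here for clustering GPS velocity vectors, alternates an assignment step (each point to its nearest center) with an update step (each center to the centroid of its points). A minimal sketch on synthetic 2-D data follows; the station set, velocities, and cluster count used in the study are not reproduced here.

```python
import numpy as np

def lloyd(points, k, iters=100, seed=0):
    """Lloyd's algorithm (k-means): iterate nearest-center assignment
    and centroid updates until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                  # assignment step
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])    # update step
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# two synthetic "velocity" clusters around (0, 0) and (5, 5) mm/yr
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
                 rng.normal([5, 5], 0.1, (50, 2))])
centers, labels = lloyd(pts, k=2)
```

Applied to long-term station velocities, each recovered center is the mean motion of one crustal block, which is how the bifurcation of the plate boundary shows up in the clustering.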
1992-03-01
LABORATORY...Civil Works Investigation Studies, Work Unit 31138...[illegible OCR residue]...Low Air Content Investigation Studies. 6. AUTHOR(S): Billy D. Neeley, W. E. McDonald, Michael K. Lloyd. 7. PERFORMING ORGANIZATION NAME(S...as a part of Civil Works Investigation Studies Work Unit 31138, "New Technologies for Testing and Evaluating Concrete." The study was conducted under
United States Air Force Statistical Digest, Fiscal Year 1992/1993 Estimate
1993-01-01
UNCLASSIFIED [map residue: Atlantic, Azores, Lajes Field; Pacific, East China Sea, Kadena AB, Misawa AB, Bonin Isls.] ...Dave (OK), Foglietta, T.M. (PA), Hertel, Dennis M. (MI), Lloyd, Marilyn B. (TN), Sisisky, Norman (VA), Ray, Richard (GA), Spratt, John M., Jr. (SC...RI), Saxton, James H. (NJ), Cunningham, Randy (Duke) (CA), Franks, Gary (CT). DEMOCRATS: Dellums, Ronald (CA), Chmn., McCurdy, Dave (OK), Foglietta, Thomas
AERIAL OVERVIEW, LOOKING SOUTH ACROSS INTERSTATE 20-59 (BOTTOM RIGHT) TO ...
AERIAL OVERVIEW, LOOKING SOUTH ACROSS INTERSTATE 20-59 (BOTTOM RIGHT) TO THE ORIGINAL PLANNED INDUSTRIAL COMMUNITY WHOSE MAJOR ACCESS (CENTER) LEADS FROM THE TENNESSEE COAL & IRON CO. - US STEEL - US STEEL FAIRFIELD WORKS (NOT PICTURED) ACROSS GARY AVENUE AND THE COMMERCIAL DISTRICT TO THE CIVIC CENTER PLAZA WHICH IS SURROUNDED BY RESIDENTIAL DISTRICTS TO THE FORMER TCI-US STEEL EMPLOYEES (NOW LLOYD NOLAND) HOSPITAL (TOP CENTER). TO LEFT OF HOSPITAL IS PARKWAY, ONE OF THE MODEL INDUSTRIAL TOWN'S PRINCIPAL LANDSCAPED THOROUGHFARES. - City of Fairfield, Fairfield, Jefferson County, AL
Progressive Systemic Sclerosis—“Something Old, Something New, Something Borrowed, Something Blue”
Siegel, Robert C.
1973-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from transcriptions, they are prepared by Drs. David W. Martin, Jr., Assistant Professor of Medicine, and Kenneth A. Woeber, Associate Professor of Medicine, under the direction of Dr. Lloyd H. Smith, Jr., Professor of Medicine and Chairman of the Department of Medicine. Requests for reprints should be sent to the Department of Medicine, University of California, San Francisco, CA 94143. PMID:4726949
Effect of the dilaton on holographic complexity growth
NASA Astrophysics Data System (ADS)
An, Yu-Sen; Peng, Rong-Hui
2018-03-01
In this paper, we investigate the action growth in various backgrounds in Einstein-Maxwell-dilaton theory. We calculate the full time evolution of action growth in the anti-de Sitter dilaton black hole and find it approaches the late time bound from above. We investigate the black hole which is asymptotically Lifshitz and obtain its late time and full time behavior. We find the violation of Lloyd bound in the late time limit and show the full time behavior approaching the late time bound from above and exhibiting some new features for z sufficiently large.
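For reference, the Lloyd bound invoked here is the conjectured cap on the rate of complexity growth for a system of total energy (mass) M; in the "complexity = action" framework it is commonly stated as

```latex
\frac{d\mathcal{C}}{dt} \;\le\; \frac{2M}{\pi\hbar},
\qquad
\mathcal{C} \;=\; \frac{I_{\mathrm{WDW}}}{\pi\hbar},
```

where I_WDW is the gravitational action evaluated on the Wheeler-DeWitt patch; the late-time action growth of an AdS-Schwarzschild black hole, dI/dt → 2M, saturates it. This is the standard form of the bound, not a formula taken from this paper, whose dilaton corrections modify the late-time value.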
1991-04-01
African Air Forces, Middle East Air Command, based in Cairo, and RAF Malta Air Command. This, in effect, was a "theater" command in a larger sense, for...Force, under the command of AVM Sir Hugh Lloyd, and absorbed Malta Air Command and US XII Fighter Command, then under Pete Quesada, later commander...trained pilots, that exchange ratio steadily worsened for the enemy. In fact, the 5th Air Force could boast the two highest scoring American aces early
Sen. Ensign, John [R-NV
2010-01-26
Senate - 01/26/2010 Submitted in the Senate, considered, and agreed to without amendment and with a preamble by Unanimous Consent.
Physician Payment Reform—An Idea Whose Time Has Come
Lee, Philip R.
1990-01-01
These discussions are selected from the weekly staff conferences in the Department of Medicine, University of California, San Francisco. Taken from transcriptions, they are prepared by Homer A. Boushey, MD, Professor of Medicine, and Nathan M. Bass, MD, PhD, Associate Professor of Medicine, under the direction of Lloyd H. Smith, Jr, MD, Professor of Medicine and Associate Dean in the School of Medicine. Requests for reprints should be sent to the Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, CA 94143. PMID:2185598
2003-04-24
KENNEDY SPACE CENTER, FLA. - Tom Shain, the MER ATLO logistics manager, holds a computer chip with about 35,000 laser-engraved signatures of visitors to the Mars Exploration Rovers at the Jet Propulsion Laboratory. He and Jim Lloyd, also with the program, will place the chip on the second rover to be launched to Mars (MER-1/MER-B); the first rover already has one. The signatures include those of senators, artists, and John Glenn. The identical Mars rovers are scheduled to launch June 5 and June 25 from Cape Canaveral Air Force Station.
Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control
NASA Astrophysics Data System (ADS)
Song, Pucha; Zhao, Haiquan
2018-07-01
The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, reducing to a specific algorithm for particular choices of the parameters. In particular, it can be used for ANC under both Gaussian and non-Gaussian noise environments (including impulsive noise with symmetric α-stable (SαS) distribution). To further enhance performance, namely convergence speed and noise reduction, a convex combination of FXGMN algorithms (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN.
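To illustrate the mixed-norm idea (without the filtered-x secondary-path stage of a full ANC system), here is a hedged sketch of an adaptive FIR filter whose update follows the gradient of J(e) = λ|e|^p + (1-λ)|e|^q; the step size, norms, and mixing parameter below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def gmn_lms(x, d, taps=4, mu=0.005, lam=0.2, p=1, q=2):
    """LMS-style adaptive filter minimizing the mixed-norm cost
    J(e) = lam*|e|**p + (1-lam)*|e|**q (sketch; no secondary path)."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]        # current input regressor
        e = d[n] - w @ xn                        # a-priori error
        g = (lam * p * abs(e) ** (p - 1) +
             (1 - lam) * q * abs(e) ** (q - 1)) * np.sign(e)
        w += mu * g * xn                         # mixed-norm gradient step
    return w

# identify a known FIR system from clean data
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, w_true)[:len(x)]
w = gmn_lms(x, d)
```

With p=1, q=2 the update blends sign-LMS (robust to impulsive errors) with plain LMS (fast near convergence), which is the intuition behind mixing the two norms.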
NASA Technical Reports Server (NTRS)
Gedney, Stephen D.; Lansing, Faiza
1993-01-01
The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. A further strength of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
Gradient descent learning algorithm overview: a general dynamical systems perspective.
Baldi, P
1995-01-01
Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.
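The dynamical-systems framing can be made concrete: ordinary gradient descent is the forward-Euler discretization of the continuous gradient flow dw/dt = −∇E(w). A minimal generic sketch (not Baldi's formulation; the quadratic energy is illustrative):

```python
import numpy as np

def gradient_descent(grad, w0, lr=0.1, steps=200):
    """Forward-Euler discretization of the gradient flow dw/dt = -grad E(w)."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        w = w - lr * grad(w)   # one Euler step of size lr
    return w

# Quadratic energy E(w) = 0.5 * w^T A w with A positive definite;
# the iteration converges to the unique minimum w = 0 whenever lr < 2/lambda_max(A).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
w_min = gradient_descent(lambda w: A @ w, [1.0, -1.0])
```

Shrinking the step size recovers the continuous flow, which is the unifying view behind the discrete/continuous and fixed-point/trajectory distinctions the abstract mentions.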
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
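Eberlein's norm-reducing method is too involved for a short sketch, but the underlying Jacobi idea of successive plane rotations can be illustrated on a real symmetric matrix, where each rotation annihilates one off-diagonal pair. This is the classical cyclic Jacobi method, not the paper's parallel norm-reducing algorithm:

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi method for a real symmetric matrix: repeatedly apply
    plane rotations, each zeroing the (p, q) off-diagonal entry."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)                        # accumulated eigenvector matrix
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # rotation angle chosen so that the rotated A has A'[p, q] = 0
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = c; J[q, q] = c; J[p, q] = s; J[q, p] = -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V
```

Each sweep reduces the off-diagonal Frobenius norm, and convergence is ultimately quadratic; that quadratic behavior is the property the parallel norm-reducing algorithm extends to general complex matrices.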
A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight
NASA Technical Reports Server (NTRS)
Parker, Joel J. K.; Hughes, Steven P.
2011-01-01
A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
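A minimal version of the step-then-bracket approach can be sketched as follows; the cosine "event function" stands in for an eclipse or line-of-sight function and is purely illustrative:

```python
import numpy as np

def locate_events(g, t0, t1, step=0.5, tol=1e-10):
    """Find roots of a continuous event function g on [t0, t1]:
    step through the interval to detect sign changes, then bisect each bracket."""
    roots = []
    a = t0
    while a < t1:
        b = min(a + step, t1)
        ga, gb = g(a), g(b)
        if ga == 0.0:
            roots.append(a)
        elif ga * gb < 0.0:              # sign change => a root lies in (a, b)
            lo, hi = a, b
            while hi - lo > tol:         # bisection refinement
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        a = b
    return roots
```

As in the paper's setting, the step size must be small enough that no two events fall inside one step; otherwise a pair of roots with no net sign change would be missed.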
Multidimensional generalized-ensemble algorithms for complex systems.
Mitsutake, Ayori; Okamoto, Yuko
2009-06-07
We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.
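The exchange move at the heart of the replica-exchange method accepts a swap between two replicas with probability min{1, exp[(β_i − β_j)(E_i − E_j)]}, the product of Boltzmann-like factors mentioned above. The toy sampler below runs three replicas of a one-dimensional double-well potential; all parameter values are illustrative and the example is far simpler than the peptide simulations of the paper:

```python
import numpy as np

def replica_exchange(U, temps, n_steps=20000, swap_every=10, prop=0.5, seed=0):
    """Toy replica-exchange Metropolis sampler; returns the cold-replica samples."""
    rng = np.random.default_rng(seed)
    betas = 1.0 / np.asarray(temps, dtype=float)
    x = np.zeros(len(temps))            # one coordinate per replica
    cold = np.empty(n_steps)
    for step in range(n_steps):
        for i in range(len(temps)):     # Metropolis move within each replica
            xp = x[i] + prop * rng.standard_normal()
            if rng.random() < np.exp(min(0.0, -betas[i] * (U(xp) - U(x[i])))):
                x[i] = xp
        if step % swap_every == 0:      # attempt to swap a random adjacent pair
            i = rng.integers(len(temps) - 1)
            log_acc = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
            if rng.random() < np.exp(min(0.0, log_acc)):
                x[i], x[i + 1] = x[i + 1], x[i]
        cold[step] = x[0]
    return cold
```

For the double-well U(x) = (x² − 1)², a single chain at kT = 0.1 stays trapped in one well, while the exchange moves let well-crossings performed by the hot replica propagate down to the cold one.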
Simplification of multiple Fourier series - An example of algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1981-01-01
This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.
Misut, Paul; Aphale, Omkar
2014-01-01
A density-dependent groundwater flow and solute transport model of Manhasset Neck, Long Island, New York, was used to analyze (1) the effects of seasonal stress on the position of the freshwater/saltwater transition zone and (2) groundwater flowpaths. The following were used in the simulation: 182 transient stress periods, representing the historical record from 1920 to 2011, and 44 transient stress periods, representing future hypothetical conditions from 2011 to 2030. Simulated water-level and salinity (chloride concentration) values are compared with values from a previously developed two-stress-period (1905–1944 and 1945–2005) model. The 182-stress-period model produced salinity (chloride concentration) values that more accurately matched the observed salinity (chloride concentration) values in response to hydrologic stress than did the two-stress-period model, and salinity ranged from zero to about 3 parts per thousand (equivalent to zero to 1,660 milligrams per liter chloride). The 182-stress-period model produced improved calibration statistics of water-level measurements made throughout the study area compared with the two-stress-period model, reducing the Lloyd aquifer root mean square error from 7.0 to 5.2 feet. Decreasing the horizontal and vertical hydraulic conductivities (fixed anisotropy ratio) of the Lloyd and North Shore aquifers by 20 percent resulted in nearly doubling the simulated salinity (chloride concentration) increase at Port Washington observation well N12508. Groundwater flowpath analysis was completed for 24 production wells to delineate water source areas. The freshwater/saltwater transition zone moved toward and/or away from wells during future hypothetical scenarios.
Formally biorthogonal polynomials and a look-ahead Levinson algorithm for general Toeplitz systems
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Zha, Hongyuan
1992-01-01
Systems of linear equations with Toeplitz coefficient matrices arise in many important applications. The classical Levinson algorithm computes solutions of Toeplitz systems with only O(n^2) arithmetic operations, as compared to the O(n^3) operations that are needed for solving general linear systems. However, the Levinson algorithm in its original form requires that all leading principal submatrices be nonsingular. An extension of the Levinson algorithm to general Toeplitz systems is presented. The algorithm uses look-ahead to skip over exactly singular, as well as ill-conditioned, leading submatrices and, at the same time, still fully exploits the Toeplitz structure. In our derivation of this algorithm, we make use of the intimate connection of Toeplitz matrices with formally biorthogonal polynomials.
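For the classical (no look-ahead) case, the O(n^2) Levinson recursion for a symmetric positive-definite Toeplitz system T x = b can be sketched as below, following the standard Durbin/Levinson recursions. It divides by a quantity that vanishes exactly when a leading principal submatrix is singular, which is the breakdown the paper's look-ahead strategy repairs:

```python
import numpy as np

def levinson(r, b):
    """Solve T x = b where T is symmetric positive-definite Toeplitz with
    first column r (r[0] on the diagonal). Classical O(n^2) Levinson."""
    r = np.asarray(r, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    t = r[1:] / r[0]                   # normalized off-diagonal entries
    bb = b / r[0]
    x = np.array([bb[0]])
    if n == 1:
        return x
    y = np.array([-t[0]])              # Durbin (Yule-Walker) recursion vector
    alpha, beta = -t[0], 1.0
    for k in range(1, n):
        beta *= 1.0 - alpha ** 2       # beta stays > 0 iff leading minors are nonsingular
        mu = (bb[k] - t[:k] @ x[::-1]) / beta
        x = np.concatenate([x + mu * y[::-1], [mu]])
        if k < n - 1:
            alpha = -(t[k] + t[:k] @ y[::-1]) / beta
            y = np.concatenate([y + alpha * y[::-1], [alpha]])
    return x
```

The reversed slices x[::-1] and y[::-1] are the exchange-matrix products of the textbook formulation; only O(n) storage is needed.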
Neural Generalized Predictive Control: A Newton-Raphson Implementation
NASA Technical Reports Server (NTRS)
Soloway, Donald; Haley, Pamela J.
1997-01-01
An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. In using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced from other techniques. The main cost of the Newton-Raphson algorithm is in the calculation of the Hessian, but even with this overhead the low iteration numbers make Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
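The Newton-Raphson update that drives such a control-input search is u ← u − H⁻¹∇J(u), where H is the Hessian of the cost. A generic sketch on a toy convex cost (not the GPC cost function, whose gradient and Hessian come from the neural plant model):

```python
import numpy as np

def newton_minimize(grad, hess, u0, iters=10):
    """Newton-Raphson minimization: repeatedly solve H(u) du = -grad(u)."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        u = u - np.linalg.solve(hess(u), grad(u))
    return u

# Toy strictly convex cost J(u) = u0^2 + u0^4 + u1^2, minimized at the origin.
grad = lambda u: np.array([2*u[0] + 4*u[0]**3, 2*u[1]])
hess = lambda u: np.array([[2 + 12*u[0]**2, 0.0], [0.0, 2.0]])
u_opt = newton_minimize(grad, hess, [1.0, -2.0])
```

Near the minimum the iteration converges quadratically, which is consistent with the paper's observation of good solutions within about two iterations; the price, as noted above, is forming the Hessian.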
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2 pi for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 < or = phi < or = 3 pi/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 < or = phi < or = 3 pi/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms, beginning with initial development of generalized forward-model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward-model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level-1 instrument data for specific occultation events.
Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms
1995-07-01
…would be in the spirit of the Lempel-Ziv algorithm. The decoder would have to be aware of changes in the dictionary, but depending on the nature of the… [Table-of-contents residue removed; the excerpt lists Section 3.4, "A General Vector Compression Algorithm Based on Frames," and Section 3.4.1, "Design Considerations."] Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in R^N.
Como, Michael D.; Finkelstein, Jason S.; Rivera, Simonette L.; Monti, Jack; Busciolano, Ronald J.
2018-06-06
The U.S. Geological Survey, in cooperation with State and local agencies, systematically collects groundwater data at varying measurement frequencies to monitor the hydrologic conditions on Long Island, New York. Each year during April and May, the U.S. Geological Survey completes a synoptic survey of water levels to define the spatial distribution of the water table and potentiometric surfaces within the three main water-bearing units underlying Long Island—the upper glacial, Magothy, and Lloyd aquifers—and the hydraulically connected Jameco and North Shore aquifers. These data and the maps constructed from them are commonly used in studies of the hydrology of Long Island and are used by water managers and suppliers for aquifer management and planning purposes. Water-level measurements made in 424 monitoring wells (observation and supply wells), 13 streamgages, and 2 lake gages across Long Island during April–May 2016 were used to prepare the maps in this report. Groundwater measurements were made by the wetted-tape or electric-tape method to the nearest hundredth of a foot. Contours of water-table and potentiometric-surface altitudes were created using the groundwater measurements. The water-table contours were interpreted using water-level data collected from 275 observation wells and 1 supply well screened in the upper glacial aquifer and the shallow Magothy aquifer and 13 streamgages and 2 lake gages. The potentiometric-surface contours of the Magothy aquifer were interpreted from measurements at 88 wells (61 observation wells and 27 supply wells) screened in the middle to deep Magothy aquifer and the contiguous and hydraulically connected Jameco aquifer. The potentiometric-surface contours of the Lloyd aquifer were interpreted from measurements at 60 wells (55 observation wells and 5 supply wells) screened in the Lloyd aquifer and the contiguous and hydraulically connected North Shore aquifer.
Many of the supply wells are in continuous operation and, therefore, were turned off for a minimum of 24 hours before measurements were made to allow the water levels in the wells to recover to ambient (nonpumping) conditions. Full recovery time at some of these supply wells can exceed 24 hours; therefore, water levels measured at these wells are assumed to be less accurate than those measured at observation wells, which are not pumped. In addition to pumping stresses, density differences (saline water) also lower the water levels measured in certain wells. Recent water-quality data are lacking in these wells; therefore, a conversion to freshwater head could not be performed accurately and was not attempted. In this report, all water-level altitudes are referenced to the National Geodetic Vertical Datum of 1929 (NGVD 29). The land surface altitude, or topography, was obtained from the National Oceanic and Atmospheric Administration. The data were collected using light detection and ranging (lidar) and were used to produce a three-dimensional digital elevation model. The lidar data have a horizontal accuracy of 1.38 feet and a vertical accuracy of 0.40 foot at a 95-percent confidence level for the “open terrain” land-cover category. The digital elevation model was developed jointly by the National Oceanic and Atmospheric Administration and the U.S. Geological Survey as part of the Disaster Relief Appropriations Act of 2013. Land surface altitude is referenced to the North American Vertical Datum of 1988 (NAVD 88). On Long Island, NAVD 88 is approximately 1 foot higher than NGVD 29. Hydrographs are included on these maps for selected wells that have continuous digital recording equipment, and each hydrograph includes the water level measured during the synoptic survey.
These hydrographs are representative of the 2016 water year and show the changes throughout that period; a water year is the 12-month period from October 1 to September 30 and is designated by the year in which it ends.
Zomer, Ella; Osborn, David; Nazareth, Irwin; Blackburn, Ruth; Burton, Alexandra; Hardoon, Sarah; Holt, Richard Ian Gregory; King, Michael; Marston, Louise; Morris, Stephen; Omar, Rumana; Petersen, Irene; Walters, Kate; Hunter, Rachael Maree
2017-09-05
To determine the cost-effectiveness of two bespoke severe mental illness (SMI)-specific risk algorithms compared with standard risk algorithms for primary cardiovascular disease (CVD) prevention in those with SMI. Primary care setting in the UK. The analysis was from the National Health Service perspective. 1000 individuals with SMI from The Health Improvement Network Database, aged 30-74 years and without existing CVD, populated the model. Four cardiovascular risk algorithms were assessed: (1) general population lipid, (2) general population body mass index (BMI), (3) SMI-specific lipid and (4) SMI-specific BMI, compared against no algorithm. At baseline, each cardiovascular risk algorithm was applied and those considered high risk ( > 10%) were assumed to be prescribed statin therapy while others received usual care. Quality-adjusted life years (QALYs) and costs were accrued for each algorithm including no algorithm, and cost-effectiveness was calculated using the net monetary benefit (NMB) approach. Deterministic and probabilistic sensitivity analyses were performed to test assumptions made and uncertainty around parameter estimates. The SMI-specific BMI algorithm had the highest NMB resulting in 15 additional QALYs and a cost saving of approximately £53 000 per 1000 patients with SMI over 10 years, followed by the general population lipid algorithm (13 additional QALYs and a cost saving of £46 000). The general population lipid and SMI-specific BMI algorithms performed equally well. The ease and acceptability of use of an SMI-specific BMI algorithm (blood tests not required) makes it an attractive algorithm to implement in clinical settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Generalized Jaynes-Cummings model as a quantum search algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanelli, A.
2009-07-15
We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm that is capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
1975-12-01
[Garbled OCR residue of a consonant/vowel test listing removed.] Reference: Nakatani, Lloyd H. and Kathleen D. Dukes, "Sensitive Test of Speech Communication Quality," J. Acoust. Soc. Amer., Vol. 53, pp. 1083-1092.
Microwave Quantum Illumination
2016-07-29
Shabir Barzanjeh, Saikat Guha, Christian Weedbrook, David Vitali, Jeffrey H. Shapiro, and Stefano Pirandola. [Figure residue comparing a classical transmitter (CT) with quantum illumination (QI) removed.] Cited references include C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Rev…
1983-02-01
…aspect ratio is relatively small. Brooks (ref. 1) worked with rectangular fins of 0.62 and 1.24 aspect ratio in a water medium and showed very large… airflow rates. Lloyd (ref. 3) worked with an aspect ratio 2.0 rectangular wing using a very wide range of jet momentum coefficient; his results were in… [From "Effects of Blowing Spanwise from the Tips of Low Aspect Ratio Wings of VA…," Nielsen Engineering and Research, Inc., Mountain View, CA.]
Contribution of the biological crust to the soil CO2 efflux in a Mediterranean ecosystem
NASA Astrophysics Data System (ADS)
Morillas, Lourdes; Bellucco, Veronica; Lo Cascio, Mauro; Marras, Serena; Spano, Donatella; Mereu, Simone
2016-04-01
Lately, the important role of the soil biological crust (hereafter biocrust) in Mediterranean ecosystems is emerging from a multitude of articles. It is becoming apparent that the biocrust has an important role in regulating ecosystem functions and that it interacts with the woody and herbaceous vegetation to a degree depending on the availability of water, among other factors. Here we present the first results of a wider project and focus on the contribution of the biocrust to soil CO2 efflux, and on how the respiration of the biocrust responds to soil water content and temperature. A manipulative experiment was performed in a Mediterranean shrubland ecosystem in Sardinia (Italy) to assess the contribution of the biocrust to soil CO2 efflux and to identify the main environmental drivers of the CO2 efflux in this ecosystem. For 19 months, in situ soil CO2 efflux was measured over three different surfaces: soil deprived of biocrust (hereafter Soil), biocrust (hereafter BC) and intact soil (hereafter Soil+BC). For these surfaces, three different approaches were used to investigate the dependency of CO2 efflux on soil temperature and soil water content, namely a simple linear regression, a multi-linear equation, and a modified version of the most commonly used Lloyd and Taylor model (Lloyd and Taylor, 1994). Results showed that CO2 effluxes emitted by Soil, BC and Soil+BC were differently driven by soil moisture and temperature: BC respiration was mainly controlled by soil moisture at 5 cm depth, whereas both soil temperature and water content at 20 cm depth determined Soil CO2 efflux. Soil temperature and water content at 5 cm depth drove Soil+BC respiration. We also found that the biocrust can contribute substantially (up to 60%) to total soil respiration, depending on its moisture content.
This contribution persists even in periods in which deeper soil layers are inactive, as small water pulses can activate lichens, mosses and cyanobacteria associated with the biocrust, as well as the metabolism of carbon in soils, while deeper soil layers remain dormant. The important differences observed in CO2 efflux between Soil and Soil+BC suggest that projections of carbon budgets may underestimate soil CO2 efflux in spatially heterogeneous Mediterranean areas. Thus, our results highlight the relevance of accounting for the biocrust contribution to soil respiration and its responses to environmental drivers. The ongoing and planned activities to understand the full complexity of all factors determining respiration in water-limited environments are briefly discussed. Lloyd, J., Taylor, J. A., 1994. On the temperature dependence of soil respiration. Funct. Ecol. 8, 315-323.
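For reference, the unmodified Lloyd and Taylor (1994) model cited above relates respiration to temperature as R(T) = R_ref · exp[E0(1/(T_ref − T0) − 1/(T − T0))]. The sketch below uses the constants commonly quoted from that paper (E0 = 308.56 K, T0 = 227.13 K); the study itself used a modified version that also accounts for soil water content:

```python
import numpy as np

def lloyd_taylor(T, R_ref, E0=308.56, T0=227.13, T_ref=283.15):
    """Lloyd & Taylor (1994) soil-respiration temperature response.
    T in kelvin; R_ref is the respiration rate at T_ref (10 degrees C)."""
    T = np.asarray(T, dtype=float)
    return R_ref * np.exp(E0 * (1.0 / (T_ref - T0) - 1.0 / (T - T0)))
```

Unlike a fixed-Q10 exponential, the apparent temperature sensitivity of this curve increases toward low temperatures, which is why the model is preferred for year-round soil respiration data.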
NASA Astrophysics Data System (ADS)
Liu, Shixing; Liu, Chang; Hua, Wei; Guo, Yongxin
2016-11-01
By using the discrete variational method, we study the numerical method of the general nonholonomic system in the generalized Birkhoffian framework, and construct a numerical method of generalized Birkhoffian equations called a self-adjoint-preserving algorithm. Numerical results show that it is reasonable to study the nonholonomic system by the structure-preserving algorithm in the generalized Birkhoffian framework. Project supported by the National Natural Science Foundation of China (Grant Nos. 11472124, 11572145, 11202090, and 11301350), the Doctor Research Start-up Fund of Liaoning Province, China (Grant No. 20141050), the China Postdoctoral Science Foundation (Grant No. 2014M560203), and the General Science and Technology Research Plans of Liaoning Educational Bureau, China (Grant No. L2013005).
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emilâ Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study on image reconstruction on chords of two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
Generalized Grover's Algorithm for Multiple Phase Inversion States
NASA Astrophysics Data System (ADS)
Byrnes, Tim; Forster, Gary; Tessler, Louis
2018-02-01
Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on M states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity for this case of √(D/M^α), where D is the search space dimension, M is the number of target states, and α ≈ 1, which is close to the optimal scaling.
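The multiple-target case is easy to check numerically with a state-vector simulation of the standard Grover iteration: with sin θ = √(M/D), the success probability after k iterations is sin²((2k+1)θ), so roughly (π/4)√(D/M) iterations suffice, consistent with the √(D/M^α), α ≈ 1 scaling above. A sketch of the plain Grover iteration, not the paper's phase-estimation construction:

```python
import numpy as np

def grover_success(D, marked, iters):
    """Probability of measuring a marked state after `iters` Grover iterations
    on a D-dimensional search space, via direct state-vector simulation."""
    marked = list(marked)
    s = np.full(D, 1.0 / np.sqrt(D))    # uniform superposition |s>
    psi = s.copy()
    for _ in range(iters):
        psi[marked] *= -1.0             # oracle: phase flip on the marked states
        psi = 2.0 * (s @ psi) * s - psi # diffusion operator 2|s><s| - I
    return float(np.sum(psi[marked] ** 2))

D, M = 256, 4
k_opt = int(np.floor(np.pi / 4 * np.sqrt(D / M)))
p = grover_success(D, range(M), k_opt)   # close to 1 at the optimal iteration count
```

Running past k_opt makes the probability oscillate back down, the periodic behavior the generalized analysis also exhibits.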
Tan, Jun; Nie, Zaiping
2018-05-12
Direction of Arrival (DOA) estimation of low-altitude targets is difficult due to multipath coherent interference from the ground-reflection image of the targets, especially for very high frequency (VHF) radars, whose antennae are severely restricted in terms of aperture and height. The polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing with the generalized MUSIC algorithm for polarization-sensitive arrays (PSAs), is proposed in this paper to solve this problem. Firstly, polarization smoothing pre-processing is exploited to eliminate the coherence between the direct and specular signals. Secondly, we construct the generalized MUSIC algorithm for low-angle estimation. Finally, based on the geometry of the symmetric multipath model, the proposed algorithm converts the two-dimensional search into a one-dimensional search, thus reducing the computational burden. Numerical results are provided to verify the effectiveness of the proposed method, showing that the proposed algorithm significantly improves angle-estimation performance in the low-angle region compared with available methods, especially when the grazing angle is near zero.
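For contrast with the coherent-multipath setting addressed above, standard narrowband MUSIC for a half-wavelength uniform linear array with uncorrelated sources can be sketched in a few lines. This is illustrative only: there is no polarization smoothing or generalized MUSIC here, and all scenario parameters are made up. It is exactly this plain estimator that degrades when the direct and specular paths are coherent:

```python
import numpy as np

def steering(n, theta):
    """ULA steering vector, half-wavelength spacing, angle in radians."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def music_spectrum(X, n_sources, grid):
    """MUSIC pseudo-spectrum 1/||En^H a(theta)||^2 from snapshots X (n x T)."""
    R = X @ X.conj().T / X.shape[1]           # sample covariance matrix
    _, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :X.shape[0] - n_sources]        # noise-subspace eigenvectors
    return np.array([1.0 / np.linalg.norm(En.conj().T @ steering(X.shape[0], th)) ** 2
                     for th in grid])
```

Peaks of the pseudo-spectrum over the angle grid give the DOA estimates; with coherent multipath the source covariance becomes rank-deficient, which is what smoothing techniques are designed to restore.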
Nigatu, Yeshambel T; Liu, Yan; Wang, JianLi
2016-07-22
Multivariable risk prediction algorithms are useful for making clinical decisions and for health planning. While prediction algorithms for new onset of major depression in primary care attendees in Europe and elsewhere have been developed, the performance of these algorithms in different populations is not known. The objective of this study was to validate the PredictD algorithm for new onset of major depressive episode (MDE) in the US general population. A longitudinal study design was used, with approximately 3-year follow-up data from a nationally representative sample of the US general population. A total of 29,621 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) and who did not have an MDE in the past year at Wave 1 were included. The PredictD algorithm was directly applied to the selected participants. MDE was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria. Among the participants, 8% developed an MDE over three years. The PredictD algorithm had acceptable discriminative power (C-statistic = 0.708, 95% CI: 0.696, 0.720), but poor calibration (p < 0.001) with the NESARC data. In the European primary care attendees, the algorithm had a C-statistic of 0.790 (95% CI: 0.767, 0.813) with perfect calibration. The PredictD algorithm thus has acceptable discrimination, but its calibration in the US general population was poor despite re-calibration. Therefore, based on these results, at the current stage the use of PredictD in the US general population for predicting individual risk of MDE is not encouraged. More independent validation research is needed.
Esteban, Santiago; Rodríguez Tablado, Manuel; Peper, Francisco; Mahumud, Yamila S; Ricci, Ricardo I; Kopitowski, Karin; Terrasa, Sergio
2017-01-01
Precision medicine requires extremely large samples. Electronic health records (EHR) are thought to be a cost-effective source of data for that purpose. Phenotyping algorithms help reduce classification errors, making EHR a more reliable source of information for research. Four algorithm development strategies for classifying patients according to their diabetes status (diabetic; non-diabetic; inconclusive) were tested: one codes-only algorithm; one Boolean algorithm; four statistical learning algorithms; and six stacked-generalization meta-learners. The best-performing algorithms within each strategy were tested on the validation set. The stacked-generalization algorithm yielded the highest Kappa coefficient in the validation set (0.95, 95% CI 0.91-0.98). The implementation of these algorithms allows data from thousands of patients to be exploited accurately, greatly reducing the cost of constructing retrospective cohorts for research.
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; ...
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
Computations involving differential operators and their actions on functions
NASA Technical Reports Server (NTRS)
Crouch, Peter E.; Grossman, Robert; Larson, Richard
1991-01-01
The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups, and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.
Shoemaker, W C; Patil, R; Appel, P L; Kram, H B
1992-11-01
A generalized decision tree or clinical algorithm for treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to do the following: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) develop criteria for titration of therapy to the endpoints of the supranormal optimal goals using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refine the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.
ERIC Educational Resources Information Center
Kim, Seonghoon
2013-01-01
With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
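The classic dichotomous-item form of the Lord-Wingersky recursion can be sketched as follows (the article's extension to real-number item scores is not reproduced here): after each item, the probability of score s is the probability of already having s and missing the item, plus the probability of having s-1 and answering correctly.

```python
def lord_wingersky(p):
    """Conditional distribution of the number-correct score given
    per-item probabilities of a correct response at a fixed theta
    (classic dichotomous-item Lord-Wingersky recursion)."""
    dist = [1.0]  # P(score = 0) before any items are administered
    for pk in p:
        new = [0.0] * (len(dist) + 1)
        for s, fs in enumerate(dist):
            new[s] += fs * (1.0 - pk)   # item answered incorrectly
            new[s + 1] += fs * pk       # item answered correctly
        dist = new
    return dist

# Three items, each with correct-response probability 0.5 at this
# theta: the score distribution is binomial(3, 0.5).
dist = lord_wingersky([0.5, 0.5, 0.5])
```

Each pass over the items costs O(items × scores), so the full conditional distribution is obtained in quadratic time without enumerating response patterns.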
NASA Astrophysics Data System (ADS)
Peters, S. T.; Schroeder, D. M.; Romero-Wolf, A.; Haynes, M.
2017-12-01
The Radar for Icy Moon Exploration (RIME) and the Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON) have been identified as potential candidates for the implementation of passive sounding as additional observing modes for the ESA and NASA missions to Ganymede and Europa. Recent work has shown the theoretical potential for Jupiter's decametric radiation to be used as a source for passive radio sounding of its icy moons. We are further developing and adapting this geophysical approach for use in terrestrial glaciology. Here, we present results from preliminary field testing of a prototype passive radio sounder from cliffs along the California coast. This includes both using a Lloyd's mirror to measure the Sun's direct path and its reflection off the ocean's surface and exploiting autocorrelation to detect the delay time of the echo. This is the first in-situ demonstration of the autocorrelation-based passive-sounding approach using an astronomical white noise signal. We also discuss preliminary field tests on rougher terrestrial and subglacial surfaces, including at Store Glacier in Greenland. Additionally, we present modeling and experimental results that demonstrate the feasibility of applying presumming approaches to the autocorrelations to achieve coherent gain from an inherently random signal. We note that while recording with wider bandwidths and greater delays places fundamental limits on the Lloyd's mirror approach, our new autocorrelation method has no such limitation. Furthermore, we show how achieving wide bandwidths via spectral-stitching methods allows us to obtain a finer range resolution than given by the receiver's instantaneous bandwidth. Finally, we discuss the potential for this technique to eliminate the need for active transmitters in certain types of ice sounding experiments, thereby reducing the complexity, power consumption, and cost of systems and observations.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Generalized enhanced suffix array construction in external memory.
Louza, Felipe A; Telles, Guilherme P; Hoffmann, Steve; Ciferri, Cristina D A
2017-01-01
Suffix arrays, augmented by additional data structures, allow many string processing problems to be solved efficiently. The external memory construction of the generalized suffix array for a string collection is a fundamental task when the size of the input collection or the data structure exceeds the available internal memory. In this article we present and analyze [Formula: see text], introduced in "External memory generalized suffix and [Formula: see text] arrays construction" (Proceedings of CPM, pp. 201-10, 2013), the first external memory algorithm to construct generalized suffix arrays augmented with the longest common prefix array for a string collection. Our algorithm relies on a combination of buffers, induced sorting and a heap to avoid direct string comparisons. We performed experiments that covered different aspects of our algorithm, including running time, efficiency, external memory access, internal phases and the influence of different optimization strategies. On real datasets of size up to 24 GB and using 2 GB of internal memory, [Formula: see text] showed a competitive performance when compared to [Formula: see text] and [Formula: see text], which are efficient algorithms for a single string according to the related literature. We also show the effect of disk caching managed by the operating system on our algorithm. The proposed algorithm was validated through performance tests using real datasets from different domains, in various combinations, and showed a competitive performance. Our algorithm can also construct the generalized Burrows-Wheeler transform of a string collection with no additional cost except for the output time.
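The target data structure can be illustrated with a deliberately naive in-memory construction; the article's contribution is precisely avoiding such direct suffix comparisons in external memory. The sentinel character and (string, position) indexing below are conventions assumed for this sketch:

```python
def generalized_suffix_array(strings):
    """Naive generalized suffix array with LCP for a small string
    collection; the sentinel '$' sorts below all letters.  Quadratic
    comparisons -- illustration only, not the external-memory method."""
    suffixes = []
    for j, s in enumerate(strings):
        t = s + "$"
        for i in range(len(t)):
            suffixes.append((t[i:], j, i))
    suffixes.sort()
    gsa = [(j, i) for _, j, i in suffixes]  # (string index, position)
    lcp = [0]
    for k in range(1, len(suffixes)):
        a, b = suffixes[k - 1][0], suffixes[k][0]
        n = 0
        while n < min(len(a), len(b)) and a[n] == b[n]:
            n += 1
        lcp.append(n)
    return gsa, lcp

gsa, lcp = generalized_suffix_array(["banana", "ban"])
```

The LCP entry at rank k is the longest common prefix of the suffixes at ranks k-1 and k, which is the augmentation the external-memory algorithm produces alongside the array.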
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application
NASA Astrophysics Data System (ADS)
Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin
2007-12-01
We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
Algorithm-Dependent Generalization Bounds for Multi-Task Learning.
Liu, Tongliang; Tao, Dacheng; Song, Mingli; Maybank, Stephen J
2017-02-01
Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order O(1/n), where n is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order O(1/T), where T is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for predicting, which enables the learning algorithms to generalize fast and correctly from a few examples.
A GENERAL ALGORITHM FOR THE CONSTRUCTION OF CONTOUR PLOTS
NASA Technical Reports Server (NTRS)
Johnson, W.
1994-01-01
The graphical presentation of experimentally or theoretically generated data sets frequently involves the construction of contour plots. A general computer algorithm has been developed for the construction of contour plots. The algorithm provides for efficient and accurate contouring with a modular approach which allows flexibility in modifying the algorithm for special applications. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme in which the points in the plane are connected by straight line segments to form a set of triangles. In general, the data is smoothed using a least-squares-error fit of the data to a bivariate polynomial. To construct the contours, interpolation along the edges of the triangles is performed, using the bivariate polynomial if data smoothing was performed. Once the contour points have been located, the contour may be drawn. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 100K of 8-bit bytes. This computer algorithm was developed in 1981.
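The edge-interpolation step described above can be sketched for a single triangle and one contouring level; linear interpolation only, with the smoothing and drawing stages omitted (the triangle coordinates and values below are hypothetical):

```python
def contour_segment(tri, vals, level):
    """Contour-line points of one level inside a single triangle, by
    linear interpolation along each edge the level strictly crosses
    (the core step of triangulation-based contouring)."""
    pts = []
    for (i, j) in ((0, 1), (1, 2), (2, 0)):
        v1, v2 = vals[i], vals[j]
        if (v1 - level) * (v2 - level) < 0:   # level strictly between
            t = (level - v1) / (v2 - v1)
            (x1, y1), (x2, y2) = tri[i], tri[j]
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts  # zero or two points in the generic case

seg = contour_segment([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                      [0.0, 1.0, 2.0], 0.5)
```

Joining the two crossing points of each triangle, triangle by triangle, traces out the complete contour line over the triangulated plane.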
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
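For reference, the serial computation that these parallel algorithms accelerate can be sketched as a simple LRU stack-distance pass; a reference hits in a fully associative LRU cache of capacity C exactly when its stack distance is less than C:

```python
def stack_distances(trace):
    """Serial LRU stack-distance computation: each reference's
    distance is the depth of its tag in the LRU stack (infinity on a
    cold miss); the tag is then moved to the top of the stack."""
    stack = []            # index 0 = most recently used
    dists = []
    for tag in trace:
        if tag in stack:
            d = stack.index(tag)
            stack.pop(d)
        else:
            d = float("inf")
        dists.append(d)
        stack.insert(0, tag)
    return dists

dists = stack_distances(["a", "b", "a", "c", "b"])
```

One pass yields the distances for every cache size at once, which is why all-stack-distance algorithms are the natural target for parallelization.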
A generalized global alignment algorithm.
Huang, Xiaoqiu; Chao, Kun-Mao
2003-01-22
Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
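The underlying dynamic-programming idea can be illustrated with a textbook global-alignment score computed in time proportional to the product of the sequence lengths and, via a two-row rolling array, space proportional to their sum. GAP3's generalized model, which charges long difference blocks at reduced cost, is not reproduced in this sketch, and the scoring values are hypothetical:

```python
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Textbook global-alignment (Needleman-Wunsch-style) score with
    linear scoring; a simplified stand-in for GAP3's generalized
    model of intermittent similarities."""
    m, n = len(a), len(b)
    prev = [j * gap for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [i * gap] + [0] * n
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(prev[j - 1] + s,    # align a[i-1] with b[j-1]
                         prev[j] + gap,      # gap in b
                         cur[j - 1] + gap)   # gap in a
        prev = cur
    return prev[n]

score = global_alignment_score("GATTACA", "GCATGCU")
```

Keeping only two rows recovers the score in linear space; recovering the alignment itself in linear space requires a divide-and-conquer refinement.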
Pure field theories and MACSYMA algorithms
NASA Technical Reports Server (NTRS)
Ament, W. S.
1977-01-01
A pure field theory attempts to describe physical phenomena through singularity-free solutions of field equations resulting from an action principle. The physics goes into forming the action principle and interpreting specific results. Algorithms for the intervening mathematical steps are sketched. Vacuum general relativity is a pure field theory, serving as a model and providing checks for generalizations. The fields of general relativity are the 10 components of a symmetric Riemannian metric tensor; those of the Einstein-Straus generalization are the 16 components of a nonsymmetric one. Algebraic properties are exploited in top level MACSYMA commands toward performing some of the algorithms of that generalization. The light cone for the theory as left by Einstein and Straus is found and simplifications of that theory are discussed.
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
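A minimal standard PSO loop, without the article's embedded Nelder-Mead updates and with hypothetical parameter values, can be sketched as:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard particle swarm optimization on box bounds.
    The article's hybrid applies Nelder-Mead moves inside this loop;
    that refinement is omitted here."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal bests
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

# Sphere function as a toy unconstrained test problem.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))
```

The hybridization described in the abstract would replace or supplement some of these velocity-driven moves with simplex reflections, treating PSO and NM as a single heuristic rather than running them in sequence.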
1984-12-10
Medium Altitude Missions Branch: C-141 KAO Personnel, Mike Robinson, Mike Landis, Ed Hall, Tom Jones, John Graybeal, Louis Haughney, Brian Wright, Allan Meyer, Dick Gallant, Al Silva, Louis Russo, Hap Arnold, Randy Hobbs, Bill Laurie, Louis Foss, Sue Laurie, Tony Tieas, Tom Connors, Dave Brown, Alan Dunn, Don Oishi, Don Olson, Jim McClenahan, Wally Stahl, Sandy Mayville, Hank Hermosillo, Doug Ziebell, Ben Horita, Bill Hightower, Ron Sanchez, Terry Stoeffler, Lee Montz, Gene Moniz, John Brown, Bob America, Mike Craig, Kent Shiffer, Sandy Kogan, George Gull, Judy Pipher, Larry Helpher, Don MacKinnon, Jesse Bregmann, Jim Eilers, Nabil Hanania, Jim Cockrell, Keith Ackerman, Dave Walton, Lloyd Domeier, Pat Atchison
Leadership in nursing education: voices from the past.
Gosline, Mary Beth
2004-01-01
When education for nurses became a reality, leaders in the emerging profession spoke out early and often for educational improvements to prepare those who would nurse. The writings and speeches of Isabel Hampton Robb, Mary Adelaide Nutting, Lavinia Lloyd Dock, Lillian Wald, and Isabel Maitland Stewart formed the basis for a qualitative study that documents the voices of early nursing leaders who contributed to the development of nursing education as it moved from "training" toward professional education in a university setting. What is documented in the literature is the desire of these women to enhance the professional status of nursing through improvements in its educational system.
Recurring errors among recent history of psychology textbooks.
Thomas, Roger K
2007-01-01
Five recurring errors in history of psychology textbooks are discussed. One involves an identical misquotation. The remaining examples involve factual and interpretational errors that more than one and usually several textbook authors made. In at least 2 cases some facts were fabricated, namely, so-called facts associated with Pavlov's mugging and Descartes's reasons for choosing the pineal gland as the locus for mind-body interaction. A fourth example involves Broca's so-called discovery of the speech center, and the fifth example involves misinterpretations of Lloyd Morgan's intentions regarding his famous canon. When an error involves misinterpretation and thus misrepresentation, I will show why the misinterpretation is untenable.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Environment and Public Works.
A hearing was held to consider S.2997, a bill which would provide financial assistance for school construction to local educational agencies educating large numbers of immigrant children born in Mexico. In opening remarks, Senator Lloyd Bentsen, Texas, explained that 58,000 Mexicans immigrated to the US in 1977; towns along the American border,…
Anti-inflammatory activities of the chemical constituents isolated from Trametes versicolor.
Jin, Mei; Zhou, Wei; Jin, Chunshi; Jiang, Zhe; Diao, Shengbao; Jin, Zhehu; Li, Gao
2018-03-09
Twenty-seven compounds including nine triterpenoids (1-9), eight sterols (10-17), two ribonucleotides (18, 19), four phenols (20-23), three glycosides (24-26), and one furan (27) were isolated from the fruiting bodies of Trametes versicolor (L.) Lloyd. This study is the first confirmation of the presence of the 11 compounds (3, 5, 6, 8, 18, 20, 21, 23-25, and 27) isolated from the Polyporaceae family, with six of these (2 and 12-16) from the genus Trametes. Compounds 3, 4, 10, 11, 16 and 17 were found to significantly inhibit the production of NO, TNF-α and IL-6 in a dose-dependent manner.
NASA Astrophysics Data System (ADS)
Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro
1995-02-01
We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
Generalized gradient algorithm for trajectory optimization
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Bryson, A. E.; Slattery, R.
1990-01-01
The generalized gradient algorithm presented and verified as a basis for the solution of trajectory optimization problems improves the performance index while reducing violations of path and terminal equality constraints. The algorithm is conveniently divided into two phases, of which the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.
A biconjugate gradient type algorithm on massively parallel architectures
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Hochbruck, Marlis
1991-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation is presented of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the other standard Lanczos process where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
An acceleration framework for synthetic aperture radar algorithms
NASA Astrophysics Data System (ADS)
Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.
2017-04-01
Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA as opposed to using a software implementation running on a typical general purpose processor.
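The homomorphic-filter idea mentioned above, taking the logarithm so that multiplicative (speckle-like) noise becomes additive, filtering, then exponentiating back, can be sketched in one dimension; the moving-average filter and the toy signal are assumptions of this illustration, not the paper's kernel:

```python
import math

def homomorphic_smooth(signal, k=3):
    """Homomorphic filtering: log transform turns multiplicative
    noise into additive noise, which a simple moving average can
    smooth; exponentiating maps the result back."""
    logs = [math.log(x) for x in signal]
    half = k // 2
    out = []
    for i in range(len(logs)):
        window = logs[max(0, i - half): i + half + 1]
        out.append(math.exp(sum(window) / len(window)))
    return out

smoothed = homomorphic_smooth([1.0, 4.0, 1.0, 4.0, 1.0])
```

Averaging in the log domain computes a geometric mean, which is why the hardware logarithm is on the critical path the FPGA implementation accelerates.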
Event-chain Monte Carlo algorithms for three- and many-particle interactions
NASA Astrophysics Data System (ADS)
Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.
2017-02-01
We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
NASA Astrophysics Data System (ADS)
Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.
2017-12-01
We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = A^T, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product <u, v> = sum_i u_i v_i. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
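The generalized Householder reflection built from the conjugation-free inner product can be sketched as follows; the example vectors and matrix are hypothetical. Because P = I - 2vv^T/<v, v> satisfies P^T P = I (complex orthogonality, not unitarity), the similarity transform P A P^T preserves the complex symmetry A = A^T:

```python
import numpy as np

def bilinear(u, v):
    """Indefinite inner product <u, v> = sum_i u_i v_i, linear in
    both arguments, no complex conjugation."""
    return np.sum(u * v)

def gen_householder(v):
    """Generalized Householder reflection P = I - 2 v v^T / <v, v>.
    P is complex orthogonal (P^T P = I); requires <v, v> != 0."""
    n = len(v)
    return np.eye(n, dtype=complex) - 2.0 * np.outer(v, v) / bilinear(v, v)

v = np.array([1.0 + 1.0j, 2.0, 0.5j])
P = gen_householder(v)

# A hypothetical complex symmetric (not Hermitian) matrix:
A = np.array([[1.0, 2.0j, 0.0],
              [2.0j, 3.0, 1.0],
              [0.0, 1.0, 1.0j]])
B = P @ A @ P.T   # remains complex symmetric
```

Chaining such reflections to zero out columns below the subdiagonal yields the tridiagonal form that the generalized QL iteration then diagonalizes.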
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an l1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because the most efficient algorithms for l1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden
2014-05-01
Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). The outcome was major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ²=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
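The develop-on-three-regions, validate-on-the-fourth design described above can be sketched with synthetic data; the study's 17 actual risk factors, survey weights, and NESARC data are not reproduced here, so every variable below is a hypothetical stand-in.

```python
# Region-split development/validation of a logistic-regression risk
# algorithm, on synthetic data (all effect sizes are hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, k = 5000, 17                       # participants, risk factors
X = rng.normal(size=(n, k))
beta = rng.normal(scale=0.4, size=k)  # hypothetical effect sizes
y = rng.random(n) < 1 / (1 + np.exp(-(X @ beta - 2)))  # first onset (binary)

region = rng.integers(0, 4, size=n)   # four census regions
dev, val = region < 3, region == 3    # develop on 3 regions, validate on 4th

model = LogisticRegression(max_iter=1000).fit(X[dev], y[dev])
c_stat = roc_auc_score(y[val], model.predict_proba(X[val])[:, 1])
print(f"validation C statistic: {c_stat:.3f}")
```

The C statistic reported in the abstract is exactly this area under the ROC curve, computed on the held-out region.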
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
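The block-wise fitting described above can be sketched with NumPy's Chebyshev routines. Note the assumptions: `chebfit` is a least-squares fit rather than the true min-max (equal-error) fit, and the flight algorithm's quantization and packet format are not modeled.

```python
# Block-wise Chebyshev compression sketch: fit one fitting interval,
# keep only the series coefficients, reconstruct on decompression.
import numpy as np
from numpy.polynomial import chebyshev as C

def compress(block, degree):
    """Fit a Chebyshev series to one fitting interval."""
    x = np.linspace(-1.0, 1.0, len(block))   # map the interval to [-1, 1]
    return C.chebfit(x, block, degree)       # series coefficients

def decompress(coefs, n):
    """Reconstruct n samples from the stored coefficients."""
    return C.chebval(np.linspace(-1.0, 1.0, n), coefs)

t = np.linspace(0.0, 1.0, 256)               # one block of smooth data
block = np.sin(2 * np.pi * t) + 0.1 * t**2
coefs = compress(block, degree=15)           # 256 samples -> 16 coefficients
err = np.max(np.abs(decompress(coefs, 256) - block))
print(f"16:1 compression, max reconstruction error {err:.1e}")
```

For smooth data the coefficients decay rapidly, which is why compression factors well beyond two are attainable on such streams.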
Pearce, Christopher M; McLeod, Adam; Patrick, Jon; Boyle, Douglas; Shearer, Marianne; Eustace, Paula; Pearce, Mary Catherine
2016-12-20
Every day, patients are admitted to the hospital with conditions that could have been effectively managed in the primary care sector. These admissions are expensive, and many could be avoided if early intervention occurs. General practitioners are in the best position to identify those at risk of imminent hospital presentation and admission; however, it is not always possible for all the factors to be considered. A lack of shared information contributes significantly to the challenge of understanding a patient's full medical history. Some health care systems around the world use algorithms to analyze patient data in order to predict events such as emergency presentation; however, those responsible for the design and use of such systems readily admit that the algorithms can only be used to assess the populations used to design the algorithm in the first place. The United Kingdom health care system has contributed data toward algorithm development, which is possible through the unified health care system in place there. The lack of unified patient records in Australia has made building an algorithm for local use a significant challenge. Our objective is to use linked patient records to track patient flow through primary and secondary health care in order to develop a tool that can be applied in real time at the general practice level. This algorithm will allow the generation of reports for general practitioners that indicate the relative risk of patients presenting to an emergency department. A previously designed tool was used to deidentify the general practice and hospital records of approximately 100,000 patients. Records were pooled for patients who had attended emergency departments within the Eastern Health Network of hospitals and general practices within the Eastern Health Network catchment. The next phase will involve development of a model using a predictive analytic machine learning algorithm.
The model will be developed iteratively, testing the combination of variables that will provide the best predictive model. Records of approximately 97,000 patients who have attended both a general practice and an emergency department have been identified within the database. These records are currently being used to develop the predictive model. Records from general practice and emergency department visits have been identified and pooled for development of the algorithm. The next phase in the project will see validation and live testing of the algorithm in a practice setting. The algorithm will underpin a clinical decision support tool for general practitioners which will be tested for face validity in this initial study into its efficacy. ©Christopher M Pearce, Adam McLeod, Jon Patrick, Douglas Boyle, Marianne Shearer, Paula Eustace, Mary Catherine Pearce. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 20.12.2016.
Stumm, Frederick; Lange, Andrew D.; Candela, Jennifer L.
2004-01-01
The Oyster Bay study area, in the northern part of Nassau County, N.Y., is underlain by unconsolidated deposits that form a sequence of aquifers and confining units. At least one production well has been affected by the intrusion of saltwater from Hempstead Harbor, Long Island Sound, and Cold Spring Harbor. Nineteen boreholes were drilled during 1995-98 for the collection of hydrogeologic, geochemical, and geophysical data to delineate the subsurface geology and the extent of saltwater intrusion. Continuous high-resolution marine-seismic-reflection surveys in the surrounding embayments of the Oyster Bay study area were conducted in 1996. New drill-core data indicate two hydrogeologic units—the North Shore aquifer and the North Shore confining unit—where the Lloyd aquifer, the Raritan confining unit, and the Magothy aquifer have been completely removed by glacial erosion. Water levels at 95 observation wells were measured quarterly during 1995-98. These data and continuous water-level records indicated that (1) the upper glacial (water-table) and Magothy aquifers are hydraulically connected and that their water levels did not respond to tidal fluctuations, and (2) the Lloyd and North Shore aquifers are hydraulically connected and their water levels responded to pumping and to tidal fluctuations. Marine seismic-reflection surveys in the surrounding embayments indicate at least four glacially eroded buried valleys with subhorizontal, parallel reflectors indicative of draped bedding that is interpreted as infilling by silt and clay.
The buried valleys (1) truncate the surrounding coarse-grained deposits, (2) are asymmetrical and steep sided, (3) trend northwest-southeast, (4) are several miles long and about 1 mile wide, and (5) extend to more than 500 feet below sea level. Water samples taken during 1995-98 from three production wells and six observation wells screened in the upper glacial and Magothy aquifers contained volatile organic compounds in concentrations that exceeded the New York State Department of Health Drinking Water Maximum Contaminant Levels. High iron or nitrate concentrations were detected in water samples taken in 1997-98 from 39 observation wells. Previous high concentrations resulted in the shutdown of two production wells. Four distinct areas of saltwater intrusion in the Oyster Bay study area were delineated—three were in the upper glacial aquifer, and the fourth was in the Lloyd aquifer. Borehole-geophysical-logging data indicated that three of these saltwater "wedges" ranged from a few feet thick to more than 100 feet thick and had sharp freshwater-saltwater interfaces. Chloride concentrations in water from eight observation wells within these wedges in 1997 ranged from 125 to 13,750 milligrams per liter. One production well in Bayville has been shut down as of 1996 and others in the area may be affected by these saltwater wedges.
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to x ∈ {x : ||Ax - b|| = min}, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
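The TRSVD building block can be sketched as follows: multiply A by a Gaussian test matrix with k+q columns (q an oversampling parameter), orthonormalize, compute a small SVD, and truncate to rank k. The test matrix below is a synthetic ill-posed example with rapidly decaying singular values, not one of the paper's benchmark problems.

```python
# Rank-k truncated randomized SVD (TRSVD) sketch.
import numpy as np

def trsvd(A, k, q=10, seed=1):
    rng = np.random.default_rng(seed)
    Y = A @ rng.normal(size=(A.shape[1], k + q))   # range sketch, rank k+q
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis
    Us, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Us)[:, :k], s[:k], Vt[:k]          # truncate to rank k

rng = np.random.default_rng(0)
U0, _, Vt0 = np.linalg.svd(rng.normal(size=(200, 100)), full_matrices=False)
A = U0 @ np.diag(0.5 ** np.arange(100)) @ Vt0      # decaying spectrum
U, s, Vt = trsvd(A, k=10)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt, 2)
print(f"rank-10 TRSVD error: {err:.1e}")           # near sigma_11 = 0.5**10
```

For ill-posed problems with fast singular-value decay, the truncation error stays close to the optimal value sigma_{k+1}, which is what the paper's accuracy bounds quantify.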
Progress on a generalized coordinates tensor product finite element 3DPNS algorithm for subsonic
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1983-01-01
A generalized coordinates form of the penalty finite element algorithm for the 3-dimensional parabolic Navier-Stokes equations for turbulent subsonic flows was derived. This algorithm formulation requires only three distinct hypermatrices and is applicable using any boundary fitted coordinate transformation procedure. The tensor matrix product approximation to the Jacobian of the Newton linear algebra matrix statement was also derived. The Newton algorithm was restructured to replace large sparse matrix solution procedures with grid sweeping using alpha-block tridiagonal matrices, where alpha equals the number of dependent variables. Numerical experiments were conducted and the resultant data give guidance on potentially preferred tensor product constructions for the penalty finite element 3DPNS algorithm.
Practical sliced configuration spaces for curved planar pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, E.
1999-01-01
In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
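For contrast, the classical Metropolis method that the quantum algorithm generalizes can be sketched on a small classical system (a 1-D Ising chain); the quantum algorithm itself requires a quantum computer and is not reproducible here.

```python
# Classical Metropolis sampling of a thermal distribution: propose a
# single spin flip and accept with probability min(1, exp(-beta * dE)).
import numpy as np

def metropolis_ising(n=32, beta=0.5, steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)           # periodic 1-D Ising chain
    E = -np.sum(s * np.roll(s, 1))            # energy, updated incrementally
    total, samples = 0.0, 0
    for step in range(steps):
        i = rng.integers(n)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])   # cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
            E += dE
        if step >= steps // 2:                # discard burn-in, then sample
            total += E
            samples += 1
    return total / samples

E_mean = metropolis_ising()
print(f"mean energy per spin: {E_mean / 32:.3f}")  # close to -tanh(0.5)
```

Because acceptance depends only on the energy difference, the chain samples the Boltzmann distribution without computing the partition function, which is the property the quantum generalization preserves for quantum Hamiltonians.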
RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yalinewich, Almog; Steinberg, Elad; Sari, Re’em
2015-02-01
We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov’s method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes as well as a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and time-advancement scheme is more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but it is not universally true. In the case where matter moves in one way and a sound wave is traveling in the other way (such that relative to the grid the wave is not moving) a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes that reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.
ERIC Educational Resources Information Center
Haberman, Shelby J.
2013-01-01
A general program for item-response analysis is described that uses the stabilized Newton-Raphson algorithm. This program is written to be compliant with Fortran 2003 standards and is sufficiently general to handle independent variables, multidimensional ability parameters, and matrix sampling. The ability variables may be either polytomous or…
Klaczynski, Paul A.
2014-01-01
In Stanovich's (2009a, 2011) dual-process theory, analytic processing occurs in the algorithmic and reflective minds. Thinking dispositions, indexes of reflective mind functioning, are believed to regulate operations at the algorithmic level, indexed by general cognitive ability. General limitations at the algorithmic level impose constraints on, and affect the adequacy of, specific strategies and abilities (e.g., numeracy). In a study of 216 undergraduates, the hypothesis that thinking dispositions and general ability moderate the relationship between numeracy (understanding of mathematical concepts and attention to numerical information) and normative responses on probabilistic heuristics and biases (HB) problems was tested. Although all three individual difference measures predicted normative responses, the numeracy-normative response association depended on thinking dispositions and general ability. Specifically, numeracy directly affected normative responding only at relatively high levels of thinking dispositions and general ability. At low levels of thinking dispositions, neither general ability nor numeric skills related to normative responses. Discussion focuses on the consistency of these findings with the hypothesis that the implementation of specific skills is constrained by limitations at both the reflective level and the algorithmic level, methodological limitations that prohibit definitive conclusions, and alternative explanations. PMID:25071639
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
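The Galerkin projection preprocessing step can be sketched for a sequence of slowly varying right-hand sides: project each new b onto the span of previous solutions to form the initial guess, then solve. SciPy's stock GMRES stands in for the paper's enriched GMRES, and the matrix is a synthetic well-conditioned example.

```python
# Reusing previous solutions via a Galerkin projection before each solve.
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) + rng.normal(size=(n, n)) / (4.0 * np.sqrt(n))  # well conditioned

solutions, guess_res = [], []
b = rng.normal(size=n)
for step in range(5):
    x0 = np.zeros(n)
    if solutions:                               # Galerkin projection step:
        V, _ = np.linalg.qr(np.column_stack(solutions))
        x0 = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
    guess_res.append(np.linalg.norm(b - A @ x0))
    x, info = gmres(A, b, x0=x0)                # stock GMRES from the guess
    solutions.append(x)
    b = b + 0.05 * rng.normal(size=n)           # next, slightly changed RHS
print("initial-guess residuals:", ["%.2e" % r for r in guess_res])
```

Because consecutive right-hand sides differ only slightly, the projected initial guess already removes most of the residual, so each GMRES solve starts much closer to the solution than a zero guess would.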
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.
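A toy recursive estimator in this spirit (not the paper's adaptive least squares algorithm) shows how a frequency estimate can be refined sample by sample without storing observations: each new sample of a noisy complex sinusoid updates a running sum of phase increments.

```python
# Recursive frequency estimation from phase increments, O(1) memory.
import numpy as np

rng = np.random.default_rng(0)
f_true, dt = 12.5, 1e-3          # 2*pi*f*dt < pi, so no aliasing
acc = 0.0 + 0.0j                 # running sum of z[k] * conj(z[k-1])
z_prev = None
for k in range(5000):
    noise = 0.05 * (rng.normal() + 1j * rng.normal())
    z = np.exp(2j * np.pi * f_true * k * dt) + noise
    if z_prev is not None:
        acc += z * np.conjugate(z_prev)          # recursive update
        f_est = np.angle(acc) / (2 * np.pi * dt) # current estimate
    z_prev = z
print(f"estimated {f_est:.2f} Hz (true {f_true} Hz)")
```

As in the abstract, the estimate improves as observations accumulate, and only the accumulator and last sample are retained, never the full record.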
Query construction, entropy, and generalization in neural-network models
NASA Astrophysics Data System (ADS)
Sollich, Peter
1994-05-01
We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and "noninvertible" versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
A noniterative greedy algorithm for multiframe point correspondence.
Shafique, Khurram; Shah, Mubarak
2005-01-01
This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
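The greedy flavor of the optimization can be sketched for a single frame pair: repeatedly take the globally cheapest unmatched pair. This is a simplification, not the paper's multiframe scheme; the distance threshold below stands in for its gain-based handling of occlusion and entry/exit.

```python
# Greedy point matching between two frames, cheapest pair first.
import numpy as np

def greedy_match(prev_pts, curr_pts, max_dist=2.0):
    cost = np.linalg.norm(prev_pts[:, None] - curr_pts[None, :], axis=2)
    matches = []
    while True:
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        if cost[i, j] > max_dist:      # remaining pairs too far: entry/exit
            break
        matches.append((i, j))
        cost[i, :] = np.inf            # each point matched at most once
        cost[:, j] = np.inf
    return matches

prev_pts = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
curr_pts = np.array([[5.2, 5.1], [0.1, -0.2], [20.0, 20.0]])  # one new point
print(greedy_match(prev_pts, curr_pts))
```

Each match is committed once and never revisited, which is what makes the scheme noniterative and fast enough for real-time tracking, at the cost of possible suboptimal pairings.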
Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-11-07
This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that are caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm that is proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid the range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.
Some Algorithms for the Recursive Input-Output Modeling of 2-D Systems.
1979-12-01
is viewed as a 2-D prediction problem. This problem is solved recursively by generalizing to the 2-D case an algorithm due to Levinson in the 1-D case. The predictors obtained by this algorithm are then shown to converge to...
a(z,w) = Σ_{i=0..n} Σ_{j=0..m} a_ij z^(n-i) w^(m-j),  a_00 = 1   (6a)
is monic, and
b(z,w) = Σ_{i=0..n} Σ_{j=0..m} b_ij z^(n-i) w^(m-j)   (6b)
There is no loss of generality in making
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1990-01-01
A collection of technical papers are presented that cover modeling pilot interaction with automated digital avionics systems and guidance and control algorithms for contour and nap-of-the-earth flight. The titles of the papers presented are as follows: (1) Automation effects in a multiloop manual control system; (2) A qualitative model of human interaction with complex dynamic systems; (3) Generalized predictive control of dynamic systems; (4) An application of generalized predictive control to rotorcraft terrain-following flight; (5) Self-tuning generalized predictive control applied to terrain-following flight; and (6) Precise flight path control using a predictive algorithm.
A New Approach for Solving the Generalized Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Pop, P. C.; Matei, O.; Sabo, C.
The generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem. The GTSP is known to be an NP-hard problem and has many interesting applications. In this paper we present a local-global approach for the generalized traveling salesman problem. Based on this approach we describe a novel hybrid metaheuristic algorithm for solving the problem using genetic algorithms. Computational results are reported for Euclidean TSPlib instances and compared with the existing ones. The obtained results point out that our hybrid algorithm is an appropriate method to explore the search space of this complex problem and leads to good solutions in a reasonable amount of time.
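The "local" half of a local-global approach can be sketched as follows: with the cluster order fixed (the global tour, which the genetic algorithm would evolve), a layered dynamic program picks one node per cluster to minimize the cycle length. The instance below is synthetic, not a TSPlib problem.

```python
# Local step of a local-global GTSP scheme: best node per cluster for a
# fixed cluster ordering, via DP over a layered graph.
import numpy as np

def best_cycle(clusters, order):
    """Minimal cycle length visiting one node per cluster, in this order."""
    first = clusters[order[0]]
    best = np.inf
    for s in range(len(first)):               # fix the starting node
        cost = np.full(len(first), np.inf)
        cost[s] = 0.0
        prev = first
        for c in order[1:]:                   # DP across cluster "layers"
            layer = clusters[c]
            d = np.linalg.norm(prev[:, None] - layer[None, :], axis=2)
            cost = (cost[:, None] + d).min(axis=0)
            prev = layer
        back = np.linalg.norm(prev - first[s], axis=1)  # close the cycle
        best = min(best, (cost + back).min())
    return best

rng = np.random.default_rng(0)
centers = [(0, 0), (4, 0), (4, 4), (0, 4)]
clusters = [rng.random((3, 2)) + c for c in centers]   # one cluster per corner
length = best_cycle(clusters, [0, 1, 2, 3])
print(f"best cycle for this cluster order: {length:.3f}")
```

The metaheuristic then only has to search over cluster orderings, since this inner step evaluates each ordering exactly.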
Using Physical Models to Explain a Division Algorithm.
ERIC Educational Resources Information Center
Vest, Floyd
1985-01-01
Develops a division algorithm in terms of familiar manipulations of concrete objects and presents it with a series of questions for diagnosis of students' understanding of the algorithm in terms of the concrete model utilized. Also offers general guidelines for using concrete illustrations to explain algorithms and other mathematical principles.…
Hot and dense plasma probing by soft X-ray lasers
NASA Astrophysics Data System (ADS)
Krůs, M.; Kozlová, M.; Nejdl, J.; Rus, B.
2018-01-01
Soft X-ray lasers, due to their short wavelength, brightness, and good spatial coherence, are excellent sources for the diagnostics of dense plasmas (up to 10^25 cm^-3) which are relevant to, e.g., inertial fusion. Several techniques and experimental results, obtained with the quasi-steady-state, collisionally pumped 21.2 nm neon-like zinc laser installed at the PALS Research Center, are presented here; among them, plasma density measurement with a double Lloyd mirror interferometer and a Talbot-effect deflectometer that measures the plasma density gradients directly, followed by ray-tracing postprocessing. Moreover, plasma images with high spatial resolution (nm scale) can be obtained when soft X-ray lasers are used.
Bloor, D. U.
1982-01-01
There were many forms of club or contract practice in the nineteenth century, but the friendly societies were the most important. A brief history of the friendly societies is given. As they grew in numbers and importance so did the dissatisfaction of the doctors who worked with them. Discontent among the doctors led at the end of the century to a battle between the medical profession and the clubs. The issues which divided the clubs and the doctors were clearly defined but, although the battle was protracted, the doctors did not win or manage to change the system of medical provision for the poor. The club system was ended by Lloyd George when he introduced his National Insurance Act, 1911. PMID:7050375
The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.
1989-01-01
The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
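The problem class can be sketched with SciPy's ARPACK-based Lanczos solver; a fixed shift-invert transformation stands in for LANZ's dynamic shifting, and K and M below are simple finite-element-style test matrices.

```python
# Generalized symmetric-definite eigenproblem Kx = lambda*Mx via Lanczos
# (ARPACK) with a fixed shift, targeting the smallest vibration modes.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 500
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))      # stiffness-like
M = diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)) / 6.0  # consistent mass
vals, vecs = eigsh(K, k=4, M=M, sigma=0.0)  # 4 eigenvalues nearest the shift
print(np.sort(vals))                        # smallest eigenvalues, positive
```

Shift-invert makes the Lanczos iteration converge to the eigenvalues nearest the shift; dynamic shifting, as in LANZ, moves the shift as eigenvalues are found to keep that convergence fast across the spectrum.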
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
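Waveform relaxation with a scalar (frequency-independent) over-relaxation parameter can be sketched on a small coupled ODE system; this illustrates the baseline iteration that convolution SOR improves on, not the convolution algorithm itself.

```python
# Gauss-Seidel waveform relaxation with scalar over-relaxation: each
# sweep solves one scalar ODE over the whole interval using the other
# component's previous waveform.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]

def solve_scalar(a, forcing, x0):
    """Backward-Euler integration of x' = a*x + forcing(t) on the grid."""
    x = np.empty_like(t)
    x[0] = x0
    for i in range(1, len(t)):
        x[i] = (x[i - 1] + h * forcing[i]) / (1.0 - h * a)
    return x

# Coupled test system: x' = -2x + y, y' = x - 2y, x(0) = 1, y(0) = 0.
x, y = np.ones_like(t), np.zeros_like(t)   # initial waveform guesses
omega = 1.2                                 # scalar SOR parameter
for sweep in range(30):
    x = x + omega * (solve_scalar(-2.0, y, 1.0) - x)
    y = y + omega * (solve_scalar(-2.0, x, 0.0) - y)   # uses the updated x

exact_x = 0.5 * (np.exp(-t) + np.exp(-3 * t))  # from eigen-decomposition
err = np.max(np.abs(x - exact_x))
print(f"max error after 30 sweeps: {err:.1e}")
```

Convolution SOR replaces the scalar omega with a frequency-dependent kernel applied by convolution in time, which is why it can outperform this frequency-independent iteration.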
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
Transform methods for precision continuum and control models of flexible space structures
NASA Technical Reports Server (NTRS)
Lupi, Victor D.; Turner, James D.; Chun, Hon M.
1991-01-01
An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.
Biclustering Protein Complex Interactions with a Biclique Finding Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L_1 constraint to an L_p constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
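The classical L_1 Motzkin-Straus connection that the abstract generalizes can be illustrated with replicator dynamics: maximize x^T A x over the simplex with a multiplicative update whose per-step cost is one matrix-vector product, i.e. O(|E|) for a sparse graph. This toy sketch does not reproduce the paper's L_p/biclique extension:

```python
import numpy as np

def replicator_clique(A, steps=500):
    """Climb f(x) = x^T A x over the simplex with replicator dynamics.
    Each step costs one matrix-vector product, i.e. O(|E|) on a sparse graph."""
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(steps):
        Ax = A @ x
        x = x * Ax / (x @ Ax)        # multiplicative update; x stays on the simplex
    return x

# a 5-vertex graph whose maximum clique is {0, 1, 2}
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
x = replicator_clique(A)
print(np.flatnonzero(x > 1e-3))  # support concentrates on the clique: [0 1 2]
```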
General optical discrete z transform: design and application.
Ngo, Nam Quoc
2016-12-20
This paper presents a generalization of the discrete z transform algorithm, here called the general optical discrete z transform (GOD-ZT) algorithm. It is shown that the GOD-ZT algorithm is a generalization of several important conventional discrete transforms. Based on the GOD-ZT algorithm, a tunable GOD-ZT processor is synthesized using the silica-based finite impulse response transversal filter. To demonstrate the effectiveness of the method, the design and simulation of a tunable optical discrete Fourier transform (ODFT) processor as a special case of the synthesized GOD-ZT processor is presented. It is also shown that the ODFT processor can function as a real-time optical spectrum analyzer. The tunable ODFT has an important potential application as a tunable optical demultiplexer at the receiver end of an optical orthogonal frequency-division multiplexing transmission system.
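The sense in which the DFT is a special case of the z transform is purely numerical: sampling X(z) on the unit circle reproduces the DFT exactly. A short sketch (illustrative numerics, not the optical design):

```python
import numpy as np

def ztransform_at(x, z):
    """Evaluate X(z) = sum_n x[n] z^{-n} at a single point z."""
    n = np.arange(len(x))
    return np.sum(x * z ** (-n))

x = np.array([1.0, 2.0, 0.5, -1.0])
N = len(x)
# sampling the z transform at z = exp(j 2 pi k / N) gives exactly the DFT
dft_via_z = np.array([ztransform_at(x, np.exp(2j * np.pi * k / N)) for k in range(N)])
print(np.allclose(dft_via_z, np.fft.fft(x)))  # True
```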
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
Linear-time general decoding algorithm for the surface code
NASA Astrophysics Data System (ADS)
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a global convergence baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The potentially used unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms, including the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm, because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
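A minimal numerical sketch of the noise-benefit idea, assuming a simple annealed-noise injection into the k-means centroid update (the schedule, magnitudes, and initialization are illustrative choices, not the authors'):

```python
import numpy as np

def noisy_kmeans(X, k, iters=100, noise0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # deterministic spread-out initialisation for the sketch
    centroids = X[:: max(1, len(X) // k)][:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for t in range(iters):
        # assignment step (the E-step in the EM view of k-means)
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                # update step with annealed noise: the perturbation decays
                # so the final centroids are stable
                centroids[j] = pts.mean(0) + rng.normal(0, noise0 / (t + 1), X.shape[1])
    return centroids, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
centroids, labels = noisy_kmeans(X, 2)
print(np.round(np.sort(centroids[:, 0])))  # the two cluster centres, near 0 and 3
```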
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents an efficient and precise imaging algorithm for large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high order phase coupling along the range and azimuth dimensions. This coupling causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data; significant improvements in depth of focus and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057
Parallel Software Model Checking
2015-01-08
checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR. Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability...
Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging
He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data. PMID:29112151
A generalized algorithm to design finite field normal basis multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1986-01-01
Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
Remote sensing image denoising application by generalized morphological component analysis
NASA Astrophysics Data System (ADS)
Yu, Chong; Chen, Xiong
2014-12-01
In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. For a quantitative assessment of the algorithms, the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) indices are calculated to assess the denoising effect at the gray-level fidelity and structure-level fidelity aspects, respectively. Quantitative analysis of the experimental results, which is consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising; the image recovered by GMCA is visually hard to distinguish from the original noiseless image.
Fast, Parallel and Secure Cryptography Algorithm Using Lorenz's Attractor
NASA Astrophysics Data System (ADS)
Marco, Anderson Gonçalves; Martinez, Alexandre Souto; Bruno, Odemir Martinez
A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and the chaotic system. It ensures that the algorithm yields a secure codification, even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, showing the behavior of the method in terms of security and performance. The fast version of the algorithm has performance comparable to AES, a cryptography standard in widespread commercial use, but it is more secure, which makes it immediately suitable for general purpose cryptography applications. An internet page has been set up, which enables readers to test the algorithm and also to try to break into the cipher.
NASA Technical Reports Server (NTRS)
Chen, C. P.; Wu, S. T.
1992-01-01
The objective of this investigation has been to develop an algorithm (or algorithms) to improve the accuracy and efficiency of the computational fluid dynamics (CFD) models used to study the fundamental physics of combustion chamber flows, which are ultimately necessary for the design of propulsion systems such as the SSME and STME. During this three year study (May 19, 1978 - May 18, 1992), a unique algorithm was developed for all-speed flows. The newly developed algorithm combines two pressure-based algorithms, PISOC and FICE: PISOC is a non-iterative scheme with characteristic advantages for low- and high-speed flows, while the modified FICE (MFICE) is an iterative scheme that has shown its efficiency and accuracy in computing flows in the transonic region. The combined algorithm has general application to both time-accurate and steady-state flows, and was tested extensively for various flow conditions, such as turbulent flows, chemically reacting flows, and multiphase flows.
Implementing a self-structuring data learning algorithm
NASA Astrophysics Data System (ADS)
Graham, James; Carson, Daniel; Ternovskiy, Igor
2016-05-01
In this paper, we elaborate on our implementation of our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create it. We elaborate on our initial setup of the algorithm and the scenarios we used to test our early stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion is geared toward what we included in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data", and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces, which shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
General Quantum Meet-in-the-Middle Search Algorithm Based on Target Solution of Fixed Weight
NASA Astrophysics Data System (ADS)
Fu, Xiang-Qun; Bao, Wan-Su; Wang, Xiang; Shi, Jian-Hong
2016-10-01
Similar to the classical meet-in-the-middle algorithm, storage and computation complexity are the key factors that decide the efficiency of the quantum meet-in-the-middle algorithm. Aiming at a target vector of fixed weight, and based on the quantum meet-in-the-middle algorithm, an algorithm for searching all n-product vectors with the same weight is presented, whose complexity is better than that of exhaustive search; the algorithm also reduces the storage complexity of the quantum meet-in-the-middle search algorithm. Then, based on this algorithm and the knapsack vector of the Chor-Rivest public-key cryptosystem of fixed weight d, we present a general quantum meet-in-the-middle search algorithm based on a target solution of fixed weight, whose computational complexity is ∑_{j=0}^{d} [ O(√C(n-k+1, d-j)) + O(C(k, j) log C(k, j)) ] with ∑_{i=0}^{d} C(k, i) memory cost, where C(a, b) denotes the binomial coefficient; the optimal value of k is given. Compared to the quantum meet-in-the-middle search algorithm for the knapsack problem and the quantum algorithm for searching a target solution of fixed weight, the computational complexity of the algorithm is lower, and its storage complexity is smaller than that of the quantum meet-in-the-middle algorithm. Supported by the National Basic Research Program of China under Grant No. 2013CB338002 and the National Natural Science Foundation of China under Grant No. 61502526
Progress report on studies of salt-water encroachment on Long Island, New York, 1953
Lusczynski, N.J.; Upson, J.E.
1954-01-01
Nearly all the water used on Long Island, N. Y., is derived by wells from the thick and extensive water-bearing formations that underlie and compose the entire island. The unconsolidated deposits, consisting of sand, gravel, and clay, range in thickness from a few feet in northern Queens County to more than 2,000 feet in southern Suffolk County. Four main and relatively distinct aquifers, all interconnected hydraulically to a greater or lesser degree, have been recognized and delineated at least in a general way. They are, from younger to older, the upper Pleistocene deposits, in which the ground water is mainly unconfined, and three formations in which the water is generally confined - the Jameco gravel, of Pleistocene age, and the Magothy (?) formation and the Lloyd sand member of the Raritan formation, both of Late Cretaceous age. Except for some artificial recharge, these aquifers are replenished entirely by infiltration of precipitation. Under natural conditions, the fresh water moves into and through the formations, discharging into the sea. With the growth of population on Long Island and the continuously increasing use of water over the years, not only has the infiltration of precipitation been seriously impeded at places, but the withdrawals from the ground-water reservoir have increased markedly. These factors have upset the natural balance between the fresh surface and ground water of the island and the surrounding sea water, and with increased use of water will do so more and more, thus leading to salt-water encroachment. In a sense, the whole problem of utilization of ground water on Long Island is one of determining how much ground water can be withdrawn without serious salt-water encroachment.
Priest, D L; Karageorghis, C I; Sharp, N C C
2004-03-01
The purpose of the present study was to investigate the characteristics and effects of motivational music in British gymnasia. The secondary purpose was to determine whether the characteristics and effects of motivational music were invariant in relation to gender, age, frequency of gymnasium attendance, and the time of day at which exercise participants attended gymnasia. Participants (n=532) from 29 David Lloyd Leisure exercise facilities across Britain responded to a questionnaire that was designed to assess music preferences during exercise via 2 open-ended questions and 1 scaled-response item. A content analysis of the questionnaire data yielded 45 analytic properties that were grouped into the following categories: specific music factors, general music factors, music programme factors, delivery factors, televisual factors, personal factors, contextual factors, and psychophysical response factors. The relative incidence of these analytic properties across gender groups (male/female), age groups (16-26 y, 27-34 y, 35-45 y, 46+ y), frequency of attendance groups (low, medium, high), and time of attendance groups (morning, afternoon, evening) was tested by use of chi-square analyses. Of the personal variables tested, age exerted the greatest influence on musical preference during exercise; older participants expressed a preference for quieter, slower, and generally less overtly stimulative music. Music programmes that are prescribed to accompany exercise should be varied in terms of musical idiom and date of release. Such programmes will account for the preferences of different groups of exercise participants that attend gymnasia at different times of the day. Further, the music chosen should be characterised by a strong rhythmical component.
Hoffmann, S
1992-12-01
A prospective evaluation was made of an algorithm for a selective use of throat swabs in patients with sore throat in general practice. The algorithm states that a throat swab should be obtained (a) in all children younger than 15 years; (b) in patients aged 15 years or more who have pain on swallowing and at least three of four signs (enlarged or hyperaemic tonsils; exudate; enlarged or tender angular lymph nodes; and a temperature ≥ 38 °C); and (c) in adults aged 15-44 years with pain on swallowing and one or two of the four signs, but not both cough and coryza. Group A streptococci were found by laboratory culture in 30% of throat swabs from 1783 patients. Using these results as the reference, the algorithm was 95% sensitive and 26% specific, and assigned 80% of the patients to be swabbed. Its positive and negative predictive values in this setting were 36% and 92%, respectively. It is concluded that this algorithm may be useful in general practice.
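The rule is mechanical enough to transcribe directly as a decision function; the function and argument names below are assumptions for illustration:

```python
def should_swab(age, pain_on_swallowing, n_signs, cough, coryza):
    """n_signs counts the four signs: enlarged/hyperaemic tonsils, exudate,
    enlarged/tender angular lymph nodes, and temperature >= 38 C."""
    if age < 15:
        return True          # (a) all children younger than 15
    if pain_on_swallowing and n_signs >= 3:
        return True          # (b) age >= 15 with pain and at least three signs
    if age <= 44 and pain_on_swallowing and 1 <= n_signs <= 2 and not (cough and coryza):
        return True          # (c) adults 15-44, one or two signs, not both cough and coryza
    return False

print(should_swab(10, False, 0, False, False))  # True  (rule a)
print(should_swab(30, True, 2, True, True))     # False (both cough and coryza present)
```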
Prediction of dynamical systems by symbolic regression
NASA Astrophysics Data System (ADS)
Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.
2016-07-01
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
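The "fast function extraction" flavour described above is essentially a generalized linear regression over a library of candidate basis functions. A toy sketch (not the actual FFX implementation) that recovers a harmonic signal from measurements:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)
y = 2.0 * np.sin(t) + 0.5 * np.cos(t)        # "measurements" of the oscillator

# candidate basis library; a real FFX run would enumerate many more bases
library = {'1': np.ones_like(t), 't': t, 'sin(t)': np.sin(t), 'cos(t)': np.cos(t)}
Phi = np.column_stack(list(library.values()))
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# keep only terms with non-negligible coefficients -> a readable model
model = {name: round(float(c), 3) for name, c in zip(library, coef) if abs(c) > 1e-6}
print(model)  # {'sin(t)': 2.0, 'cos(t)': 0.5}
```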
NASA Astrophysics Data System (ADS)
Katzav, Joel
2014-05-01
I bring out the limitations of four important views of what the target of useful climate model assessment is. Three of these views are drawn from philosophy. They include the views of Elisabeth Lloyd and Wendy Parker, and an application of Bayesian confirmation theory. The fourth view I criticise is based on the actual practice of climate model assessment. In bringing out the limitations of these four views, I argue that an approach to climate model assessment that neither demands too much of such assessment nor threatens to be unreliable will, in typical cases, have to aim at something other than the confirmation of claims about how the climate system actually is. This means, I suggest, that the Intergovernmental Panel on Climate Change's (IPCC's) focus on establishing confidence in climate model explanations and predictions is misguided. So too, it means that standard epistemologies of science with pretensions to generality, e.g., Bayesian epistemologies, fail to illuminate the assessment of climate models. I go on to outline a view that neither demands too much nor threatens to be unreliable, a view according to which useful climate model assessment typically aims to show that certain climatic scenarios are real possibilities and, when the scenarios are determined to be real possibilities, partially to determine how remote they are.
2018-01-01
Reports an error in "Facing Humanness: Facial Width-to-Height Ratio Predicts Ascriptions of Humanity" by Jason C. Deska, E. Paige Lloyd and Kurt Hugenberg (Journal of Personality and Social Psychology, Advance Online Publication, Aug 28, 2017, np). In the article, there is a data error in the Results section of Study 1c. The fourth sentence of the fourth paragraph should read as follows: High fWHR targets (M=74.39, SD=18.25) were rated as equivalently evolved as their low fWHR counterparts (M=79.39, SD=15.91). (The following abstract of the original article appeared in record 2017-36694-001.) The ascription of mind to others is central to social cognition. Most research on the ascription of mind has focused on motivated, top-down processes. The current work provides novel evidence that facial width-to-height ratio (fWHR) serves as a bottom-up perceptual signal of humanness. Using a range of well-validated operational definitions of humanness, we provide evidence across 5 studies that target faces with relatively greater fWHR are seen as less than fully human compared with their relatively lower fWHR counterparts. We then present 2 ancillary studies exploring whether the fWHR-to-humanness link is mediated by previously established fWHR-trait links in the literature. Finally, 3 additional studies extend this fWHR-humanness link beyond measurements of humanness, demonstrating that the fWHR-humanness link has consequences for downstream social judgments including the sorts of crimes people are perceived to be guilty of and the social tasks for which they seem helpful. In short, we provide evidence for the hypothesis that individuals with relatively greater facial width-to-height ratio are routinely denied sophisticated, humanlike minds. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
Trajectory NG: portable, compressed, general molecular dynamics trajectories.
Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David
2011-10-01
We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied.
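The core observation, that differences between successive frames are much smaller than the absolute coordinates, can be sketched with a toy quantized trajectory. This uses stdlib zlib as a stand-in compressor rather than the paper's block-sorting plus Huffman coding, and the trajectory is synthetic:

```python
import random
import struct
import zlib

random.seed(0)
# Hypothetical trajectory: coordinates quantized to integers, drifting
# slowly so successive frames differ by only a few units.
n_atoms, n_frames = 100, 50
frame = [random.randint(0, 10**6) for _ in range(n_atoms)]
frames = []
for _ in range(n_frames):
    frame = [x + random.randint(-5, 5) for x in frame]
    frames.append(list(frame))

def pack(ints):
    """Pack integers as little-endian 32-bit values."""
    return b"".join(struct.pack("<i", v) for v in ints)

raw = b"".join(pack(f) for f in frames)

# Time differencing: store the first frame, then per-frame deltas.
deltas = frames[0] + [b - a
                      for prev, cur in zip(frames, frames[1:])
                      for a, b in zip(prev, cur)]
diffed = pack(deltas)

print(len(zlib.compress(raw)), len(zlib.compress(diffed)))
```

The small-magnitude deltas have far lower entropy than the absolute coordinates, so the differenced stream compresses markedly better, which is the effect the trajectory format exploits.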
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
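The positive definite property that resolves the SSS problem can be checked numerically. This is a minimal sketch (not the paper's full embedding framework): a rank-deficient scatter matrix from fewer samples than dimensions is singular, but its matrix exponential has all eigenvalues at least 1:

```python
import numpy as np

rng = np.random.default_rng(0)
# Small-sample-size setting: 5 samples in 10 dimensions.
X = rng.standard_normal((5, 10))
S = X.T @ X                      # 10x10 scatter matrix, rank <= 5 (singular)

# Matrix exponential via eigendecomposition (valid since S is symmetric):
# expm(S) = U diag(exp(w)) U^T, with eigenvalues exp(w_i) >= 1 > 0.
w, U = np.linalg.eigh(S)
E = (U * np.exp(w)) @ U.T

print(np.linalg.matrix_rank(S), np.linalg.eigvalsh(E).min() > 0)
```

Because exp maps every eigenvalue (including the zeros of a singular matrix) to a strictly positive value, the exponentiated matrix is always invertible, which is why the framework sidesteps the SSS problem.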
Investigation of BPF algorithm in cone-beam CT with 2D general trajectories.
Zou, Jing; Gui, Jianbao; Rong, Junyan; Hu, Zhanli; Zhang, Qiyang; Xia, Dan
2012-01-01
A mathematical derivation was conducted to illustrate that exact 3D image reconstruction could be achieved for z-homogeneous phantoms from data acquired with 2D general trajectories using the back projection filtration (BPF) algorithm. The conclusion was verified by computer simulation and experimental result with a circular scanning trajectory. Furthermore, the effect of the non-uniform degree along z-axis of the phantoms on the accuracy of the 3D reconstruction by BPF algorithm was investigated by numerical simulation with a gradual-phantom and a disk-phantom. The preliminary result showed that the performance of BPF algorithm improved with the z-axis homogeneity of the scanned object.
Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces
2011-02-28
Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30... focused on developing high-level general-purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical
Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.
2015-01-01
We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
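For context, the baseline that IMLP generalizes can be sketched as a minimal isotropic ICP: alternate closest-point matching with a rigid least-squares (Kabsch/Procrustes) fit. This sketch has no noise model and uses brute-force matching; the data are synthetic:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping rows of A onto
    the corresponding rows of B (Kabsch/Procrustes via SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.eye(A.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Minimal isotropic ICP: alternate closest-point matching with a
    rigid least-squares fit (no measurement-noise model, unlike IMLP)."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (a k-d/PD-tree would replace this).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
dst = rng.standard_normal((40, 3))
theta = 0.3                                          # small known rotation
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.2, -0.1, 0.05])       # misaligned copy
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())
```

IMLP replaces both steps with most-likely (noise-model-weighted) counterparts; the sketch shows the iteration structure those changes plug into.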
NASA Astrophysics Data System (ADS)
Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing
2016-10-01
Endmember extraction is a key step in hyperspectral unmixing. A new framework is proposed for hyperspectral endmember extraction. The proposed approach is based on swarm intelligence (SI) algorithms, and a discretized formulation is used because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that generally belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.
Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.
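The affine scaling idea mentioned above can be sketched on a toy equality-constrained LP. This is a simplified textbook version of Dikin's method (fixed step fraction, dense linear algebra), not the abstract's derivation; the problem data are hypothetical:

```python
import numpy as np

def affine_scaling(c, A, b, x0, beta=0.5, iters=100):
    """Dikin-style affine scaling for  min c@x  s.t.  A@x = b, x > 0,
    started from a strictly interior feasible x0 (textbook sketch)."""
    x = np.asarray(x0, dtype=float).copy()
    assert np.allclose(A @ x, b), "x0 must be feasible"
    for _ in range(iters):
        X2 = np.diag(x ** 2)
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
        r = c - A.T @ w                                 # reduced costs
        d = -X2 @ r                                     # primal direction
        scaled = np.linalg.norm(d / x)                  # norm in scaled space
        if scaled < 1e-10:                              # (near-)optimal
            break
        x = x + (beta / scaled) * d                     # beta < 1 keeps x > 0
    return x

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at (1, 0))
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = affine_scaling(c, A, b, x0=np.array([0.5, 0.5]))
print(x.round(6))
```

Because the direction d satisfies A d = 0, every iterate stays on the constraint plane, and the diagonal scaling by x keeps the step inside the positive orthant; the iterates drift toward the optimal vertex.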
Two generalizations of Kohonen clustering
NASA Technical Reports Server (NTRS)
Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.
1993-01-01
The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution; these are taken care of automatically. Segmentation of a gray-tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
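The "update only the winning prototype" rule that the abstract criticizes can be sketched directly. This is a generic SHCM/LVQ-style update on synthetic data (fixed learning rate, prototypes initialized inside the data's convex hull), not the paper's GLVQ/FLVQ rules:

```python
import numpy as np

def shcm_step(x, prototypes, lr=0.1):
    """One sequential hard c-means / LVQ-style update: only the winning
    (closest) prototype moves toward the input vector x."""
    k = int(np.argmin(((prototypes - x) ** 2).sum(axis=1)))
    prototypes[k] += lr * (x - prototypes[k])
    return k

rng = np.random.default_rng(0)
# Two well-separated clusters around (0, 0) and (5, 5).
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
protos = np.array([[1.0, 1.0], [4.0, 4.0]])   # inside the data's convex hull
for _ in range(5):                            # a few passes over the data
    for x in rng.permutation(data):
        shcm_step(x, protos)
print(protos.round(2))
```

With well-placed initial prototypes this converges near the cluster means; with prototypes outside the convex hull, a prototype can win nothing and never move, which is exactly the initialization pathology GLVQ/FLVQ avoid by updating every node.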
NASA Technical Reports Server (NTRS)
Hedgley, D. R.
1978-01-01
An efficient algorithm for selecting the degree of a polynomial that defines a curve that best approximates a data set was presented. This algorithm was applied to both oscillatory and nonoscillatory data without loss of generality.
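The report does not spell out Hedgley's selection rule, so the idea can only be sketched with a generic stand-in criterion: fit each candidate degree on half the data and pick the smallest degree whose held-out error is close to the best:

```python
import numpy as np

def select_degree(x, y, max_degree=10):
    """Choose the smallest polynomial degree whose held-out error is
    within a factor of 2 of the best (a generic stand-in criterion,
    not the algorithm from the report)."""
    train, test = slice(0, len(x), 2), slice(1, len(x), 2)
    errs = []
    for d in range(max_degree + 1):
        coef = np.polyfit(x[train], y[train], d)
        errs.append(((np.polyval(coef, x[test]) - y[test]) ** 2).mean())
    errs = np.asarray(errs)
    return int(np.argmax(errs <= 2.0 * errs.min()))  # first adequate degree

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 60)
y = 2 * x ** 3 - x + 0.01 * rng.standard_normal(60)   # cubic plus mild noise
print(select_degree(x, y))  # 3
```

On this noisy cubic, degrees below 3 leave large bias while higher degrees only chase noise, so the rule settles on degree 3.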
ERIC Educational Resources Information Center
Hofmann, Richard J.
1978-01-01
A general factor analysis computer algorithm is briefly discussed. The algorithm is highly transportable with minimum limitations on the number of observations. Both singular and non-singular data can be analyzed. (Author/JKS)
Improved multivariate polynomial factoring algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P.S.
1978-10-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading-coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included.
General entanglement-assisted transformation for bipartite pure quantum states
NASA Astrophysics Data System (ADS)
Song, Wei; Huang, Yan; Liu, Nai-Le; Chen, Zeng-Bing
2007-01-01
We introduce the general catalysts for pure entanglement transformations under local operations and classical communications in such a way that we disregard the profit and loss of entanglement of the catalysts per se. As such, the possibilities of pure entanglement transformations are greatly expanded. We also design an efficient algorithm to detect whether a k × k general catalyst exists for a given entanglement transformation. This algorithm can also be exploited to witness the existence of standard catalysts.
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
Generalization Analysis of Fredholm Kernel Regularized Classifiers.
Gong, Tieliang; Xu, Zongben; Chen, Hong
2017-07-01
Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.
Chong, Kok-Keong; Wong, Chee-Woon; Siaw, Fei-Lu; Yew, Tiong-Keat; Ng, See-Seng; Liang, Meng-Suan; Lim, Yun-Seng; Lau, Sing-Liong
2009-01-01
A novel on-axis general sun-tracking formula has been integrated in the algorithm of an open-loop sun-tracking system in order to track the sun accurately and cost effectively. Sun-tracking errors due to installation defects of the 25 m2 prototype solar concentrator have been analyzed from recorded solar images with the use of a CCD camera. With the recorded data, misaligned angles from ideal azimuth-elevation axes have been determined and corrected by a straightforward changing of the parameters' values in the general formula of the tracking algorithm to improve the tracking accuracy to 2.99 mrad, which falls below the encoder resolution limit of 4.13 mrad. PMID:22408483
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
Global Optimality of the Successive Maxbet Algorithm.
ERIC Educational Resources Information Center
Hanafi, Mohamed; ten Berge, Jos M. F.
2003-01-01
It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second-order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations provide that the discrete equation system can be solved in an efficient way. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
NASA Technical Reports Server (NTRS)
Janich, Karl W.
2005-01-01
The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at a minimum cost. The L-GMST is NP-hard; therefore, metaheuristic algorithms have been used to find reasonable solutions to the problem, as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution through the weighing of local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move around to different clusters and cause small changes in the network makeup. Rerunning the algorithm from the start would be somewhat inefficient given how small these changes are, so a genetic algorithm based on the top few solutions found in the ACS algorithm is proposed to quickly and efficiently adapt the network to these small changes.
A general heuristic for genome rearrangement problems.
Dias, Ulisses; Galvão, Gustavo Rodrigues; Lintzmayer, Carla Négri; Dias, Zanoni
2014-06-01
In this paper, we present a general heuristic for several problems in the genome rearrangement field. Our heuristic does not solve any problem directly; rather, it is used to improve the solutions provided by any non-optimal algorithm that solves them. Therefore, we have implemented several algorithms described in the literature and several algorithms developed by ourselves. As a whole, we implemented 23 algorithms for 9 well-known problems in the genome rearrangement field. A total of 13 algorithms were implemented for problems that use the notions of prefix and suffix operations. In addition, we worked on 5 algorithms for the classic problem of sorting by transpositions, and we conclude the experiments by presenting results for 3 approximation algorithms for the sorting by reversals and transpositions problem and 2 approximation algorithms for the sorting by reversals problem. Another algorithm with a better approximation ratio can be found for the last genome rearrangement problem, but it is purely theoretical, with no practical implementation. The algorithms we implemented, in addition to our heuristic, lead to the best practical results in each case. In particular, we were able to improve results on the sorting by transpositions problem, which is a very special case because many efforts have been made to generate algorithms with good results in practice, and some of these algorithms provide results that equal the optimum solutions in many cases. Our source codes and benchmarks are freely available upon request from the authors so that it will be easier to compare new approaches against our results.
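A concrete instance of the prefix-operation problems the abstract mentions is sorting by prefix reversals. The classic textbook routine (pancake sorting, not one of the paper's 23 algorithms) repeatedly flips the largest unsorted element to the front and then into place, using at most 2(n-1) flips:

```python
def prefix_reversal_sort(perm):
    """Sort a sequence of distinct values using only prefix reversals
    (pancake flips); returns the sorted list and the flip lengths used."""
    perm = list(perm)
    flips = []
    for size in range(len(perm), 1, -1):
        i = perm.index(max(perm[:size]))          # largest unsorted item
        if i == size - 1:
            continue                              # already in place
        if i != 0:
            perm[:i + 1] = reversed(perm[:i + 1])  # flip it to the front
            flips.append(i + 1)
        perm[:size] = reversed(perm[:size])        # flip it into position
        flips.append(size)
    return perm, flips

sorted_perm, flips = prefix_reversal_sort([3, 1, 4, 2])
print(sorted_perm, len(flips))  # [1, 2, 3, 4] 4
```

Heuristics of the kind the paper describes would take a flip sequence like this one and search for shorter sequences achieving the same sorted result.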
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Measuring the globalization of cities from the new regionalism perspective.
Ergüzel, Oylum Şehvez; Tunahan, Hakan; Esen, Sinan
2016-01-01
The study aims to analyze the export performance of countries and of cities within them to identify synchronized or unsynchronized movement between them. In the empirical part of the study, the measurements used in the literature to analyze the export performance of countries are applied to establish the export performance of a single city: Sakarya, Turkey. These measurements include the Herfindahl-Hirschman product and market concentration indices, the Lawrence index, the trade complementarity index, and the Grubel-Lloyd intra-industry index, as well as additional indicators with local or regional contexts. The limited number of studies in the literature analyzing the export competitiveness of a single city in comparable formats reveals the significance of this study.
2008-01-13
KENNEDY SPACE CENTER, FLA. -- Lloyd Johns, with Lockheed Martin, attaches the replacement feed-through connector in the engine cutoff, or ECO, sensor system to the internal connector on space shuttle Atlantis' external tank. The feed-through connector passes the wires from the inside of the tank to the outside. Results of a tanking test on Dec. 18 pointed to an open circuit in the feed-through connector wiring, which is located at the base of the tank. The pins in the replacement connector have been precisely soldered to create a connection that allows sensors inside the tank to send signals to the computers onboard Atlantis. The work is being done on Launch Pad 39A. Space shuttle Atlantis is now targeted for launch on Feb. 7. Photo credit: NASA/George Shelton
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only considering the variety of the length of the rotational semi-axis.
Image processing via VLSI: A concept paper
NASA Technical Reports Server (NTRS)
Nathan, R.
1982-01-01
Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing-rate bottleneck would be significantly relieved. A development path is outlined that identifies the image processing functions limiting present systems relative to future throughput needs, translates these functions to algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.
Generalized image contrast enhancement technique based on Heinemann contrast discrimination model
NASA Astrophysics Data System (ADS)
Liu, Hong; Nodine, Calvin F.
1994-03-01
This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which improves on the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed-luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.
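The special case named in the abstract, classical histogram equalization, is simple enough to sketch (standard textbook form, not the paper's generalized Heinemann-based mapping; the test image is synthetic):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Classical histogram equalization: map each gray level through the
    normalized cumulative histogram (the special case to which the
    generalized mapping reduces)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    return ((levels - 1) * cdf[img]).astype(np.uint8)

rng = np.random.default_rng(0)
# Low-contrast image: gray values squeezed into [100, 140].
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = histogram_equalize(img)
print(int(img.max()) - int(img.min()), int(out.max()) - int(out.min()))
```

The equalized image spreads the 41 occupied gray levels across nearly the full 0 to 255 range; the generalized technique replaces this cumulative-histogram mapping with the Heinemann-model-derived pair of nonlinear mappings.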
Energy Efficient Data Transmission for Sensors with Wireless Charging
Fang, Xiaolin; Luo, Junzhou; Wu, Weiwei; Gao, Hong
2018-01-01
This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodical wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case that wireless charging can proceed for only a limited period of time, and the other is the case that wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that achieves a performance bound of (1 − 1/(2m))(OPT − E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m − 1))OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms. PMID:29419770
Energy Efficient Data Transmission for Sensors with Wireless Charging.
Fang, Xiaolin; Luo, Junzhou; Wu, Weiwei; Gao, Hong
2018-02-08
This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodical wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case that wireless charging can proceed for only a limited period of time, and the other is the case that wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 - 1/(2m))(OPT - E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m - 1))OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms.
NASA Astrophysics Data System (ADS)
Kazemzadeh Azad, Saeid
2018-01-01
In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.
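The seeding idea can be sketched on a hypothetical discrete sizing problem. The feasibility rule below is a stand-in for the stress and displacement checks of a real truss model, and the greedy repair is one simple way to produce feasible seeds:

```python
import random

SECTIONS = [1.0, 2.0, 3.0, 4.0, 5.0]  # hypothetical discrete section areas
N_MEMBERS = 10
DEMAND = 35.0                         # stand-in feasibility rule: total area >= DEMAND

def is_feasible(design):
    return sum(design) >= DEMAND

def random_design():
    return [random.choice(SECTIONS) for _ in range(N_MEMBERS)]

def repair(design):
    """Greedily upsize members until the design becomes feasible."""
    design = design[:]
    i = 0
    while not is_feasible(design):
        idx = SECTIONS.index(design[i])
        if idx < len(SECTIONS) - 1:
            design[i] = SECTIONS[idx + 1]
        i = (i + 1) % N_MEMBERS
    return design

def initial_population(size, seeded_fraction=0.5):
    """Seed part of the initial population with repaired (feasible) designs."""
    pop = []
    for k in range(size):
        d = random_design()
        if k < size * seeded_fraction:
            d = repair(d)
        pop.append(d)
    return pop

random.seed(1)
population = initial_population(20)
n_feasible = sum(is_feasible(d) for d in population)
```

The metaheuristic then evolves this population as usual; the study's observation is that the seeded half gives the search a head start in the early iterations.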
A Shifted Block Lanczos Algorithm 1: The Block Recurrence
NASA Technical Reports Server (NTRS)
Grimes, Roger G.; Lewis, John G.; Simon, Horst D.
1990-01-01
In this paper we describe a block Lanczos algorithm that is used as the key building block of a software package for the extraction of eigenvalues and eigenvectors of large sparse symmetric generalized eigenproblems. The software package comprises: a version of the block Lanczos algorithm specialized for spectrally transformed eigenproblems; an adaptive strategy for choosing shifts; and efficient codes for factoring large sparse symmetric indefinite matrices. This paper describes the algorithmic details of our block Lanczos recurrence, which uses a novel combination of block generalizations of several features that have previously been investigated only independently. In particular, new forms of partial reorthogonalization, selective reorthogonalization and local reorthogonalization are used, as is a new algorithm for obtaining the M-orthogonal factorization of a matrix. The heuristic shifting strategy, the integration with sparse linear equation solvers and numerical experience with the code are described in a companion paper.
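The single-vector form of the recurrence is easy to sketch. The version below uses full reorthogonalization for simplicity; the paper's contribution is the block variant with cheaper partial, selective and local reorthogonalization and the M-orthogonal factorization, none of which are reproduced here:

```python
import numpy as np

def lanczos(A, v0, m):
    """Plain (single-vector) Lanczos recurrence for a symmetric matrix A.

    Returns the tridiagonal coefficients (alpha, beta); the block variant
    follows the same pattern with blocks of vectors in place of vectors.
    """
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[:, j] = v
        w = A @ v - beta[j - 1] * V[:, j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
        # full reorthogonalization (the paper uses cheaper selective forms)
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
    return alpha, beta

# Symmetric test matrix with known eigenvalues 1..100
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1, 100, 50)) @ Q.T
alpha, beta = lanczos(A, rng.standard_normal(50), 20)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)       # Ritz values approximate extreme eigenvalues
```

After 20 steps the largest Ritz value already sits close to the true extreme eigenvalue, which is the behavior the shifted, spectrally transformed variant exploits to extract interior eigenvalues.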
NASA Technical Reports Server (NTRS)
Jain, Abhinandan
2011-01-01
Ndarts software provides algorithms for computing quantities associated with the dynamics of articulated, rigid-link, multibody systems. It is designed as a general-purpose dynamics library that can be used for the modeling of robotic platforms, space vehicles, molecular dynamics, and other such applications. The architecture and algorithms in Ndarts are based on the Spatial Operator Algebra (SOA) theory for computational multibody and robot dynamics developed at JPL. It uses minimal, internal coordinate models. The algorithms are low-order, recursive scatter/gather algorithms. In comparison with the earlier Darts++ software, this version has a more general and cleaner design needed to support a larger class of computational dynamics needs. It includes a frames infrastructure, allows algorithms to operate on subgraphs of the system, and implements lazy and deferred computation for better efficiency. Dynamics modeling modules such as Ndarts are core building blocks of control and simulation software for space, robotic, mechanism, bio-molecular, and material systems modeling.
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2017-10-01
The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structural objects of a virtual data center, including: a level distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
NASA Astrophysics Data System (ADS)
Walker, Joel W.
2014-08-01
The MT2, or "stransverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and complete pseudocode.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for the optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
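A fixed-point resizing rule of this kind can be sketched on a toy monotone problem: minimize a weighted sum of variables subject to a flexibility-type constraint that decreases in each variable. All coefficients below are hypothetical; real structural problems substitute stress or compliance expressions:

```python
import math

# Toy monotone problem: minimize sum(w[i]*x[i]) subject to sum(c[i]/x[i]) <= d
w = [1.0, 2.0, 0.5, 1.5]   # hypothetical weight coefficients
c = [4.0, 1.0, 2.0, 3.0]   # hypothetical constraint coefficients
d = 5.0                    # resource limit

# Stationarity of the Lagrangian gives the target x_i = sqrt(lam*c_i/w_i);
# here the multiplier follows in closed form from making the constraint active.
lam = (sum(math.sqrt(ci * wi) for ci, wi in zip(c, w)) / d) ** 2

x = [1.0] * len(w)
for _ in range(60):
    # damped fixed-point resizing: move each variable halfway (in log space)
    # toward its optimality-criteria target
    x = [math.sqrt(xi * math.sqrt(lam * ci / wi)) for xi, ci, wi in zip(x, c, w)]

constraint = sum(ci / xi for ci, xi in zip(c, x))          # should equal d
ratios = [wi * xi * xi / ci for wi, ci, xi in zip(w, c, x)]  # should be constant
```

At convergence the constraint is active and the optimality criterion w_i x_i^2 / c_i is uniform across variables, and the per-iteration cost does not grow with the number of function evaluations, which is the property the abstract highlights.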
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm
NASA Astrophysics Data System (ADS)
Li, Xiao; Scaglione, Anna
2013-11-01
The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
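The centralized iteration the paper starts from can be sketched as follows. The gossip variant instead accumulates the J^T J and J^T r terms by local averaging among agents; the exponential model fitted here is an illustrative assumption, not the paper's application:

```python
import numpy as np

def gauss_newton(theta, t, y, iters=20):
    """Centralized Gauss-Newton for the nonlinear least-squares fit
    of the model a*exp(b*t) to data y, with residual r = model - y."""
    a, b = theta
    for _ in range(iters):
        model = a * np.exp(b * t)
        r = model - y
        # Jacobian of the residual with respect to (a, b)
        J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])
        step = np.linalg.solve(J.T @ J, J.T @ r)
        a, b = a - step[0], b - step[1]
    return a, b

t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * t)              # noiseless data from a=2, b=-1.5
a, b = gauss_newton((1.0, 0.0), t, y)
```

On this zero-residual problem the iteration converges rapidly to the generating parameters; the distributed version aims to match this behavior while exchanging only local sums.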
Automatic computation and solution of generalized harmonic balance equations
NASA Astrophysics Data System (ADS)
Peyton Jones, J. C.; Yaser, K. S. A.; Stevenson, J.
2018-02-01
Generalized methods are presented for generating and solving the harmonic balance equations for a broad class of nonlinear differential or difference equations and for a general set of harmonics chosen by the user. In particular, a new algorithm for automatically generating the Jacobian of the balance equations enables efficient solution of these equations using continuation methods. Efficient numeric validation techniques are also presented, and the combined algorithm is applied to the analysis of dc, fundamental, second and third harmonic response of a nonlinear automotive damper.
Reaction rates for a generalized reaction-diffusion master equation
Hellander, Stefan; Petzold, Linda
2016-01-01
It has been established that there is an inherent limit to the accuracy of the reaction-diffusion master equation. Specifically, there exists a fundamental lower bound on the mesh size, below which the accuracy deteriorates as the mesh is refined further. In this paper we extend the standard reaction-diffusion master equation to allow molecules occupying neighboring voxels to react, in contrast to the traditional approach in which molecules react only when occupying the same voxel. We derive reaction rates, in two dimensions as well as three dimensions, to obtain an optimal match to the more fine-grained Smoluchowski model, and show in two numerical examples that the extended algorithm is accurate for a wide range of mesh sizes, allowing us to simulate systems that are intractable with the standard reaction-diffusion master equation. In addition, we show that for mesh sizes above the fundamental lower limit of the standard algorithm, the generalized algorithm reduces to the standard algorithm. We derive a lower limit for the generalized algorithm which, in both two dimensions and three dimensions, is on the order of the reaction radius of a reacting pair of molecules. PMID:26871190
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves
Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018
A general algorithm for the construction of contour plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1981-01-01
An algorithm is described that performs the task of drawing equal level contours on a plane, which requires interpolation in two dimensions based on data prescribed at points distributed irregularly over the plane. The approach is described in detail. The computer program that implements the algorithm is documented and listed.
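The interpolation step at the heart of such an algorithm can be sketched on a regular grid. This finds only horizontal-edge crossings; a full contour plotter also scans vertical edges and links the crossing points into curves, and handles irregularly distributed data by prior triangulation:

```python
import numpy as np

def edge_crossings(Z, level):
    """Locate where a contour at `level` crosses horizontal grid edges
    by linear interpolation between adjacent samples."""
    points = []
    rows, cols = Z.shape
    for i in range(rows):
        for j in range(cols - 1):
            a, b = Z[i, j], Z[i, j + 1]
            if (a - level) * (b - level) < 0:      # sign change => crossing
                frac = (level - a) / (b - a)       # linear interpolation
                points.append((i, j + frac))       # fractional column index
    return points

# Circular contours of f(x, y) = x^2 + y^2 on a 21x21 grid over [-1, 1]^2
x = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, x)
Z = X**2 + Y**2
pts = edge_crossings(Z, 0.5)
```

Each returned point lies, to within the interpolation error of the grid, on the circle of squared radius 0.5.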
Demonstration of a 3D vision algorithm for space applications
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P. (Editor)
1987-01-01
This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.
Population-based metaheuristic optimization in neutron optics and shielding design
NASA Astrophysics Data System (ADS)
DiJulio, D. D.; Björgvinsdóttir, H.; Zendler, C.; Bentley, P. M.
2016-11-01
Population-based metaheuristic algorithms are powerful tools in the design of neutron scattering instruments and the use of these types of algorithms for this purpose is becoming more and more commonplace. Today there exists a wide range of algorithms to choose from when designing an instrument and it is not always initially clear which may provide the best performance. Furthermore, due to the nature of these types of algorithms, the final solution found for a specific design scenario cannot always be guaranteed to be the global optimum. Therefore, to explore the potential benefits and differences between the varieties of these algorithms available, when applied to such design scenarios, we have carried out a detailed study of some commonly used algorithms. For this purpose, we have developed a new general optimization software package which combines a number of common metaheuristic algorithms within a single user interface and is designed specifically with neutronic calculations in mind. The algorithms included in the software are implementations of Particle-Swarm Optimization (PSO), Differential Evolution (DE), Artificial Bee Colony (ABC), and a Genetic Algorithm (GA). The software has been used to optimize the design of several problems in neutron optics and shielding, coupled with Monte-Carlo simulations, in order to evaluate the performance of the various algorithms. Generally, the performance of the algorithms depended on the specific scenarios; however, it was found that DE provided the best average solutions in all scenarios investigated in this work.
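A minimal particle-swarm optimizer of the kind included in such packages can be sketched as follows. The inertia and acceleration coefficients are common textbook defaults, not the package's settings, and the sphere function stands in for an expensive Monte-Carlo figure of merit:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal particle-swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, val = pso(lambda p: float(np.sum(p**2)), dim=3)
```

As the abstract notes for metaheuristics in general, a run like this gives no guarantee of the global optimum; on this smooth test function, however, the swarm collapses onto the minimum at the origin.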
Generalized algebraic scene-based nonuniformity correction algorithm.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2005-02-01
A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA) in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
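The unifying idea can be sketched numerically: the direct graph embedding of an intrinsic graph W minimizes the sum of W_ij (y_i - y_j)^2 under a scale constraint y^T D y = 1, which reduces to the generalized eigenproblem L y = lambda D y. The six-node graph below is an assumed toy example, not the face data used in the paper:

```python
import numpy as np

# Two triangle clusters joined by a weak bridge; W is the intrinsic graph.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1          # weak link between the clusters

D = np.diag(W.sum(axis=1))       # degree matrix (scale-normalization constraint)
L = D - W                        # graph Laplacian

# Solve L y = lambda * D y via the symmetric normalized form
# D^{-1/2} L D^{-1/2} z = lambda z, with y = D^{-1/2} z.
Dm = np.diag(1.0 / np.sqrt(np.diag(D)))
vals, vecs = np.linalg.eigh(Dm @ L @ Dm)
y = Dm @ vecs[:, 1]              # skip the trivial constant eigenvector
```

The one-dimensional embedding y places the two clusters on opposite sides of zero, which is the "desired statistical or geometric property" the intrinsic graph encodes in this toy case.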
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms’ similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685
NASA Technical Reports Server (NTRS)
Rash, James
2014-01-01
NASA's space data-communications infrastructure-the Space Network and the Ground Network-provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own servicedelivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure-the relay satellites and the ground stations-can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, is the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. 
The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
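The standard nodal form that the generalized approach extends can be sketched by assembling an admittance matrix from a branch list. The three-node network below is hypothetical, and unlike the paper's generalized matrix, this sketch still makes the common voltage-reference assumption (node 0 grounded):

```python
import numpy as np

# Branches of a small single-phase network: (from_node, to_node, admittance in S)
branches = [(0, 1, 10.0), (1, 2, 5.0), (0, 2, 2.0)]
n_nodes = 3

# Assemble the nodal admittance matrix by superposing branch stamps
Y = np.zeros((n_nodes, n_nodes))
for a, b, y in branches:
    Y[a, a] += y
    Y[b, b] += y
    Y[a, b] -= y
    Y[b, a] -= y

# Solve I = Y V with node 0 as the voltage reference and 1 A injected at
# node 2 (returning through the reference)
Y_red = Y[1:, 1:]                     # ground node 0 by deleting its row/column
V = np.zeros(n_nodes)
V[1:] = np.linalg.solve(Y_red, np.array([0.0, 1.0]))
I_inj = Y @ V                         # recovered nodal current injections
```

The recovered injections balance exactly (1 A in at node 2, 1 A out at the reference, zero elsewhere), and the matrix is symmetric with zero row sums, which are the structural properties the linear-graph-based formulation preserves.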
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
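Minimal polynomial extrapolation can be sketched for a linear fixed-point iteration, where it recovers the limit exactly once enough iterates are available, mirroring the 'essential degree' result stated above. The small random iteration below is an illustrative assumption:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation from iterates X[0..k+1].

    A least-squares fit of the first differences gives the coefficients
    of the minimal polynomial; normalizing them yields weights gamma whose
    combination of iterates approximates the limit of the sequence.
    """
    X = np.asarray(X)
    U = np.diff(X, axis=0).T                    # columns u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()
    return gamma @ X[:-1]

# Linear fixed-point iteration x_{k+1} = M x_k + b with limit (I - M)^{-1} b
rng = np.random.default_rng(0)
M = rng.uniform(-0.2, 0.2, (4, 4))              # small entries: a contraction
bvec = rng.standard_normal(4)
s_true = np.linalg.solve(np.eye(4) - M, bvec)

x = np.zeros(4)
X = [x]
for _ in range(5):                              # 6 iterates suffice in R^4
    x = M @ x + bvec
    X.append(x)
s_mpe = mpe(X)
```

With k equal to the essential degree (here at most the dimension, 4), the extrapolated vector matches the true limit to machine precision, far beyond the accuracy of the raw sixth iterate.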
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
NASA Astrophysics Data System (ADS)
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solving steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network separately with the AGA and LMS algorithms. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, a high running rate and strong generalization ability.
Distributed Matrix Completion: Applications to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown...computing the leading eigenvectors of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying...generalization of gossip algorithms for consensus. The algorithms outperform state-of-the-art methods in a communication-limited scenario. Positioning via
An adaptive replacement algorithm for paged-memory computer systems.
NASA Technical Reports Server (NTRS)
Thorington, J. M., Jr.; Irwin, J. D.
1972-01-01
A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state of the art digital hardware is also presented.
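The abstract does not spell out the SIM scheme itself, but the nonlookahead baselines it is compared against are easy to sketch; here is a page-fault simulator for the LRU replacement policy on a synthetic reference trace:

```python
from collections import OrderedDict

def simulate_lru(trace, frames):
    """Count page faults under least-recently-used replacement.

    An OrderedDict keeps pages in recency order: hits move a page to the
    most-recent end, and on a fault with full memory the least recently
    used page (front of the dict) is evicted.
    """
    mem = OrderedDict()
    faults = 0
    for page in trace:
        if page in mem:
            mem.move_to_end(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)    # evict least recently used
            mem[page] = True
    return faults

trace = [1, 2, 3, 1, 2, 4, 1, 2, 5, 3]     # synthetic memory reference string
faults = simulate_lru(trace, frames=3)
```

An adaptive scheme of the kind described would monitor such traces online and adjust its replacement decisions, rather than fixing a single policy in advance.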
Parallel Algorithms for Least Squares and Related Computations.
1991-03-22
for dense computations in linear algebra. The work has recently been published in a general reference book on parallel algorithms by SIAM. AFOSR...written his Ph.D. dissertation with the principal investigator. (See publication 6.) • Parallel Algorithms for Dense Linear Algebra Computations. Our...and describe and to put into perspective a selection of the more important parallel algorithms for numerical linear algebra. We give a major new
Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan
2013-09-01
Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator which in turn results in fewer calculation gridding nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in calculation of the diffraction term in the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with the established time-domain methods.
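The ingredients named above can be sketched in one dimension: a Crank-Nicolson step reduces to a tridiagonal system, solved here with the classic Thomas algorithm rather than the paper's LU-decomposition-based scheme. The heat equation, grid and coefficients are illustrative stand-ins, not the KZK diffraction term itself:

```python
import numpy as np

def thomas(a, b, c, d):
    """O(n) solver for a tridiagonal system with sub-diagonal a,
    diagonal b, super-diagonal c and right-hand side d."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Crank-Nicolson for u_t = u_xx with zero Dirichlet boundaries:
# (I - r/2 * D2) u_new = (I + r/2 * D2) u_old,  r = dt/dx^2
n, r = 50, 0.4
u = np.sin(np.pi * np.linspace(0, 1, n + 2))[1:-1]   # interior initial values
a = np.full(n, -r / 2); a[0] = 0.0
b = np.full(n, 1.0 + r)
c = np.full(n, -r / 2); c[-1] = 0.0
for _ in range(20):
    rhs = (1.0 - r) * u                # diagonal part of (I + r/2 * D2) u
    rhs[1:] += (r / 2) * u[:-1]        # left-neighbor contribution
    rhs[:-1] += (r / 2) * u[1:]        # right-neighbor contribution
    u = thomas(a, b, c, rhs)
```

The sine mode decays smoothly and symmetrically, and each implicit step costs only a linear-time sweep, which is the complexity property the proposed LU-based algorithm preserves in three dimensions.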
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence speed, a tendency to fall into local minima, poor generalization ability and difficulty in determining the network structure, the dynamic self-adaptive learning algorithm of the BP neural network is put forward to improve the performance of the BP neural network. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and self-adaptive modeling, and hence can effectively solve the problems of selecting structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. This new algorithm not only reduces the human intervention, optimizes the topological structure of BP neural networks and improves the network generalization ability, but also accelerates the convergence speed of a network, avoids trapping into local minima, and enhances network adaptation ability and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in predicting accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
Computing Quantitative Characteristics of Finite-State Real-Time Systems
1994-05-04
Current methods for verifying real-time systems are essentially decision procedures that establish whether the system model satisfies a given...specification. We present a general method for computing quantitative information about finite-state real-time systems. We have developed algorithms that...our technique can be extended to a more general representation of real-time systems, namely, timed transition graphs. The algorithms presented in this
Autonomous evolution of topographic regularities in artificial neural networks.
Gauci, Jason; Stanley, Kenneth O
2010-07-01
Looking to nature as inspiration, for at least the past 25 years, researchers in the field of neuroevolution (NE) have developed evolutionary algorithms designed specifically to evolve artificial neural networks (ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive characteristics of biological brains, perhaps explaining why NE is not yet a mainstream subject of neural computation. Motivated by this gap, this letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains. That is, if the neurons in evolved ANNs are situated at locations in space (i.e., if they are given coordinates), then, as experiments in evolving checkers-playing ANNs in this letter show, topographic maps with symmetries and regularities can evolve spontaneously. The ability to evolve such maps is shown in this letter to provide an important advantage in generalization. In fact, the evolved maps are sufficiently informative that their analysis yields the novel insight that the geometry of the connectivity patterns of more general players is significantly smoother and more contiguous than less general ones. Thus, the results reveal a correlation between generality and smoothness in connectivity patterns. They also hint at the intriguing possibility that as NE matures as a field, its algorithms can evolve ANNs of increasing relevance to those who study neural computation in general.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
NASA Astrophysics Data System (ADS)
Pohl, M.; Hagemann, U.; Liebe, M.; Sommer, M.; Augustin, J.
2012-04-01
The hilly young moraine landscape of north-eastern Germany is dominated by the cultivation of energy crops like maize. It is suspected that this cultivation can increase erosion effects and lead to the release of soil carbon (C). Therefore, in an interdisciplinary approach, the CarboZALF project investigates the impact of various factors such as erosion on greenhouse gas (GHG) fluxes and C dynamics on the site and the landscape level. From the CarboZALF-D project site located in the Uckermark, we present measured and modeled GHG fluxes (CO2 and CH4) and C dynamics of maize on four erosion-related soil types: a) haplic luvisol, b) eroded haplic luvisol, c) haplic regosol (calcaric) and d) endogleyic colluvic regosol. CO2 flux measurements of ecosystem respiration (Reco) and net ecosystem exchange (NEE) were conducted every four weeks using a non-flow-through non-steady-state closed chamber system (Livingston and Hutchinson 1995) based on Drösler (2005). Measurement gaps of NEE were filled by modeling the Reco fluxes using the Lloyd-Taylor (Lloyd and Taylor 1994) method and the gross primary production (GPP) fluxes using the Michaelis-Menten (Michaelis and Menten 1913) modeling approach. Annual NEE balances were then calculated based on the modeled Reco and GPP fluxes. CH4 fluxes were measured bi-weekly using a static chamber system with interval sampling. The system C budget is the sum of the annual NEE, C export and CH4-C values. The endogleyic colluvic regosol featured the highest uptake of CH4 (< 1 kg C ha-1 yr-1), but the impact of erosion on the cumulative CH4 fluxes was very small. However, erosion and deposition had a significant impact on GPP, NEE and the C export, but with only small differences between the resulting annual C balances. All investigated soil types were C sinks, storing 620-2600 kg C ha-1 yr-1.
We conclude that i) maize cultivation need not be accompanied by soil organic carbon loss; ii) erosion seems to cause spatial variability of GHG fluxes and soil organic carbon budgets, at least at the site level. Due to the temporal variability of GHG fluxes, generalized conclusions are only possible after long-term investigations. This also applies to the question concerning the degree to which erosion influences C dynamics at the landscape scale. Drösler, M. 2005. Trace gas exchange and climatic relevance of bog ecosystems, Southern Germany. PhD thesis, TU München, München. Livingston, G.P. & Hutchinson, G.L. 1995. Enclosure-based measurement of trace gas exchange: Applications and sources of error. p. 14-51. In P.A. Matson & R.C. Harriss (ed.) Methods in ecology: Biogenic trace gases: Measuring emissions from soil and water. Blackwell Science, Oxford, England.
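The gap-filling procedure described above can be sketched in a few lines: ecosystem respiration follows the Lloyd-Taylor temperature response, GPP follows a Michaelis-Menten (rectangular-hyperbola) light response, and modeled NEE is their difference. The reference constants (T_ref = 283.15 K, T_0 = 227.13 K) are the values commonly used with the Lloyd-Taylor model; the study's fitted parameters are not given in the abstract, so all parameter values below are illustrative.

```python
import math

def reco_lloyd_taylor(temp_k, r_ref, e0, t_ref=283.15, t0=227.13):
    """Ecosystem respiration after Lloyd & Taylor (1994).
    temp_k: temperature in Kelvin; r_ref: respiration at t_ref;
    e0: activation-energy-like parameter (K)."""
    return r_ref * math.exp(e0 * (1.0 / (t_ref - t0) - 1.0 / (temp_k - t0)))

def gpp_michaelis_menten(par, alpha, gp_max):
    """Rectangular-hyperbola light response (Michaelis-Menten form).
    par: photosynthetic photon flux; alpha: initial slope; gp_max: asymptote."""
    return (alpha * par * gp_max) / (alpha * par + gp_max)

def nee(temp_k, par, r_ref, e0, alpha, gp_max):
    """Gap-filled NEE as modeled Reco minus modeled GPP (sink-negative)."""
    return reco_lloyd_taylor(temp_k, r_ref, e0) - gpp_michaelis_menten(par, alpha, gp_max)
```

At night (PAR = 0) the model reduces to Reco alone, which is how nighttime gaps are typically filled.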
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T; Finlay, J; Mesina, C
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the build-up regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in a heterogeneous medium.
Generalized fuzzy C-means clustering algorithm with improved fuzzy partitions.
Zhu, Lin; Chung, Fu-Lai; Wang, Shitong
2009-06-01
The fuzziness index m has important influence on the clustering result of fuzzy clustering algorithms, and it should not be forced to fix at the usual value m = 2. In view of its distinctive features in applications and its limitation in having m = 2 only, a recent advance of fuzzy clustering called fuzzy c-means clustering with improved fuzzy partitions (IFP-FCM) is extended in this paper, and a generalized algorithm called GIFP-FCM for more effective clustering is proposed. By introducing a novel membership constraint function, a new objective function is constructed, and furthermore, GIFP-FCM clustering is derived. Meanwhile, from the viewpoints of L(p) norm distance measure and competitive learning, the robustness and convergence of the proposed algorithm are analyzed. Furthermore, the classical fuzzy c-means algorithm (FCM) and IFP-FCM can be taken as two special cases of the proposed algorithm. Several experimental results including its application to noisy image texture segmentation are presented to demonstrate its average advantage over FCM and IFP-FCM in both clustering and robustness capabilities.
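The alternating-update scheme this family of algorithms builds on can be sketched for the classical FCM case, with the fuzziness index m kept as an explicit free parameter rather than fixed at 2. This is only the baseline that the paper shows to be a special case of GIFP-FCM; the improved membership constraint function of GIFP-FCM itself is not reproduced here.

```python
import random

def fcm(points, c, m=2.0, iters=100, seed=0):
    """Classical fuzzy c-means with explicit fuzziness index m > 1.
    points: list of equal-length tuples; returns (centers, memberships U),
    where U[i][j] is the membership of point j in cluster i."""
    rng = random.Random(seed)
    n, d = len(points), len(points[0])
    # random memberships, normalized so each point's memberships sum to 1
    U = [[rng.random() for _ in range(n)] for _ in range(c)]
    for j in range(n):
        s = sum(U[i][j] for i in range(c))
        for i in range(c):
            U[i][j] /= s
    centers = [[0.0] * d for _ in range(c)]
    for _ in range(iters):
        # centers: membership-weighted means with weights U^m
        for i in range(c):
            w = [U[i][j] ** m for j in range(n)]
            tot = sum(w)
            centers[i] = [sum(w[j] * points[j][k] for j in range(n)) / tot
                          for k in range(d)]
        # memberships: inverse-distance update, exponent 1/(m-1)
        for j in range(n):
            d2 = [sum((points[j][k] - centers[i][k]) ** 2 for k in range(d)) + 1e-12
                  for i in range(c)]
            inv = [dij ** (-1.0 / (m - 1.0)) for dij in d2]
            s = sum(inv)
            for i in range(c):
                U[i][j] = inv[i] / s
    return centers, U
```

As m approaches 1 the memberships harden toward k-means; larger m makes the partition fuzzier, which is why leaving m tunable matters.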
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula of the algorithm in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with its modified velocity formula drives all particles toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear-phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
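The baseline that MPSO modifies is the standard PSO velocity and position update, sketched below for a generic minimization problem. The abstract does not give the form of the additional adjusting factor, so only the conventional inertia/cognitive/social update is shown; the filter-specific objective (phase-error measure) is likewise replaced by a caller-supplied function.

```python
import random

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=1):
    """Standard PSO minimizer (the paper's MPSO adds a further adjusting
    factor to the velocity formula, not reproduced here)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                   # personal bests
    pcost = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            cost = objective(xs[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], cost
                if cost < gcost:
                    gbest, gcost = xs[i][:], cost
    return gbest, gcost
```

For the filter-design problem, a particle would hold the all-pass coefficients and the objective would measure deviation from the desired phase response.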
Asquith, William H.
2014-01-01
The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, which is O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
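For reference, the naive partition-refinement baseline the paper improves on (Moore-style minimization) can be sketched as follows: start from the accepting/non-accepting split and repeatedly split blocks whose members disagree on the blocks of their successors. The paper's backward-depth pre-partitioning and hash-based refinement are not reproduced here; this is only the O(n^2)-style comparison they replace.

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement for a complete DFA.
    delta maps (state, symbol) -> state; returns a dict mapping each
    state to its block id (equivalent states share a block id)."""
    # initial partition: accepting vs non-accepting
    part = {s: (s in accepting) for s in states}
    while True:
        # signature: own block plus the blocks of all successors
        sig = {s: (part[s], tuple(part[delta[(s, a)]] for a in alphabet))
               for s in states}
        blocks = {}
        for s in states:
            blocks.setdefault(sig[s], []).append(s)
        new_part = {}
        for i, members in enumerate(blocks.values()):
            for s in members:
                new_part[s] = i
        if len(set(new_part.values())) == len(set(part.values())):
            return new_part           # no block split: partition is stable
        part = new_part
```

Each iteration only splits blocks, so the loop terminates after at most n refinements.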
A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm
NASA Astrophysics Data System (ADS)
Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan
2008-03-01
The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, similar to other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computation load, but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such rebinning not only removes the weighting factor, but also retains other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise properties. Specifically, we consider an application in which the detector and source can move in several directions to achieve ROI data acquisition. The combined motion of the detector and source generally forms a complex trajectory. We investigate image reconstruction within an ROI from data acquired in this kind of application.
NASA Astrophysics Data System (ADS)
Huang, Ding-jiang; Ivanova, Nataliya M.
2016-02-01
In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm of construction of symmetries of differential equations, describe the group classification algorithm and discuss the process of reduction of (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form u_t + (F(u))_{xxx} + (G(u))_{xyy} + (H(u))_x = 0. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models which have non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.
Reduction from cost-sensitive ordinal ranking to weighted binary classification.
Lin, Hsuan-Tien; Li, Ling
2012-05-01
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
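The three reduction steps can be sketched directly: for K ordered ranks, each example (x, y) yields K-1 extended binary examples, one per threshold k, labeled by whether y exceeds k; a ranker then counts how many thresholds the trained binary classifier answers "above". The sketch below uses unit weights; in the cost-sensitive framework the extended examples would carry weights derived from the cost matrix, which the abstract does not spell out.

```python
def extend_examples(X, y, K):
    """Reduction step 1: each (x, rank) with rank in {1..K} yields K-1
    binary examples ((x, k), +1 if rank > k else -1) for k = 1..K-1.
    x is a feature tuple; the threshold k is appended as an extra feature."""
    ext = []
    for x, rank in zip(X, y):
        for k in range(1, K):
            ext.append((x + (k,), 1 if rank > k else -1))
    return ext

def rank_from_classifier(f, x, K):
    """Reduction step 3: the constructed ranker counts the thresholds the
    binary classifier f scores positive on (f takes the extended input)."""
    return 1 + sum(1 for k in range(1, K) if f(x + (k,)) > 0)
```

Step 2, training the binary classifier on the extended examples, can use any off-the-shelf binary learner, which is the point of the framework.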
NASA Technical Reports Server (NTRS)
Rash, James L.
2010-01-01
NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
NASA Astrophysics Data System (ADS)
Rysavy, Steven; Flores, Arturo; Enciso, Reyes; Okada, Kazunori
2008-03-01
This paper presents an experimental study for assessing the applicability of general-purpose 3D segmentation algorithms for analyzing dental periapical lesions in cone-beam computed tomography (CBCT) scans. In the field of Endodontics, clinical studies have been unable to determine if a periapical granuloma can heal with non-surgical methods. Addressing this issue, Simon et al. recently proposed a diagnostic technique which non-invasively classifies target lesions using CBCT. Manual segmentation exploited in their study, however, is too time consuming and unreliable for real world adoption. On the other hand, many technically advanced algorithms have been proposed to address segmentation problems in various biomedical and non-biomedical contexts, but they have not yet been applied to the field of dentistry. Presented in this paper is a novel application of such segmentation algorithms to the clinically-significant dental problem. This study evaluates three state-of-the-art graph-based algorithms: a normalized cut algorithm based on a generalized eigen-value problem, a graph cut algorithm implementing energy minimization techniques, and a random walks algorithm derived from discrete electrical potential theory. In this paper, we extend the original 2D formulation of the above algorithms to segment 3D images directly and apply the resulting algorithms to the dental CBCT images. We experimentally evaluate quality of the segmentation results for 3D CBCT images, as well as their 2D cross sections. The benefits and pitfalls of each algorithm are highlighted.
A robust data scaling algorithm to improve classification accuracies in biomedical data.
Cao, Xi Hang; Stojkovic, Ivan; Obradovic, Zoran
2016-09-09
Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm, which scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; and it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types, covering a wide range of applications. The resultant performance in terms of area under the receiver operating characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform those using data scaled by the Min-max and Z-score algorithms, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show that models learned from data scaled by the GL algorithm have higher accuracy compared to those using the commonly used data scaling algorithms.
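The core idea, mapping each value through a logistic CDF fitted to the sample, can be sketched with a deliberate simplification: instead of fitting the full generalized logistic to the ECDF, the plain logistic special case is fitted by robust moment matching on the median and interquartile range. The resulting map is bounded and insensitive to outliers, which is the property the paper emphasizes.

```python
import math

def gl_scale(values):
    """Simplified sketch of the GL scaling idea: map values through a
    logistic CDF fitted robustly to the sample (location = median,
    scale from the IQR; a logistic distribution has IQR = 2*ln(3)*scale).
    This is a moment-matched special case, not the paper's full
    ECDF-fitting procedure. Output lies in (0, 1]."""
    xs = sorted(values)
    n = len(xs)
    median = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]
    iqr = max(q3 - q1, 1e-12)          # guard against constant data
    s = iqr / (2.0 * math.log(3.0))
    return [1.0 / (1.0 + math.exp(-(v - median) / s)) for v in values]
```

Because the location and scale come from quantiles, a single extreme outlier shifts the fitted curve negligibly, unlike Min-max scaling, where one outlier compresses all other values.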
Contextual classification of multispectral image data: Approximate algorithm
NASA Technical Reports Server (NTRS)
Tilton, J. C. (Principal Investigator)
1980-01-01
An approximation to a classification algorithm that incorporates spatial context information in a general, statistical manner is presented; it is computationally less intensive than the exact algorithm yet produces classifications that are nearly as accurate.
Cascade generalized predictive control strategy for boiler drum level.
Xu, Min; Li, Shaoyuan; Cai, Wenjian
2005-07-01
This paper proposes a cascade model predictive control scheme for boiler drum level control. By employing generalized predictive control structures for both the inner and outer loops, measured and unmeasured disturbances can be effectively rejected, and the drum level at constant load is maintained. In addition, the nonminimum-phase characteristic and system constraints in both loops can be handled effectively by generalized predictive control algorithms. Simulation results show that cascade generalized predictive control yields better performance than well-tuned cascade proportional-integral-derivative (PID) controllers. The algorithm has also been implemented to control a 75-MW boiler plant, and the results show an improvement over conventional control schemes.
Polarization transformation as an algorithm for automatic generalization and quality assessment
NASA Astrophysics Data System (ADS)
Qian, Haizhong; Meng, Liqiu
2007-06-01
For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT), a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which can then be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so that automatic generalization and quality assurance can both be carried out in this way. Examples illustrate that the PT algorithm meets the requirements of generalizing discrete spatial features and puts the process on a more rigorous footing.
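The transformation itself is elementary and can be sketched directly: each point is converted to a polar angle and radius about a chosen center, and sorting by angle unfolds the cluster into the spectrum line r = f(α). The choice of center is an assumption here (the paper does not fix it in the abstract); the mapping is lossless given the center.

```python
import math

def polarization_transform(points, center=(0.0, 0.0)):
    """Map 2D points to (alpha, r) pairs about 'center' and sort by the
    polar angle alpha (degrees in [0, 360)), unfolding them into the
    spectrum line r = f(alpha). The original points are recoverable
    from (alpha, r) and the center, so the transform is lossless."""
    cx, cy = center
    spectrum = []
    for x, y in points:
        r = math.hypot(x - cx, y - cy)
        alpha = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
        spectrum.append((alpha, r))
    return sorted(spectrum)
```

Comparing the spectrum lines of the source and generalized datasets then reduces quality assessment to comparing two one-dimensional curves.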
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov-Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
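The two properties being verified can be illustrated with a naive walking-bit check on a simulated memory block. This is not the report's concise algorithm (which needs only a few passes through memory); it is an O(n^2)-per-pattern sketch that tests the same two conditions: every bit of every word sets and clears, and writing one word disturbs no other.

```python
def walking_bit_test(memory, word_bits=8):
    """Naive walking-1/walking-0 test on a mutable sequence of word
    values. Against each background (all-0s, then all-1s), a single
    flipped bit is walked through every word; after each write, the
    written word must hold the pattern and every other word must still
    hold the background. Illustrative only, not the report's algorithm."""
    n = len(memory)
    mask = (1 << word_bits) - 1
    for bit in range(word_bits):
        for background in (0, mask):
            pattern = background ^ (1 << bit)   # walk a 1 (or a 0) through
            for i in range(n):
                memory[i] = background
            for i in range(n):
                memory[i] = pattern
                if memory[i] != pattern:
                    return False                # bit failed to set/clear
                if any(memory[j] != background for j in range(n) if j != i):
                    return False                # a neighbouring word was disturbed
                memory[i] = background          # restore before moving on
    return True
```

A real memory test amortizes the "no other word disturbed" check across passes instead of rescanning after every write, which is how the cycle count stays so low.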
NASA Technical Reports Server (NTRS)
Bridgeman, J. O.; Steger, J. L.; Caradonna, F. X.
1982-01-01
An implicit, approximate-factorization, finite-difference algorithm has been developed for the computation of unsteady, inviscid transonic flows in two and three dimensions. The computer program solves the full-potential equation in generalized coordinates in conservation-law form in order to properly capture shock-wave position and speed. A body-fitted coordinate system is employed for the simple and accurate treatment of boundary conditions on the body surface. The time-accurate algorithm is modified to a conventional ADI relaxation scheme for steady-state computations. Results from two- and three-dimensional steady and two-dimensional unsteady calculations are compared with existing methods.
Phase retrieval in generalized optical interferometry systems.
Farriss, Wesley E; Fienup, James R; Malhotra, Tanya; Vamivakas, A Nick
2018-02-05
Modal analysis of an optical field via generalized interferometry (GI) is a novel technique that treats the field as a linear superposition of transverse modes and recovers the amplitudes of the modal weighting coefficients. We use phase retrieval by nonlinear optimization to recover the phases of these modal weighting coefficients. Information diversity increases the robustness of the algorithm by better constraining the solution. Additionally, multiple sets of random starting phase values assist the algorithm in overcoming local minima. The algorithm was able to recover nearly all coefficient phases from simulated data for fields consisting of up to 21 superposed Hermite-Gaussian modes and proved to be resilient to shot noise.
Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1992-01-01
Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.
Software for universal noiseless coding
NASA Technical Reports Server (NTRS)
Rice, R. F.; Schlutsmeyer, A. P.
1981-01-01
An overview is provided of the universal noiseless coding algorithms as well as their relationship to the now available FORTRAN implementations. It is suggested that readers considering investigating the utility of these algorithms for actual applications should consult both NASA's Computer Software Management and Information Center (COSMIC) and descriptions of coding techniques provided by Rice (1979). Examples of applying these techniques have also been given by Rice (1975, 1979, 1980). Attention is given to reversible preprocessing, general implementation instructions, naming conventions, and calling arguments. A general applicability of the considered algorithms to solving practical problems is obtained because most real data sources can be simply transformed into the required form by appropriate preprocessing.
Constant-pressure nested sampling with atomistic dynamics
NASA Astrophysics Data System (ADS)
Baldock, Robert J. N.; Bernstein, Noam; Salerno, K. Michael; Pártay, Lívia B.; Csányi, Gábor
2017-10-01
The nested sampling algorithm has been shown to be a general method for calculating the pressure-temperature-composition phase diagrams of materials. While the previous implementation used single-particle Monte Carlo moves, these are inefficient for condensed systems with general interactions where single-particle moves cannot be evaluated faster than the energy of the whole system. Here we enhance the method by using all-particle moves: either Galilean Monte Carlo or the total enthalpy Hamiltonian Monte Carlo algorithm, introduced in this paper. We show that these algorithms enable the determination of phase transition temperatures with equivalent accuracy to the previous method at 1 /N of the cost for an N -particle system with general interactions, or at equal cost when single-particle moves can be done in 1 /N of the cost of a full N -particle energy evaluation. We demonstrate this speed-up for the freezing and condensation transitions of the Lennard-Jones system and show the utility of the algorithms by calculating the order-disorder phase transition of a binary Lennard-Jones model alloy, the eutectic of copper-gold, the density anomaly of water, and the condensation and solidification of bead-spring polymers. The nested sampling method with all three algorithms is implemented in the pymatnest software.
Differentially Private Empirical Risk Minimization
Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.
2011-01-01
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
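The output-perturbation step described above can be sketched concretely. For L2-regularized ERM with a 1-Lipschitz loss, Chaudhuri et al. show the L2 sensitivity of the minimizer is 2/(n*lambda), so adding noise with density proportional to exp(-beta*||b||), beta = n*lambda*eps/2, yields eps-differential privacy. The noise vector is sampled by drawing its norm from a Gamma distribution and its direction uniformly on the sphere; the non-private minimizer w is assumed given (training is not shown).

```python
import math
import random

def output_perturbation(w, n, lam, eps, seed=0):
    """Output perturbation for eps-differentially private ERM (after
    Chaudhuri, Monteleoni & Sarwate). w: non-private ERM minimizer;
    n: number of training samples; lam: L2 regularization strength;
    eps: privacy parameter. Assumes a 1-Lipschitz loss, for which the
    L2 sensitivity is 2/(n*lam)."""
    rng = random.Random(seed)
    d = len(w)
    beta = n * lam * eps / 2.0
    # density prop. to exp(-beta*||b||): norm ~ Gamma(d, 1/beta),
    # direction uniform on the unit sphere (normalized Gaussian)
    norm = rng.gammavariate(d, 1.0 / beta)
    direction = [rng.gauss(0.0, 1.0) for _ in range(d)]
    scale = norm / math.sqrt(sum(g * g for g in direction))
    return [wi + scale * gi for wi, gi in zip(w, direction)]
```

Note how the noise shrinks as n, lambda, or eps grows, which is the privacy-utility tradeoff the paper quantifies; objective perturbation changes where the noise enters, not this scaling intuition.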
Ray Tracing Through Non-Imaging Concentrators
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
1984-01-01
A generalized algorithm for tracing rays through both imaging and non-imaging radiation collectors is presented. A computer program based on the algorithm is then applied to analyzing various two-stage Winston concentrators.
NASA Astrophysics Data System (ADS)
Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin
2008-07-01
The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable motion and chaotic motion. The result is compared with those of the fourth-order Runge-Kutta algorithm and a fourth-order symplectic algorithm, which shows that SADA has higher accuracy than the others in long-term calculations of the CR3BP.
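The property that makes symplectic schemes attractive for long-term orbital calculations, bounded energy error rather than secular drift, can be illustrated with the simplest member of the class, the second-order velocity Verlet (leapfrog) integrator. SADA itself and the fourth-order methods of the comparison are not reproduced here; this is only a representative symplectic scheme for a separable Hamiltonian system.

```python
def leapfrog(q, p, force, dt, steps):
    """Velocity Verlet: half kick, drift, half kick. Symplectic and
    second order, so the energy error stays bounded over long runs
    (unlike non-symplectic Runge-Kutta, whose energy drifts).
    q, p: lists of coordinates/momenta; force(q) -> list of forces."""
    f = force(q)
    for _ in range(steps):
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]   # half kick
        q = [qi + dt * pi for qi, pi in zip(q, p)]         # drift
        f = force(q)
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]   # half kick
    return q, p
```

Running it on a harmonic oscillator for thousands of periods leaves the energy within a small bounded band around its initial value, the behavior the CR3BP comparison exploits.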
A parallel algorithm for generation and assembly of finite element stiffness and mass matrices
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Carmona, E. A.; Nguyen, D. T.; Baddourah, M. A.
1991-01-01
A new algorithm is proposed for parallel generation and assembly of the finite element stiffness and mass matrices. The proposed assembly algorithm is based on a node-by-node approach rather than the more conventional element-by-element approach. The new algorithm's generality and computation speed-up when using multiple processors are demonstrated for several practical applications on multi-processor Cray Y-MP and Cray 2 supercomputers.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithms and domain decomposition. The source code for many of these algorithms is available from NASA Langley.
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, even though all modern computers are now supported by hierarchical memory systems and hyper-threading technology, which allow for parallel computing. This paper aims to speed up the calculation process of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with the standard shared-memory application programming interface, i.e. Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
Generalized sidelobe canceller beamforming method for ultrasound imaging.
Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang
2017-03-01
A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution and robustness against noise of the traditional generalized sidelobe canceller (GSC) and the coherence factor combined method (GSC-CF). In the GSC algorithm, the weighting vector is divided into adaptive and non-adaptive parts, but the non-adaptive part does not block all of the desired signal. A modified steering vector for the IGSC algorithm is generated by projecting the non-adaptive vector onto the signal space constructed from the covariance matrix of the received data. The blocking matrix is generated based on the orthogonal complementary space of the modified steering vector, and the weighting vector is updated accordingly. The performance of IGSC was investigated by simulations and experiments. In simulations, IGSC outperformed GSC-CF in spatial resolution by 0.1 mm, with or without noise, as well as in contrast ratio. The proposed IGSC can be further improved by combining it with CF. The experimental results also validated the effectiveness of the proposed algorithm on a dataset provided by the University of Michigan.
DynamO: a free O(N) general event-driven molecular dynamics simulator.
Bannerman, M N; Sargant, R; Lue, L
2011-11-30
Molecular dynamics algorithms for systems of particles interacting through discrete or "hard" potentials are fundamentally different from the methods for continuous or "soft" potential systems. Although many software packages have been developed for continuous potential systems, software for discrete potential systems based on event-driven algorithms is relatively scarce and specialized. We present DynamO, a general event-driven simulation package, which displays the optimal O(N) asymptotic scaling of the computational cost with the number of particles N, rather than the O(N log N) scaling found in most standard algorithms. DynamO provides reference implementations of the best available event-driven algorithms. These techniques allow the rapid simulation of both complex and large (>10^6 particles) systems for long times. The performance of the program is benchmarked for elastic hard sphere systems, homogeneous cooling and sheared inelastic hard spheres, and equilibrium Lennard-Jones fluids. This software and its documentation are distributed under the GNU General Public License and can be freely downloaded from http://marcusbannerman.co.uk/dynamo. Copyright © 2011 Wiley Periodicals, Inc.
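The core kernel of any event-driven hard-sphere code is predicting the next collision time analytically rather than stepping the trajectory. A minimal sketch of that prediction (our illustration; DynamO's actual scheduling machinery is far more elaborate):

```python
import numpy as np

def collision_time(r1, r2, v1, v2, sigma):
    """Earliest time two hard spheres with contact distance sigma collide.

    Solves |r12 + t*v12| = sigma for the smallest positive root of the
    resulting quadratic; returns np.inf if the spheres never touch.
    """
    r12 = np.asarray(r2, float) - np.asarray(r1, float)
    v12 = np.asarray(v2, float) - np.asarray(v1, float)
    b = np.dot(r12, v12)
    if b >= 0:                       # spheres are moving apart
        return np.inf
    v2n = np.dot(v12, v12)
    disc = b * b - v2n * (np.dot(r12, r12) - sigma ** 2)
    if disc < 0:                     # closest approach misses contact
        return np.inf
    return (-b - np.sqrt(disc)) / v2n
```

An event-driven loop keeps these predicted times in a priority queue and jumps the whole system directly from one collision event to the next.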
Advanced Techniques for Scene Analysis
2010-06-01
robustness prefers a bigger integration window to handle larger motions. The advantage of pyramidal implementation is that, while each motion vector dL...labeled SAR images. Now the previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for...accuracy is traded for generality. 7.3.2 I-RELIEF Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each
1992-12-01
Dynamics and Free Energy Perturbation Methods." Reviews in Computational Chemistry, edited by Kenny B. Lipkowitz and Donald B. Boyd, chapter 8, 295-320...atomic motions during annealing, allows the search to probabilistically move in a locally non-optimal direction. The probability of doing so is...Network processors communicate via communication links. This type of communication is generally very slow relative to other processor activities
Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming
2016-01-01
Complete [30]. Proposition 4.1 satisfies the first criterion. For the second criterion, we will use the Traveling Salesman Problem (TSP), which has been...A branch and cut algorithm for the symmetric generalized traveling salesman problem, Operations Research 45 (1997) 378-394. [33] J. Silberholz, B...Golden, The generalized traveling salesman problem: A new genetic algorithm approach, Extended Horizons: Advances in Computing, Optimization, and
Quantum Information Processing in the Wall of Cytoskeletal Microtubules
Qiu, Xijun; Wu, Tongcheng; Li, Ruxin
2006-01-01
Microtubules (MT) are composed of 13 protofilaments, each of which is a series of two-state tubulin dimers. In the MT wall, these dimers can be pictured as “lattice” sites similar to crystal lattices. Based on the pseudo-spin model, two different location states of the mobile electron in each dimer are proposed. Accordingly, the MT wall is described as an anisotropic two-dimensional (2D) pseudo-spin system considering a periodic triangular “lattice”. Because three different “spin-spin” interactions in each cell exist periodically in the whole MT wall, the system may be shown to be an array of three types of two-pseudo-spin-state dimers. For the above-mentioned condition, the processing of quantum information is presented by using the scheme developed by Lloyd. PMID:19669447
A Note on Monotonicity Assumptions for Exact Unconditional Tests in Binary Matched-pairs Designs
Li, Xiaochun; Liu, Mengling; Goldberg, Judith D.
2011-01-01
Summary Exact unconditional tests have been widely applied to test the difference between two probabilities for 2×2 matched-pairs binary data with small sample size. In this context, Lloyd (2008, Biometrics 64, 716–723) proposed an E + M p-value, that showed better performance than the existing M p-value and C p-value. However, the analytical calculation of the E + M p-value requires that the Barnard convexity condition be satisfied; this can be challenging to prove theoretically. In this paper, by a simple reformulation, we show that a weaker condition, conditional monotonicity, is sufficient to calculate all three p-values (M, C and E + M) and their corresponding exact sizes. Moreover, this conditional monotonicity condition is applicable to non-inferiority tests. PMID:21466507
How the Sun Knocks Out My Cell Phone from 150 Million Kilometers Away
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2014-01-01
Large solar particle events (SPE) threaten many elements of critical infrastructure. A 2013 study by Lloyd's of London and Atmospheric and Environmental Research found that if a worst-case solar event like the 1859 Carrington Event struck our planet now, it could result in $0.6-$2.36 trillion in damages to the economy. In March 2014, researchers Y. D. Liu et al. revealed that just such an event had narrowly missed Earth in July 2012. The event was observed by the STEREO A spacecraft. In this presentation, we examine how the sun can pack such a punch from 150 million km away, the threats such solar particle events pose, their mechanisms, and the efforts NASA and other space agencies are carrying out to understand and mitigate such risks.
Acoustic multipath arrivals in the horizontal plane due to approaching nonlinear internal waves.
Badiey, Mohsen; Katsnelson, Boris G; Lin, Ying-Tsong; Lynch, James F
2011-04-01
Simultaneous measurements of acoustic wave transmissions and a nonlinear internal wave packet approaching an along-shelf acoustic path during the Shallow Water 2006 experiment are reported. The incoming internal wave packet acts as a moving frontal layer reflecting (or refracting) sound in the horizontal plane. Received acoustic signals are filtered into acoustic normal mode arrivals. It is shown that a horizontal multipath interference is produced. This has previously been called a horizontal Lloyd's mirror. The interference between the direct path and the refracted path depends on the mode number and frequency of the acoustic signal. A mechanism for the multipath interference is shown. Preliminary modeling results of this dynamic interaction using vertical modes and horizontal parabolic equation models are in good agreement with the observed data.
Paruthi, Shalini; Brooks, Lee J.; D'Ambrosio, Carolyn; Hall, Wendy A.; Kotagal, Suresh; Lloyd, Robin M.; Malow, Beth A.; Maski, Kiran; Nichols, Cynthia; Quan, Stuart F.; Rosen, Carol L.; Troester, Matthew M.; Wise, Merrill S.
2016-01-01
Sleep is essential for optimal health in children and adolescents. Members of the American Academy of Sleep Medicine developed consensus recommendations for the amount of sleep needed to promote optimal health in children and adolescents using a modified RAND Appropriateness Method. The recommendations are summarized here. A manuscript detailing the conference proceedings and the evidence supporting these recommendations will be published in the Journal of Clinical Sleep Medicine. Citation: Paruthi S, Brooks LJ, D'Ambrosio C, Hall WA, Kotagal S, Lloyd RM, Malow BA, Maski K, Nichols C, Quan SF, Rosen CL, Troester MM, Wise MS. Recommended amount of sleep for pediatric populations: a consensus statement of the American Academy of Sleep Medicine. J Clin Sleep Med 2016;12(6):785–786. PMID:27250809
Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture
NASA Technical Reports Server (NTRS)
Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek
2015-01-01
This paper evaluates the potential of the embedded Graphics Processing Unit in Nvidia's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, as compared to the performance of the high-end 8-core server Intel Xeon CPU with 13.5 times higher power consumption.
Classification of voting algorithms for N-version software
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.
2018-05-01
A voting algorithm in N-version software is a crucial component that evaluates the execution of each of the N versions and determines the correct result. Obviously, the result of the voting algorithm determines the outcome of the N-version software in general. Thus, the choice of the voting algorithm is a vital issue. Many voting algorithms have already been developed, and they may be selected for implementation based on the specifics of the analysis of input data. However, the voting algorithms applied in N-version software have not been classified. This article presents an overview of classic and recent voting algorithms used in N-version software and the authors' classification of the voting algorithms. Moreover, the steps of the voting algorithms are presented and the distinctive features of the voting algorithms in N-version software are defined.
Safety of orally administered, USP-compliant levothyroxine sodium tablets in dogs.
Hare, J E; Morrow, C M K; Caldwell, J; Lloyd, W E
2018-04-01
The safety of synthetic levothyroxine sodium tablets (Thyro-Tabs® Canine; LLOYD, Inc.) in dogs was evaluated in a randomized, sham-dose controlled, parallel-group study. Young, healthy, euthyroid Beagle dogs were randomized into four groups (four females and four males per group) and received single daily doses of 0×, 2× (0.044 mg/kg), 6× (0.132 mg/kg), or 10× (0.22 mg/kg) the labeled starting dose of 0.022 mg/kg/day for 182 days. Every 2 weeks, physical examinations, electrocardiology examinations, and sample collections for thyroid panel, hematology, serum biochemistry, coagulation panel, and urinalysis were performed. At the end of the study, the dogs were euthanized and full necropsies performed. The most overt finding was the expected dose-dependent increase in serum concentrations of total and free thyroxine with dose-dependent suppression of the hypothalamic-pituitary-thyroid axis as evidenced by decreased serum thyroid-stimulating hormone concentrations, decreased thyroid+parathyroid/body weight ratios, and a trend for decreased pituitary weight/brain weight ratios. Clinical signs of thyrotoxicosis (excitation, tachypnea, tachycardia) in the treated dogs were sporadic with no dose-response relationship. Other findings statistically associated with levothyroxine treatment were generally mild and not clinically important. In summary, doses of levothyroxine sodium up to 10× the labeled starting dose were well tolerated in healthy dogs. © 2017 The Authors. Journal of Veterinary Pharmacology and Therapeutics Published by John Wiley & Sons Ltd.
Algorithm Diversity for Resilient Systems
2016-06-27
computer security, software diversity, program transformation ...systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity...worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different
Data-driven advice for applying machine learning to bioinformatics problems
Olson, Randal S.; La Cava, William; Mustahsan, Zairah; Varik, Akshay; Moore, Jason H.
2017-01-01
As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems. PMID:29218881
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolutionary strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
NASA Astrophysics Data System (ADS)
Chemla (林力娜), Karine
The texts of algorithms fall under the general rubric of instructional texts, discussed by J. Virbel in this book. An algorithm has two facets. It has a text—a written text—, which usually appears to be an enumerated list of operations. In addition, whenever an algorithm is applied to a specific set of numerical values, practitioners derive from its text a sequence of actions, or operations, to be carried out. In the execution of the algorithm, these actions generate events that constitute a flow of computations eventually yielding numerical results. This chapter aims mainly to develop some reflections on the relationship between these two facets: the text and the different sequences of actions that practitioners derive from it. I use two tools in my argumentation. Firstly, I use the description of textual enumerations, as developed by Jacques Virbel, to find out how enumerations of operations were carried out in the text of algorithms and how these enumerations were used. Then I focus on the language acts carried out in some of the sentences composing the texts, since, when prescribing operations, the texts of the algorithms differ in that they use distinct ways of carrying out directives. The conclusion highlights different ways in which the text of an algorithm can be general and convey meanings that go beyond simply prescribing operations.
Any Two Learning Algorithms Are (Almost) Exactly Identical
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2000-01-01
This paper shows that if one is provided with a loss function, it can be used in a natural way to specify a distance measure quantifying the similarity of any two supervised learning algorithms, even non-parametric algorithms. Intuitively, this measure gives the fraction of targets and training sets for which the expected performance of the two algorithms differs significantly. Bounds on the value of this distance are calculated for the case of binary outputs and 0-1 loss, indicating that any two learning algorithms are almost exactly identical for such scenarios. As an example, for any two algorithms A and B, even for small input spaces and training sets, for less than 2e-50 of all targets will the difference between A's and B's generalization performance exceed 1%. In particular, this is true if B is bagging applied to A, or boosting applied to A. These bounds can be viewed alternatively as telling us, for example, that the simple English phrase 'I expect that algorithm A will generalize from the training set with an accuracy of at least 75% on the rest of the target' conveys 20,000 bytes of information concerning the target. The paper ends by discussing some of the subtleties of extending the distance measure to give a full (non-parametric) differential geometry of the manifold of learning algorithms.
Trans-algorithmic nature of learning in biological systems.
Shimansky, Yury P
2018-05-02
Learning ability is a vitally important, distinctive property of biological systems, which provides dynamic stability in non-stationary environments. Although several different types of learning have been successfully modeled using a universal computer, in general, learning cannot be described by an algorithm. In other words, an algorithmic approach to describing the functioning of biological systems is not sufficient for an adequate grasp of what life is. Since biosystems are parts of the physical world, one might hope that adding some physical mechanisms and principles to the concept of algorithm could provide extra possibilities for describing learning in its full generality. However, a straightforward approach to that through so-called physical hypercomputation has so far not been successful. Here an alternative approach is proposed. Biosystems are described as achieving enumeration of possible physical compositions through random incremental modifications inflicted on them by active operating resources (AORs) in the environment. Biosystems learn through algorithmic regulation of the intensity of the above modifications according to a specific optimality criterion. From the perspective of external observers, biosystems move in the space of different algorithms, driven by random modifications imposed by the environmental AORs. A particular algorithm is only a snapshot of that motion, while the motion itself is essentially trans-algorithmic. In this conceptual framework, the death of unfit members of a population, for example, is viewed as a trans-algorithmic modification made in the population as a biosystem by environmental AORs. Numerous examples of AOR utilization in biosystems of different complexity, from viruses to multicellular organisms, are provided.
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
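The inverse power iteration at the heart of the method can be sketched for a small dense generalized eigenvalue problem A x = λ B x. In this illustrative version a dense linear solver stands in for the Gauss-Seidel sweeps used in the paper (an assumption of the sketch, chosen for brevity):

```python
import numpy as np

def inverse_power(A, B, shift=0.0, iters=200):
    """Inverse power iteration for the generalized problem A x = lam B x.

    Each step solves (A - shift*B) y = B x and renormalizes; the
    iteration converges to the eigenpair whose eigenvalue lies closest
    to `shift`.
    """
    n = A.shape[0]
    x = np.ones(n)
    M = A - shift * B
    for _ in range(iters):
        y = np.linalg.solve(M, B @ x)
        x = y / np.linalg.norm(y)
    lam = (x @ (A @ x)) / (x @ (B @ x))  # generalized Rayleigh quotient
    return lam, x
```

Moving `shift` up the spectrum is one way to reach the high-lying eigenvalues the paper targets; replacing `np.linalg.solve` with iterative sweeps recovers the paper's inverse-power-plus-Gauss-Seidel structure.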
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
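The centroid algorithm mentioned above is simple enough to sketch directly (an illustrative version; the threshold choice is ours, a common way to limit the influence of side lobes on the estimate):

```python
import numpy as np

def centroid_peak(wavelengths, intensities, threshold=0.5):
    """Centroid estimate of an FBG reflection peak position.

    Only samples at or above `threshold` times the maximum intensity
    contribute to the intensity-weighted mean wavelength.
    """
    w = np.asarray(wavelengths, float)
    s = np.asarray(intensities, float)
    mask = s >= threshold * s.max()
    return np.sum(w[mask] * s[mask]) / np.sum(s[mask])
```

For a symmetric peak the centroid is unbiased; the systematic errors the paper reports arise precisely when the reflection profile is asymmetric, so the weighted mean no longer coincides with the true Bragg wavelength.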
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
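The basic genetic algorithm concepts introduced above (selection, crossover, mutation, survival of the fittest) can be condensed into a short sketch. This is a generic minimal GA of our own, not Splicer's implementation:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, gens=100,
                      p_mut=0.05, rng=None):
    """Minimal generational GA: tournament selection, one-point
    crossover, bit-flip mutation, and elitism (illustrative sketch)."""
    rng = rng or random.Random()
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                                  # elitism
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 2), key=fitness)     # tournament
            b = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                    # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

Any problem that can be encoded as a bit string with a fitness function plugs into this loop, which is the generality that a tool like Splicer exploits.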
NASA Astrophysics Data System (ADS)
Wu, Lifu; Qiu, Xiaojun; Guo, Yecai
2018-06-01
To tune the noise amplification in the feedback system caused by the waterbed effect effectively, an adaptive algorithm is proposed in this paper by replacing the scalar leaky factor of the leaky FxLMS algorithm with a real symmetric Toeplitz matrix. The elements in the matrix are calculated explicitly according to the noise amplification constraints, which are defined based on a simple but efficient method. Simulations in an ANC headphone application demonstrate that the proposed algorithm can adjust the frequency band of noise amplification more effectively than the FxLMS algorithm and the leaky FxLMS algorithm.
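The structural idea, replacing the scalar leaky factor with a matrix in the weight update, can be sketched with an ordinary (non-filtered-x) leaky LMS identifier. This is our simplified illustration: the Toeplitz entries derived from the paper's noise-amplification constraints are not reproduced, and the secondary-path filtering of FxLMS is omitted.

```python
import numpy as np

def leaky_lms(x, d, order, mu=0.01, leak=None):
    """Leaky LMS with a scalar or matrix-valued leak term.

    Scalar leaky LMS updates w <- (1 - mu*gamma) w + mu * e * x_vec;
    passing a matrix L generalizes this to w <- (I - mu*L) w + mu*e*x_vec,
    which is the form that lets the leak act differently per frequency band.
    """
    w = np.zeros(order)
    L = np.zeros((order, order)) if leak is None else np.atleast_2d(leak)
    if L.shape == (1, 1):
        L = L[0, 0] * np.eye(order)      # scalar leak -> gamma * I
    I = np.eye(order)
    errs = np.empty(len(x))
    for n in range(len(x)):
        xv = np.array([x[n - k] if n - k >= 0 else 0.0
                       for k in range(order)])
        e = d[n] - w @ xv
        w = (I - mu * L) @ w + mu * e * xv
        errs[n] = e
    return w, errs
```

With `leak` set to a symmetric Toeplitz matrix, the effective leakage becomes frequency-shaped, which is the mechanism the paper uses to confine noise amplification to chosen bands.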
A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence
NASA Astrophysics Data System (ADS)
Liu, Hui; Jin, Cong
2017-03-01
In this paper, a novel image encryption algorithm based on quantum chaos is proposed. The keystreams are generated by the two-dimensional logistic map as initial conditions and parameters. Then a general Arnold scrambling algorithm with keys is exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. In order to achieve high randomness and complexity, the two-dimensional logistic map and quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
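The Arnold scrambling stage used for pixel permutation can be sketched directly from the classical cat map, (x, y) -> (x + y, x + 2y) mod n, which is invertible because the map matrix has determinant 1. This is the standard map only (an illustration); the paper's keyed variant and the quantum-chaos keystream are not reproduced here.

```python
import numpy as np

def arnold_scramble(img, rounds=1):
    """Arnold cat map permutation of an n x n image:
    pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n)."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, rounds=1):
    """Inverse map from the inverse matrix [[2, -1], [-1, 1]]:
    pixel (x, y) moves to ((2x - y) mod n, (y - x) mod n)."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out
```

Because the map only permutes positions, it hides spatial structure but not the pixel histogram, which is why a diffusion stage (the paper's folding algorithm) must follow the scrambling stage.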
Faster quantum searching with almost any diffusion operator
NASA Astrophysics Data System (ADS)
Tulsi, Avatar
2015-05-01
Grover's search algorithm drives a quantum system from an initial state |s> to a desired final state |t> by using selective phase inversions of these two states. Earlier, we studied a generalization of Grover's algorithm that relaxes the assumption of the efficient implementation of I_s, the selective phase inversion of the initial state, also known as a diffusion operator. This assumption is known to become a serious handicap in cases of physical interest. Our general search algorithm works with almost any diffusion operator D_s with the only restriction of having |s> as one of its eigenstates. The price that we pay for using any operator is an increase in the number of oracle queries by a factor of O(B), where B is a characteristic of the eigenspectrum of D_s and can be large in some situations. Here we show that by using a quantum Fourier transform, we can regain the optimal query complexity of Grover's algorithm without losing the freedom of using any diffusion operator for quantum searching. However, the total number of operators required by the algorithm is still O(B) times more than that of Grover's algorithm. So our algorithm offers an advantage only if the oracle operator is computationally more expensive than the diffusion operator, which is true in most search problems.
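The baseline being generalized, standard Grover iteration with the ideal diffusion operator I_s, is easy to simulate classically on a statevector. A minimal sketch (our illustration of the textbook algorithm, not of the paper's generalized diffusion operators):

```python
import numpy as np

def grover(n_items, target, iters):
    """Statevector simulation of Grover search with the standard
    diffusion operator (selective phase inversion about |s>, the
    uniform superposition)."""
    psi = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(iters):
        psi[target] *= -1.0              # oracle: flip the target's phase
        psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
    return psi

N = 16
k = int(np.floor(np.pi / 4 * np.sqrt(N)))  # optimal iteration count ~ sqrt(N)
psi = grover(N, target=3, iters=k)
p_target = abs(psi[3]) ** 2
```

After the optimal ~(π/4)√N iterations the target amplitude dominates; the paper's question is how to retain this O(√N) query count when the exact inversion about |s> is replaced by an arbitrary D_s.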
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors are different for different algorithms. Summaries on how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also compare theoretically the effectiveness for classification with the discriminant functions from F-S, HTC and F-K with LDF and GMF, and between the linear-mapping-based algorithms and the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, with each image consisting of 64 x 64 pixels.
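The pseudo-inverse trick for the underdetermined case can be sketched concretely. With a data matrix X of shape (images x pixels) and fewer images than pixels, the pixel-space matrix X^T X is large and singular, but the pseudo-inverse of X only needs the small, non-singular image Gram matrix X X^T (a minimal numpy illustration, valid when the rows of X are linearly independent):

```python
import numpy as np

def pinv_via_gram(X):
    """Moore-Penrose pseudo-inverse of a fat, full-row-rank matrix X.

    For X of shape (m, n) with m < n and independent rows,
    pinv(X) = X.T @ inv(X @ X.T), so only the small m x m Gram
    matrix is ever inverted, never the singular n x n one.
    """
    return X.T @ np.linalg.inv(X @ X.T)
```

For 64 x 64-pixel images this replaces the inversion of a singular 4096 x 4096 pixel-correlation matrix with the inversion of an m x m matrix, where m is the (much smaller) number of training images.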
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm tries to get the best spectral simulation with minimum fitness error toward the target spectrum, correlated color temperature (CCT) the same as the target spectrum, high color rendering index (CRI), and luminous flux as required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.
Accelerated probabilistic inference of RNA structure evolution
Holmes, Ian
2005-01-01
Background Pairwise stochastic context-free grammars (Pair SCFGs) are powerful tools for evolutionary analysis of RNA, including simultaneous RNA sequence alignment and secondary structure prediction, but the associated algorithms are intensive in both CPU and memory usage. The same problem is faced by other RNA alignment-and-folding algorithms based on Sankoff's 1985 algorithm. It is therefore desirable to constrain such algorithms, by pre-processing the sequences and using this first pass to limit the range of structures and/or alignments that can be considered. Results We demonstrate how flexible classes of constraint can be imposed, greatly reducing the computational costs while maintaining a high quality of structural homology prediction. Any score-attributed context-free grammar (e.g. energy-based scoring schemes, or conditionally normalized Pair SCFGs) is amenable to this treatment. It is now possible to combine independent structural and alignment constraints of unprecedented general flexibility in Pair SCFG alignment algorithms. We outline several applications to the bioinformatics of RNA sequence and structure, including Waterman-Eggert N-best alignments and progressive multiple alignment. We evaluate the performance of the algorithm on test examples from the RFAM database. Conclusion A program, Stemloc, that implements these algorithms for efficient RNA sequence alignment and structure prediction is available under the GNU General Public License. PMID:15790387
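The constraint idea above, restricting a dynamic program to a pre-computed subset of cells, can be illustrated on a much simpler DP than a Pair SCFG: banded edit distance, where the alignment is confined to a diagonal band. This is only a toy analogue of the paper's alignment constraints; the function and its parameters are our own:

```python
def banded_edit_distance(a, b, band):
    """Edit distance restricted to cells with |i - j| <= band.

    Confining the DP to a diagonal band cuts time and memory from
    O(len(a)*len(b)) to O(min(len(a), len(b)) * band), at the price of
    missing alignments that stray outside the band.
    """
    n, m = len(a), len(b)
    if abs(n - m) > band:
        return None                       # optimal alignment cannot fit the band
    prev = {j: j for j in range(band + 1)}        # row 0: insertions only
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - band), min(m, i + band) + 1):
            best = float("inf")
            if j - 1 in cur:
                best = min(best, cur[j - 1] + 1)          # insertion
            if j in prev:
                best = min(best, prev[j] + 1)             # deletion
            if j > 0 and j - 1 in prev:
                sub = 0 if a[i - 1] == b[j - 1] else 1
                best = min(best, prev[j - 1] + sub)       # match / mismatch
            cur[j] = best
        prev = cur
    return prev[m]

print(banded_edit_distance("kitten", "sitting", band=3))   # 3
```

Pair SCFG constraints work on the same principle in a larger state space: a cheap first pass rules out most alignment (and fold) cells so the expensive pass never visits them.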
Iselin, Greg; Le Brocque, Robyne; Kenardy, Justin; Anderson, Vicki; McKinlay, Lynne
2010-10-01
Controversy surrounds the classification of posttraumatic stress disorder (PTSD), particularly in children and adolescents with traumatic brain injury (TBI). In these populations, it is difficult to differentiate TBI-related organic memory loss from dissociative amnesia. Several alternative PTSD classification algorithms have been proposed for use with children. This paper investigates DSM-IV-TR and alternative PTSD classification algorithms, including and excluding the dissociative amnesia item, in terms of their ability to predict psychosocial function following pediatric TBI. A sample of 184 children aged 6-14 years was recruited following emergency department presentation and/or hospital admission for TBI. PTSD was assessed via semi-structured clinical interview (CAPS-CA) with the child at 3 months post-injury. Psychosocial function was assessed using the parent report CHQ-PF50. Two alternative classification algorithms, the PTSD-AA and 2 of 3 algorithms, reached statistical significance. While the inclusion of the dissociative amnesia item increased prevalence rates across algorithms, it generally resulted in weaker associations with psychosocial function. The PTSD-AA algorithm appears to have the strongest association with psychosocial function following TBI in children and adolescents. Removing the dissociative amnesia item from the diagnostic algorithm generally results in improved validity. Copyright 2010 Elsevier Ltd. All rights reserved.
Abejuela, Harmony Raylen; Osser, David N
2016-01-01
This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.
C-semiring Frameworks for Minimum Spanning Tree Problems
NASA Astrophysics Data System (ADS)
Bistarelli, Stefano; Santini, Francesco
In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, which must all be specific instantiations of c-semirings. Our algorithms extend well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially ordered costs on the edges.
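The abstraction above can be sketched as a Kruskal-style algorithm parameterized by a c-semiring-like cost structure: an idempotent `plus` that selects the preferred of two costs (inducing the order a <= b iff plus(a, b) == a) and a `times` that composes costs along the tree. This is our own minimal interface for totally ordered instantiations, not the paper's framework, which also handles partial orders:

```python
import functools
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CostStructure:
    plus: Callable[[Any, Any], Any]   # selects the preferred cost (idempotent)
    times: Callable[[Any, Any], Any]  # composes costs along the tree
    one: Any                          # identity of times (cost of the empty tree)

def kruskal(n, edges, cs):
    """Kruskal's algorithm over an abstract cost structure.

    edges: list of (cost, u, v). Edges are examined in the order induced
    by cs.plus; union-find tracks components as usual.
    """
    def cmp(e1, e2):
        a, b = e1[0], e2[0]
        if a == b:
            return 0
        return -1 if cs.plus(a, b) == a else 1   # a precedes b iff plus(a,b)==a
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x
    total, tree = cs.one, []
    for cost, u, v in sorted(edges, key=functools.cmp_to_key(cmp)):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
            total = cs.times(total, cost)
    return total, tree

# Classical MST is the tropical instantiation: plus = min, times = +.
tropical = CostStructure(plus=min, times=lambda a, b: a + b, one=0)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
total, tree = kruskal(4, edges, tropical)
print(total)   # 6: tree uses edges (0,1), (1,3), (1,2)
```

Swapping in `plus=min, times=max` yields a minimum-bottleneck tree from the same code, which is the expressivity the semiring parameterization buys.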
El-Qulity, Said Ali; Mohamed, Ali Wagdy
2016-01-01
This paper proposes a nonlinear integer goal programming model (NIGPM) for solving the general problem of admission capacity planning in a country as a whole. The work aims to satisfy most of the required key objectives of a country related to the enrollment problem for higher education. The system general outlines are developed along with the solution methodology for application to the time horizon in a given plan. The up-to-date data for Saudi Arabia is used as a case study and a novel evolutionary algorithm based on modified differential evolution (DE) algorithm is used to solve the complexity of the NIGPM generated for different goal priorities. The experimental results presented in this paper show their effectiveness in solving the admission capacity for higher education in terms of final solution quality and robustness. PMID:26819583
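For readers unfamiliar with the DE family mentioned above, here is classic DE/rand/1/bin on a toy continuous objective. This is the textbook scheme, not the paper's modified variant or its goal-programming model; all parameter values are illustrative defaults:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=42):
    """Classic DE/rand/1/bin: mutation v = a + F*(b - c), binomial
    crossover with rate CR, greedy selection between trial and target."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)     # clamp to the box constraints
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy test: minimize the sphere function on [-5, 5]^3 (optimum 0 at the origin).
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(round(fx, 6))   # close to 0
```

The paper's contribution layers integer handling and goal priorities on top of this basic loop; the mutation/crossover/selection skeleton is unchanged.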
A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR.
Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei
2016-02-26
Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) has recently been playing an increasingly significant role in remote sensing applications, owing to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm was proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing of traditional back projection algorithms (BPAs). A two-dimensional point target spectrum model was first presented, and the bulk range cell migration correction (RCMC) was then derived for reducing range cell migration (RCM) and coarse focusing. As the bulk RCMC significantly changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation was introduced for compensating residual phase errors. Simulation results were presented for a general geometric topology with non-parallel trajectories and unequal velocities for both transmitter and receiver platforms, showing a satisfactory performance of the proposed method.
NASA Technical Reports Server (NTRS)
Nachtigal, Noel M.
1991-01-01
The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
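The breakdown the look-ahead variant guards against can be seen in a sketch of the classical (no look-ahead) non-Hermitian Lanczos biorthogonalization, which builds bases V, W with W^T V = I and W^T A V tridiagonal. The division by delta below is exactly the step that fails when w_hat^T v_hat vanishes; the normalization conventions here are one common choice among several, not necessarily the paper's:

```python
import numpy as np

def lanczos_biorth(A, v, w, m):
    """Classical two-sided Lanczos (no look-ahead): three-term recurrences
    driven by A and A^T. Raises at a (near-)breakdown - the case a
    look-ahead variant would skip over instead."""
    n = A.shape[0]
    V = np.zeros((n, m)); W = np.zeros((n, m))
    V[:, 0] = v / np.linalg.norm(v)
    W[:, 0] = w / (w @ V[:, 0])               # enforce w_1^T v_1 = 1
    betas, gammas = [], []
    for j in range(m):
        alpha = W[:, j] @ A @ V[:, j]         # diagonal of the tridiagonal T
        v_hat = A @ V[:, j] - alpha * V[:, j]
        w_hat = A.T @ W[:, j] - alpha * W[:, j]
        if j > 0:
            v_hat -= betas[-1] * V[:, j - 1]
            w_hat -= gammas[-1] * W[:, j - 1]
        if j == m - 1:
            break
        delta = w_hat @ v_hat
        if abs(delta) < 1e-12 * np.linalg.norm(v_hat) * np.linalg.norm(w_hat):
            raise RuntimeError("Lanczos breakdown: look-ahead needed")
        gamma = np.linalg.norm(v_hat)         # subdiagonal entry of T
        beta = delta / gamma                  # superdiagonal entry of T
        gammas.append(gamma); betas.append(beta)
        V[:, j + 1] = v_hat / gamma
        W[:, j + 1] = w_hat / beta            # keeps w_{j+1}^T v_{j+1} = 1
    return V, W

rng = np.random.default_rng(7)
A = rng.standard_normal((10, 10))
V, W = lanczos_biorth(A, rng.standard_normal(10), rng.standard_normal(10), 5)
T = W.T @ A @ V                               # tridiagonal projection of A
print(np.allclose(W.T @ V, np.eye(5), atol=1e-8))
```

Only the last two basis vectors appear in each recurrence, which is what makes the method cheap and also what makes exact biorthogonality fragile in floating point.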
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a linear-time direct algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Combining Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
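The 1D TV problem itself, min_x 0.5*||x - y||^2 + lam * sum|x[i+1] - x[i]|, is easy to state in code. As an illustration we sketch a simple iterative dual projected-gradient solver (in the spirit of the dual formulation mentioned above, but an iterative method, not Condat's direct O(n) algorithm or the taut string algorithm):

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=500):
    """1D total-variation denoising via projected gradient on the dual.

    With D the forward-difference operator, the primal solution is
    x = y - D^T z where z maximizes the dual over the box [-lam, lam]^(n-1).
    Step size 0.25 <= 1/||D||^2 guarantees convergence.
    """
    y = np.asarray(y, dtype=float)
    z = np.zeros(len(y) - 1)                     # one dual variable per difference
    for _ in range(iters):
        # x = y - D^T z, with (D^T z)_j = z_{j-1} - z_j (zero-padded ends)
        x = y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))
        # dual gradient ascent step followed by projection onto the box
        z = np.clip(z + 0.25 * np.diff(x), -lam, lam)
    return y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 50)           # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(150)
denoised = tv_denoise_1d(noisy, lam=1.0)

def tv_objective(x, y, lam):
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))

print(tv_objective(denoised, noisy, 1.0) < tv_objective(noisy, noisy, 1.0))  # True
```

Direct methods such as Condat's reach the exact minimizer in a single O(n) pass, which is why they are preferred over iterative schemes like this one for 1D signals.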
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, we first develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then, by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of this auxiliary system. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
Sorting on STAR. [CDC computer algorithm timing comparison
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
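Batcher's method referenced above is a data-independent comparator network: the same fixed pattern of compare-exchange operations sorts any input, which is why whole stages vectorize well on machines like the STAR despite the worse N(log N)^2 complexity. A serial sketch of the odd-even mergesort network (one common iterative formulation, requiring a power-of-two length):

```python
def batcher_oddeven_merge_sort(a):
    """Batcher's odd-even merge sort as a fixed comparator network.

    Every compare-exchange within an inner pass is independent of the
    others, so each pass maps naturally onto vector compare/swap
    operations; here they simply run serially.
    """
    a = list(a)
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    p = 1
    while p < n:                       # merge sorted runs of length p
        k = p
        while k >= 1:                  # compare elements k apart
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        lo, hi = i + j, i + j + k
                        if a[lo] > a[hi]:          # compare-exchange
                            a[lo], a[hi] = a[hi], a[lo]
            k //= 2
        p *= 2
    return a

print(batcher_oddeven_merge_sort([3, 1, 4, 2]))   # [1, 2, 3, 4]
```

The abstract's conclusion follows directly: when every pass of this network becomes a handful of vector instructions, the extra log-factor of comparisons can cost less than Quicksort's data-dependent branching.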
PLQP & Company: Decidable Logics for Quantum Algorithms
NASA Astrophysics Data System (ADS)
Baltag, Alexandru; Bergfeld, Jort; Kishida, Kohei; Sack, Joshua; Smets, Sonja; Zhong, Shengyang
2014-10-01
We introduce a probabilistic modal (dynamic-epistemic) quantum logic PLQP for reasoning about quantum algorithms. We illustrate its expressivity by using it to encode the correctness of the well-known quantum search algorithm, as well as of a quantum protocol known to solve one of the paradigmatic tasks from classical distributed computing (the leader election problem). We also provide a general method (extending an idea employed in the decidability proof in Dunn et al. (J. Symb. Log. 70:353-359, 2005)) for proving the decidability of a range of quantum logics, interpreted on finite-dimensional Hilbert spaces. We give general conditions for the applicability of this method, and in particular we apply it to prove the decidability of PLQP.
QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1990-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach is presented, called the quasi-minimal residual (QMR) method, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
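For reference, the plain BCG iteration that QMR improves upon is short enough to sketch. It runs two coupled recurrences, one with A and one with A^T, and has no safeguard against the rho ~ 0 breakdowns discussed above; this is the textbook algorithm, shown on a deliberately well-behaved test matrix:

```python
import numpy as np

def bicg(A, b, tol=1e-10, maxiter=200):
    """Biconjugate gradient for a general square system Ax = b (no look-ahead)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    rs = r.copy()                     # shadow residual driving the A^T recurrence
    p, ps = r.copy(), rs.copy()
    rho = rs @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (ps @ Ap)       # breaks down if ps^T A p ~ 0
        x += alpha * p
        r -= alpha * Ap
        rs -= alpha * (A.T @ ps)
        if np.linalg.norm(r) < tol:
            break
        rho_new = rs @ r              # breaks down if rho_new ~ 0
        beta = rho_new / rho
        p = r + beta * p
        ps = rs + beta * ps
        rho = rho_new
    return x

# Small non-symmetric, well-conditioned test system.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)
b = rng.standard_normal(8)
x = bicg(A, b)
print(np.allclose(A @ x, b))   # True
```

QMR keeps the same cheap three-term structure but replaces BCG's Galerkin condition with a quasi-minimization of the residual, smoothing out the erratic convergence visible when the denominators above become small.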
General ultrafast pulse measurement using the cross-correlation single-shot sonogram technique.
Reid, Derryck T; Garduno-Mejia, Jesus
2004-03-15
The cross-correlation single-shot sonogram technique offers exact pulse measurement and real-time pulse monitoring via an intuitive time-frequency trace whose shape and orientation directly indicate the spectral chirp of an ultrashort laser pulse. We demonstrate an algorithm that overcomes a fundamental limitation of the cross-correlation sonogram method, namely, that the time-gating operation is implemented with a replica of the measured pulse rather than the ideal delta-function-like pulse. Using a modified principal-components generalized projections algorithm, we experimentally show accurate pulse retrieval of an asymmetric double pulse, a case that is prone to systematic error when the original sonogram retrieval algorithm is used.
Fuzzy Algorithm for the Detection of Incidents in the Transport System
ERIC Educational Resources Information Center
Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Stroganov, Victor Yu.
2016-01-01
This paper proposes an algorithm for the management of traffic incidents, aimed at minimizing the impact of incidents on road traffic in general. The proposed algorithm is based on the theory of fuzzy sets and provides identification of accidents, as well as the adoption of appropriate measures to address them as soon as possible. A…
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1982-01-01
Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second-order lambda matrices.
Graphics processing unit-assisted lossless decompression
Loughry, Thomas A.
2016-04-12
Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
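The Rice code at the heart of the scheme above is simple: each non-negative integer n is split by a parameter k into a quotient n >> k, written in unary, and a k-bit remainder. The serial Python sketch below shows the coding scheme only, not the patent's GPU-parallel decompression (which exploits the fact that independently compressed packets can be decoded concurrently):

```python
def rice_encode(values, k):
    """Rice-code non-negative integers with parameter k (a power-of-two
    Golomb code): quotient n >> k in unary, then k remainder bits."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q)                    # unary quotient ...
        bits.append(0)                          # ... terminated by a 0
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    """Decode `count` integers from a Rice-coded bit list."""
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                   # read the unary quotient
            q += 1
            pos += 1
        pos += 1                                # skip the terminating 0
        r = 0
        for _ in range(k):                      # read k remainder bits
            r = (r << 1) | bits[pos]
            pos += 1
        values.append((q << k) | r)
    return values

data = [3, 18, 7, 0, 12, 5]
encoded = rice_encode(data, k=3)
print(rice_decode(encoded, k=3, count=len(data)) == data)   # True
```

Decoding is inherently sequential within one stream (each unary quotient must be scanned before the next value starts), which is why the GPU approach parallelizes across packets rather than within them.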