Investigating the unification of LOFAR-detected powerful AGN in the Boötes field
NASA Astrophysics Data System (ADS)
Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.
2017-08-01
Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 10^25.5 W Hz^-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
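The orientation argument above can be made concrete with a small numerical sketch: a source of intrinsic size D whose axis lies at angle θ to the line of sight has projected size D sin θ, and a unified scheme in which quasars are viewed within a cone of half-angle θ_c predicts a definite galaxy-to-quasar mean size ratio. The 45° cone half-angle below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def mean_sin(theta_lo, theta_hi, n=200_000):
    """Mean of sin(theta) over [theta_lo, theta_hi], weighted by the
    solid-angle factor sin(theta) appropriate for randomly oriented axes."""
    theta = np.linspace(theta_lo, theta_hi, n)
    w = np.sin(theta)                       # orientation probability density
    return (np.sin(theta) * w).sum() / w.sum()

theta_c = np.deg2rad(45.0)  # assumed quasar cone half-angle (illustrative)

# Unified scheme: quasar axes lie inside the cone, radio galaxy axes outside.
r_quasar = mean_sin(0.0, theta_c)           # mean projected-size factor
r_galaxy = mean_sin(theta_c, np.pi / 2)

print(f"mean size ratio (galaxy/quasar) = {r_galaxy / r_quasar:.2f}")
```

For a 45° cone the ratio comes out near 1.9, the same order as the redshift-corrected 2.0 ± 0.3 quoted in the abstract.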
ERIC Educational Resources Information Center
Wilson, Jason; Lawman, Joshua; Murphy, Rachael; Nelson, Marissa
2011-01-01
This article describes a probability project used in an upper division, one-semester probability course with third-semester calculus and linear algebra prerequisites. The student learning outcome focused on developing the skills necessary for approaching project-sized math/stat application problems. These skills include appropriately defining…
The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.;
2014-01-01
A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.
Heather, F J; Childs, D Z; Darnaude, A M; Blanchard, J L
2018-01-01
Accurate information on the growth rates of fish is crucial for fisheries stock assessment and management. Empirical life history parameters (von Bertalanffy growth) are widely fitted to cross-sectional size-at-age data sampled from fish populations. This method often assumes that environmental factors affecting growth remain constant over time. The current study utilized longitudinal life history information contained in otoliths from 412 juveniles and adults of gilthead seabream, Sparus aurata, a commercially important species fished and farmed throughout the Mediterranean. Historical annual growth rates over 11 consecutive years (2002-2012) in the Gulf of Lions (NW Mediterranean) were reconstructed to investigate the effect of temperature variations on the annual growth of this fish. S. aurata growth was modelled linearly, relating otolith size in year t to otolith size in the previous year, t-1. The effect of temperature on growth was modelled with linear mixed effects models and a simplified linear model to be implemented in a cohort Integral Projection Model (cIPM). The cIPM was used to project S. aurata growth, year to year, under different temperature scenarios. Our results show that the current rise in summer temperatures has a negative effect on S. aurata annual growth in the Gulf of Lions. They suggest that global warming has already had, and will continue to have, a significant impact on S. aurata size-at-age, with important implications for age-structured stock assessments and the reference points used in fisheries.
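The growth model described above, size in year t regressed on size in year t-1 with a temperature covariate, then iterated to project growth, can be sketched on synthetic data as follows. All coefficients, units, and the size of the temperature effect are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for otolith increments: size in year t vs year t-1,
# with a hypothetical negative summer-temperature effect on growth.
n = 400
size_prev = rng.uniform(1.0, 6.0, n)           # otolith radius, year t-1 (mm)
temp = rng.normal(24.0, 1.5, n)                # summer SST (deg C), assumed
a_true, b_true, c_true = 1.2, 0.85, -0.05
size_now = (a_true + b_true * size_prev + c_true * (temp - 24.0)
            + rng.normal(0.0, 0.05, n))

# Fit the linear growth model L_t = a + b*L_{t-1} + c*(T - Tref).
X = np.column_stack([np.ones(n), size_prev, temp - 24.0])
coef, *_ = np.linalg.lstsq(X, size_now, rcond=None)
a, b, c = coef

# Deterministic year-to-year projection under a temperature scenario
# (a bare-bones skeleton of what a cIPM iterates with full size distributions).
def project(size0, temp_anom, years=10):
    s = size0
    for _ in range(years):
        s = a + b * s + c * temp_anom
    return s

print(project(1.0, 0.0), project(1.0, +2.0))  # warming lowers size-at-age
```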
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem that involves only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm with a finite difference approximation is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
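A minimal sketch of the classic variable projection idea, eliminating the linear coefficients by least squares at each step so the outer solver sees only the nonlinear parameters, is shown below for a sum of two exponentials. This illustrates the baseline algorithm, not the matrix-decomposition formulation the paper proposes; the model and starting point are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 50)
alpha_true, c_true = np.array([0.5, 2.0]), np.array([1.0, 3.0])

def phi(alpha):
    # Basis of nonlinear functions: columns exp(-alpha_j * t).
    return np.exp(-np.outer(t, alpha))

y = phi(alpha_true) @ c_true               # noise-free synthetic data

def residual(alpha):
    # Variable projection: solve for the linear coefficients in closed form,
    # leaving a residual that depends only on the nonlinear parameters.
    P = phi(alpha)
    c, *_ = np.linalg.lstsq(P, y, rcond=None)
    return P @ c - y

fit = least_squares(residual, x0=np.array([0.3, 1.5]))
c_fit, *_ = np.linalg.lstsq(phi(fit.x), y, rcond=None)
print(np.sort(fit.x), np.sort(c_fit))
```

Note that the outer problem has only two unknowns instead of four, which is exactly the dimension reduction variable projection buys.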
Re-Mediating Classroom Activity with a Non-Linear, Multi-Display Presentation Tool
ERIC Educational Resources Information Center
Bligh, Brett; Coyle, Do
2013-01-01
This paper uses an Activity Theory framework to evaluate the use of a novel, multi-screen, non-linear presentation tool. The Thunder tool allows presenters to manipulate and annotate multiple digital slides and to concurrently display a selection of juxtaposed resources across a wall-sized projection area. Conventional, single screen presentation…
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, S.; Paschal, C.B.; Galloway, R.L.
Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections will in general differ from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
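The width-transfer comparison can be illustrated with a toy volume: a tube with a Gaussian cross-section, an axis-aligned MIP, and the full width at half maximum (FWHM) of the projected profile sampled with nearest-neighbor versus linear interpolation. The geometry and vessel size are arbitrary illustration choices, not the paper's phantom.

```python
import numpy as np

# Toy volume: a bright "vessel" running along z with a Gaussian cross-section.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
vessel = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 2.0 ** 2))

# Axis-aligned MIP (equivalent to voxel projection along z): max over each ray.
mip = vessel.max(axis=0)

# Sample one projected profile at sub-voxel ray positions and compare
# nearest-neighbor vs linear interpolation of the 1-D profile.
profile = vessel[:, 16, :].max(axis=0)          # projected x-profile
xs = np.arange(32)
samples = np.linspace(0, 31, 300)               # sub-voxel ray positions
nearest = profile[np.round(samples).astype(int)]
linear = np.interp(samples, xs, profile)

def fwhm(vals, pos):
    """Full width at half maximum of a sampled peak."""
    above = pos[vals >= vals.max() / 2]
    return above.max() - above.min()

print(fwhm(nearest, samples), fwhm(linear, samples))
```

Consistent with the abstract, nearest-neighbor sampling slightly exaggerates the projected width of the small vessel relative to linear interpolation.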
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
NASA Astrophysics Data System (ADS)
Jbara, Ahmed S.; Othaman, Zulkafli; Saeed, M. A.
2016-05-01
Based on the Schrödinger equation for envelope function in the effective mass approximation, linear and nonlinear optical absorption coefficients in a multi-subband lens quantum dot are investigated. The effects of quantum dot size on the interband and intraband transitions energy are also analyzed. The finite element method is used to calculate the eigenvalues and eigenfunctions. Strain and In-mole-fraction effects are also studied, and the results reveal that with the decrease of the In-mole fraction, the amplitudes of linear and nonlinear absorption coefficients increase. The present computed results show that the absorption coefficients of transitions between the first excited states are stronger than those of the ground states. In addition, it has been found that the quantum dot size affects the amplitudes and peak positions of linear and nonlinear absorption coefficients while the incident optical intensity strongly affects the nonlinear absorption coefficients. Project supported by the Ministry of Higher Education and Scientific Research in Iraq, Ibnu Sina Institute and Physics Department of Universiti Teknologi Malaysia (UTM RUG Vote No. 06-H14).
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.
Haoliang Yuan; Yuan Yan Tang
2017-04-01
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been applied widely in many practical applications. Its major challenge is the high-dimension, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed methods outperform many SL methods.
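The RLR baseline that motivates SSSLR fits a ridge regression from pixels to one-hot class labels and reuses the regression matrix as a linear projection for dimensionality reduction. A sketch on synthetic "pixels" is below; the spectral-spatial and shared-structure terms that distinguish SSSLR are not reproduced here, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for HSI pixels: n samples, d spectral bands, k classes.
n, d, k = 150, 40, 3
labels = rng.integers(0, k, n)
class_means = rng.normal(size=(k, d))
X = rng.normal(size=(n, d)) + 2.0 * np.eye(k)[labels] @ class_means

# Ridge linear regression to a one-hot target: the (d x k) regression
# matrix W doubles as a linear projection for subspace learning.
Y = np.eye(k)[labels]
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

Z = X @ W                  # projected features, one dimension per class
pred = Z.argmax(axis=1)    # largest class score as a crude classifier
print((pred == labels).mean())
```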
A low-cost and portable realization on fringe projection three-dimensional measurement
NASA Astrophysics Data System (ADS)
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2015-12-01
Fringe projection three-dimensional measurement is widely applied across a range of industrial applications. Traditional fringe projection systems have the disadvantages of high expense, large size, and complicated calibration requirements. In this paper we introduce a low-cost and portable realization of three-dimensional measurement with a Pico projector. It has the advantages of low cost, compact physical size, and flexible configuration. For the proposed fringe projection system, there is no restriction on the parallelism or perpendicularity of the camera's and projector's relative alignment during installation. Moreover, a plane-based calibration method is adopted that avoids critical requirements on the calibration system, such as an additional gauge block or a precise linear z stage. In addition, the error sources present in the proposed system are discussed. The experimental results demonstrate the feasibility of the proposed low-cost and portable fringe projection system.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, whose minimum distance increases linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
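The copy-and-permute (lifting) construction behind protograph codes can be sketched in a few lines: each 1 in a small base matrix is replaced by an N x N permutation (here a random circulant) and each 0 by a zero block, so the lifted parity-check matrix inherits the protograph's node degrees at any block size. The base matrix and lifting factor below are illustrative, not one of the paper's constructions.

```python
import numpy as np

# Base protograph (parity-check prototype): 2 check nodes x 3 variable nodes.
B = np.array([[1, 1, 1],
              [1, 1, 1]])
N = 8  # lifting (copy) factor

def circulant(shift, n):
    """n x n circulant permutation matrix with the given shift."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

rng = np.random.default_rng(0)

# Copy-and-permute: each edge of the protograph becomes an N x N permutation.
blocks = [[circulant(rng.integers(N), N) if b else np.zeros((N, N), dtype=int)
           for b in row] for row in B]
H = np.block(blocks)

# The lifted code keeps the protograph's check and variable node degrees.
print(H.shape, H.sum(axis=1)[:3], H.sum(axis=0)[:3])
```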
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
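Both proposed rules are easy to compute. The sketch below assumes study value proportional to the power of a two-arm z-test and a linear cost function; the effect size and cost figures are invented for illustration, not taken from the paper.

```python
import math

# Two-sided z-test power at a fixed alternative, alpha = 0.05,
# two groups of n/2 subjects each, unit-variance outcome (assumed setup).
def power(n, delta=0.25, alpha_z=1.959963984540054):
    z = delta * math.sqrt(n / 4) - alpha_z
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Linear total cost: fixed overhead plus a per-subject cost (assumed figures).
def cost(n):
    return 50_000 + 500 * n

# Rule 2 of the paper: choose n to minimize total cost / sqrt(n).
n_sqrt = min(range(10, 2000), key=lambda n: cost(n) / math.sqrt(n))

# Direct cost efficiency when projected value is proportional to power.
n_eff = max(range(10, 2000), key=lambda n: power(n) / cost(n))

print(n_sqrt, n_eff, round(power(n_sqrt), 3))
```

With these cost figures the cost/sqrt(n) rule lands at n equal to the ratio of fixed to per-subject cost, which is the closed-form minimizer of (c0 + c1·n)/sqrt(n).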
Aeroelastic Sizing for High-Speed Research (HSR) Longitudinal Control Alternatives Project (LCAP)
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Dunn, H. J.; Stroud, W. Jefferson; Barthelemy, J.-F.; Weston, Robert P.; Martin, Carl J.; Bennett, Robert M.
2005-01-01
The Longitudinal Control Alternatives Project (LCAP) compared three high-speed civil transport configurations to determine potential advantages of the three associated longitudinal control concepts. The three aircraft configurations included a conventional configuration with a layout having a horizontal aft tail, a configuration with a forward canard in addition to a horizontal aft tail, and a configuration with only a forward canard. The three configurations were aeroelastically sized and were compared on the basis of operational empty weight (OEW) and longitudinal control characteristics. The sized structure consisted of composite honeycomb sandwich panels on both the wing and the fuselage. Design variables were the core depth of the sandwich and the thicknesses of the composite material which made up the face sheets of the sandwich. Each configuration was sized for minimum structural weight under linear and nonlinear aeroelastic loads subject to strain, buckling, ply-mixture, and subsonic and supersonic flutter constraints. This report describes the methods that were used and the results that were generated for the aeroelastic sizing of the three configurations.
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben
2007-04-01
This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
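The core of UDP, maximizing nonlocal scatter while minimizing local scatter defined through a k-nearest-neighbor graph, can be sketched as follows. This is a bare-bones illustration on toy two-class data with invented sizes, not the authors' full pipeline or their biometric experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class data in 10-D with a small sample size (UDP's target regime).
X = np.vstack([rng.normal(0.0, 1.0, (20, 10)),
               rng.normal(3.0, 1.0, (20, 10))])
n, d = X.shape

# k-NN adjacency defines the "local" pairs; all other pairs are "nonlocal".
k = 5
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
nn = np.argsort(D2, axis=1)[:, 1:k + 1]
W_local = np.zeros((n, n))
for i in range(n):
    W_local[i, nn[i]] = W_local[nn[i], i] = 1.0
W_nonlocal = 1.0 - W_local - np.eye(n)

def scatter(W):
    """S = (1 / 2n^2) * sum_ij W_ij (x_i - x_j)(x_i - x_j)^T."""
    S = np.zeros((d, d))
    for i in range(n):
        for j in range(n):
            if W[i, j]:
                diff = X[i] - X[j]
                S += W[i, j] * np.outer(diff, diff)
    return S / (2.0 * n * n)

# UDP criterion: maximize nonlocal over local scatter along the projection.
vals, vecs = np.linalg.eig(np.linalg.pinv(scatter(W_local)) @ scatter(W_nonlocal))
w = np.real(vecs[:, np.argmax(np.real(vals))])
Z = X @ w   # 1-D unsupervised discriminant projection

print(abs(Z[:20].mean() - Z[20:].mean()))
```

No labels are used anywhere above, yet the leading direction separates the two clusters, which is the point of the unsupervised criterion.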
Linear programming computational experience with onyx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
Maternal Weight Gain as a Predictor of Litter Size in Swiss Webster, C57BL/6J, and BALB/cJ mice.
Finlay, James B; Liu, Xueli; Ermel, Richard W; Adamson, Trinka W
2015-11-01
An important task facing both researchers and animal core facilities is producing sufficient mice for a given project. The inherent biologic variability of mouse reproduction and litter size further challenges effective research planning. A lack of precision in project planning contributes to the high cost of animal research, overproduction (thus waste) of animals, and inappropriate allocation of facility resources. To examine the extent to which daily prepartum maternal weight gain predicts litter size in 2 commonly used mouse strains (BALB/cJ and C57BL/6J) and one mouse stock (Swiss Webster), we weighed ≥ 25 pregnant dams of each strain or stock daily from the morning on which a vaginal plug (day 0) was present. On the morning when dams delivered their pups, we recorded the weight of the dam, the weight of the litter itself, and the number of pups. Litter sizes ranged from 1 to 7 pups for BALB/cJ, 2 to 13 for Swiss Webster, and 5 to 11 for C57BL/6J mice. Linear regression models (based on weight change from day 0) demonstrated that maternal weight gain at day 9 (BALB/cJ), day 11 (Swiss Webster), or day 14 (C57BL/6J) was a significant predictor of litter size. When tested prospectively, the linear regression model for each strain or stock was found to be accurate. These data indicate that the number of pups that will be born can be estimated accurately by using maternal weight gain at strain- or stock-specific time points.
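The prediction step described above is a simple linear regression of litter size on maternal weight gain at a fixed gestational day. A sketch on synthetic data is below; the 1.5 g-per-pup coefficient, noise level, and sample size are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the study design: litter size drives maternal
# weight gain (~1.5 g per pup by mid-gestation, assumed), so regressing
# litter size on observed weight gain inverts that relationship.
litter = rng.integers(2, 14, 30)                    # pups per litter
gain = 1.5 * litter + rng.normal(0.0, 1.0, 30)      # grams gained by day 11

slope, intercept = np.polyfit(gain, litter, 1)      # litter ~ gain

def predict_litter(weight_gain_g):
    """Predicted number of pups for a dam with the given weight gain."""
    return slope * weight_gain_g + intercept

print(round(predict_litter(12.0)))   # predicted pups for a 12 g gain
```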
Electrochemical growth of linear conducting crystals in microgravity
NASA Technical Reports Server (NTRS)
Cronise, Raymond J., IV
1988-01-01
Much attention has been given to the synthesis of linear conducting materials. These inorganic, organic, and polymeric materials have some very interesting electrical and optical properties, including low temperature superconductivity. Because of the anisotropic nature of these compounds, impurities and defects strongly influence the unique physical properties of such crystals. Investigations have demonstrated that electrochemical growth has provided the most reproducible and purest crystals. Space, specifically microgravity, eliminates phenomena such as buoyancy-driven convection, and could permit formation of crystals many times purer than the ones grown to date. Several different linear conductors were flown on Get Away Special G-007 on board the Space Shuttle Columbia, STS 61-C, the first of a series of Project Explorer payloads. These compounds were grown by electrochemical methods, and the growth was monitored by photographs taken throughout the mission. Due to some thermal problems, no crystals of appreciable size were grown. The experimental results will be incorporated into improvements for the next two missions of Project Explorer. The results and conclusions of the first mission are discussed.
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals that account for inter-trial variability, suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small size and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
Fast and low-cost structured light pattern sequence projection.
Wissmann, Patrick; Forster, Frank; Schmitt, Robert
2011-11-21
We present a high-speed and low-cost approach for structured light pattern sequence projection. Using a fast rotating binary spatial light modulator, our method is potentially capable of projection frequencies in the kHz domain, while enabling pattern rasterization as low as 2 μm pixel size and inherently linear grayscale reproduction quantized at 12 bits/pixel or better. Due to the circular arrangement of the projected fringe patterns, we extend the widely used ray-plane triangulation method to ray-cone triangulation and provide a detailed description of the optical calibration procedure. Using the proposed projection concept in conjunction with the recently published coded phase shift (CPS) pattern sequence, we demonstrate high accuracy 3-D measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate. © 2011 Optical Society of America
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
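The projection idea in this abstract, approximating a size-N linear system by one of dimension m built from the Krylov subspace, can be sketched with the Arnoldi process followed by a GMRES-style small least squares solve. The test matrix below is an arbitrary well-conditioned perturbation of the identity chosen so the sketch converges quickly; it is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Well-conditioned test system Ax = b (eigenvalues clustered near 1).
N = 200
A = np.eye(N) + 0.1 * rng.normal(size=(N, N)) / np.sqrt(N)
b = rng.normal(size=N)

def arnoldi(A, b, m):
    """Orthonormal basis V of span{b, Ab, ..., A^(m-1)b} and the
    (m+1) x m Hessenberg matrix H satisfying A V_m = V_{m+1} H."""
    V = np.zeros((len(b), m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# GMRES-style projection: minimize ||b - A V_m y|| via the small system.
m = 30
V, H = arnoldi(A, b, m)
e1 = np.zeros(m + 1)
e1[0] = np.linalg.norm(b)
y, *_ = np.linalg.lstsq(H, e1, rcond=None)
x = V[:, :m] @ y

print(np.linalg.norm(A @ x - b))   # small residual from a size-30 problem
```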
NASA Astrophysics Data System (ADS)
Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnerer, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team
2018-01-01
The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given deeper surveys revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments, but it does affect the shape of the radio structure, introducing bends in objects closer to the centre of an X-ray group.
Optical inspection system for cylindrical objects
Brenden, Byron B.; Peters, Timothy J.
1989-01-01
In the inspection of cylindrical objects, particularly O-rings, the object is translated through a field of view and a linear light trace is projected on its surface. An image of the light trace is projected on a mask, which has a size and shape corresponding to the size and shape which the image would have if the surface of the object were perfect. If there is a defect, light will pass the mask and be sensed by a detector positioned behind the mask. Preferably, two masks and associated detectors are used, one mask being convex, to pass light when the light trace falls on a projection from the surface, and the other concave, to pass light when the light trace falls on a depression in the surface. The light trace may be either dynamic, formed by a scanned laser beam, or static, formed by such a beam focussed by a cylindrical lens. Means are provided to automatically keep the illuminating and receiving systems properly aligned.
Evaluation of an enhanced gravity-based fine-coal circuit for high-sulfur coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, M.K.; Samal, A.R.; Palit, A.
One of the main objectives of this study was to evaluate a fine-coal cleaning circuit using an enhanced gravity separator specifically for a high-sulfur coal application. The evaluation not only included testing of individual unit operations used for fine-coal classification, cleaning and dewatering, but also included testing of the complete circuit simultaneously. At a scale of nearly 2 t/h, two alternative circuits were evaluated to clean a minus 0.6-mm coal stream utilizing a 150-mm-diameter classifying cyclone, a linear screen having a projected surface area of 0.5 m², an enhanced gravity separator having a bowl diameter of 250 mm and a screen-bowl centrifuge having a bowl diameter of 500 mm. The cleaning and dewatering components of both circuits were the same; however, one circuit used a classifying cyclone whereas the other used a linear screen as the classification device. An industrial-size coal spiral was used to clean the 2- x 0.6-mm coal size fraction for each circuit to estimate the performance of a complete fine-coal circuit cleaning a minus 2-mm particle size coal stream. The 'linear screen + enhanced gravity separator + screen-bowl circuit' provided superior sulfur- and ash-cleaning performance to the alternative circuit that used a classifying cyclone in place of the linear screen. Based on these test data, it was estimated that the use of the recommended circuit to treat 50 t/h of minus 2-mm size coal having feed ash and sulfur contents of 33.9% and 3.28%, respectively, may produce nearly 28.3 t/h of clean coal with product ash and sulfur contents of 9.15% and 1.61%, respectively.
Myoglobin structure and function: A multiweek biochemistry laboratory project.
Silverstein, Todd P; Kirk, Sarah R; Meyer, Scott C; Holman, Karen L McFarlane
2015-01-01
We have developed a multiweek laboratory project in which students isolate myoglobin and characterize its structure, function, and redox state. The important laboratory techniques covered in this project include size-exclusion chromatography, electrophoresis, spectrophotometric titration, and FTIR spectroscopy. Regarding protein structure, students work with computer modeling and visualization of myoglobin and its homologues, after which they spectroscopically characterize its thermal denaturation. Students also study protein function (ligand binding equilibrium) and are instructed on topics in data analysis (calibration curves, nonlinear vs. linear regression). This upper division biochemistry laboratory project is a challenging and rewarding one that not only exposes students to a wide variety of important biochemical laboratory techniques but also ties those techniques together to work with a single readily available and easily characterized protein, myoglobin. © 2015 International Union of Biochemistry and Molecular Biology.
Grain Growth and Precipitation Behavior of Iridium Alloy DOP-26 During Long Term Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Dean T.; Muralidharan, Govindarajan; Fox, Ethan E.
The influence of long-term aging on grain growth and precipitate sizes and spatial distribution in iridium alloy DOP-26 was studied. Samples of DOP-26 were fabricated using the new process, recrystallized for 1 hour (h) at 1375 °C, then aged at either 1300, 1400, or 1500 °C for times ranging from 50 to 10,000 h. Grain size measurements (vertical and horizontal mean linear intercept and horizontal and vertical projection) and analyses of iridium-thorium precipitates (size and spacing) were made on the longitudinal, transverse, and rolling surfaces of the as-recrystallized and aged specimens, from which the two-dimensional spatial distribution and mean sizes of the precipitates were obtained. The results obtained from this study are intended to provide input to grain growth models.
NASA Astrophysics Data System (ADS)
Yin, J. J.; Chang, F.; Li, S. L.; Yao, X. L.; Sun, J. R.; Xiao, Y.
2017-12-01
To clarify the evolution of damage in typical carbon woven fabric/epoxy laminates exposed to lightning strike, artificial lightning tests were conducted on carbon woven fabric/epoxy laminates, and the damage was assessed using visual inspection and damage-peeling approaches. Relationships between damage size and action integral were also elucidated. Results showed that the damage of a carbon woven fabric/epoxy laminate has a roughly circular distribution centred approximately at the lightning attachment point; the projected damage areas of different layers show no dislocation, so the visible damage territory represents the maximum damage scope. Visible damage can be categorized into five modes: resin ablation, fiber fracture and sublimation, delamination, ablation scallops, and block-shaped ply-lift. Delamination caused by resin pyrolysis and that caused by internal pressure are clearly distinguishable. The projected area of total damage scales linearly with the action integral for specimens of the same type; the projected area of resin ablation damage scales linearly with the action integral regardless of specimen type; and, for all specimens, damage depth scales linearly with the logarithm of the action integral. The coupled thermal-electrical model that was constructed is capable of simulating the ablation damage of carbon woven fabric/epoxy laminates exposed to simulated lightning current, as verified experimentally.
Laser SRS tracker for reverse prototyping tasks
NASA Astrophysics Data System (ADS)
Kolmakov, Egor; Redka, Dmitriy; Grishkanich, Aleksandr; Tsvetkov, Konstantin
2017-10-01
According to the current great interest in Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease-of-use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. Within the project, experimental studies are planned to identify the impact of applying chip and microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities in order to share the overall computational load.
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
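For a small dense chain, the stationary distribution targeted by these projection methods can be computed directly as the left eigenvector for eigenvalue 1; for large sparse chains, Krylov methods (Arnoldi/Lanczos) replace the dense eigensolver. The transition matrix below is a made-up example:

```python
import numpy as np

# Transition matrix of a small 3-state Markov chain (rows sum to 1).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# The stationary distribution pi satisfies pi = pi P, i.e. it is the
# left eigenvector of P for eigenvalue 1 (the eigenvector of P.T with
# the largest real eigenvalue, by Perron-Frobenius).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()            # normalize to a probability vector
```

For an N-state chain with sparse P, one would instead run Arnoldi on P.T and extract the dominant eigenvector of the small m x m Hessenberg matrix, exactly the size reduction from N to m described in the abstract.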
Virtual rigid body: a new optical tracking paradigm in image-guided interventions
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Lee, David S.; Deshmukh, Nishikant; Boctor, Emad M.
2015-03-01
Tracking technology is often necessary for image-guided surgical interventions. Optical tracking is one of the options, but it suffers from line-of-sight and workspace limitations. Optical tracking is accomplished by attaching a rigid body marker, having a pattern for pose detection, onto a tool or device. A larger rigid body results in more accurate tracking, but at the same time a large size limits its usage in a crowded surgical workspace. This work presents a prototype of a novel optical tracking method using a virtual rigid body (VRB). We define the VRB as a 3D rigid body marker in the form of a pattern on a surface generated from a light source. Its pose can be recovered by observing the projected pattern with a stereo-camera system. The rigid body's size is no longer physically limited, as we can manufacture small light sources. Conventional optical tracking also requires line of sight to the rigid body. The VRB overcomes these limitations by detecting a pattern projected onto the surface. We can project the pattern onto a region of interest, allowing the pattern to always be in the view of the optical tracker. This helps to decrease the occurrence of occlusions. This manuscript describes the method and results compared with conventional optical tracking in an experimental setup using known motions. The experiments are done using an optical tracker and a linear stage, resulting in targeting errors of 0.38 mm +/- 0.28 mm with our method compared to 0.23 mm +/- 0.22 mm with conventional optical markers. Another experiment that replaced the linear stage with a robot arm resulted in rotational errors of 0.50 +/- 0.31° and 2.68 +/- 2.20° and translation errors of 0.18 +/- 0.10 mm and 0.03 +/- 0.02 mm, respectively.
NASA Astrophysics Data System (ADS)
Ren, Diandong; Karoly, David J.
2008-03-01
Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of the glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of the reference states of forcing and of the glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate the glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ~4 km). Although some small glaciers are projected to lose nearly one-third of their year-2000 length, the existence of the glaciers studied here is not threatened by the year 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
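The first-order linear glacier dynamics underlying such a model can be sketched as relaxation of the glacier length toward a temperature-dependent equilibrium; every parameter below (initial length, response time, temperature sensitivity, warming rate) is hypothetical, not a value inferred in the study:

```python
import numpy as np

def glacier_length(years, temp_anomaly, L0=10.0, tau=30.0, c=2.0):
    """First-order linear response model (hypothetical parameters):
        dL/dt = -(L - L_eq) / tau,   L_eq = L0 - c * dT(t)
    with L in km, tau (response time) in years, and c in km per deg C.
    Integrated with a simple forward-Euler step."""
    L = np.empty(len(years))
    L[0] = L0
    for k in range(1, len(years)):
        dt = years[k] - years[k - 1]
        L_eq = L0 - c * temp_anomaly[k - 1]   # equilibrium length for current forcing
        L[k] = L[k - 1] + dt * (L_eq - L[k - 1]) / tau
    return L

# Demo: an assumed linear warming ramp over 2000-2060.
years = np.arange(2000, 2061)
dT = 0.03 * (years - 2000)          # deg C above the reference state
L = glacier_length(years, dT)
```

The response time tau is what makes the projected fractional retreat depend on glacier geometry rather than on size alone, consistent with the abstract's finding of no straightforward size-retreat relationship.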
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the urgent topic of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements and analytical and experimental methods are used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select from the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the limit deviation corresponding to the maximum of the element material, i.e., EI, the lower deviation, for the sizes of internal elements (holes), and es, the upper deviation, for the sizes of external elements (shafts). It is the sizes at the material maximum that take part in the mating of shafts and holes and determine the type of fit.
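A minimal numeric sketch of the two conventions discussed above (all nominal sizes and tolerance values are hypothetical): a symmetric Js/js interval for coordinating sizes, versus intervals whose basic deviation sits at the material maximum for element sizes:

```python
def js_limits(nominal, tol):
    """Symmetric Js/js interval for a coordinating size: deviations
    of +/- tol/2 around the nominal (basic deviation = zero mean)."""
    return nominal - tol / 2, nominal + tol / 2

def hole_limits(nominal, tol):
    """Internal element (hole) with basic deviation EI = 0 at the
    material maximum: the size can only grow from the nominal."""
    return nominal, nominal + tol

def shaft_limits(nominal, tol):
    """External element (shaft) with basic deviation es = 0 at the
    material maximum: the size can only shrink from the nominal."""
    return nominal - tol, nominal
```

For a nominal 50 mm size, js_limits(50.0, 0.05) gives a symmetric 49.975-50.025 mm interval, while hole_limits and shaft_limits anchor one limit exactly at the nominal, i.e. at the material maximum that governs the fit.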
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors.
International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.
de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica
2018-05-15
Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024^3 voxels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular span, and projection size. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48x) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
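The principle of covariance estimation from projections can be illustrated in a toy 2-D setting (all sizes and distributions below are hypothetical): each sample is observed only through a 1-D projection along a random direction, and the covariance entries are recovered by least squares, a dense analogue of inverting the projection covariance transform:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth 2-D covariance of the (zero-mean) hidden samples.
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Each sample x_i is seen only through a 1-D projection along its own
# random unit direction u_i:  y_i = u_i^T x_i, so E[y_i^2] = u_i^T Sigma u_i.
N = 100_000
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=N)
theta = rng.uniform(0.0, np.pi, size=N)
U = np.column_stack([np.cos(theta), np.sin(theta)])
y = np.einsum("ij,ij->i", U, X)

# Linear system for the 3 unique covariance entries (s11, s12, s22):
#   y_i^2 ~ u1^2 s11 + 2 u1 u2 s12 + u2^2 s22
Amat = np.column_stack([U[:, 0] ** 2, 2 * U[:, 0] * U[:, 1], U[:, 1] ** 2])
s11, s12, s22 = np.linalg.lstsq(Amat, y ** 2, rcond=None)[0]
Sigma_hat = np.array([[s11, s12], [s12, s22]])
```

The real cryo-EM problem replaces the 2-D vectors by voxel grids and the scalar projections by 2-D images, which is why a sparse basis for the projection covariance transform is needed there; the statistical idea is the same.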
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
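A simplified stand-in for the reduction step described above (matrix sizes, noise level and SNR threshold are arbitrary): within one cluster, project the data onto the leading singular subspace of the kernel rows, estimate the SNR of each projected datum, and keep only the significant ones:

```python
import numpy as np

def reduce_block(G, d, sigma_noise, k, snr_min=1.0):
    """Reduce one cluster (G, d) of a tomographic system G x = d.
    Project onto the k largest singular values; since U is orthonormal,
    the noise level of the projected data is still sigma_noise, so
    components below the SNR threshold can be discarded.
    Returns the reduced system (G_red, d_red)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]
    d_proj = Uk.T @ d                       # data in the singular basis
    keep = np.abs(d_proj) / sigma_noise >= snr_min
    # Rows of the reduced matrix: U_k^T G = diag(s_k) V_k^T
    return sk[keep, None] * Vtk[keep], d_proj[keep]

# Demo: one cluster of 200 kernel rows over 50 model parameters.
rng = np.random.default_rng(1)
G = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
d = G @ x_true + 0.1 * rng.standard_normal(200)
G_red, d_red = reduce_block(G, d, sigma_noise=0.1, k=20)
```

Clustering geographically close ray paths before this step keeps each G block low-rank and the reduced matrix sparse, which is the point of the row-clustering in the paper.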
Perng, Wei; Rifas-Shiman, Sheryl L; Kramer, Michael S; Haugaard, Line K; Oken, Emily; Gillman, Matthew W; Belfort, Mandy B
2016-02-01
In recent years, the prevalence of hypertension and prehypertension increased markedly among children and adolescents, highlighting the importance of identifying determinants of elevated blood pressure early in life. Low birth weight and rapid early childhood weight gain are associated with higher future blood pressure. However, few studies have examined the timing of postnatal weight gain in relation to later blood pressure, and little is known regarding the contribution of linear growth. We studied 957 participants in Project Viva, an ongoing US prebirth cohort. We examined the relations of gains in body mass index z-score and length/height z-score during 4 early life age intervals (birth to 6 months, 6 months to 1 year, 1 to 2 years, and 2 to 3 years) with blood pressure during mid-childhood (6-10 years) and evaluated whether these relations differed by birth size. After accounting for confounders, each additional z-score gain in body mass index during birth to 6 months and 2 to 3 years was associated with 0.81 (0.15, 1.46) and 1.61 (0.33, 2.89) mm Hg higher systolic blood pressure, respectively. Length/height gain was unrelated to mid-childhood blood pressure, and there was no evidence of effect modification by birth size for body mass index or length/height z-score gain. Our findings suggest that more rapid gain in body mass index during the first 6 postnatal months and in the preschool years may lead to higher systolic blood pressure in mid-childhood, regardless of size at birth. Strategies to reduce accrual of excess adiposity during early life may reduce mid-childhood blood pressure, which may also impact adult blood pressure and cardiovascular health. © 2015 American Heart Association, Inc.
Circumstellar Disks Around Rapidly Rotating Be-type Stars
NASA Astrophysics Data System (ADS)
Touhami, Yamina
2012-01-01
Be stars are rapidly rotating B-type stars that eject large amounts of gaseous material into a circumstellar equatorial disk. The existence of this disk has been confirmed through the presence of several observational signatures such as the strong hydrogen emission lines, the IR flux excess, and the linear polarization detected from these systems. Here we report simultaneous near-IR interferometric and spectroscopic observations of circumstellar disks around Be stars obtained with the CHARA Array long baseline interferometer and the Mimir spectrograph at Lowell Observatory. The goal of this project was to measure precise angular sizes and to characterize the fundamental geometrical and physical properties of the circumstellar disks. We were able to determine spatial extensions, inclinations, and position angles, as well as the gas density profile of the circumstellar disks using an elliptical Gaussian model and a physical thick disk model, and we show that the K-band interferometric angular sizes of the circumstellar disks are correlated with the H-alpha angular sizes. By combining the projected rotational velocity of the Be star with the disk inclination derived from interferometry, we provide estimates of the equatorial rotational velocities of these rapidly rotating Be stars.
Speed scanning system based on solid-state microchip laser for architectural planning
NASA Astrophysics Data System (ADS)
Redka, Dmitriy; Grishkanich, Alexsandr S.; Kolmakov, Egor; Tsvetkov, Konstantin
2017-10-01
According to the current great interest in Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease-of-use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. Within the project, experimental studies are planned to identify the impact of applying microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities in order to share the overall computational load.
Coordinate measuring system based on microchip lasers for reverse prototyping
NASA Astrophysics Data System (ADS)
Iakovlev, Alexey; Grishkanich, Alexsandr S.; Redka, Dmitriy; Tsvetkov, Konstantin
2017-02-01
According to the current great interest in Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease-of-use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. Within the project, experimental studies are planned to identify the impact of applying chip and microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities in order to share the overall computational load.
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as a cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off-line and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
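The baseline computation the article accelerates can be sketched for a small hypothetical state-space model: solve the discrete algebraic Riccati equation for the steady-state error covariance, then form the asymptotic Kalman gain. The dimensions, dynamics and noise levels below are illustrative only, not an actual AO model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical small model:  x_{k+1} = A x_k + w_k (phase state),
#                            y_k     = C x_k + v_k (sensor measurement).
n, p = 8, 4
rng = np.random.default_rng(2)
A = 0.95 * np.eye(n)                  # near-unit temporal correlation
C = rng.standard_normal((p, n))       # measurement operator
Q = 0.1 * np.eye(n)                   # process noise covariance
R = 0.01 * np.eye(p)                  # measurement noise covariance

# Steady-state predictive error covariance P from the algebraic Riccati
# equation (note the transposes: the estimation DARE is the dual of the
# control DARE that solve_discrete_are expects), then the Kalman gain.
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
```

For a real extremely large telescope, n grows with the number of phase points across the aperture, and solving the DARE at this size is exactly the bottleneck that motivates the cropped infinite-screen approximation.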
Concentrating Solar Power Projects - Puerto Errado 2 Thermosolar Power
Project overview (status date: April 26, 2013): Puerto Errado 2 is a linear Fresnel reflector system. Project name: Puerto Errado 2; participant: Novatec Biosol AG (15%); technology: linear Fresnel reflector; turbine capacity: 30.0 MW net, 30.0 MW gross; status: operational; country: Spain; city: Calasparra.
Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image
NASA Astrophysics Data System (ADS)
Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren
2012-01-01
The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method adopts only one linear array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired by using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments are made to test the proposed method.
Microminiature linear split Stirling cryogenic cooler for portable infrared imagers
NASA Astrophysics Data System (ADS)
Veprik, A.; Vilenchik, H.; Riabzev, S.; Pundak, N.
2007-04-01
Novel tactics employed in carrying out military and antiterrorist operations call for the development of a new generation of warfare equipment, among which sophisticated portable infrared (IR) imagers for surveillance, reconnaissance, targeting and navigation play an important role. The superior performance of such imagers relies on novel optronic technologies and on maintaining the infrared focal plane arrays at cryogenic temperatures using closed-cycle refrigerators. Traditionally, rotary-driven Stirling cryogenic engines are used for this purpose. As compared to their military off-the-shelf linear rivals, they are lighter, more compact and normally consume less electrical power. The latest technological advances in the industrial development of high-temperature (100 K) infrared detectors have initiated R&D activity towards developing microminiature cryogenic coolers, both of rotary and linear types. For these applications, split linearly driven cryogenic coolers appear to be more suitable. Their known advantages include flexibility in the system design, inherently longer lifetime, low vibration export and superior aural stealth. Moreover, recent progress in designing highly efficient "moving magnet" resonant linear drives and driving electronics enables a further essential reduction of cooler size, weight and power consumption. The authors report on the development and project status of a novel Ricor model K527 microminiature split Stirling linear cryogenic cooler designed especially for portable infrared imagers.
Berker, Yannick; Karp, Joel S; Schulz, Volkmar
2017-09-01
The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a previously proposed two-branch back-projection (2BP) algorithm were evaluated. MLGA was combined with the Armijo step-size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step-size rule allowed 10-fold larger step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform, convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
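The Armijo backtracking rule mentioned above can be sketched generically. The following is a minimal illustration of gradient ascent with Armijo backtracking on a toy concave objective, not the PET scatter model; the function names and the test objective are hypothetical.

```python
import numpy as np

def armijo_gradient_ascent(f, grad, x0, step0=1.0, beta=0.5, c=1e-4,
                           n_iter=100):
    """Maximize f via gradient ascent with Armijo backtracking.

    Each iteration starts from step size step0 and halves it (factor
    beta) until the sufficient-increase condition
        f(x + t*g) >= f(x) + c * t * ||g||^2
    holds, which guards against overshooting while permitting large steps.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        t = step0
        while f(x + t * g) < f(x) + c * t * np.dot(g, g):
            t *= beta
            if t < 1e-12:  # give up on this iteration
                break
        x = x + t * g
    return x

# Toy concave objective: f(x) = -||x - 1||^2, maximum at x = [1, 1].
f = lambda x: -np.sum((x - 1.0) ** 2)
grad = lambda x: -2.0 * (x - 1.0)
x_opt = armijo_gradient_ascent(f, grad, np.zeros(2))
```

On this quadratic the first accepted backtracked step already lands on the maximizer, illustrating why an adaptive step size can dominate a fixed one.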
Schwarz, C; Thier, P
1996-12-16
Dendritic features of identified projection neurons in two precerebellar nuclei, the pontine nuclei (PN) and the nucleus reticularis tegmenti pontis (NRTP) were established by using a combination of retrograde tracing (injection of fluorogold or rhodamine labelled latex micro-spheres into the cerebellum) with subsequent intracellular filling (lucifer yellow) in fixed slices of pontine brainstem. A multivariate analysis revealed that parameters selected to characterize the dendritic tree such as size of dendritic field, number of branching points, and length of terminal dendrites did not deviate significantly between different regions of the PN and the NRTP. On the other hand, projection neurons in ventral regions of the PN were characterized by an irregular coverage of their distal dendrites by appendages while those in the dorsal PN and the NRTP were virtually devoid of them. The NRTP, dorsal, and medial PN tended to display larger somata and more primary dendrites than ventral regions of the PN. These differences, however, do not allow the differentiation of projection neurons within the PN from those in the NRTP. They rather reflect a dorso-ventral gradient ignoring the border between the nuclei. Accordingly, a cluster analysis did not differentiate distinct types of projection neurons within the total sample. In both nuclei, multiple linear regression analysis revealed that the size of dendritic fields was strongly correlated with the length of terminal dendrites while it did not depend on other parameters of the dendritic field. Thus, larger dendritic fields seem not to be accompanied by a higher complexity but rather may be used to extend the reach of a projection neuron within the arrangement of afferent terminals. We suggest that these similarities within dendritic properties in PN and NRTP projection neurons reflect similar processing of afferent information in both precerebellar nuclei.
In situ fragmentation and rock particle sorting on arid hills
NASA Astrophysics Data System (ADS)
McGrath, Gavan S.; Nie, Zhengyao; Dyskin, Arcady; Byrd, Tia; Jenner, Rowan; Holbeche, Georgina; Hinz, Christoph
2013-03-01
Transport processes are often proposed to explain the sorting of rock particles on arid hillslopes, where mean rock particle size often decreases in the downslope direction. Here we show that in situ fragmentation of rock particles can also produce similar patterns. A total of 93,414 rock particles were digitized from 880 photographs of the surface of three mesa hills in the Great Sandy Desert, Australia. Rock particles were characterized by the projected Feret's diameter and circularity. Distance from the duricrust cap was found to be a more robust explanatory variable for diameter than the local hillslope gradient. Mean diameter decreased exponentially downslope, while the fractional area covered by rock particles decreased linearly. Rock particle diameters were distributed lognormally, with both the location and scale parameters decreasing approximately linearly downslope. Rock particle circularity distributions showed little change; only a slight shift in the mode to more circular particles was noted to occur downslope. A dynamic fragmentation model was used to assess whether in situ weathering alone could reproduce the observed downslope fining of diameters. Modeled and observed size distributions agreed well and both displayed a preferential loss of relatively large rock particles and an apparent approach to a terminal size distribution of the rocks downslope. We show this is consistent with a size effect in material strength, where large rocks are more susceptible to fatigue failure under stress than smaller rocks. In situ fragmentation therefore produces qualitatively similar patterns to those that would be expected to arise from selective transport.
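The lognormal description of rock-particle diameters can be illustrated with synthetic data. The parameter values below are hypothetical stand-ins, chosen only to mimic the reported downslope decrease in both the location and scale parameters of the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rock-particle diameters (cm) at two downslope positions;
# both the location (mu) and scale (sigma) of the underlying normal
# distribution of log-diameters shrink downslope, as reported.
d_upslope = rng.lognormal(mean=1.0, sigma=0.6, size=5000)
d_downslope = rng.lognormal(mean=0.5, sigma=0.4, size=5000)

def fit_lognormal(d):
    """Maximum-likelihood (mu, sigma) of the log-diameters; equivalent
    to fitting a lognormal with zero location offset."""
    logs = np.log(d)
    return logs.mean(), logs.std()

mu_up, sig_up = fit_lognormal(d_upslope)
mu_dn, sig_dn = fit_lognormal(d_downslope)
```

Comparing the fitted pairs recovers the downslope fining signal: both parameters decrease, mirroring the preferential loss of large particles.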
Penis size interacts with body shape and height to influence male attractiveness.
Mautz, Brian S; Wong, Bob B M; Peters, Richard A; Jennions, Michael D
2013-04-23
Compelling evidence from many animal taxa indicates that male genitalia are often under postcopulatory sexual selection for characteristics that increase a male's relative fertilization success. There could, however, also be direct precopulatory female mate choice based on male genital traits. Before clothing, the nonretractable human penis would have been conspicuous to potential mates. This observation has generated suggestions that human penis size partly evolved because of female choice. Here we show, based upon female assessment of digitally projected life-size, computer-generated images, that penis size interacts with body shape and height to determine male sexual attractiveness. Positive linear selection was detected for penis size, but the marginal increase in attractiveness eventually declined with greater penis size (i.e., quadratic selection). Penis size had a stronger effect on attractiveness in taller men than in shorter men. There was a similar increase in the positive effect of penis size on attractiveness with a more masculine body shape (i.e., greater shoulder-to-hip ratio). Surprisingly, larger penis size and greater height had almost equivalent positive effects on male attractiveness. Our results support the hypothesis that female mate choice could have driven the evolution of larger penises in humans. More broadly, our results show that precopulatory sexual selection can play a role in the evolution of genital traits.
NASA Astrophysics Data System (ADS)
Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas
2010-11-01
Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model, resulting in Raman images that demonstrate good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing training datasets from mapping data, despite lengthy mapping times, because of the additional morphological information gained; it could also facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future. Larger pixel sizes (and faster mapping), however, may be more feasible for clinical application.
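The PC-fed LDA workflow described above can be sketched with a minimal numpy-only implementation, using synthetic Gaussian "spectra" in place of Raman maps. The class offsets, component count, and sample sizes are illustrative, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for Raman spectra: 3 tissue classes, 200 "spectra"
# of 50 wavenumber channels each, separated by small per-channel offsets.
n_per, n_chan = 200, 50
X = np.vstack([rng.normal(m, 1.0, size=(n_per, n_chan))
               for m in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], n_per)

# Step 1: PCA -- project the training spectra onto the leading PCs.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:10].T                      # first 10 principal components
T = (X - mean) @ W                 # PC scores fed into LDA

# Step 2: LDA with a pooled within-class covariance estimate.
classes = np.unique(y)
mus = np.array([T[y == k].mean(axis=0) for k in classes])
Sw = sum((T[y == k] - mus[k]).T @ (T[y == k] - mus[k]) for k in classes)
Sw /= len(T) - len(classes)
Sw_inv = np.linalg.inv(Sw)

def classify(spectra):
    """Project new spectra onto the model and pick the highest
    linear discriminant score (equal priors assumed)."""
    t = (spectra - mean) @ W
    scores = np.array([t @ Sw_inv @ mu - 0.5 * mu @ Sw_inv @ mu
                       for mu in mus])
    return classes[np.argmax(scores, axis=0)]

# "Map" spectra not used in training are projected onto the model.
X_map = np.vstack([rng.normal(m, 1.0, size=(50, n_chan))
                   for m in (0.0, 0.5, 1.0)])
y_map = np.repeat([0, 1, 2], 50)
accuracy = np.mean(classify(X_map) == y_map)
```

As the abstract notes, every projected spectrum is forced into one of the training groups; this sketch inherits that limitation by construction.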
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the pollutant concentration is the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions.
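The functional-regression idea can be sketched on a discretized grid: the grain-size curve enters the model through an integral against a smooth coefficient function, represented here in a small polynomial basis with a simple ridge penalty. All data, the basis, and the penalty below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: each sample's (centered) grain-size curve g_i(s) on
# a common grid of m grain sizes; the pollutant concentration responds
# through a smooth coefficient function beta(s):
#     y_i = \int g_i(s) beta(s) ds + noise
m, n = 60, 120
s = np.linspace(0, 1, m)
beta_true = np.sin(np.pi * s)          # smooth coefficient function
G = rng.normal(size=(n, m))            # centered grain-size curves
ds = s[1] - s[0]
y = G @ beta_true * ds + rng.normal(scale=0.005, size=n)

# Represent beta(s) in a small polynomial basis and solve the penalized
# least-squares problem; lam trades data fit against smoothness.
k = 8
B = np.vander(s, k, increasing=True)   # basis functions on the grid
Z = G @ B * ds                         # design matrix for coefficients
lam = 1e-6
c = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)
beta_hat = B @ c                       # estimated coefficient function
```

Because the fitted coefficient is a function of grain size rather than a single slope, plotting `beta_hat` against `s` directly shows which grain-size fractions drive the pollutant concentration, which is the interpretability advantage the paper emphasizes.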
Effects of pole flux distribution in a homopolar linear synchronous machine
NASA Astrophysics Data System (ADS)
Balchin, M. J.; Eastham, J. F.; Coles, P. C.
1994-05-01
Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a design of machine intended for driving a 100-passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid-pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combining these results with those from a cylindrical version of the machine makes it possible to infer the performance of the full-sized traction machine. This gives a power factor of 0.8 and an efficiency of 0.9 at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (0.52 power factor and 0.82 efficiency). It is therefore concluded that in any projected high-speed Maglev systems, a linear synchronous machine should be the first choice as the propulsion means.
Ortega, Ileana; Martín, Alberto; Díaz, Yusbelly
2011-03-01
Astropecten marginatus is a sea star widely distributed in Northern and Eastern South America, found on sandy and muddy bottoms, in shallow and deep waters. To describe some of its ecological characteristics, we calculated its spatial-temporal distribution, population parameters (based on size and weight) and diet in the Orinoco Delta ecoregion (Venezuela). The ecoregion was divided into three sections: Golfo de Paria, Boca de Serpiente and Plataforma Deltana. Samples for the rainy and dry seasons came from megabenthos surveys of the "Línea Base Ambiental Plataforma Deltana (LBAPD)" and "Corocoro Fase I (CFI)" projects. The collected sea stars were measured, weighed and dissected from the oral side to extract their stomachs and identify the prey consumed. A total of 570 sea stars were collected in the LBAPD project and 306 in the CFI project. The highest densities were found during the dry season in almost all sections. In the LBAPD project the highest density was in the "Plataforma Deltana" section (0.007 +/- 0.022 ind/m2 in the dry season and 0.014 +/- 0.06 ind/m2 in the rainy season), and in the CFI project the densities in the "Golfo de Paria" section were 0.705 +/- 0.829 ind/m2 in the rainy season and 1.027 +/- 1.107 ind/m2 in the dry season. The most frequent size range was 3.1-4.6 cm. The highest biomass was found in the "Golfo de Paria" section (7.581 +/- 0.018 mg/m2 in the dry season and 0.005 +/- 6.542 x 10(-06) mg/m2 in the rainy season for 2004-2005; and 3.979 +/- 4.024 mg/m2 in the dry season and 3.117 +/- 3.137 mg/m2 in the rainy season for 2006). A linear relationship was found between sea star size and weight, but no relationship was observed between size and the depth at which a sea star was collected. Mollusks are dominant in the sea star diet (47.4% in abundance). Multivariate ordination (MDS) and SIMPER analyses showed that the diet was heterogeneous across sections, seasons, projects and size classes, and there was no difference in the number of prey or food items a sea star can eat. Although A. marginatus has been described as a predator, scavenger and detritivorous habits were also inferred in this study.
Breast MRI radiomics: comparison of computer- and human-extracted imaging phenotypes.
Sutton, Elizabeth J; Huang, Erich P; Drukker, Karen; Burnside, Elizabeth S; Li, Hui; Net, Jose M; Rao, Arvind; Whitman, Gary J; Zuley, Margarita; Ganott, Marie; Bonaccio, Ermelinda; Giger, Maryellen L; Morris, Elizabeth A
2017-01-01
In this study, we sought to investigate if computer-extracted magnetic resonance imaging (MRI) phenotypes of breast cancer could replicate human-extracted size and Breast Imaging-Reporting and Data System (BI-RADS) imaging phenotypes using MRI data from The Cancer Genome Atlas (TCGA) project of the National Cancer Institute. Our retrospective interpretation study involved analysis of Health Insurance Portability and Accountability Act-compliant breast MRI data from The Cancer Imaging Archive, an open-source database from the TCGA project. This study was exempt from institutional review board approval at Memorial Sloan Kettering Cancer Center and the need for informed consent was waived. Ninety-one pre-operative breast MRIs with verified invasive breast cancers were analysed. Three fellowship-trained breast radiologists evaluated the index cancer in each case according to size and the BI-RADS lexicon for shape, margin, and enhancement (human-extracted image phenotypes [HEIP]). Human inter-observer agreement was analysed by the intra-class correlation coefficient (ICC) for size and Krippendorff's α for other measurements. Quantitative MRI radiomics of computerised three-dimensional segmentations of each cancer generated computer-extracted image phenotypes (CEIP). Spearman's rank correlation coefficients were used to compare HEIP and CEIP. Inter-observer agreement for HEIP varied, with the highest agreement seen for size (ICC 0.679) and shape (ICC 0.527). The computer-extracted maximum linear size replicated the human measurement with p < 10⁻¹². CEIP of shape, specifically sphericity and irregularity, replicated HEIP with both p values < 0.001. CEIP did not demonstrate agreement with HEIP of tumour margin or internal enhancement. Quantitative radiomics of breast cancer may replicate human-extracted tumour size and BI-RADS imaging phenotypes, thus enabling precision medicine.
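The comparison of a human- and a computer-extracted measurement via Spearman's rank correlation, as done above for lesion size, can be sketched with synthetic data (the sizes below are invented stand-ins, not the TCGA measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical stand-in: human-measured maximum linear size (cm) for 91
# lesions, and a computer-extracted size that tracks it with small
# measurement noise.
human = rng.uniform(0.5, 5.0, size=91)
computer = human + rng.normal(scale=0.2, size=91)

# Spearman's rho is rank-based, so it is insensitive to any monotonic
# calibration difference between the two measurement pipelines.
rho, p = stats.spearmanr(human, computer)
```

A high rho with a vanishingly small p, as reported for maximum linear size, indicates the computer measurement preserves the ordering of lesions by size even if its absolute scale differs.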
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. Particular attention in this respect attracts the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
Implementation of projective measurements with linear optics and continuous photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide; Loock, Peter van
2005-02-01
We investigate the possibility of implementing a given projection measurement using linear optics and arbitrarily fast feedforward based on the continuous detection of photons. In particular, we systematically derive the so-called Dolinar scheme that achieves the minimum-error discrimination of binary coherent states. Moreover, we show that the Dolinar-type approach can also be applied to projection measurements in the regime of photonic-qubit signals. Our results demonstrate that for implementing a projection measurement with linear optics, in principle, unit success probability may be approached even without the use of expensive entangled auxiliary states, as they are needed in all known (near-)deterministic linear-optics proposals.
A statistically defined anthropomorphic software breast phantom.
Lau, Beverly A; Reiser, Ingrid; Nishikawa, Robert M; Bakic, Predrag R
2012-06-01
Digital anthropomorphic breast phantoms have emerged in the past decade because of recent advances in 3D breast x-ray imaging techniques. Computer phantoms in the literature have incorporated power-law noise to represent glandular tissue and branching structures to represent linear components such as ducts. When power-law noise is added to those phantoms in one piece, the simulated fibroglandular tissue is distributed randomly throughout the breast, resulting in dense tissue placement that may not be observed in a real breast. The authors describe a method for enhancing an existing digital anthropomorphic breast phantom by adding binarized power-law noise to a limited area of the breast. Phantoms with (0.5 mm)³ voxel size were generated using software developed by Bakic et al. Between 0% and 40% of adipose compartments in each phantom were replaced with binarized power-law noise (β = 3.0) ranging from 0.1 to 0.6 volumetric glandular fraction. The phantoms were compressed to 7.5 cm thickness, then blurred using a 3 × 3 boxcar kernel and up-sampled to (0.1 mm)³ voxel size using trilinear interpolation. Following interpolation, the phantoms were adjusted for volumetric glandular fraction using global thresholding. Monoenergetic phantom projections were created, including quantum noise and simulated detector blur. Texture was quantified in the simulated projections using power-spectrum analysis to estimate the power-law exponent β from 25.6 × 25.6 mm² regions of interest. Phantoms were generated with total volumetric glandular fraction ranging from 3% to 24%. Values for β (averaged per projection view) were found to be between 2.67 and 3.73. Thus, the range of textures of the simulated breasts covers the textures observed in clinical images. Using these new techniques, digital anthropomorphic breast phantoms can be generated with a variety of glandular fractions and patterns. β values for this new phantom are comparable with published values for breast tissue in x-ray projection modalities. The combination of conspicuous linear structures and binarized power-law noise added to a limited area of the phantom qualitatively improves its realism.
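The power-spectrum analysis used above to estimate β can be sketched end to end: generate a 2D field with a known power-law spectrum, then recover the exponent from a radially averaged, log-log linear fit. The grid size and binning below are illustrative choices, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)

def power_law_noise(n, beta):
    """2D random field whose power spectrum falls off as 1/f^beta."""
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fy, indexing="ij"))
    f[0, 0] = 1.0                       # avoid division by zero at DC
    amp = f ** (-beta / 2.0)
    amp[0, 0] = 0.0                     # zero-mean field
    phase = rng.uniform(0, 2 * np.pi, size=(n, n))
    return np.fft.ifft2(amp * np.exp(1j * phase)).real

def estimate_beta(img, nbins=20):
    """Fit log P(f) = const - beta*log f to the radial power spectrum."""
    n = img.shape[0]
    P = np.abs(np.fft.fft2(img)) ** 2
    f = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n),
                              indexing="ij"))
    mask = f > 0
    bins = np.logspace(np.log10(f[mask].min()), np.log10(0.5), nbins)
    idx = np.digitize(f[mask], bins)
    fm, Pm = [], []
    for i in range(1, nbins):           # average power in each radial bin
        sel = idx == i
        if sel.sum() > 0:
            fm.append(f[mask][sel].mean())
            Pm.append(P[mask][sel].mean())
    slope, _ = np.polyfit(np.log(fm), np.log(Pm), 1)
    return -slope

img = power_law_noise(256, beta=3.0)
beta_hat = estimate_beta(img)
```

Running the estimator on a synthetic β = 3.0 field recovers an exponent close to 3, the same analysis loop used to verify that the phantom projections fall in the clinically observed 2.67 to 3.73 range.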
The three-dimensional structure of cumulus clouds over the ocean. 1: Structural analysis
NASA Technical Reports Server (NTRS)
Kuo, Kwo-Sen; Welch, Ronald M.; Weger, Ronald C.; Engelstad, Mark A.; Sengupta, S. K.
1993-01-01
Thermal channel (channel 6, 10.4-12.5 micrometers) images of five Landsat thematic mapper cumulus scenes over the ocean are examined. These images are thresholded using the standard International Satellite Cloud Climatology Project (ISCCP) thermal threshold algorithm. The individual clouds in the cloud fields are segmented to obtain their structural statistics which include size distribution, orientation angle, horizontal aspect ratio, and perimeter-to-area (PtA) relationship. The cloud size distributions exhibit a double power law with the smaller clouds having a smaller absolute exponent. The cloud orientation angles, horizontal aspect ratios, and PtA exponents are found in good agreement with earlier studies. A technique also is developed to recognize individual cells within a cloud so that statistics of cloud cellular structure can be obtained. Cell structural statistics are computed for each cloud. Unicellular clouds are generally smaller (less than or equal to 1 km) and have smaller PtA exponents, while multicellular clouds are larger (greater than or equal to 1 km) and have larger PtA exponents. Cell structural statistics are similar to those of the smaller clouds. When each cell is approximated as a quadric surface using a linear least squares fit, most cells have the shape of a hyperboloid of one sheet, but about 15% of the cells are best modeled by a hyperboloid of two sheets. Less than 1% of the clouds are ellipsoidal. The number of cells in a cloud increases slightly faster than linearly with increasing cloud size. The mean nearest neighbor distance between cells in a cloud, however, appears to increase linearly with increasing cloud size and to reach a maximum when the cloud effective diameter is about 10 km; then it decreases with increasing cloud size. Sensitivity studies of threshold and lapse rate show that neither has a significant impact upon the results. 
A goodness-of-fit ratio is used to provide a quantitative measure of the individual cloud results. Significantly improved results are obtained after applying a smoothing operator, suggesting that eliminating subresolution-scale variations with higher spatial resolution may yield even better shape analyses.
VizieR Online Data Catalog: Main-sequence A, F, G, and K stars photometry (Boyajian+, 2013)
NASA Astrophysics Data System (ADS)
Boyajian, T. S.; von Braun, K.; van Belle, G.; Farrington, C.; Schaefer, G.; Jones, J.; White, R.; McAlister, H. A.; Ten Brummelaar, T. A.; Ridgway, S.; Gies, D.; Sturmann, L.; Sturmann, J.; Turner, N. H.; Goldfinger, P. J.; Vargas, N.
2016-07-01
Akin to the observing strategy outlined in DT1 and DT2, observations for this project were made with the CHARA Array, a long-baseline optical/infrared interferometer located at Mount Wilson Observatory in southern California. The target stars were selected based on their approximate angular size (a function of their intrinsic linear size and distance to the observer). We limit the selection to stars with angular sizes >0.45 mas, in order to adequately resolve their sizes to a few percent precision with the selected instrument setup. Note that all stars that meet this requirement are brighter than the instrumental limits of our detector by several magnitudes. The stars also have no known stellar companion within 3 arcsec, to avoid contamination by incoherent light in the interferometer's field of view. From 2008 to 2012, we used the CHARA Classic beam combiner operating in the H band (λH=1.67um) and the K' band (λK'=2.14um) to collect observations of 23 stars using CHARA's longest baseline combinations. (5 data files).
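The quoted dependence of angular size on intrinsic linear size and distance is just the small-angle relation θ = d/D; a small helper illustrates the 0.45 mas selection threshold (the constants are standard values, and the function name and example star are ours, not from the catalog):

```python
import math

# Angular diameter of a star from its linear diameter and distance,
# via the small-angle approximation theta = d / D.
SOLAR_DIAMETER_M = 1.392e9          # metres
PARSEC_M = 3.0857e16                # metres
MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0

def angular_size_mas(diameter_suns, distance_pc):
    """Angular diameter in milliarcseconds for a star of the given
    linear diameter (in solar diameters) at the given distance (pc)."""
    theta_rad = diameter_suns * SOLAR_DIAMETER_M / (distance_pc * PARSEC_M)
    return theta_rad * MAS_PER_RAD

# A Sun-sized star at 10 pc subtends roughly 0.93 mas -- above the cut.
theta = angular_size_mas(1.0, 10.0)
```

The same star at 25 pc would fall below 0.45 mas, showing why the selection jointly constrains intrinsic size and distance rather than either alone.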
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating-point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. The inverse must operate iteratively; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic.
The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
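The idea of replacing a division with a multiplication by a polynomial approximation of the inverse can be sketched in floating point; the denominator range below is an illustrative normalized interval, and the fixed-point scaling used in the thesis is omitted for clarity.

```python
import numpy as np

# Fit a quadratic least-squares approximation of 1/z over a known,
# narrow range of denominators (illustrative values), so a divide can
# be replaced by two multiplies and two adds in the projection loop.
z_lo, z_hi = 0.9, 1.1
z = np.linspace(z_lo, z_hi, 1000)
a2, a1, a0 = np.polyfit(z, 1.0 / z, 2)

def approx_div(num, den):
    """num / den via the quadratic inverse approximation.

    Horner form costs 2 multiplies + 2 adds, plus one multiply by num;
    no division is performed.
    """
    inv = (a2 * den + a1) * den + a0
    return num * inv

# Maximum relative error of the approximated inverse over the range.
rel_err = np.max(np.abs(approx_div(1.0, z) - 1.0 / z) * z)
```

The narrower the guaranteed denominator range, the better a fixed-degree polynomial approximates the inverse, which is why the quadratic variant in the text can keep pixel position errors small while avoiding divides entirely.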
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
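A standard Krylov projection for products of matrix functions and vectors, the step that KSS methods replace, can be sketched with an Arnoldi iteration and a 1D Laplacian test matrix (all sizes illustrative):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expv(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace.

    Builds an orthonormal basis V of span{v, Av, ..., A^(m-1) v} and the
    small Hessenberg matrix H = V^T A V, then uses the standard estimate
    exp(A) v ~ ||v|| * V @ expm(H) @ e1.
    """
    n = len(v)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]                 # last Hessenberg column
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V @ (expm(H) @ e1)

# Diffusion-like test matrix: 1D Laplacian stencil on 200 grid points.
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.random.default_rng(4).normal(size=n)
approx = arnoldi_expv(A, v, m=30)
exact = expm(A) @ v
```

The number of projection steps `m` needed for a fixed accuracy grows with the spectral spread of `A`; on refined grids that spread grows, which is exactly the resolution-dependent cost the paper's KSS substitution is designed to avoid.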
Nonlinear Entanglement and its Application to Generating Cat States
NASA Astrophysics Data System (ADS)
Shen, Y.; Assad, S. M.; Grosse, N. B.; Li, X. Y.; Reid, M. D.; Lam, P. K.
2015-03-01
The Einstein-Podolsky-Rosen (EPR) paradox, which was formulated to argue for the incompleteness of quantum mechanics, has since metamorphosed into a resource for quantum information. EPR entanglement describes the strength of linear correlations between two objects in terms of a pair of conjugate observables in relation to the Heisenberg uncertainty limit. We propose that entanglement can be extended to include nonlinear correlations. We show that two driven harmonic oscillators coupled via a third-order nonlinearity can exhibit quadratic-like nonlinear entanglement which, after a projective measurement on one of the oscillators, collapses the other into a cat state of tunable size.
Complexation of Polyelectrolyte Micelles with Oppositely Charged Linear Chains.
Kalogirou, Andreas; Gergidis, Leonidas N; Miliou, Kalliopi; Vlahos, Costas
2017-03-02
The formation of interpolyelectrolyte complexes (IPECs) from linear AB diblock copolymer precursor micelles and oppositely charged linear homopolymers is studied by means of molecular dynamics simulations. All beads of the linear polyelectrolyte (C) are charged with elementary quenched charge +1e, whereas in the diblock copolymer only the solvophilic (A) type beads have quenched charge -1e. For the same Bjerrum length, the ratio of positive to negative charges, Z+/-, of the mixture and the relative length of charged moieties r determine the size of IPECs. We found a nonmonotonic variation of the size of the IPECs with Z+/-. For small Z+/- values, the IPECs retain the size of the precursor micelle, whereas at larger Z+/- values the IPECs decrease in size due to the contraction of the corona and then increase as the aggregation number of the micelle increases. The minimum size of the IPECs is obtained at lower Z+/- values when the length of the hydrophilic block of the linear diblock copolymer decreases. The aforementioned findings are in agreement with experimental results. At a smaller Bjerrum length, we obtain the same trends but at even smaller Z+/- values. The linear homopolymer charged units are distributed throughout the corona.
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes in real time the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
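The dip-finding step described above, locating the negative peak in a 1-D pixel intensity profile, can be sketched in a few lines. This is a hypothetical minimal version for illustration; the actual smart camera performs the equivalent operation in dedicated analog and digital circuitry, not software.

```python
import numpy as np

def shock_location(profile, window=5):
    """Index of the shadowgraph dip (negative intensity peak) in a 1-D
    line-scan profile; a small moving average suppresses pixel noise
    before the minimum is taken."""
    kernel = np.ones(window) / window
    smooth = np.convolve(profile, kernel, mode="same")
    return int(np.argmin(smooth))
```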
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real time, then system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is greater computational complexity involved in solving these large linear systems within a reasonable time. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods that run in nearly linear time. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion of how to further improve the method's speed and accuracy is included.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
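The intercept step described above can be sketched as a simple linear fit. The `estimate_population` helper shows one common way a per-capita capture probability is turned into a population estimate (N ≈ C / P(e)); it is a hypothetical illustration, and the paper's exact estimator may differ.

```python
import numpy as np

def equilibrium_capture_probability(r, p):
    """P(e) as the intercept of a linear regression of directionally
    averaged capture probability p against distance r, mirroring the
    equilibrium-phase fit described in the abstract."""
    slope, intercept = np.polyfit(r, p, 1)
    return intercept

def estimate_population(n_captured, p_e):
    """Hypothetical Lincoln-Petersen-style estimate N ~ C / P(e);
    included only to show how P(e) feeds a population estimate."""
    return n_captured / p_e
```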
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
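The "conventional recursive least-squares algorithm" the abstract relies on can be sketched as a textbook RLS update for the linear-predictor coefficients. The class name and the AR(2) demo signal are illustrative, not the paper's actual excitation or codebook machinery.

```python
import numpy as np

class RLSPredictor:
    """Recursive least-squares estimate of linear-predictor coefficients:
    predicts the current sample from the previous `order` samples."""
    def __init__(self, order, lam=0.99, delta=100.0):
        self.w = np.zeros(order)        # predictor coefficients
        self.P = delta * np.eye(order)  # inverse correlation matrix
        self.lam = lam                  # forgetting factor

    def update(self, x, d):
        """x: vector of past samples, d: current (desired) sample."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)    # gain vector
        e = d - self.w @ x              # a priori prediction error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Demo: identify an AR(2) process s[n] = 0.6 s[n-1] - 0.2 s[n-2] + e[n]
rng = np.random.default_rng(0)
s = np.zeros(2000)
noise = 0.1 * rng.standard_normal(2000)
for n in range(2, 2000):
    s[n] = 0.6 * s[n - 1] - 0.2 * s[n - 2] + noise[n]

rls = RLSPredictor(order=2, lam=1.0)
for n in range(2, 2000):
    rls.update(np.array([s[n - 1], s[n - 2]]), s[n])
```

After 2000 samples the recovered coefficients sit close to the true (0.6, -0.2), which is the sense in which the predictor and excitation can be "solved simultaneously and recursively."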
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
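The projection the abstract describes can be sketched as follows: orthonormalize the constraint normals and subtract their components from the gradient, leaving the part of the gradient that moves along the constraint surface. QR factorization is used here as a numerically stable stand-in for classical Gram-Schmidt; the function names are illustrative.

```python
import numpy as np

def constraint_projector(A):
    """Projection matrix onto the null space of A, i.e. the feasible
    directions for active linear constraints A x = b. Q holds an
    orthonormal basis for the row space of A (Gram-Schmidt via QR)."""
    Q, _ = np.linalg.qr(A.T)
    return np.eye(A.shape[1]) - Q @ Q.T

def projected_gradient(A, g):
    """Component of the gradient g tangent to the constraints A x = b."""
    return constraint_projector(A) @ g
```

Moving along the projected gradient keeps `A x = b` satisfied to first order, which is exactly why gradient-projection methods can stay feasible while descending.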
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
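The "projected gradient scheme" at the heart of this record follows one generic recipe: take a gradient step, then apply a Euclidean projection back onto the feasible set. A minimal sketch, using the nonnegative orthant as a stand-in feasible set (the paper's actual set and regularizers are more involved):

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step=0.1, iters=500):
    """Generic projected gradient: gradient step, then Euclidean
    projection back onto the feasible set."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Example: minimize ||x - c||^2 subject to x >= 0.
c = np.array([1.0, -2.0, 3.0])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.maximum(x, 0.0)  # Euclidean projection onto the orthant
x_star = projected_gradient_descent(grad, project, np.zeros(3))
```

The minimizer clips the negative component of `c` to zero, illustrating how the projection subproblem enforces feasibility at every iterate.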
Busettini, C; Miles, F A; Schwarz, U; Carl, J R
1994-01-01
Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm--the size of the projected scene--which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). 
When movements of the subject were randomly interleaved with the movements of the scene--to encourage the expectation of ego-motion--the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.
NASA Astrophysics Data System (ADS)
Jeyakumar, S.
2016-06-01
The dependence of the turnover frequency on the linear size is presented for a sample of Gigahertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency on the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolves strongly with the linear size. Optical depth effects have been included in the 3D radio source model of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.
High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.
Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D
2018-05-30
NeuroQuant® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously, they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
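The "traditional linear regression" arm of the FS-to-NQ transformation amounts to fitting a least-squares linear map per brain region. A minimal sketch with hypothetical function names (the per-region handling and the Bayesian arm are omitted):

```python
import numpy as np

def fit_volume_transform(fs, nq):
    """Least-squares linear map nq ~ a * fs + b for one brain region,
    i.e. the traditional-regression form of an FS-to-NQ transformation."""
    a, b = np.polyfit(fs, nq, 1)
    return a, b

def transform(fs, a, b):
    """Apply the fitted transformation to FreeSurfer volumes."""
    return a * np.asarray(fs) + b
```

After fitting, transformed FS volumes line up with NQ volumes by construction on the training data, which is why the residual intermethod effect sizes shrink.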
A Fast Projection-Based Algorithm for Clustering Big Data.
Wu, Yun; He, Zhiquan; Lin, Hao; Zheng, Yufei; Zhang, Jingfen; Xu, Dong
2018-06-07
With the fast development of various techniques, more and more data have been accumulated, with the unique properties of large size (tall) and high dimension (wide). The era of big data is coming. How to understand and discover new knowledge from these data has attracted more and more scholars' attention and has become the most important task in data mining. As one of the most important techniques in data mining, clustering analysis, a kind of unsupervised learning, groups a data set into clusters that are meaningful, useful, or both. Thus, the technique has played a very important role in knowledge discovery in big data. However, when facing large-sized and high-dimensional data, most current clustering methods exhibit poor computational efficiency and high demands on computational resources, which prevents us from clarifying the intrinsic properties and discovering the new knowledge behind the data. Based on this consideration, we developed a powerful clustering method, called MUFOLD-CL. The principle of the method is to project the data points to the centroid, and then to measure the similarity between any two points by calculating their projections on the centroid. The proposed method achieves linear time complexity with respect to the sample size. Comparison with the K-Means method on very large data showed that our method produces better accuracy and requires less computational time, demonstrating that MUFOLD-CL can serve as a valuable tool, or at least play a complementary role to other existing methods, for big data clustering. Further comparisons with state-of-the-art clustering methods on smaller datasets showed that our method was fastest and achieved comparable accuracy. For the convenience of most scholars, a free software package was constructed.
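The projection principle the abstract states, score every point by its projection onto the centroid direction and compare points by those scalar scores, can be sketched in a few lines. This is an illustrative reading of that principle, not the released MUFOLD-CL package; the function names are hypothetical.

```python
import numpy as np

def centroid_projections(X):
    """Scalar projection of every sample onto the direction of the data
    centroid: an O(n * d) one-dimensional surrogate coordinate."""
    c = X.mean(axis=0)
    u = c / np.linalg.norm(c)
    return X @ u

def projection_distance(X, i, j):
    """Similarity proxy between points i and j: the gap between their
    centroid projections (cheap stand-in for a full distance)."""
    p = centroid_projections(X)
    return abs(p[i] - p[j])
```

Because each sample needs only one dot product, the scoring pass is linear in the sample size, which is the source of the method's claimed linear time complexity.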
NASA Astrophysics Data System (ADS)
Collier, Jordan; Filipovic, Miroslav; Norris, Ray; Chow, Kate; Huynh, Minh; Banfield, Julie; Tothill, Nick; Sirothia, Sandeep Kumar; Shabala, Stanislav
2014-04-01
This proposal is a continuation of an extensive project (the core of Collier's PhD) to explore the earliest stages of AGN formation, using Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources. Both are widely believed to represent the earliest stages of radio-loud AGN evolution, with GPS sources preceding CSS sources. In this project, we plan to (a) test this hypothesis, (b) place GPS and CSS sources into an evolutionary sequence with a number of other young AGN candidates, and (c) search for evidence of the evolving accretion mode. We will do this using high-resolution radio observations, with a number of other multiwavelength age indicators, of a carefully selected complete faint sample of 80 GPS/CSS sources. Analysis of the C2730 ELAIS-S1 data shows that we have so far met our goals, resolving the jets of 10/49 sources, and measuring accurate spectral indices from 0.843-10 GHz. This particular proposal is to almost triple the sample size by observing an additional 80 GPS/CSS sources in the Chandra Deep Field South (arguably the best-studied field) and allow a turnover frequency - linear size relation to be derived at >10-sigma. Sources found to be unresolved in our final sample will subsequently be observed with VLBI. Comparing those sources resolved with ATCA to the more compact sources resolved with VLBI will give a distribution of source sizes, helping to answer the question of whether all GPS/CSS sources grow to larger sizes.
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
Mat-Rix-Toe: Improving Writing through a Game-Based Project in Linear Algebra
ERIC Educational Resources Information Center
Graham-Squire, Adam; Farnell, Elin; Stockton, Julianna Connelly
2014-01-01
The Mat-Rix-Toe project utilizes a matrix-based game to deepen students' understanding of linear algebra concepts and strengthen students' ability to express themselves mathematically. The project was administered in three classes using slightly different approaches, each of which included some editing component to encourage the…
Effect of thermal cycling on composites reinforced with two differently sized silica-glass fibers.
Meriç, Gökçe; Ruyter, I Eystein
2007-09-01
To evaluate the effects of thermal cycling on the flexural properties of composites reinforced with two differently sized fibers. Acid-washed, woven, fused silica-glass fibers were heat-treated at 500 degrees C, silanized, and sized with one of two sizing resins: linear poly(butyl methacrylate) (PBMA) or cross-linked poly(methyl methacrylate) (PMMA). Subsequently, the fibers were incorporated into a polymer matrix. Two test groups with fibers and one control group without fibers were prepared. The flexural properties of the composite reinforced with linear PBMA-sized fibers were evaluated by 3-point bend testing before thermal cycling. The specimens from all three groups were thermally cycled in water (12,000 cycles, 5/55 degrees C, dwell time 30 s) and afterwards tested by 3-point bending. SEM micrographs were taken of the fibers and of the fractured fiber-reinforced composites (FRC). The reduction of ultimate flexural strength after thermal cycling was less than 20% of that prior to thermal cycling for composites reinforced with linear PBMA-sized silica-glass fibers. The flexural strength of the composite reinforced with cross-linked PMMA-sized fibers was reduced to less than half of the initial value. This study demonstrated that thermal cycling influences the flexural properties of composites reinforced with differently sized silica-glass fibers in different ways. The interfacial linear PBMA sizing polymer acts as a stress-bearing component for the high interfacial stresses during thermal cycling due to the flexible structure of the linear PBMA above Tg. The cross-linked PMMA sizing, however, acts as a rigid component and therefore causes adhesive fracture between the fibers and the matrix after the fatigue process of thermal cycling and flexural fracture.
Cavitation erosion - scale effect and model investigations
NASA Astrophysics Data System (ADS)
Geiger, F.; Rutschmann, P.
2015-12-01
The experimental work presented here contributes to the clarification of the erosive effects of hydrodynamic cavitation. Comprehensive cavitation erosion test series were conducted for transient cloud cavitation in the shear layer of prismatic bodies. The erosion patterns and erosion rates were determined competitively with a mineral-based volume loss technique and with a metal-based pit count system. The results clarified the underlying scale effects and revealed a strong non-linear material dependency, which indicated significantly different damage processes for the two material types. Furthermore, the size and dynamics of the cavitation clouds were assessed by optical detection. The fluctuations of the cloud sizes showed a maximum for those cavitation numbers related to maximum erosive aggressiveness. This finding suggests the suitability of a model approach that relates the erosion process to cavitation cloud dynamics. An enhanced experimental setup is planned to further clarify these issues.
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems with sufficiently small scale. The existence of heat flux requires the size of the heat conduction domain to be larger than a corresponding critical size, which is determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of second-law violation and solution multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.
Effect of wire size on maxillary arch force/couple systems for a simulated high canine malocclusion.
Major, Paul W; Toogood, Roger W; Badawi, Hisham M; Carey, Jason P; Seru, Surbhi
2014-12-01
To better understand the effects of copper nickel titanium (CuNiTi) archwire size on bracket-archwire mechanics through the analysis of force/couple distributions along the maxillary arch. The hypothesis is that wire size is linearly related to the forces and moments produced along the arch. An Orthodontic Simulator was utilized to study a simplified high canine malocclusion. Force/couple distributions produced by passive and elastic ligation using two wire sizes (Damon 0.014 and 0.018 inch) were measured with a sample size of 144. The distribution and variation of force/couple loading around the arch is a complicated function of wire size. The use of a thicker wire increases the force/couple magnitudes regardless of ligation method. Owing to the non-linear material behaviour of CuNiTi, this increase is less than would occur based on linear theory, as would apply for stainless steel wires. The results demonstrate that an increase in wire size does not result in a proportional increase of applied force/moment. This discrepancy is explained in terms of the non-linear properties of CuNiTi wires. This non-proportional force response in relation to increased wire size warrants careful consideration when selecting wires in a clinical setting. © 2014 British Orthodontic Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maidana, C. O.; Hunt, A. W.; Idaho State University, Department of Physics, PO Box 8106, Pocatello, ID 83209
2007-02-12
As part of the Reactor Accelerator Coupling Experiment (RACE) a set of preliminary studies were conducted to design a transport beam line that could bring a 25 MeV electron beam from a Linear Accelerator to a neutron-producing target inside a subcritical system. Because of the relatively low beam energy, the beam size, and the relatively long beam line (implying a possible divergence problem), different parameters and models were studied before a final design could be submitted for assembly. This report shows the first results obtained from different simulations of the transport line optics and dynamics.
Nature of size effects in compact models of field effect transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050
Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of the so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established for the linear model parameters of the equivalent circuit elements of internal transistors with the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the sink-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that the size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects on levels below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved on designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models of not only field effect transistors but also any arbitrary semiconductor devices with nonlinear instrumental characteristics.
3D Wavelet-Based Filter and Method
Moss, William C.; Haase, Sebastian; Sedat, John W.
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions
NASA Astrophysics Data System (ADS)
Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.
2014-12-01
The biovolume of animals has served as an important benchmark for measuring evolution throughout geologic time. In this project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges compiled from over 750 ostracod genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized the model to generate neutral and directional changes in ostracod size for comparison with the observed data. New sizes were chosen via a normal distribution: the neutral model drew size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, whereas the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that ostracod evolution has followed a model that directionally pushes mean size down rather than a neutral model. Our model matched the magnitude of the size decrease, although it produced a constant linear decline while the observed data show a much more rapid initial decrease followed by a roughly constant size. This nuance in the observed trends suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into the evolutionary mechanisms determining size evolution in ostracods.
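The neutral versus directional size-step mechanism described above can be sketched as a toy random walk (a minimal illustration with hypothetical parameter values; the study's speciation and extinction dynamics are omitted):

```python
import random

def simulate_sizes(n_steps=200, n_lineages=50, drift=0.0, sd=0.1, seed=1):
    """Toy model of lineage body-size evolution.

    Each lineage's log-size takes a normally distributed step. The step
    mean ('drift') is zero in the neutral model and negative in the
    directional model, as in the abstract. Parameter values are
    illustrative, not the study's calibrated rates.
    """
    rng = random.Random(seed)
    sizes = [0.0] * n_lineages  # log body size, arbitrary units
    mean_size = []
    for _ in range(n_steps):
        sizes = [s + rng.gauss(drift, sd) for s in sizes]
        mean_size.append(sum(sizes) / len(sizes))
    return mean_size

neutral = simulate_sizes(drift=0.0)        # equal chance larger/smaller
directional = simulate_sizes(drift=-0.02)  # biased toward smaller sizes
```

With the same random seed, the directional run ends with a mean size below the neutral run by roughly drift × n_steps.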
The Effect of Perspective on Presence and Space Perception
Ling, Yun; Nefs, Harold T.; Brinkman, Willem-Paul; Qu, Chao; Heynderickx, Ingrid
2013-01-01
In this paper we report two experiments in which the effect of perspective projection on presence and space perception was investigated. In Experiment 1, participants were asked to score a presence questionnaire when looking at a virtual classroom. We manipulated the vantage point, the viewing mode (binocular versus monocular viewing), the display device/screen size (projector versus TV) and the center of projection. At the end of each session of Experiment 1, participants were asked to set their preferred center of projection such that the image seemed most natural to them. In Experiment 2, participants were asked to draw a floor plan of the virtual classroom. The results show that field of view, viewing mode, the center of projection and display all significantly affect presence and the perceived layout of the virtual environment. We found a significant linear relationship between presence and perceived layout of the virtual classroom, and between the preferred center of projection and perceived layout. The results indicate that the way in which virtual worlds are presented is critical for the level of experienced presence. The results also suggest that people ignore veridicality and they experience a higher level of presence while viewing elongated virtual environments compared to viewing the original intended shape. PMID:24223156
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big data processing and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses the projections of each interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the sizes of the interrogation window and region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
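The projection idea can be illustrated in a few lines (a schematic sketch, not the authors' exact procedure: here each 2-D window is reduced to its row and column sums before a 1-D ZNCC is applied):

```python
def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length 1-D signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def projections(window):
    """Row and column sums of a 2-D interrogation window (list of rows)."""
    rows = [sum(r) for r in window]
    cols = [sum(c) for c in zip(*window)]
    return rows, cols

def projection_zncc(w1, w2):
    """Correlate two windows via their 1-D projections instead of the
    full 2-D intensity field -- the speed-up idea in the abstract."""
    r1, c1 = projections(w1)
    r2, c2 = projections(w2)
    return 0.5 * (zncc(r1, r2) + zncc(c1, c2))

w1 = [[1.0, 2.0], [3.0, 5.0]]
w2 = [[x + 10.0 for x in row] for row in w1]  # intensity offset only
```

Because ZNCC subtracts the mean, a uniform intensity offset between the two windows does not change the correlation score.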
Design of measuring system for wire diameter based on sub-pixel edge detection algorithm
NASA Astrophysics Data System (ADS)
Chen, Yudong; Zhou, Wang
2016-09-01
The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the accuracy, but increases the cost and difficulty of manufacturing. In this paper, after a comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied to the data processing of the wire-diameter measuring system, to improve the measuring accuracy and enhance noise immunity. In the system design, the light projection method with an orthogonal structure is used for the optical detection part, which can effectively reduce the error caused by jitter of the wire during measurement. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module; it can not only drive the dual-channel linear CCD but also complete the sampling, processing, and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without additional chips. The experimental results show that the sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
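A common form of sub-pixel edge localization by polynomial fitting is a parabola fitted to the gradient magnitude around its peak; the sketch below is a generic illustration, not the paper's exact algorithm, and assumes the gradient peak does not sit at the profile border:

```python
def subpixel_edge(profile):
    """Locate an edge with sub-pixel precision in a 1-D intensity
    profile: find the peak of the (central-difference) gradient
    magnitude, then refine it with a 3-point parabolic fit."""
    grad = [profile[i + 1] - profile[i - 1] for i in range(1, len(profile) - 1)]
    k = max(range(len(grad)), key=lambda i: abs(grad[i]))
    y0, y1, y2 = abs(grad[k - 1]), abs(grad[k]), abs(grad[k + 1])
    # Vertex of the parabola through the three samples around the peak.
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return (k + 1) + offset  # +1 compensates for the gradient cropping

# A symmetric ramp edge centered on index 4 is recovered exactly.
edge = subpixel_edge([0, 0, 0, 1, 3, 5, 6, 6, 6])
```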
Switching times of nanoscale FePt: Finite size effects on the linear reversal mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, M. O. A.; Chantrell, R. W.
2015-04-20
The linear reversal mechanism in FePt grains ranging from 2.316 nm to 5.404 nm has been simulated using atomistic spin dynamics, parametrized from ab initio calculations. The Curie temperature and the critical temperature (T*) at which the linear reversal mechanism occurs are observed to decrease with system size, whilst the temperature window T*
Form features provide a cue to the angular velocity of rotating objects
Blair, Christopher David; Goold, Jessica; Killebrew, Kyle; Caplovitz, Gideon Paul
2013-01-01
As an object rotates, each location on the object moves with an instantaneous linear velocity dependent upon its distance from the center of rotation, while the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different sized objects, as changing the size of an object changes the linear velocity of each location on the object’s surface, while maintaining the object’s angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object. PMID:23750970
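The distinction between the two velocity measures is just v = ω r: scaling the object scales every local linear velocity while leaving the angular velocity unchanged (a minimal numeric illustration):

```python
import math

def linear_speed(omega_deg_per_s, radius):
    """Instantaneous linear speed v = omega * r of a point at distance
    'radius' from the center of a rotation with fixed angular velocity."""
    return math.radians(omega_deg_per_s) * radius

# Doubling the object's size doubles each point's linear speed,
# yet the angular velocity (90 deg/s here) is identical.
v_small = linear_speed(90.0, 1.0)
v_large = linear_speed(90.0, 2.0)
```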
Constructive Learning in Undergraduate Linear Algebra
ERIC Educational Resources Information Center
Chandler, Farrah Jackson; Taylor, Dewey T.
2008-01-01
In this article we describe a project that we used in our undergraduate linear algebra courses to help our students successfully master fundamental concepts and definitions and to generate interest in the course. We describe our philosophy and discuss the project's overall success.
NASA Astrophysics Data System (ADS)
Dai, Shengyun; Pan, Xiaoning; Ma, Lijuan; Huang, Xingguo; Du, Chenzhao; Qiao, Yanjiang; Wu, Zhisheng
2018-05-01
Particle size is of great importance for quantitative NIR diffuse reflectance models. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near-infrared (NIR) diffuse reflectance spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as a reference method to construct the quantitative particle size model. Several spectral preprocessing methods were compared, and partial least-squares (PLS) models of harpagoside were established for the particle size fractions obtained with different preprocessing methods. The data showed that the 125-150 μm particle size fraction of Radix Scrophulariae exhibited the best prediction ability, with R2pre = 0.9513, RMSEP = 0.1029 mg·g⁻¹, and RPD = 4.78. For the hybrid-granularity calibration model, the 90-180 μm fraction exhibited the best prediction ability, with R2pre = 0.8919, RMSEP = 0.1632 mg·g⁻¹, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scatter coefficient s (particle-size-dependent). The scatter coefficient s was calculated from the Kubelka-Munk theory to study its changes after mathematical preprocessing. A linear relationship was observed between k/s and absorbance A within a certain range, where the value of k/s was greater than 4. According to this relationship, the model was more accurately constructed for the 90-180 μm fraction when s was kept constant or within a small linear region. This region provides a good reference for linear modeling in diffuse reflectance spectroscopy. To establish a diffuse reflectance NIR model, an accurate assessment of particle size should be obtained in advance for a precise linear model.
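The Kubelka-Munk relation used above connects measured diffuse reflectance R to the ratio of absorption to scatter coefficients, k/s = (1 - R)^2 / (2R); a minimal sketch (the reflectance values are illustrative, not the study's measurements):

```python
def kubelka_munk(R):
    """Kubelka-Munk function for an optically thick sample:
    k/s = (1 - R)^2 / (2 R), with R the diffuse reflectance (0 < R < 1),
    k the absorption coefficient and s the scatter coefficient."""
    return (1.0 - R) ** 2 / (2.0 * R)

# The abstract's k/s > 4 regime corresponds to low reflectance:
low_R = kubelka_munk(0.10)  # inside the k/s > 4 region
mid_R = kubelka_munk(0.50)  # well below it
```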
Haws, Kelly L; Liu, Peggy J
2016-02-01
Many restaurants are increasingly required to display calorie information on their menus. We present a study examining how consumers' food choices are affected by the presence of calorie information on restaurant menus. However, unlike prior research on this topic, we focus on the effect of calorie information on food choices made from a menu that contains both full-size and half-size portions of entrées. This focus is important because many restaurants increasingly provide more than one portion size option per entrée. Additionally, we examine whether the impact of calorie information differs depending on whether full portions are cheaper per unit than half portions (non-linear pricing) or have a similar per-unit price (linear pricing). We find that when linear pricing is used, calorie information leads people to order fewer calories. This decrease occurs as people switch from unhealthy full-size portions to healthy full-size portions, not to unhealthy half-size portions. In contrast, when non-linear pricing is used, calorie information has no impact on calories selected. Considering the impact of calorie information on consumers' choices from menus with more than one entrée portion size option is increasingly important given restaurant and legislative trends, and the present research demonstrates that calorie information and pricing scheme may interact to affect choices from such menus. Copyright © 2015 Elsevier Ltd. All rights reserved.
GPU computing with Kaczmarz’s and other iterative algorithms for linear systems
Elble, Joseph M.; Sahinidis, Nikolaos V.; Vouzis, Panagiotis
2009-01-01
The graphics processing unit (GPU) is used to solve large linear systems derived from partial differential equations. The differential equations studied are strongly convection-dominated, of various sizes, and common to many fields, including computational fluid dynamics, heat transfer, and structural mechanics. The paper presents comparisons between GPU and CPU implementations of several well-known iterative methods, including Kaczmarz's, Cimmino's, component averaging, conjugate gradient normal residual (CGNR), symmetric successive overrelaxation-preconditioned conjugate gradient, and conjugate-gradient-accelerated component-averaged row projections (CARP-CG). Computations are performed with dense as well as general banded systems. The results demonstrate that our GPU implementation outperforms CPU implementations of these algorithms, as well as previously studied parallel implementations on Linux clusters and shared memory systems. While the CGNR method had begun to fall out of favor for solving such problems, for the problems studied in this paper, the CGNR method implemented on the GPU performed better than the other methods, including a cluster implementation of the CARP-CG method. PMID:20526446
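Kaczmarz's method, the first of the iterative solvers compared above, cyclically projects the iterate onto the hyperplane of each row equation; a minimal CPU sketch for a small dense system (illustrative only, far from the paper's GPU implementation):

```python
def kaczmarz(A, b, sweeps=500):
    """Kaczmarz's method for A x = b: project the current iterate onto
    the hyperplane {x : a_i . x = b_i} of each row in turn."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            residual = b_i - sum(a * xi for a, xi in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            x = [xi + (residual / norm2) * a for xi, a in zip(x, a_i)]
    return x

# 2x2 example: 2x + y = 3, x + 3y = 4 has the solution (1, 1).
x = kaczmarz([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])
```

Convergence is linear, with a rate governed by the angles between the row hyperplanes, which is why preconditioned and accelerated variants such as CARP-CG exist.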
NASA Astrophysics Data System (ADS)
Pan, Zhenwen; Lamarche, Cody; Cour, Ishviene; Rawat, Naveen; Manning, Lane; Headrick, Randall; Furis, Madalina; Physics Dept.; Material Science Program, University of Vermont, Burlington, VT 05405 Team
2011-03-01
We employed a combination of linear dichroism and photoluminescence microscopy with a spatial resolution of 5 μm to study the excitonic properties of solution-processed metal-free phthalocyanine (H2Pc) crystalline thin films with millimeter-sized grains. We observe a highly localized, sharp, monomer-like emission at the high-angle grain boundaries, in contrast to samples with more uniform grain orientation, where no such feature has been observed. The energy difference between the grain boundary luminescence and the HOMO-LUMO singlet exciton recombination of the crystalline H2Pc is measured to be 160 meV. Our systematic survey of grain boundaries indicates this localized state is never present at low-angle boundaries, where the π-orbital overlap between adjacent grains is significant. This supports recent results which associated a decrease in carrier mobility with the presence of large-angle boundaries in similar crystalline pentacene films. This project is supported by DMR-0722451; DMR-0348354; DMR-0821268.
Projection of angular momentum via linear algebra
Johnson, Calvin W.; O'Mara, Kevin D.
2017-12-01
Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.
NASA Astrophysics Data System (ADS)
Picard, Francis; Ilias, Samir; Asselin, Daniel; Boucher, Marc-André; Duchesne, François; Jacob, Michel; Larouche, Carl; Vachon, Carl; Niall, Keith K.; Jerominek, Hubert
2011-02-01
A MEMS-based technology for projection display is reviewed. This technology relies on mechanically flexible and reflective microbridges made of an aluminum alloy. A linear array of such micromirrors is combined with illumination and Schlieren optics to produce a line of pixels. Each microbridge in the array is individually controlled using electrostatic actuation to adjust the pixel intensities. Results of the simulation, fabrication, and characterization of these microdevices are presented. Activation voltages below 250 V with response times below 10 μs were obtained for 25 μm × 25 μm micromirrors. With appropriate actuation voltage waveforms, response times of 5 μs and less are achievable. A damage threshold of the mirrors above 8 kW/cm2 has been evaluated. Development of the technology has produced projector engines demonstrating this light modulation principle. The most recent of these engines is DVI compatible and displays VGA video streams at 60 Hz. Recently, applications have emerged that impose more stringent requirements on the dimensions of the MEMS array and the associated optical system. This triggered a scale-down study to evaluate the minimum achievable micromirror size, the impact of this reduced size on the damage threshold, and the achievable minimum size of the associated optical system. Preliminary results of this scale-down study are reported. FRAMs with active surfaces as small as 5 μm × 5 μm have been investigated. Simulations have shown that such micromirrors could be activated with 107 V to achieve an f-number of 1.25. The damage threshold has been estimated for various FRAM sizes. Finally, the design of a conceptual miniaturized projector based on a 1000×1 array of 5 μm × 5 μm micromirrors is presented. The volume of this projector concept is about 12 cm3.
Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information
Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio
2013-01-01
This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280
Wolves adapt territory size, not pack size to local habitat quality.
Kittle, Andrew M; Anderson, Morgan; Avgar, Tal; Baker, James A; Brown, Glen S; Hagens, Jevon; Iwachewski, Ed; Moffatt, Scott; Mosser, Anna; Patterson, Brent R; Reid, Douglas E B; Rodgers, Arthur R; Shuter, Jen; Street, Garrett M; Thompson, Ian D; Vander Vennen, Lucas M; Fryxell, John M
2015-09-01
1. Although local variation in territorial predator density is often correlated with habitat quality, the causal mechanism underlying this frequently observed association is poorly understood and could stem from facultative adjustment in either group size or territory size. 2. To test between these alternative hypotheses, we used a novel statistical framework to construct a winter population-level utilization distribution for wolves (Canis lupus) in northern Ontario, which we then linked to a suite of environmental variables to determine factors influencing wolf space use. Next, we compared habitat quality metrics emerging from this analysis as well as an independent measure of prey abundance, with pack size and territory size to investigate which hypothesis was most supported by the data. 3. We show that wolf space use patterns were concentrated near deciduous, mixed deciduous/coniferous and disturbed forest stands favoured by moose (Alces alces), the predominant prey species in the diet of wolves in northern Ontario, and in proximity to linear corridors, including shorelines and road networks remaining from commercial forestry activities. 4. We then demonstrate that landscape metrics of wolf habitat quality - projected wolf use, probability of moose occupancy and proportion of preferred land cover classes - were inversely related to territory size but unrelated to pack size. 5. These results suggest that wolves in boreal ecosystems alter territory size, but not pack size, in response to local variation in habitat quality. This could be an adaptive strategy to balance trade-offs between territorial defence costs and energetic gains due to resource acquisition. That pack size was not responsive to habitat quality suggests that variation in group size is influenced by other factors such as intraspecific competition between wolf packs. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
DOT National Transportation Integrated Search
2009-06-01
The primary objective of this study was to design and test a first flush-based stormwater treatment device for elevated linear transportation projects/roadways that is capable of complying with MS4 regulations. The innovative idea behind the device i...
Patch size has no effect on insect visitation rate per unit area in garden-scale flower patches
NASA Astrophysics Data System (ADS)
Garbuzov, Mihail; Madsen, Andy; Ratnieks, Francis L. W.
2015-01-01
Previous studies investigating the effect of flower patch size on insect flower visitation rate have compared relatively large patches (10-1000s m2) and have generally found a negative relationship per unit area or per flower. Here, we investigate the effects of patch size on insect visitation in patches of smaller area (range c. 0.1-3.1 m2), which are of particular relevance to ornamental flower beds in parks and gardens. We studied two common garden plant species in full bloom, with 6 patch sizes each: borage (Borago officinalis) and lavender (Lavandula × intermedia 'Grosso'). We quantified flower visitation by making repeated counts of the insects foraging at each patch. On borage, all insects were honey bees (Apis mellifera, n = 5506 counts). On lavender, insects (n = 737 counts) were bumble bees (Bombus spp., 76.9%), flies (Diptera, 22.4%), and butterflies (Lepidoptera, 0.7%). On both plant species we found positive linear effects of patch size on insect numbers. However, there was no effect of patch size on the number of insects per unit area or per flower on either plant species, whether lavender visitors were analysed all together or as bumble bees only. The results show that it is possible to make unbiased comparisons of the attractiveness of plant species or varieties to flower-visiting insects using patches of different size within the small-scale range studied, making possible projects aimed at comparing ornamental plant varieties using existing garden flower patches of variable area.
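The reported pattern, insect counts rising linearly with patch area while density per unit area stays flat, can be checked with an ordinary least-squares fit (the counts below are synthetic values invented for illustration, not the study's data):

```python
def ols_slope_intercept(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical counts exactly proportional to area (20 insects per m^2),
# over the study's patch-size range of c. 0.1-3.1 m^2.
areas = [0.1, 0.5, 1.0, 2.0, 3.1]       # m^2
counts = [2.0, 10.0, 20.0, 40.0, 62.0]  # insects
a, b = ols_slope_intercept(areas, counts)
densities = [c / s for c, s in zip(counts, areas)]  # constant per-area rate
```

A positive slope b with a flat density series is exactly the "more insects, same density" outcome described in the abstract.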
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented.
The results of other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate feature selection choice. Copyright © 2011 Elsevier B.V. All rights reserved.
Flexible cue combination in the guidance of attention in visual search
Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.
2014-01-01
Hodsoll and Humphreys (2001) assessed the relative contributions of stimulus-driven and user-driven knowledge in linearly and nonlinearly separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly or nonlinearly separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly or nonlinearly separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and a user-driven fashion. PMID:25463553
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
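One of the "known algorithms of projecting onto a standard simplex" referenced above is the classic sort-and-threshold rule; a minimal sketch (this is the standard simplex-projection algorithm, not the authors' Newton-based dual method):

```python
def project_to_simplex(v):
    """Euclidean projection of v onto the standard simplex
    {x : x_i >= 0, sum_i x_i = 1} via the sort-and-threshold rule:
    subtract a common shift theta and clip negatives to zero."""
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0.0:  # coordinate i is still in the support
            theta = t
    return [max(x - theta, 0.0) for x in v]

p = project_to_simplex([2.0, 0.0])       # projects onto the vertex (1, 0)
q = project_to_simplex([0.3, 0.3, 0.2])  # shifted up onto the sum-1 face
```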
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
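A minimal sketch of the projection idea described above, assuming a GMRES-style least-squares solve over the Krylov subspace (an illustration of the general technique, not the report's own code):

```python
import numpy as np

def krylov_solve(A, b, m):
    """Approximate solution of A x = b from the m-dimensional Krylov
    subspace span{b, Ab, ..., A^(m-1) b} via Arnoldi + small least squares."""
    n = len(b)
    Q = np.zeros((n, m + 1))       # orthonormal Krylov basis
    H = np.zeros((m + 1, m))       # upper Hessenberg projection of A
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):                          # Arnoldi iteration
        v = A @ Q[:, j]
        for i in range(j + 1):                  # orthogonalize against basis
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:                 # happy breakdown: subspace is invariant
            break
        Q[:, j + 1] = v / H[j + 1, j]
    # minimize ||beta*e1 - H y|| over the small (m+1) x m system, then lift back
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :m] @ y
```

The N-dimensional problem is only touched through matrix-vector products A @ q; all solving happens in the small m-dimensional projected system, which is the point of the method.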
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique in computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level, and the synthesis bank can then perfectly reconstruct the original image from these decomposed patterns. In a quantitative evaluation using a synthesized image, our proposed method outperforms a conventional method based on the Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) extracting microcalcifications of various sizes in mammograms, (2) extracting blood vessels of various sizes in retinal fundus images, and (3) reconstructing thoracic CT images while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
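The nodular/linear separation idea can be illustrated with plain grey-scale openings (a much simpler stand-in for the paper's perfect-reconstruction filter bank; the function name and parameters here are illustrative, not from the paper):

```python
import numpy as np
from scipy import ndimage

def tophat_patterns(img, radius=3, length=9):
    """Separate blob-like from line-like bright structures with grey-scale
    openings. Opening with a long line structuring element (SE) keeps
    elongated structures; opening with a disk keeps only structures thicker
    than the disk."""
    # line SEs at 0, 90, 45, and 135 degrees
    lines = [np.ones((1, length), bool), np.ones((length, 1), bool),
             np.eye(length, dtype=bool), np.eye(length, dtype=bool)[::-1]]
    line_open = np.max([ndimage.grey_opening(img, footprint=f) for f in lines],
                       axis=0)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2) <= radius**2
    disk_open = ndimage.grey_opening(img, footprint=disk)
    nodular = img - line_open          # too small to contain any line SE
    linear = line_open - disk_open     # elongated but thin structures
    return nodular, linear
```

A bright dot smaller than the line SE vanishes under every line opening and so survives in `nodular`; a thin vessel-like streak survives the opening at its own orientation but not the disk opening, and so lands in `linear`.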
REBL: design progress toward 16 nm half-pitch maskless projection electron beam lithography
NASA Astrophysics Data System (ADS)
McCord, Mark A.; Petric, Paul; Ummethala, Upendra; Carroll, Allen; Kojima, Shinichi; Grella, Luca; Shriyan, Sameet; Rettner, Charles T.; Bevis, Chris F.
2012-03-01
REBL (Reflective Electron Beam Lithography) is a novel concept for high speed maskless projection electron beam lithography. Originally targeting 45 nm HP (half pitch) under a DARPA funded contract, we are now working on optimizing the optics and architecture for the commercial silicon integrated circuit fabrication market at the equivalent of 16 nm HP. The shift to smaller features requires innovation in most major subsystems of the tool, including optics, stage, and metrology. We also require better simulation and understanding of the exposure process. In order to meet blur requirements for 16 nm lithography, we are both shrinking the pixel size and reducing the beam current. Throughput will be maintained by increasing the number of columns as well as other design optimizations. In consequence, the maximum stage speed required to meet wafer throughput targets at 16 nm will be much less than originally planned for at 45 nm. As a result, we are changing the stage architecture from a rotary design to a linear design that can still meet the throughput requirements but with more conventional technology that entails less technical risk. The linear concept also allows for simplifications in the datapath, primarily from being able to reuse pattern data across dies and columns. Finally, we are now able to demonstrate working dynamic pattern generator (DPG) chips, CMOS chips with microfabricated lenslets on top to prevent crosstalk between pixels.
NASA Astrophysics Data System (ADS)
Wamser, Kyle
Hyperspectral imagery and the corresponding ability to conduct analysis below the pixel level have tremendous potential to aid in landcover monitoring. During large ecosystem restoration projects, monitoring specific aspects of the recovery over large and often inaccessible areas under constrained finances is a major challenge. The Civil Air Patrol's Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) system can provide hyperspectral data in most parts of the United States at relatively low cost. Although designed specifically for locating downed aircraft, the imagery holds the potential to identify specific aspects of landcover at far greater fidelity than traditional multispectral means. The goals of this research were to improve the use of ARCHER hyperspectral imagery to classify sub-canopy and open-area vegetation in coniferous forests located in the Southern Rockies, and to determine how much fidelity might be lost when a baseline of 1 meter spatial resolution is resampled to 2 and 5 meter pixel sizes to simulate higher-altitude collection. Based on analysis comparing linear spectral unmixing with a traditional supervised classification, linear spectral unmixing proved statistically superior. More importantly, linear spectral unmixing provided additional sub-pixel information that was unavailable using other techniques. The second goal, determining fidelity loss based on spatial resolution, was more difficult to assess because of how the data are represented. Furthermore, the 2 and 5 meter imagery were obtained by resampling the 1 meter imagery and therefore may not be representative of the quality of actual 2 or 5 meter imagery. Ultimately, the information derived from this research may be useful in better utilizing hyperspectral imagery for forest monitoring and assessment.
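Linear spectral unmixing models each pixel spectrum as a nonnegative mixture of endmember spectra. A common way to estimate the per-pixel abundances is non-negative least squares with renormalization (a generic sketch of the technique, not necessarily the procedure used in this research):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Sub-pixel abundance estimate by non-negative least squares.
    endmembers: (bands x materials) matrix of endmember spectra.
    Abundances are renormalized to sum to one, a simple stand-in for
    fully constrained linear unmixing."""
    a, _ = nnls(endmembers, pixel)   # minimize ||E a - pixel|| with a >= 0
    s = a.sum()
    return a / s if s > 0 else a
```

The returned vector gives the estimated fraction of each material inside the pixel, which is exactly the sub-pixel information that a hard (one-label-per-pixel) supervised classification cannot provide.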
NASA Astrophysics Data System (ADS)
Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya
2017-05-01
Two-dimensional code marks are often used for production management. In particular, in production lines for liquid-crystal-display panels and other devices, data on fabrication processes such as production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized for code-mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, the development of a low-cost exposure system using light-emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly desired. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Building on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. Up to 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120 °C for 7 min. Fiber sizes were homogeneous within 496 ± 4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays suitable for printing code marks will be obtained by stacking the newly fabricated linear arrays.
Small Class Size and Its Effects.
ERIC Educational Resources Information Center
Biddle, Bruce J.; Berliner, David C.
2002-01-01
Describes several prominent early grades small-class-size projects and their effects on student achievement: Indiana's Project Prime Time, Tennessee's Project STAR (Student/Teacher Achievement Ratio), Wisconsin's SAGE (Student Achievement Guarantee in Education) Program, and the California class-size-reduction program. Lists several conclusions,…
University of Chicago School Mathematics Project (UCSMP) Algebra. WWC Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2009
2009-01-01
University of Chicago School Mathematics Project (UCSMP) Algebra is a one-year course covering three primary topics: (1) linear and quadratic expressions, sentences, and functions; (2) exponential expressions and functions; and (3) linear systems. Topics from geometry, probability, and statistics are integrated with the appropriate algebra.…
2007-08-01
Infinite plate with a hole: sequence of meshes produced by h-refinement. The geometry of the coarsest mesh...recalled with an emphasis on k-refinement. In Section 3, the use of high-order NURBS within a projection technique is studied in the geometrically linear...case with a B̄ method to investigate the choice of approximation and projection spaces with NURBS.
A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Goldberg, Hirsh; Nasrabadi, Nasser M.
2007-04-01
In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual-window technique is used to separate the local area around each pixel into two regions: an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projections of the current test pixel spectrum and the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
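For reference, the baseline RX detector named above scores each pixel by its Mahalanobis distance from the background statistics. A minimal global version is sketched here (the paper uses a local dual-window variant plus kernelized extensions; this is only the classic starting point):

```python
import numpy as np

def rx_scores(cube):
    """Classic (global) Reed-Xiaoli anomaly detector: Mahalanobis distance
    of each pixel spectrum from the scene mean. cube: (rows, cols, bands)."""
    X = cube.reshape(-1, cube.shape[-1])
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False)            # background covariance estimate
    Ci = np.linalg.pinv(C)                 # pseudo-inverse for robustness
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, Ci, d)   # d_i^T C^-1 d_i per pixel
    return scores.reshape(cube.shape[:2])
```

Pixels whose spectra are statistically far from the background distribution receive high scores; thresholding the score map yields anomaly detections.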
Liu, S.; Bremer, P. -T; Jayaraman, J. J.; ...
2016-06-04
Linear projections are one of the most common approaches to visualize high-dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower-ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important for providing a global picture of the data. The proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structure of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.
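The ranking-based selection that the Grassmannian Atlas improves upon can be sketched in a few lines: sample candidate 2-D projections and keep the top-scoring ones under some quality measure. The function below is a toy version (projected variance as the quality measure is my assumption for illustration; the paper studies several measures):

```python
import numpy as np

def best_projections(X, n_candidates=200, k=5, rng=None):
    """Toy projection ranking: sample random orthonormal 2-D projections
    of the data X (samples x dims) and keep the k with the highest
    projected variance (one possible quality measure)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    scored = []
    for _ in range(n_candidates):
        Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))  # orthonormal basis
        score = (X @ Q).var(axis=0).sum()                 # projected variance
        scored.append((score, Q))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]
```

The abstract's point is precisely that the top-k list from such a ranking can be redundant: several near-identical projections may dominate while informative lower-ranked views are discarded.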
Controlling the Size and Shape of the Elastin-Like Polypeptide based Micelles
NASA Astrophysics Data System (ADS)
Streletzky, Kiril; Shuman, Hannah; Maraschky, Adam; Holland, Nolan
Elastin-like polypeptide (ELP) trimer constructs make reliable environmentally responsive micellar systems because they exhibit a controllable transition from being water-soluble at low temperatures to aggregating at high temperatures. It has been shown that, depending on the specific details of the ELP design (length of the ELP chain, pH, and salt concentration), micelles can vary in size and shape between spherical micelles with diameters of 30-100 nm and elongated particles with an aspect ratio of about 10. This makes ELP trimers a convenient platform for developing potential drug delivery and bio-sensing applications as well as for understanding micelle formation in ELP systems. Since at a given salt concentration the headgroup area for each foldon should be constant, the size of the micelles is expected to be proportional to the volume of the linear ELP available per foldon headgroup. Therefore, adding linear ELPs to a system of ELP-foldon should change the micelle volume, allowing control of micelle size and possibly shape. The effects of adding linear ELPs on the size, shape, and molecular weight of micelles at different salt concentrations were studied by a combination of dynamic light scattering and static light scattering. The initial results on 50 µM ELP-foldon samples (at low salt) show that the Rh of mixed micelles increases more than 5-fold as the amount of linear ELP is raised from 0 to 50 µM. It was also found that a given mixture of linear and trimer constructs has two temperature-based transitions and therefore displays three predominant size regimes.
Space Construction System Analysis. Special Emphasis Studies
NASA Technical Reports Server (NTRS)
1979-01-01
Generic concepts were analyzed to determine: (1) the maximum size of a deployable solar array which might be packaged into a single orbit payload bay; (2) the optimal overall shape of a large erectable structure for large satellite projects; (3) the optimization of electronic communication, with emphasis on the number of antennas and their diameters; and (4) the number of beams, traffic growth projections, and frequencies. It was found feasible to package a deployable solar array which could generate over 250 kilowatts of electrical power. Also, it was found that the linear-shaped erectable structure is better for ease of construction and installation of systems, and compares favorably on several other counts. The study of electronic communication technology indicated that proliferation of individual satellites will crowd the spectrum by the early 1990's, so that there will be a strong tendency toward a small number of communications platforms over the continental U.S.A. with many antennas and multiple spot beams.
NASA Astrophysics Data System (ADS)
Jedamzik, Ralf; Westerhoff, Thomas
2017-09-01
The coefficient of thermal expansion (CTE) and its spatial homogeneity, from small to large formats, is the most important property of ZERODUR. For more than a decade, SCHOTT has documented its excellent CTE homogeneity, starting with reviews of past astronomical telescope projects such as the VLT, Keck, and GTC mirror blanks and continuing with dedicated evaluations of the production. In recent years, extensive CTE measurements on samples cut from randomly selected single ZERODUR parts in meter size and formats of arbitrary shape, large production boules, and even 4 m sized blanks have demonstrated the excellent CTE homogeneity in production. The published homogeneity data show single ppb/K peak-to-valley CTE variations on medium spatial scales of several cm down to small spatial scales of only a few mm, mostly at the limit of the measurement reproducibility. This review paper summarizes the results, also with respect to the increased CTE measurement accuracy over the last decade of ZERODUR production.
ESEA Title I Linking Project. Final Report.
ERIC Educational Resources Information Center
Holmes, Susan E.
The Rasch model for test score equating was compared with three other equating procedures as methods for implementing the norm referenced method (RMC Model A) of evaluating ESEA Title I projects. The Rasch model and its theoretical limitations were described. The three other equating methods used were: linear observed score equating, linear true…
ERIC Educational Resources Information Center
Williams-Candek, Maryellen
2016-01-01
How better to begin the study of linear equations in an algebra class than to determine what students already know about the subject? A seventh-grade algebra class in a suburban school undertook a project early in the school year that was completed before they began studying linear relations and functions. The project, which might have been…
Next Linear Collider Home Page
Welcome to the Next Linear Collider (NLC) Home Page. If you would like to learn about linear colliders in general and about this next-generation linear collider project's mission and design ideas, this site provides an overview.
TU-F-CAMPUS-T-03: A Novel Iris Quality Assurance Phantom for the CyberKnife Radiosurgery System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Descovich, M; Pinnaduwage, D; Sudhyadhom, A
Purpose: A novel CCD camera and conical scintillator based phantom that is capable of measuring the targeting and field size accuracy of a robotic radiosurgery system has been developed. This work investigates its application in measuring the field sizes and beam divergence of the CyberKnife variable aperture collimator (Iris). Methods: The phantom was placed on the treatment couch and the robot position was adjusted to obtain an anterior-posterior beam perpendicular to the cone's central axis. The FWHM of the 12 Iris apertures (5, 7.5, 10, 12.5, 15, 20, 25, 30, 35, 40, 50, and 60 mm) were measured from the beam flux map on the conical scintillator surface as seen by the CCD camera. For each measurement, 30 MU were delivered to the phantom at a dose rate of 1000 MU/min. The measurements were repeated at 4 SAD distances between 75 and 85 cm. These readings were used to project the aperture size as if the flux map on the scintillator were located 80 cm from the source (SSD). These projected FWHM beam diameters were then compared to the commissioning data. Results: A series of 12 beam divergence equations were obtained from the 4 sets of data using linear trend lines on Excel scatter plots. These equations were then used to project the FWHM measurements at 80 cm SSD. The average aperture accuracy for beams from 5 through 40 mm was 0.08 mm. The accuracy for the 50 and 60 mm beams was 0.33 and 0.58 mm when compared to film commissioning data. Conclusion: The experimental results for 10 apertures agree with the stated Iris accuracy of ±0.2 mm at 80 cm SAD. The results for the 50 and 60 mm apertures were repeatable and can serve as a reliable trend indicator of any deviations away from the commissioning values. Brett Nelson is President/CTO of Logos Systems.
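The "beam divergence equations" described above amount to a straight-line fit of measured FWHM against source distance, evaluated at the 80 cm reference. A minimal sketch of that step (illustrative only; the authors used Excel trend lines):

```python
import numpy as np

def project_fwhm(sads, fwhms, ssd=80.0):
    """Fit FWHM vs. source distance with a straight line and evaluate
    the fit at the reference SSD (default 80 cm)."""
    slope, intercept = np.polyfit(sads, fwhms, 1)   # least-squares line
    return slope * ssd + intercept
```

For a divergent beam, FWHM grows linearly with distance from the source, so the fitted line recovers the nominal aperture size at 80 cm even though no measurement was taken exactly there.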
Projection Operator: A Step Towards Certification of Adaptive Controllers
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
One of the major barriers to wider use of adaptive controllers in commercial aviation is the lack of appropriate certification procedures. In order to be certified by the Federal Aviation Administration (FAA), an aircraft controller is expected to meet a set of guidelines on functionality and reliability while not negatively impacting other systems or the safety of aircraft operations. Due to their inherently time-variant and non-linear behavior, adaptive controllers cannot be certified via the metrics used for conventional linear controllers, such as gain and phase margin. The Projection Operator is a robustness augmentation technique that bounds the output of a non-linear adaptive controller while conforming to the Lyapunov stability rules. It can also be used to limit the control authority of the adaptive component so that this control authority can be arbitrarily close to that of a linear controller. In this paper we present the results of applying the Projection Operator to a Model-Reference Adaptive Controller (MRAC), varying the amount of control authority, and comparing the controller's performance and stability characteristics with those of a linear controller. We also show how adjusting Projection Operator parameters can make it easier for the controller to satisfy the certification guidelines by enabling a tradeoff between the controller's performance and robustness.
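The projection operator commonly used in adaptive control removes the outward component of a parameter update in a boundary layer around the allowed set, so the parameter estimate stays bounded. A generic sketch for a spherical bound (the standard textbook form, not necessarily the exact formulation in this paper):

```python
import numpy as np

def proj(theta, y, theta_max, eps=0.1):
    """Smooth projection operator: leaves the raw update y unchanged while
    ||theta|| <= theta_max and progressively removes its outward radial
    component in the boundary layer, keeping the (continuous-time)
    estimate inside ||theta|| <= theta_max*sqrt(1+eps)."""
    f = (theta @ theta - theta_max**2) / (eps * theta_max**2)  # convex boundary fn
    grad = 2.0 * theta / (eps * theta_max**2)                  # gradient of f
    if f > 0 and y @ grad > 0:                                 # heading out of the set
        y = y - grad * (grad @ y) * f / (grad @ grad)
    return y
```

Integrating theta' = proj(theta, y) with a constant outward y drives the estimate to the edge of the boundary layer and no further, which is exactly the bounding behavior that supports the Lyapunov argument mentioned in the abstract.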
Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera.
Jiang, Yu; Li, Changying; Paterson, Andrew H; Sun, Shangpeng; Xu, Rui; Robertson, Jon
2017-01-01
Plant canopy structure can strongly affect crop functions such as yield and stress tolerance, and canopy size is an important aspect of canopy structure. Manual assessment of canopy size is laborious and imprecise, and cannot measure multi-dimensional traits such as projected leaf area and canopy volume. Field-based high throughput phenotyping systems with imaging capabilities can rapidly acquire data about plants in field conditions, making it possible to quantify and monitor plant canopy development. The goal of this study was to develop a 3D imaging approach to quantitatively analyze cotton canopy development in field conditions. A cotton field was planted with 128 plots, including four genotypes of 32 plots each. The field was scanned by GPhenoVision (a customized field-based high throughput phenotyping system) to acquire color and depth images with GPS information in 2016 covering two growth stages: canopy development, and flowering and boll development. A data processing pipeline was developed, consisting of three steps: plot point cloud reconstruction, plant canopy segmentation, and trait extraction. Plot point clouds were reconstructed using color and depth images with GPS information. In colorized point clouds, vegetation was segmented from the background using an excess-green (ExG) color filter, and cotton canopies were further separated from weeds based on height, size, and position information. Static morphological traits were extracted on each day, including univariate traits (maximum and mean canopy height and width, projected canopy area, and concave and convex volumes) and a multivariate trait (cumulative height profile). Growth rates were calculated for univariate static traits, quantifying canopy growth and development. Linear regressions were performed between the traits and fiber yield to identify the best traits and measurement time for yield prediction. 
The results showed that fiber yield was correlated with static traits after the canopy development stage ( R 2 = 0.35-0.71) and growth rates in early canopy development stages ( R 2 = 0.29-0.52). Multi-dimensional traits (e.g., projected canopy area and volume) outperformed one-dimensional traits, and the multivariate trait (cumulative height profile) outperformed univariate traits. The proposed approach would be useful for identification of quantitative trait loci (QTLs) controlling canopy size in genetics/genomics studies or for fiber yield prediction in breeding programs and production environments.
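The trait-vs-yield comparison above boils down to fitting a simple linear regression for each trait and ranking traits by the coefficient of determination. A minimal sketch of that computation (illustrative; not the study's actual pipeline code):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination (R^2) for a simple linear fit of
    yield (y) on a single canopy trait (x)."""
    slope, intercept = np.polyfit(x, y, 1)     # least-squares line
    resid = y - (slope * x + intercept)
    ss_res = resid @ resid                     # residual sum of squares
    ss_tot = (y - y.mean()) @ (y - y.mean())   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

Computing this per trait and per measurement date reproduces the kind of R^2 ranges reported above (e.g., 0.35-0.71 after the canopy development stage) and identifies which trait and date predict yield best.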
NASA Astrophysics Data System (ADS)
Tisdell, Christopher C.
2017-11-01
For over 50 years, the learning and teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order, linear problems with constant coefficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establishing a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of the pedagogy in the literature, and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.
NASA Technical Reports Server (NTRS)
Swanson, Gregory; Cheatwood, Neil; Johnson, Keith; Calomino, Anthony; Gilles, Brian; Anderson, Paul; Bond, Bruce
2016-01-01
Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameters from 3-6m, with cone angles of 60 and 70 deg. To meet NASA and commercial near-term objectives, the HIAD team must scale the current technology up to 12-15m in diameter. Therefore, the HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6m to a 15m-class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15m-class HIAD system. These include: layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6m aeroshell (the largest HIAD built to date), a 12m aeroshell has four times the cross-sectional area, and a 15m one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components.
This will affect a variety of steps in the manufacturing process, such as: stacking the tori during assembly, stitching the structural webbing, initial inflation of tori, and stitching of F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15m-class system. Two complications in working with handmade textile structures are the non-linearity of the material components and the role of human accuracy during fabrication. Larger, more capable HIAD structures should see much larger operational loads, potentially bringing the structural response of the material components out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15m HIAD. In this presentation, a handful of the challenges and associated mitigation plans will be discussed, as well as an update on current 12m aeroshell manufacturing and testing that is addressing these challenges.
NASA Technical Reports Server (NTRS)
Swanson, G. T.; Cheatwood, F. M.; Johnson, R. K.; Hughes, S. J.; Calomino, A. M.
2016-01-01
Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameters from 3-6 meters, with cone angles of 60 and 70 degrees. To meet NASA and commercial near-term objectives, the HIAD team must scale the current technology up to 12-15 meters in diameter. Therefore, the HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6-meter to a 15-meter class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15-meter-class HIAD system. These include: layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6-meter aeroshell (the largest HIAD built to date), a 12-meter aeroshell has four times the cross-sectional area, and a 15-meter one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components. 
This will affect a variety of steps in the manufacturing process, such as: stacking the tori during assembly, stitching the structural webbing, initial inflation of tori, and stitching of F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15m-class system. Two complications in working with handmade textile structures are the non-linearity of the material components and the role of human accuracy during fabrication. Larger, more capable, HIAD structures should see much larger operational loads, potentially bringing the structural response of the material components out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15-meter HIAD. In this presentation, a handful of the challenges and associated mitigation plans will be discussed, as well as an update on current manufacturing and testing that addresses these challenges.
NASA Technical Reports Server (NTRS)
Cheatwood, F. McNeil; Swanson, Gregory T.; Johnson, R. Keith; Hughes, Stephen; Calomino, Anthony; Gilles, Brian; Anderson, Paul; Bond, Bruce
2016-01-01
Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameters from 3-6m, with cone angles of 60 and 70 deg. To meet NASA and commercial near term objectives, the HIAD team must scale the current technology up to 12-15m in diameter. The HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6m to a 15m class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15m-class HIAD system. These include: layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6m aeroshell (the largest HIAD built to date), a 12m aeroshell has four times the cross-sectional area, and a 15m one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components. 
This will affect a variety of steps in the manufacturing process, such as: stacking the tori during assembly, stitching the structural webbing, initial inflation of the tori, and stitching of the F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15 m-class system. Two complications in working with handmade textile structures are the non-linearity of the materials and the role of human accuracy during fabrication. Larger, more capable HIAD structures should see much larger operational loads, potentially bringing the structural response of the materials out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15 m class HIAD. In this paper, the challenges and associated mitigation plans related to scaling up the HIAD stacked-torus aeroshell to a 15 m class system will be discussed. In addition, the benefits of enlarging the structure will be further explored.
Development of neutron measurement in high gamma field using new nuclear emulsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawarabayashi, J.; Ishihara, K.; Takagi, K.
2011-07-01
To precisely measure the neutron emissions from a spent fuel assembly of a fast breeder reactor, we formed nuclear emulsions based on a non-sensitized Oscillation Project with Emulsion tracking Apparatus (OPERA) film with AgBr grain sizes of 60, 90, and 160 nm. The efficiency for 252Cf neutron detection of the new emulsion was calculated to be 0.7 × 10^-4, which corresponded to an energy range from 0.3 to 2 MeV and was consistent with a preliminary estimate based on experimental results. The sensitivity of the new emulsion was also experimentally estimated by irradiating with 565 keV and 14 MeV neutrons. The emulsion with an AgBr grain size of 60 nm had the lowest sensitivity among the above three emulsions but was still sensitive enough to detect protons. Furthermore, the experimental data suggested that there was a threshold linear energy transfer of 15 keV/μm for the new emulsion, below which no silver clusters developed. Further development of nuclear emulsions with an AgBr grain size of a few tens of nanometers will be the next stage of the present study. (authors)
Extended and broad Ly α emission around a BAL quasar at z ˜ 5
NASA Astrophysics Data System (ADS)
Ginolfi, M.; Maiolino, R.; Carniani, S.; Arrigoni Battaia, F.; Cantalupo, S.; Schneider, R.
2018-05-01
In this work we report deep MUSE observations of a broad absorption line (BAL) quasar at z ˜ 5, revealing a Ly α nebula with a maximum projected linear size of ˜60 kpc around the quasar (down to our 2σ SB limit per layer of ˜ 9× 10^{-19} erg s^{-1} cm^{-2} arcsec^{-2} for a 1 arcsec2 aperture). After correcting for the cosmological surface brightness dimming, we find that our nebula, at z ˜ 5, has an intrinsically less extended Ly α emission than nebulae at lower redshift. However, such a discrepancy is greatly reduced when referring to comoving distances, which take into account the cosmological growth of dark matter (DM) haloes, suggesting a positive correlation between the size of Ly α nebulae and the sizes of DM haloes/structures around quasars. Differently from the typical nebulae around radio-quiet non-BAL quasars, in the inner regions (˜10 kpc) of the circumgalactic medium of our source, the velocity dispersion of the Ly α emission is very high (FWHM > 1000 km s-1), suggesting that in our case we may be probing outflowing material associated with the quasar.
Construction of trypanosome artificial mini-chromosomes.
Lee, M G; E, Y; Axelrod, N
1995-01-01
We report the preparation of two linear constructs which, when transformed into the procyclic form of Trypanosoma brucei, become stably inherited artificial mini-chromosomes. Both constructs, one of 10 kb and the other of 13 kb, contain a T.brucei PARP promoter driving a chloramphenicol acetyltransferase (CAT) gene. In the 10 kb construct the CAT gene is followed by one hygromycin phosphotransferase (Hph) gene, and in the 13 kb construct the CAT gene is followed by three tandemly linked Hph genes. At each end of these linear molecules are telomere repeats and subtelomeric sequences. Electroporation of these linear DNA constructs into the procyclic form of T.brucei generated hygromycin-B-resistant cell lines. In these cell lines, the input DNA remained linear and bounded by the telomere ends, but it increased in size. In the cell lines generated by the 10 kb construct, the input DNA increased in size to 20-50 kb. In the cell lines generated by the 13 kb construct, two sizes of linear DNAs containing the input plasmid were detected: one of 40-50 kb and the other of 150 kb. The increase in size was not the result of in vivo tandem repetitions of the input plasmid, but represented the addition of new sequences. These Hph-containing linear DNA molecules were maintained stably in cell lines for at least 20 generations in the absence of drug selection and were subsequently referred to as trypanosome artificial mini-chromosomes, or TACs. PMID: 8532534
Study on Resources Assessment of Coal Seams covered by Long-Distance Oil & Gas Pipelines
NASA Astrophysics Data System (ADS)
Han, Bing; Fu, Qiang; Pan, Wei; Hou, Hanfang
2018-01-01
The assessment of mineral resources covered by construction projects plays an important role in reducing the overlap of construction with important mineral resources and in ensuring the smooth implementation of construction projects. Taking a planned long-distance gas pipeline as an example, the assessment method and principles for coal resources covered by linear projects are introduced. The areas covered by multiple coal seams are determined according to the linear projection method, and the resources covered by pipelines directly and indirectly are estimated by using the area segmentation method on the basis of the original blocks. The research results can provide references for route optimization of projects and for compensation for mining rights.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable-coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable-coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
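The "large step size" behavior described above can be seen on a toy problem. The following is a minimal sketch, assuming the simplest scalar stiff model y' = (cos t - y)/eps and backward Euler (which is simultaneously a one-leg and a linear multistep method); it is not the paper's general variable-coefficient setting.

```python
import math

# Backward Euler on the stiff model y' = (cos(t) - y)/eps: the step size h
# resolves the slow scale (the cosine forcing) but not the fast scale eps,
# yet the numerical solution stays bounded and tracks the slow manifold.
eps = 1e-6   # stiffness parameter
h = 0.1      # step resolves only the slow time scale (h >> eps)
y = 0.0
for n in range(1, 101):
    t = n * h
    # implicit update solved in closed form for this scalar problem
    y = (y + (h / eps) * math.cos(t)) / (1 + h / eps)

# bounded, and close to the slow solution y ~ cos(t) at t = 10
print(abs(y - math.cos(10.0)) < 1e-4, abs(y) <= 1.0)
```

An explicit method (forward Euler, say) with the same h would blow up, since h/eps is huge; this is the stability-with-large-step-sizes property in miniature.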
On the development of HSCT tail sizing criteria using linear matrix inequalities
NASA Technical Reports Server (NTRS)
Kaminer, Isaac
1995-01-01
This report presents the results of a study to extend existing high speed civil transport (HSCT) tail sizing criteria using linear matrix inequalities (LMI). In particular, the effects of feedback specifications, such as MIL STD 1797 Level 1 and 2 flying qualities requirements, and actuator amplitude and rate constraints on the maximum allowable cg travel for a given set of tail sizes are considered. Results comparing previously developed industry criteria and the LMI methodology on an HSCT concept airplane are presented.
Fast optical transillumination tomography with large-size projection acquisition.
Huang, Hsuan-Ming; Xia, Jinjun; Haidekker, Mark A
2008-10-01
Techniques such as optical coherence tomography and diffuse optical tomography have been shown to effectively image highly scattering samples such as tissue. An additional modality has received much less attention: optical transillumination (OT) tomography, a modality that promises very high acquisition speed for volumetric scans. With the motivation to image tissue-engineered blood vessels for possible biomechanical testing, we have developed a fast OT device using a collimated, noncoherent beam with a large diameter together with a large-size CMOS camera that has the ability to acquire 3D projections in a single revolution of the sample. In addition, we used accelerated iterative reconstruction techniques to improve image reconstruction speed, while at the same time obtaining better image quality than through filtered backprojection. The device was tested using ink-filled polytetrafluorethylene tubes to determine geometric reconstruction accuracy and recovery of absorbance. Even in the presence of minor refractive index mismatch, the weighted error of the measured radius was <5% in all cases, and a high linear correlation (R^2 = 0.99) with ink absorbance determined with a spectrophotometer was found, although the OT device systematically underestimated absorbance. Reconstruction time was improved from several hours (standard arithmetic reconstruction) to 90 s per slice with our optimized algorithm. Composed of only a light source, two spatial filters, a sample bath, and a CMOS camera, this device was extremely simple and cost-efficient to build.
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has recently attracted a great deal of attention in the computational biology community. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e., it obeyed the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e., O(mn) time) algorithm to generate a particular solution (a particular solution of any linear system is an assignment of numerical values to the variables which satisfies the equations of the system) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (a general solution of any linear system is denoted by the span of a basis of the solution space of its associated homogeneous system, offset from the origin by any particular solution; a general solution for zero-recombinant haplotype configuration (ZRHC) is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and to perform tasks such as random sampling) in O(mn^2) time, which is optimal because the size of a general solution could be as large as Theta(mn^2). The key ingredients of our construction are (i) a fast consistency-checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking Gaussian elimination. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
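The algebraic setting can be illustrated with a generic elimination baseline over GF(2), which is precisely the kind of generic solver the abstract's linear-time method avoids by exploiting the loop-free pedigree. The three toy equations relating hypothetical haplotype bits h0, h1, h2 are invented for illustration, not taken from the paper.

```python
def solve_gf2(rows, rhs, nvars):
    """One particular solution of A x = b over GF(2), or None if inconsistent.
    Each row is an integer bitmask over the nvars unknowns."""
    pivots = {}  # pivot column -> (reduced row bitmask, rhs bit)
    for row, b in zip(rows, rhs):
        # eliminate already-known pivots, highest column first
        for col in sorted(pivots, reverse=True):
            if (row >> col) & 1:
                prow, pb = pivots[col]
                row ^= prow
                b ^= pb
        if row:
            pivots[row.bit_length() - 1] = (row, b)
        elif b:
            return None  # reduced to 0 = 1: inconsistent system
    x = [0] * nvars  # free variables default to 0
    for col in sorted(pivots):  # a pivot row only involves lower columns
        row, b = pivots[col]
        for j in range(col):
            if (row >> j) & 1:
                b ^= x[j]
        x[col] = b
    return x

# toy instance: h0 xor h1 = 1, h1 xor h2 = 0, h0 = 1
print(solve_gf2([0b011, 0b110, 0b001], [1, 0, 1], 3))
```

This baseline is cubic-ish in the worst case; the paper's point is that the pedigree structure lets one dispense with elimination entirely and reach O(mn) time.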
Ellis, Shane R; Soltwisch, Jens; Heeren, Ron M A
2014-05-01
In this study, we describe the implementation of a position- and time-sensitive detection system (Timepix detector) to directly visualize the spatial distributions of the matrix-assisted laser desorption ionization ion cloud in a linear time-of-flight (MALDI linear-ToF) instrument as it is projected onto the detector surface. These time-resolved images allow direct visualization of m/z-dependent ion focusing effects that occur within the ion source of the instrument. The influence of key parameters, namely extraction voltage (E_V), pulsed-ion extraction (PIE) delay, and even the matrix-dependent initial ion velocity, was investigated, and these parameters were found to alter the focusing properties of the ion-optical system. Under certain conditions where the spatial focal plane coincides with the detector plane, so-called x-y space focusing could be observed (i.e., the focusing of the ion cloud to a small, well-defined spot on the detector). Such conditions allow for the stigmatic ion imaging of intact proteins for the first time on a commercial linear ToF-MS system. In combination with the ion-optical magnification of the system (~100×), a spatial resolving power of 11-16 μm with a pixel size of 550 nm was recorded within a laser spot diameter of ~125 μm. This study demonstrates both the diagnostic and analytical advantages offered by the Timepix detector in ToF-MS.
ERIC Educational Resources Information Center
Tisdell, Christopher C.
2017-01-01
For over 50 years, the learning and teaching of "a priori" bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to "a priori" bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving…
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 28 June 2004 The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth. Eons of atmospheric dust storm activity have left their mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms. This image was acquired during early spring near the North Pole. The linear 'ripples' are transparent water-ice clouds. This linear form is typical for polar clouds. The black regions on the margins of this image are areas of saturation caused by the buildup of scattered light from the bright polar material during the long image exposure. Image information: VIS instrument. Latitude 68.1, Longitude 147.9 East (212.1 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University.
Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 30 June 2004 The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth. Eons of atmospheric dust storm activity have left their mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms. This image of the North Polar water-ice clouds shows how surface topography can affect the linear form. Notice that the crater at the bottom of the image is causing a deflection in the linear form. Image information: VIS instrument. Latitude 68.4, Longitude 100.7 East (259.3 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter.
Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Human centromere genomics: now it's personal.
Hayden, Karen E
2012-07-01
Advances in human genomics have accelerated studies in evolution, disease, and cellular regulation. However, centromere sequences, defining the chromosomal interface with spindle microtubules, remain largely absent from ongoing genomic studies and disconnected from functional, genome-wide analyses. This disparity results from the challenge of predicting the linear order of multi-megabase-sized regions that are composed almost entirely of near-identical satellite DNA. Acknowledging these challenges, the field of human centromere genomics possesses the potential to rapidly advance given the availability of individual, or personalized, genome projects matched with the promise of long-read sequencing technologies. Here I review the current genomic model of human centromeres in consideration of those studies involving functional datasets that examine the role of sequence in centromere identity.
Results on 3D interconnection from AIDA WP3
NASA Astrophysics Data System (ADS)
Moser, Hans-Günther; AIDA-WP3
2016-09-01
From 2010 to 2014 the EU-funded AIDA project established in one of its work packages (WP3) a network of groups working collaboratively on advanced 3D integration of electronic circuits and semiconductor sensors for applications in particle physics. The main motivation came from the severe requirements on pixel detectors for tracking and vertexing at future particle physics experiments at the LHC, super-B factories, and linear colliders. To go beyond the state of the art, the main issues studied were low mass, high bandwidth, radiation hardness, low power consumption, complex functionality, small pixel size, and the absence of dead regions. The interfaces and interconnects of sensors to electronic readout integrated circuits are a key challenge for new detector applications.
Feng, Cun-Fang; Xu, Xin-Jian; Wang, Sheng-Jun; Wang, Ying-Hai
2008-06-01
We study projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random networks. We relax some limitations of previous work, in which projective-anticipating and projective-lag synchronization could be achieved only between two coupled chaotic systems. In this paper, we realize projective-anticipating and projective-lag synchronization on complex dynamical networks composed of a large number of interconnected components. Moreover, although previous work studied projective synchronization on complex dynamical networks, the node dynamics there were coupled partially linear chaotic systems. In this paper, the dynamics of the nodes of the complex networks are time-delayed chaotic systems without the limitation of partial linearity. Based on the Lyapunov stability theory, we suggest a generic method to achieve projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random dynamical networks, and we find both its existence and sufficient stability conditions. The validity of the proposed method is demonstrated and verified by examining specific examples using Ikeda and Mackey-Glass systems on Erdős-Rényi networks.
Orthodontics: computer-aided diagnosis and treatment planning
NASA Astrophysics Data System (ADS)
Yi, Yaxing; Li, Zhongke; Wei, Suyuan; Deng, Fanglin; Yao, Sen
2000-10-01
The purpose of this article is to introduce the outline of our newly developed computer-aided 3D dental cast analyzing system with laser scanning, and its preliminary clinical applications. The system is composed of a scanning device and a personal computer serving as scanning controller and post-processor. The scanning device consists of a laser beam emitter, two sets of linear CCD cameras, and a table rotatable with two degrees of freedom. The rotation is controlled precisely by a personal computer. The dental cast is projected and scanned with a laser beam. Triangulation is applied to determine the location of each point. Generation of 3D graphics of the dental cast takes approximately 40 minutes. About 170,000 sets of X,Y,Z coordinates are stored for one dental cast. Besides the conventional linear and angular measurements of the dental cast, we are also able to demonstrate the size of the top surface area of each molar. The advantage of this system is that it facilitates the otherwise complicated and time-consuming mock surgery necessary for treatment planning in orthognathic surgery.
YAP(Ce) crystal characterization with proton beam up to 60 MeV
NASA Astrophysics Data System (ADS)
Randazzo, N.; Sipala, V.; Aiello, S.; Lo Presti, D.; Cirrone, G. A. P.; Cuttone, G.; Di Rosa, F.
2008-02-01
A YAP(Ce) crystal was characterized with a proton beam up to 60 MeV. Tests were performed to investigate the possibility of using this detector as a proton calorimeter. The size of the crystal was chosen so that the proton energy is totally lost inside the medium. The authors propose to use the YAP(Ce) crystal in medical applications for proton therapy. In particular, in the proton computed tomography (pCT) project a calorimeter is needed to measure the proton residual energy after the phantom. Energy resolution, linearity, and light yield were measured in the Laboratori Nazionali del Sud with the CATANA proton beam [ http://www.lns.infn.it/CATANA/CATANA] and the results are shown in this paper. The crystal shows good resolution (3% at 60 MeV) and good linearity for different proton beam energies (1% over the 30-60 MeV energy range). These performances confirm that the YAP(Ce) crystal represents a good solution for this kind of application.
Digital control of magnetic bearings in a cryogenic cooler
NASA Technical Reports Server (NTRS)
Feeley, J.; Law, A.; Lind, F.
1990-01-01
This paper describes the design of a digital control system for control of magnetic bearings used in a spaceborne cryogenic cooler. The cooler was developed by Philips Laboratories for the NASA Goddard Space Flight Center. Six magnetic bearing assemblies are used to levitate the piston, displacer, and counter-balance of the cooler. The piston and displacer are driven by linear motors in accordance with Stirling cycle thermodynamic principles to produce the desired cooling effect. The counter-balance is driven by a third linear motor to cancel motion induced forces that would otherwise be transmitted to the spacecraft. An analog control system is currently used for bearing control. The purpose of this project is to investigate the possibilities for improved performance using digital control. Areas for potential improvement include transient and steady state control characteristics, robustness, reliability, adaptability, alternate control modes, size, weight, and cost. The present control system is targeted for the Intel 80196 microcontroller family. The eventual introduction of application specific integrated circuit (ASIC) technology to this problem may produce a unique and elegant solution both here and in related industrial problems.
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, with a constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
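The backward, constant-memory evaluation described above can be sketched by hand for one fixed formula. The formula G(p -> F q) and the finite-trace semantics below (F and G quantify over the remaining suffix) are illustrative choices, not the paper's generated code.

```python
# One backward pass over the trace, keeping only the truth values of the
# subformulas F q and G(p -> F q) for the suffix starting at each position:
# linear time in the trace length, constant memory in the trace.
def holds_g_p_implies_f_q(trace):
    """trace: list of sets of atomic proposition names, earliest state first."""
    fq = False  # does F q hold on the suffix starting one step later?
    g = True    # does G(p -> F q) hold on that suffix? (vacuously true at end)
    for state in reversed(trace):
        fq = ('q' in state) or fq              # F q at i: q now, or F q at i+1
        g = ((not ('p' in state)) or fq) and g  # G at i: implication now and G at i+1
    return g

print(holds_g_p_implies_f_q([{'p'}, {}, {'q'}]))   # p at 0 is eventually followed by q
print(holds_g_p_implies_f_q([{'q'}, {'p'}, {}]))   # p at 1 is never followed by q
```

The paper's contribution is generating such a pass automatically from an arbitrary LTL formula; this hand-written instance only shows the shape of the generated code.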
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
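The reported near-linear relation suggests a simple planning calculation: fit a line to (gradient sparsity, critical projection number) pairs from pilot scans and read off the projection count for a new sample. The following is a minimal sketch; the numbers are invented for illustration and are not SparseBeads measurements.

```python
# Ordinary least-squares line fit, then extrapolation to a new sparsity level.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical pilot data: (gradient sparsity, critical number of projections)
sparsity = [1e4, 2e4, 4e4, 8e4]
critical = [55, 105, 210, 410]
a, b = fit_line(sparsity, critical)
print(round(a * 6e4 + b))  # predicted projections for a sample with sparsity 6e4
```

In practice one would also add a safety margin, since the paper's guideline is about satisfactory rather than exact reconstruction.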
Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, J.W.
1998-08-07
This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection of the recirculation fan; Sizing of the high-efficiency mist eliminator; Sizing of the electric heating coil; Equipment sizing and selection of the recirculation condenser; Chiller skid system sizing and selection; High-efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.
Predicting discovery rates of genomic features.
Gravel, Simon
2014-06-01
Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.
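The prediction problem above has an exact, assumption-free core: downsampling within the pilot. Given the pilot's site frequency spectrum, the expected number of variants a smaller sample would discover follows from a hypergeometric argument. Extrapolating beyond the pilot is the hard part the paper addresses with jackknife and linear programming estimators; the sketch below, with a made-up spectrum, shows only the downsampling formula any such estimator must agree with.

```python
from math import comb

def expected_discoveries(sfs, n, k):
    """Expected number of distinct variants found by a k-genome subsample of a
    pilot of n haploid genomes, given sfs[i] = number of variants observed in
    exactly i of the n. Valid only for k <= n (interpolation, not extrapolation)."""
    total = 0.0
    for i, count in sfs.items():
        # P(a variant carried by i of n genomes is missed by the subsample)
        miss = comb(n - i, k) / comb(n, k)  # comb returns 0 when k > n - i
        total += count * (1.0 - miss)
    return total

sfs = {1: 700, 2: 200, 5: 100}  # hypothetical spectrum from a pilot of n = 10
print(round(expected_discoveries(sfs, 10, 10)))  # full sample: all 1000 variants
print(round(expected_discoveries(sfs, 10, 5)))   # half the sample finds fewer
```

Rare variants (here the 700 singletons) dominate the loss when the sample shrinks, which is why discovery curves keep rising with sample size.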
Time-frequency filtering and synthesis from convex projections
NASA Astrophysics Data System (ADS)
White, Langford B.
1990-11-01
This paper describes the application of the theory of projections onto convex sets (POCS) to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville distributions (WVD) of L2 signals forms the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group, of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulating them as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
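The projections-onto-convex-sets iteration itself is easy to sketch on a generic pair of convex sets, here a support constraint and an affine sum constraint rather than the paper's Wigner-Ville sets; this is an illustration of the alternating-projection mechanism only.

```python
# Alternating projections onto two convex sets: the iterate converges to a
# point in their intersection (when the intersection is nonempty).
def project_support(x, mask):
    # projection onto {x : x[i] = 0 wherever mask[i] is False}
    return [xi if m else 0.0 for xi, m in zip(x, mask)]

def project_sum(x, target):
    # projection onto the hyperplane {x : sum(x) = target}
    shift = (target - sum(x)) / len(x)
    return [xi + shift for xi in x]

x = [3.0, -1.0, 4.0, 1.0]
mask = [True, True, False, False]
for _ in range(200):
    x = project_sum(project_support(x, mask), 1.0)

# limit satisfies both constraints: entries off the support vanish, sum is 1
print(abs(sum(x) - 1.0) < 1e-9, abs(x[2]) < 1e-9, abs(x[3]) < 1e-9)
```

In the paper's setting the same loop runs with projections onto the WVD set and onto the filtering constraints, which is where the convexity results quoted above do the work.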
Using nonlinear quantile regression to estimate the self-thinning boundary curve
Quang V. Cao; Thomas J. Dean
2015-01-01
The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
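The boundary-fitting idea can be sketched with linear quantile regression on the log-log scale: minimize the pinball (check) loss at a high quantile so the fitted line tracks the upper edge of the size-density cloud. The stand data below are made up, and the brute-force grid search stands in for a proper quantile regression solver; Reineke's slope of about -1.605 is the classical reference value, not a result of this fit.

```python
def pinball(r, tau):
    # check loss: residuals above the line cost tau per unit, below cost 1 - tau
    return tau * r if r >= 0 else (tau - 1.0) * r

def fit_quantile_line(xs, ys, tau, slopes, intercepts):
    # coarse grid search for the (slope, intercept) minimizing total pinball loss
    best = None
    for a in slopes:
        for b in intercepts:
            loss = sum(pinball(y - (a * x + b), tau) for x, y in zip(xs, ys))
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

# hypothetical stand data: x = log density, y = log quadratic mean diameter,
# constructed so the upper boundary lies on y = -1.6 x + 5.9
xs = [0, 0, 1, 1, 2, 2, 3, 3]
ys = [5.9, 4.0, 4.3, 3.0, 2.7, 1.5, 1.1, 0.2]
slopes = [round(-2.0 + 0.1 * i, 1) for i in range(11)]     # -2.0 .. -1.0
intercepts = [round(5.0 + 0.1 * i, 1) for i in range(21)]  # 5.0 .. 7.0
print(fit_quantile_line(xs, ys, 0.95, slopes, intercepts))
```

With tau = 0.95 the loss is minimized by a line with nearly all points on or below it, which is exactly the self-thinning boundary behavior the abstract describes.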
Scalable screen-size enlargement by multi-channel viewing-zone scanning holography.
Takaki, Yasuhiro; Nakaoka, Mitsuki
2016-08-08
Viewing-zone scanning holographic displays can enlarge both the screen size and the viewing zone. However, limitations exist in the screen size enlargement process even if the viewing zone is effectively enlarged. This study proposes a multi-channel viewing-zone scanning holographic display comprising multiple projection systems and a planar scanner to enable the scalable enlargement of the screen size. Each projection system produces an enlarged image of the screen of a MEMS spatial light modulator. The multiple enlarged images produced by the multiple projection systems are seamlessly tiled on the planar scanner. This screen size enlargement process reduces the viewing zones of the projection systems, which are horizontally scanned by the planar scanner comprising a rotating off-axis lens and a vertical diffuser to enlarge the viewing zone. A screen size of 7.4 in. and a viewing-zone angle of 43.0° are demonstrated.
Polar Bear Conservation Status in Relation to Projected Sea-ice Conditions
NASA Astrophysics Data System (ADS)
Regehr, E. V.
2015-12-01
The status of the world's 19 subpopulations of polar bears (Ursus maritimus) varies as a function of sea-ice conditions, ecology, management, and other factors. Previous methods to project the response of polar bears to loss of Arctic sea ice—the primary threat to the species—include expert opinion surveys, Bayesian Networks providing qualitative stressor assessments, and subpopulation-specific demographic analyses. Here, we evaluated the global conservation status of polar bears using a data-based sensitivity analysis. First, we estimated generation length for subpopulations with available data (n=11). Second, we developed standardized sea-ice metrics representing habitat availability. Third, we projected global population size under alternative assumptions for relationships between sea ice and subpopulation abundance. Estimated generation length (median = 11.4 years; 95%CI = 9.8 to 13.6) and sea-ice change (median = loss of 1.26 ice-covered days per year; 95%CI = 0.70 to 3.37) varied across subpopulations. Assuming a one-to-one proportional relationship between sea ice and abundance, the median percent change in global population size over three polar bear generations was -30% (95%CI = -35% to -25%). Assuming a linear relationship between sea ice and normalized estimates of subpopulation abundance, median percent change was -4% (95% CI = -62% to +50%) or -43% (95% CI = -76% to -20%), depending on how subpopulations were grouped and how inference was extended from relatively well-studied subpopulations (n=7) to those with little or no data. Our findings suggest the potential for large reductions in polar bear numbers over the next three polar bear generations if sea-ice loss due to climate change continues as forecasted.
Battaglia, P; Malara, D; Ammendolia, G; Romeo, T; Andaloro, F
2015-09-01
Length-mass relationships and linear regressions are given for otolith size (length and height) and standard length (LS) of certain mesopelagic fishes (Myctophidae, Paralepididae, Phosichthyidae and Stomiidae) living in the central Mediterranean Sea. The length-mass relationship showed isometric growth in six species, whereas linear regressions of LS and otolith size fit the data well for all species. These equations represent a useful tool for dietary studies on Mediterranean marine predators. © 2015 The Fisheries Society of the British Isles.
Shear Melting of a Colloidal Glass
NASA Astrophysics Data System (ADS)
Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.
2010-01-01
We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ~0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ~3 particles.
The extreme risk of personal data breaches and the erosion of privacy
NASA Astrophysics Data System (ADS)
Wheatley, Spencer; Maillart, Thomas; Sornette, Didier
2016-01-01
Personal data breaches from organisations, enabling mass identity fraud, constitute an extreme risk. This risk worsens daily as an ever-growing amount of personal data are stored by organisations and on-line, and the attack surface surrounding this data becomes larger and harder to secure. Further, breached information is distributed and accumulates in the hands of cyber criminals, thus driving a cumulative erosion of privacy. Statistical modeling of breach data from 2000 through 2015 provides insights into this risk: A current maximum breach size of about 200 million is detected, and is expected to grow by fifty percent over the next five years. The breach sizes are found to be well modeled by an extremely heavy tailed truncated Pareto distribution, with tail exponent parameter decreasing linearly from 0.57 in 2007 to 0.37 in 2015. With this current model, given a breach contains above fifty thousand items, there is a ten percent probability of exceeding ten million. A size effect is unearthed where both the frequency and severity of breaches scale with organisation size like s^0.6. Projections indicate that the total amount of breached information is expected to double from two to four billion items within the next five years, eclipsing the population of users of the Internet. This massive and uncontrolled dissemination of personal identities raises fundamental concerns about privacy.
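The quoted ten-percent figure can be checked directly from the stated parameters. For a truncated Pareto distribution with tail exponent α, lower threshold L, and upper truncation U, the conditional exceedance probability is P(X > x | X > L) = (x^-α − U^-α)/(L^-α − U^-α). A sketch using the abstract's values (α = 0.37, threshold 50,000 items, truncation near the 200-million-item maximum):

```python
def truncated_pareto_exceedance(x, alpha, lower, upper):
    """P(X > x | X > lower) for a Pareto truncated to [lower, upper]."""
    num = x ** (-alpha) - upper ** (-alpha)
    den = lower ** (-alpha) - upper ** (-alpha)
    return num / den

# Parameter values taken from the abstract (2015 tail exponent, 50k
# threshold, ~200 million item maximum breach size).
p = truncated_pareto_exceedance(10e6, alpha=0.37, lower=5e4, upper=2e8)
# p comes out close to the ~10% probability quoted in the abstract
```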
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.
2013-08-01
The construction of stable reduced order models (ROMs) using Galerkin projection for the Euler or Navier-Stokes equations requires a suitable choice of inner product. The standard L2 inner product is expected to produce unstable ROMs; for the non-linear Navier-Stokes equations, stability considerations instead point to an energy inner product. In this report, Galerkin projection for the non-linear Navier-Stokes equations using the L2 inner product is implemented as a first step toward constructing stable ROMs for this set of physics.
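As a minimal illustration of the projection step itself (for a linear system, not the nonlinear Navier-Stokes equations treated in the report): given full-order dynamics dx/dt = A x and a basis matrix V with orthonormal columns, the L2-Galerkin reduced operator is A_r = Vᵀ A V, and the ROM evolves reduced coordinates a(t) with x ≈ V a. The toy system below is invented for the sketch:

```python
import numpy as np

def galerkin_reduce(A, V):
    """L2 Galerkin projection of dx/dt = A x onto the basis V.

    V: (n, r) matrix with orthonormal columns. Returns the (r, r)
    reduced operator A_r = V^T A V.
    """
    return V.T @ A @ V

# Toy full-order model with well-separated slow and fast modes.
A = np.diag([-1.0, -2.0, -50.0, -80.0])
V = np.eye(4)[:, :2]        # basis spanning the slow subspace

Ar = galerkin_reduce(A, V)  # exact here, since the subspace is invariant
```

For this diagonal example the reduced operator reproduces the slow dynamics exactly; the stability issues discussed in the report arise when the retained subspace is not invariant and the inner product does not bound the nonlinear energy transfer.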
Truck size and weight enforcement technologies : state of the practice
DOT National Transportation Integrated Search
2009-05-01
This report is a deliverable of Task 2 of FHWA's Truck Size and Weight Enforcement Technology Project. The primary project objective was to recommend strategies to encourage the deployment of roadside technologies to improve truck size and weight e...
Illusory Distance Modulates Perceived Size of Afterimage despite the Disappearance of Depth Cues
Liu, Shengxi; Lei, Quan
2016-01-01
It is known that the perceived size of an afterimage is modulated by the perceived distance between the observer and the depth plane on which the afterimage is projected (Emmert's law). Illusions like the Ponzo demonstrate that illusory distance induced by depth cues can also affect the perceived size of an object. In this study, we report that illusory distance not only modulates the perceived size of an object's afterimage during the presence of the depth cues, but that the modulation persists after the disappearance of the depth cues. We used an adapted version of the classic Ponzo illusion. Illusory depth perception was induced by linear perspective cues with two tilted lines converging at the upper boundary of the display. Two horizontal bars were placed between the two lines, resulting in a percept of the upper bar being farther away than the lower bar. Observers were instructed to make judgments about the relative size of the afterimages of the lower and upper bars after adaptation. When the perspective cues and the bars were static, the illusory effect of the Ponzo afterimage was consistent with that of the traditional size-distance illusion. When the perspective cues were flickering and the bars were static, only the afterimage of the latter was perceived, yet a considerable illusory effect was still perceived. The results could not be explained by memory of a prejudgment of bar length during the adaptation phase. The findings suggest that co-occurrence of depth cues and an object may attach a depth marker to the object, so that the perceived size of the object or its afterimage is modulated by feedback of depth information from higher-level visual cortex even when no depth cues are directly available at the retinal level. PMID:27391335
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as mere mathematical devices for analyzing repeated-measures data. In contrast, a modern view of these models attributes to them an important mathematical role in theoretical formulations of personalized medicine, because these models not only have parameters that represent average patients but also parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
Short Round Sub-Linear Zero-Knowledge Argument for Linear Algebraic Relations
NASA Astrophysics Data System (ADS)
Seo, Jae Hong
Zero-knowledge arguments allow one party to prove that a statement is true without leaking any other information than the truth of the statement. In many applications such as verifiable shuffle (as a practical application) and circuit satisfiability (as a theoretical application), zero-knowledge arguments for mathematical statements related to linear algebra are essentially used. Groth proposed (at CRYPTO 2009) an elegant methodology for zero-knowledge arguments for linear algebraic relations over finite fields. He obtained zero-knowledge arguments of sub-linear size for linear algebra using reductions from linear algebraic relations to equations of the form z = x *' y, where x, y ∈ F_p^n are committed vectors, z ∈ F_p is a committed element, and *' : F_p^n × F_p^n → F_p is a bilinear map. These reductions impose additional rounds on zero-knowledge arguments of sub-linear size. The round complexity of interactive zero-knowledge arguments is an important measure along with communication and computational complexities. We focus on minimizing the round complexity of sub-linear zero-knowledge arguments for linear algebra. To reduce round complexity, we propose a general transformation from a t-round zero-knowledge argument, satisfying mild conditions, to a (t - 2)-round zero-knowledge argument; this transformation is of independent interest.
Y-cell receptive field and collicular projection of parasol ganglion cells in macaque monkey retina
Crook, Joanna D.; Peterson, Beth B.; Packer, Orin S.; Robinson, Farrel R.; Troy, John B.; Dacey, Dennis M.
2009-01-01
The distinctive parasol ganglion cell of the primate retina transmits a transient, spectrally non-opponent signal to the magnocellular layers of the lateral geniculate nucleus (LGN). Parasol cells show well-recognized parallels with the alpha-Y cell of other mammals, yet two key alpha-Y cell properties, a collateral projection to the superior colliculus and nonlinear spatial summation, have not been clearly established for parasol cells. Here we show by retrograde photodynamic staining that parasol cells project to the superior colliculus. Photostained dendritic trees formed characteristic spatial mosaics and afforded unequivocal identification of the parasol cells among diverse collicular-projecting cell types. Loose-patch recordings were used to demonstrate for all parasol cells a distinct Y-cell receptive field ‘signature’ marked by a non-linear mechanism that responded to contrast-reversing gratings at twice the stimulus temporal frequency (second Fourier harmonic, F2) independent of stimulus spatial phase. The F2 component showed high contrast gain and temporal sensitivity and appeared to originate from a region coextensive with that of the linear receptive field center. The F2 spatial frequency response peaked well beyond the resolution limit of the linear receptive field center, showing a Gaussian center radius of ~15 μm. Blocking inner retinal inhibition elevated the F2 response, suggesting that amacrine circuitry does not generate this non-linearity. Our data are consistent with a pooled-subunit model of the parasol-Y cell receptive field in which summation from an array of transient, partially rectifying cone bipolar cells accounts for both linear and non-linear components of the receptive field. PMID:18971470
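The pooled-subunit account of the F2 signature can be illustrated numerically: an array of half-wave-rectifying subunits whose linear drives cancel at the null spatial phase still produces a summed output at twice the stimulus temporal frequency. A toy simulation (all parameter values are invented, not the paper's):

```python
import numpy as np

fs, dur, f_stim = 256, 1.0, 4.0             # sample rate (Hz), duration (s), temporal freq (Hz)
t = np.arange(int(fs * dur)) / fs
phases = 2 * np.pi * np.arange(16) / 16     # subunit positions across one spatial cycle

# Contrast-reversing grating at the null spatial phase: the linear drives
# s_i(t) = sin(2*pi*f*t) * cos(theta_i) sum to zero across subunits.
drives = np.sin(2 * np.pi * f_stim * t)[:, None] * np.cos(phases)[None, :]

linear_sum = drives.sum(axis=1)                      # cancels: no F1 response
rectified_sum = np.maximum(drives, 0.0).sum(axis=1)  # half-wave rectified subunits

spec = np.abs(np.fft.rfft(rectified_sum)) / len(t)
f1_amp = spec[int(f_stim * dur)]       # component at the stimulus frequency (F1)
f2_amp = spec[int(2 * f_stim * dur)]   # second harmonic (F2)
# f2_amp dominates: frequency doubling survives the null spatial phase
```

The rectified sum reduces to a constant times |sin(2πft)|, which contains only even harmonics, which is why the F2 response is independent of spatial phase.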
Overview of the DAEDALOS project
NASA Astrophysics Data System (ADS)
Bisagni, Chiara
2015-10-01
The "Dynamics in Aircraft Engineering Design and Analysis for Light Optimized Structures" (DAEDALOS) project aimed to develop methods and procedures to determine dynamic loads by considering the effects of dynamic buckling, material damping and mechanical hysteresis during aircraft service. Advanced analysis and design principles were assessed with the aim of partly removing the uncertainty and conservatism of today's design and certification procedures. To reach these objectives, a DAEDALOS aircraft model representing a mid-size business jet was developed. Analysis and in-depth investigation of the dynamic response were carried out on full finite element models and on hybrid models. Material damping was experimentally evaluated, and different methods for damping evaluation were developed, implemented in finite element codes and experimentally validated. They include a strain energy method, a quasi-linear viscoelastic material model, and a generalized Maxwell viscous damping model. Panels and shells representative of typical components of the DAEDALOS aircraft model were experimentally tested under static as well as dynamic loads. Composite and metallic components of the aircraft model were investigated to evaluate the benefit in terms of weight saving.
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
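The inner/outer structure described above can be sketched with first-order projective forward Euler on a stiff scalar relaxation problem standing in for the BGK-kinetic setting of the paper (all parameter values are illustrative):

```python
import math

def projective_forward_euler(f, y, t, dt_inner, n_inner, dt_outer):
    """One projective forward Euler step.

    Take n_inner small inner Euler steps to damp the stiff transient,
    then extrapolate forward using the slope of the last inner step.
    """
    for _ in range(n_inner):
        y_prev = y
        y = y + dt_inner * f(t, y)
        t = t + dt_inner
    slope = (y - y_prev) / dt_inner
    return y + (dt_outer - n_inner * dt_inner) * slope

# Stiff relaxation toward the slow manifold y ~ cos(t).
lam = 1000.0
f = lambda t, y: -lam * (y - math.cos(t))

y = 1.0
dt_inner, n_inner, dt_outer = 1.0 / lam, 2, 0.05
n_outer = 20                      # integrate to t = 1.0
for i in range(n_outer):
    y = projective_forward_euler(f, y, i * dt_outer, dt_inner, n_inner, dt_outer)
# y tracks the slow solution cos(t) although the outer step is 50x
# larger than the stiff time scale 1/lam
```

With the inner step chosen near 1/λ, the fast mode is damped in a step or two and the extrapolated slope approximates the slow derivative, mirroring the paper's observation that the outer step is limited by a CFL-like condition rather than by the stiffness.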
REopt: A Platform for Energy System Integration and Optimization: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, T.; Cutler, D.; Anderson, K.
2014-08-01
REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
van Mantgem, P.J.; Stephenson, N.L.
2005-01-01
1. We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2. We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3. Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4. Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
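The projection machinery behind such time-invariant, density-independent size-structured models is a single matrix-vector product per time step. A minimal sketch with a hypothetical stage-classified (Lefkovitch) matrix — the classes and rates below are invented, not the paper's fitted values:

```python
import numpy as np

# Hypothetical 3-class size-structured model, one 5-year time step:
# columns = current class, rows = next class.
A = np.array([
    [0.85, 0.00, 0.40],   # stay small / recruitment from large trees
    [0.10, 0.90, 0.00],   # grow small -> medium / stay medium
    [0.00, 0.08, 0.95],   # grow medium -> large / stay large
])

n0 = np.array([100.0, 50.0, 25.0])       # initial trees per size class

n1 = A @ n0                              # one 5-year projection step
n2 = np.linalg.matrix_power(A, 2) @ n0   # two steps
```

Total projected population size is simply `n1.sum()`; the growth autocorrelations and observation errors discussed in the abstract are exactly the effects this time-invariant formulation cannot capture.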
Optical digitizing and strategies to combine different views of an optical sensor
NASA Astrophysics Data System (ADS)
Duwe, Hans P.
1997-09-01
Non-contact digitization of objects and surfaces with optical sensors based on fringe or pattern projection in combination with a CCD camera allows a representation of surfaces as point clouds, i.e. sets of x, y, z data points. To digitize the total surface of an object, it is necessary to combine the different measurement data obtained by the optical sensor from different views. Depending on the size of the object and the required accuracy of the measured data, different sensor set-ups with a handling system or a combination of linear and rotation axes are described. Furthermore, strategies to match the overlapping point clouds of a digitized object are introduced. This is very important, especially for the digitization of large objects like 1:1 car models. With different sensor sizes, it is possible to digitize small objects like teeth, crowns and inlays with an overall accuracy of 20 micrometers, as well as large objects like car models with a total accuracy of 0.5 mm. The various applications in the field of optical digitization are described.
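The abstract does not name its matching algorithm; assuming corresponding points in the overlap region are known, one standard building block for combining views is the least-squares rigid fit (Kabsch/Procrustes) that aligns one view's point cloud onto another. A sketch with synthetic data:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that Q ~ P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known motion between two "views" of the same surface patch.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true

R, t = rigid_fit(P, Q)
```

In practice the correspondences themselves are unknown and are estimated iteratively (e.g. by nearest-neighbour search), with a fit like this as the inner step.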
Direct Numerical Simulation of turbulent heat transfer up to Reτ = 2000
NASA Astrophysics Data System (ADS)
Hoyas, Sergio; Pérez-Quiles, Jezabel; Lluesma-Rodríguez, Federico
2017-11-01
We present a new set of direct numerical simulations of turbulent heat transfer in a channel flow for a Prandtl number of 0.71 and a friction Reynolds number of 2000. Mixed boundary conditions, i.e., wall temperature that is time independent and varies linearly along the streamwise direction, have been used for the thermal field. The effect of the box size on the one-point statistics of the thermal field and on the kinetic energy, dissipation and turbulence budgets has been studied, showing that a domain with streamwise and spanwise sizes of 4πh and 2πh, where h is the channel half-height, is large enough to reproduce the one-point statistics of larger boxes. The scaling of these quantities with the Reynolds number has also been studied using a new dataset of simulations at smaller Reynolds numbers, finding two different scales for the inner and outer layers of the flow. Funded by project ENE2015-71333-R of the Spanish Ministerio de Economía y Competitividad.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de
2016-11-15
We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
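The core of the technique — applying a polynomial filter to vectors through the Chebyshev three-term recurrence, with no matrix inversion — fits in a few lines. A sketch for a symmetric matrix whose spectrum already lies in [-1, 1], using the undamped Chebyshev expansion of the window indicator (the damping kernels discussed in the paper would modify the coefficients; the test matrix is invented):

```python
import numpy as np

def chebyshev_window_filter(A, v, a, b, degree):
    """Approximate the spectral projector onto eigenvalues in [a, b],
    applied to v, for symmetric A with spectrum in [-1, 1].
    """
    ta, tb = np.arccos(a), np.arccos(b)
    y = (ta - tb) / np.pi * v                    # k = 0 expansion coefficient
    Tkm1, Tk = v, A @ v                          # T_0 v and T_1 v
    for k in range(1, degree + 1):
        ck = 2.0 * (np.sin(k * ta) - np.sin(k * tb)) / (np.pi * k)
        y = y + ck * Tk
        Tkm1, Tk = Tk, 2.0 * (A @ Tk) - Tkm1     # three-term recurrence
    return y

# Target the interior eigenvalue 0.0 of a small test matrix.
A = np.diag([-0.9, -0.5, 0.0, 0.4, 0.8])
v = np.ones(5)
y = chebyshev_window_filter(A, v, a=-0.2, b=0.2, degree=80)
# The component along the in-window eigenvector dominates the output.
```

In a full filter-diagonalization run this map is applied to a block of search vectors, which are then orthonormalized and Rayleigh-Ritz-projected; a general matrix is first shifted and scaled so its spectrum fits in [-1, 1].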
For the depolarization of linearly polarized light by smoke particles
NASA Astrophysics Data System (ADS)
Sun, Wenbo; Liu, Zhaoyan; Videen, Gorden; Fu, Qiang; Muinonen, Karri; Winker, David M.; Lukashin, Constantine; Jin, Zhonghai; Lin, Bing; Huang, Jianping
2013-06-01
The CALIPSO satellite mission consistently measures a volume (molecular plus particulate) light depolarization ratio of ∼2% for smoke, compared to ∼1% for marine aerosols and ∼15% for dust. The observed ∼2% smoke depolarization ratio comes primarily from the nonspherical habits of particles in the smoke at certain particle sizes. In this study, the depolarization of linearly polarized light by small sphere aggregates and irregular Gaussian-shaped particles is studied to reveal the physical relationship between the depolarization of linearly polarized light and smoke aerosol shape and size. It is found that the depolarization ratio curves of Gaussian-deformed spheres are very similar to those of sphere aggregates in terms of scattering-angle dependence and particle size parameter when the size parameter is smaller than 1.0π. This demonstrates that small randomly oriented nonspherical particles share common depolarization properties as functions of scattering angle and size parameter, which may be very useful information for the characterization and active remote sensing of smoke particles using polarized light. We also show that the depolarization ratio from the CALIPSO measurements could be used to derive smoke aerosol particle size. From the calculated light depolarization ratios of Gaussian-shaped smoke particles and the CALIPSO-measured light depolarization ratio of ∼2% for smoke, the mean particle size of South African smoke is estimated to be about half of the 532 nm wavelength of the CALIPSO lidar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies inherent in the dynamic properties of diaphragm motion (the enabling phenomenology in video compression and encoding techniques) were integrated with the geometric shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters; the resulting problem can be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraint stipulating the kinetic range of the motion, together with a spatial constraint preventing unphysical deviations, allowed the optimal contour of the diaphragm to be obtained with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired in a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. Automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with SD of 0.64 mm for all enrolled patients. The submillimeter accuracy of these outcomes demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images.
Conclusion: The new algorithm will provide a potential solution for rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
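Setting aside the temporal and kinetic constraints, the geometric core — fitting a parabola to candidate boundary points — is an ordinary linear least-squares problem, since y = a x² + b x + c is linear in its coefficients. A sketch with synthetic points (the coefficients and noise level are invented, not taken from the study):

```python
import numpy as np

# Synthetic diaphragm-boundary pixels on a known parabola, plus noise.
rng = np.random.default_rng(1)
a_true, b_true, c_true = 0.002, -0.5, 120.0
x = np.linspace(0.0, 400.0, 80)
y = a_true * x**2 + b_true * x + c_true + rng.normal(0.0, 0.1, x.size)

# Unconstrained least-squares fit via the linear design matrix [x^2, x, 1].
X = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_fit, b_fit, c_fit = coef
```

The paper's method additionally bounds how far the coefficients may move between consecutive projections, which is what keeps the fit on the diaphragm when other edges enter the image.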
Electric-field-induced association of colloidal particles
NASA Astrophysics Data System (ADS)
Fraden, Seth; Hurd, Alan J.; Meyer, Robert B.
1989-11-01
Dilute suspensions of micron diameter dielectric spheres confined to two dimensions are induced to aggregate linearly by application of an electric field. The growth of the average cluster size agrees well with the Smoluchowski equation, but the evolution of the measured cluster size distribution exhibits significant departures from theory at large times due to the formation of long linear clusters which effectively partition space into isolated one-dimensional strips.
Are larger dental practices more efficient? An analysis of dental services production.
Lipscomb, J; Douglass, C W
1986-01-01
Whether cost-efficiency in dental services production increases with firm size is investigated through application of an activity analysis production function methodology to data from a national survey of dental practices. Under this approach, service delivery in a dental practice is modeled as a linear programming problem that acknowledges distinct input-output relationships for each service. These service-specific relationships are then combined to yield projections of overall dental practice productivity, subject to technical and organizational constraints. The activity analysis reported here represents arguably the most detailed evaluation yet of the relationship between dental practice size and cost-efficiency, controlling for such confounding factors as fee and service-mix differences across firms. We conclude that cost-efficiency does increase with practice size, over the range from solo to four-dentist practices. Largely because of data limitations, we were unable to test satisfactorily for scale economies in practices with five or more dentists. Within their limits, our findings are generally consistent with results from the neoclassical production function literature. From the standpoint of consumer welfare, the critical question raised (but not resolved) here is whether these apparent production efficiencies of group practice are ultimately translated by the market into lower fees, shorter queues, or other nonprice benefits. PMID:3102404
Han, Sungmin; Chu, Jun-Uk; Park, Jong Woong; Youn, Inchan
2018-05-15
Proprioceptive afferent activities recorded by a multichannel microelectrode have been used to decode limb movements to provide sensory feedback signals for closed-loop control in a functional electrical stimulation (FES) system. However, analyzing the high dimensionality of neural activity is one of the major challenges in real-time applications. This paper proposes a linear feature projection method for the real-time decoding of ankle and knee joint angles. Single-unit activity was extracted as a feature vector from proprioceptive afferent signals that were recorded from the L7 dorsal root ganglion during passive movements of ankle and knee joints. The dimensionality of this feature vector was then reduced using a linear feature projection composed of projection pursuit and negentropy maximization (PP/NEM). Finally, a time-delayed Kalman filter was used to estimate the ankle and knee joint angles. The PP/NEM approach had a better decoding performance than did other feature projection methods, and all processes were completed within the real-time constraints. These results suggested that the proposed method could be a useful decoding method to provide real-time feedback signals in closed-loop FES systems.
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.
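A product-mix formulation of the kind used in such sawmill studies can be reproduced in a few lines with a modern solver. A hypothetical two-product, two-constraint example (all numbers invented, not the mill's data), solved with `scipy.optimize.linprog`:

```python
from scipy.optimize import linprog

# Maximize revenue 30*x1 + 20*x2 (dollars per unit of two lumber products)
# subject to sawing and planing capacity; linprog minimizes, so negate.
c = [-30.0, -20.0]
A_ub = [[2.0, 1.0],   # sawing hours:  2*x1 + 1*x2 <= 100
        [1.0, 3.0]]   # planing hours: 1*x1 + 3*x2 <= 90
b_ub = [100.0, 90.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = res.x
revenue = -res.fun    # optimal product mix and revenue
```

The dual variables (shadow prices) of the capacity constraints are what translate into guidance on expanding mill capacity and bidding for stumpage.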
Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi
2013-09-01
Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). 
Furthermore, examples are given of how to evaluate linearity over the entire concentration range when only two concentration levels are used for linear regression. To conclude, two-concentration linear regression is accurate and robust enough for routine use in regulated LC-MS bioanalysis, and it significantly saves time and cost as well. Copyright © 2013 Elsevier B.V. All rights reserved.
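The comparison at the heart of the study can be sketched on synthetic data: fit one calibration line through eight levels and another through only the lowest and highest, then back-calculate a QC sample with each. The concentrations, response model, and noise level below are invented.

```python
import numpy as np

# Synthetic comparison of 8-level vs. two-level linear calibration.
# Concentration levels, response factor, and noise are invented.
rng = np.random.default_rng(0)
levels = np.array([1, 2, 5, 10, 50, 100, 500, 1000.0])        # ng/mL
response = 0.02 * levels + rng.normal(0, 0.002, levels.size)  # detector response

slope8, icpt8 = np.polyfit(levels, response, 1)               # all 8 levels
two = [0, -1]                                                 # lowest + highest only
slope2, icpt2 = np.polyfit(levels[two], response[two], 1)

qc_resp = 0.02 * 250                                          # mid-range QC sample
qc8 = (qc_resp - icpt8) / slope8
qc2 = (qc_resp - icpt2) / slope2
print(qc8, qc2)   # back-calculated QC concentrations agree closely
```

On this toy model the two back-calculated concentrations differ negligibly, mirroring the retrospective finding that the two-concentration fit introduces no extra bias.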
DOE Office of Scientific and Technical Information (OSTI.GOV)
He Guangjun; Duan Wenshan; Tian Duoxiang
2008-04-15
For unmagnetized dusty plasma with many different dust grain species containing both hot isothermal electrons and ions, both the linear dispersion relation and the Kadomtsev-Petviashvili equation for small, but finite amplitude dust acoustic waves are obtained. The linear dispersion relation is investigated numerically. Furthermore, the variations of amplitude, width, and propagation velocity of the nonlinear solitary wave with an arbitrary dust size distribution function are studied as well. Moreover, both the power law distribution and the Gaussian distribution are approximately simulated by using appropriate arbitrary dust size distribution functions.
Tsai, Shirley C; Tsai, Chen S
2013-08-01
A linear theory on temporal instability of megahertz Faraday waves for monodisperse microdroplet ejection, based on mass conservation and the linearized Navier-Stokes equations, is presented using the recently observed micrometer-sized droplet ejection from a millimeter-sized spherical water ball as a specific example. The theory is verified in experiments utilizing silicon-based multiple-Fourier-horn ultrasonic nozzles at megahertz frequency to facilitate temporal instability of the Faraday waves. Specifically, the linear theory not only correctly predicted the Faraday wave frequency, the onset threshold of Faraday instability, the effect of viscosity, and the dynamics of droplet ejection, but also established the first theoretical formula for the size of the ejected droplets: the droplet diameter equals four-tenths of the Faraday wavelength involved. The high rate of increase in Faraday wave amplitude at megahertz drive frequency beyond the onset threshold, together with the enhanced excitation displacement on the nozzle end face facilitated by the megahertz multiple Fourier horns in resonance, led to high-rate ejection of micrometer-sized monodisperse droplets (>10^7 droplets/s) at low electrical drive power (<1 W) with short initiation time (<0.05 s). This is in stark contrast to the Rayleigh-Plateau instability of a liquid jet, which ejects one droplet at a time. The measured droplet diameters, ranging from 2.2 to 4.6 μm at drive frequencies of 2 to 1 MHz, fall within the optimum particle size range for pulmonary drug delivery.
Case studies of transportation investment to identify the impacts on the local and state economy.
DOT National Transportation Integrated Search
2013-01-01
This project provides case studies of the impact of transportation investments on local economies. We use multiple approaches to measure impacts since the effects of transportation projects can vary according to the size of a project and the size...
46 CFR 160.040-2 - Type and size.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Line-Throwing Appliance, Impulse-Projected Rocket Type (and Equipment) § 160.040-2 Type and size. (a) Impulse-projected rocket type line-throwing appliances required by... and hand directed, or suitably supported and hand directed. (b) Impulse-projected rocket type line...
A Study of Functional Polymer Colloids Prepared Using Thiol-Ene/Yne Click Chemistry
NASA Astrophysics Data System (ADS)
Durham, Olivia Z.
This project demonstrates the first instance of thiol-ene chemistry as the polymerization method for the production of polymer colloids in two-phase heterogeneous suspensions, miniemulsions, and emulsions. This work was also expanded to thiol-yne chemistry for the production of polymer particles containing increased crosslinking density. The utility of thiol-ene and thiol-yne chemistries for polymerization and polymer modification is well established in bulk systems. These reactions are considered 'click' reactions, which can be defined as processes that are both facile and simple, offering high yields with nearly 100% conversion, no side products, easy product separation, compatibility with a diverse variety of commercially available starting materials, and orthogonality with other chemistries. In addition, thiol-ene and thiol-yne chemistry follow a step-growth mechanism for the development of highly uniform polymer networks, where polymer growth is dependent on the coupling of functional groups. These step-growth polymerization systems are in stark contrast to the chain-growth mechanisms of acrylic and styrenic monomers that have dominated the field of conventional heterogeneous polymerizations. Preliminary studies evaluated the mechanism of particle production in suspension and miniemulsion systems. Monomer droplets were compared to the final polymer particles to confirm that particle growth occurred through the polymerization of monomer droplets. Additional parameters examined include homogenization energy (mechanical mixing), diluent species and concentration, and monomer content. These reactions were conducted using photoinitiation to yield particles in a matter of minutes with diameters in the size range of several microns to hundreds of microns in suspensions or submicron particles in miniemulsions. Improved control over the particle size and size distribution was examined through variation of reaction parameters. 
In addition, a method of seeded suspension polymerization was attempted. This project was further expanded through an extensive evaluation of stabilizers in thiol-ene suspension polymerizations. The scope of stabilizers used included synthetic surfactants (ionic and nonionic), natural gums, and colloidal silica (Pickering stabilization). Suspension polymerizations were further expanded to include thiol-yne chemistry for the evaluation of polymer composition and thermal properties. In addition, polymer particles with excess ene, yne, or thiol functionality were successfully developed to demonstrate the potential for further functionalization. The self-limiting behavior of thiol-ene/yne reactions allows for successful synthesis of functional polymer colloids using off-stoichiometric amounts of monomers. This capacity to control functionality is illustrated through the creation of fluorescent polymer particles using both an in situ thiol-ene polymerization reaction with a vinyl chromophore as well as through post-polymerization modification of thiol-ene and thiol-yne polymers with excess thiol functionality via thiol-isocyanate chemistry. To produce smaller polymer particles without the need for intense homogenization energy or high stabilizer concentrations, an emulsion polymerization system was implemented using a water soluble-thermal initiator. It was found that unlike thiol-ene suspensions, which are limited to crosslinked systems, thiol-ene emulsion polymerizations allowed for the production of polymer particles comprised of either crosslinked or linear polymer networks. For the crosslinked systems, various anionic SDS surfactant concentrations were examined to observe the influence on particle size. In linear polymer systems, variations in polymer composition were examined. 
Preliminary studies performed with a monomer with an ethylene glycol-like structure indicated that the synthesis of polymer particles with narrower size distributions compared to any of the other emulsion compositions was possible. Finally, thiol-ene chemistry was also employed toward the synthesis of degradable polyanhydride polymer particles. Unlike the aforementioned studies, the approach to particle synthesis was conducted by using a premade thiol-ene polymer. Various linear thiol-ene polyanhydrides were emulsified in water or buffered solutions via sonication. Polymer latex was obtained upon solvent evaporation of the dichloromethane (DCM) solvent used to solubilize the polymer. In this work, variation of polymer composition as well as degradation was examined. Additional experiments included a study of the release of Rhodamine B dye, functionalization of the linear polymers, and studies involving the delay of degradation through the incorporation of crosslinking in the polymer particles. The projects presented herein provide an innovative approach to the synthesis of polymer colloids using thiol-ene and thiol-yne 'click' chemistry in both heterogeneous polymerizations as well as through solvent evaporation of premade polymer solutions. Polymer colloids prove to be an area of great interest for numerous applications that encompass various areas involving biomedical and industrial technologies including paints and coatings, cosmetics, diagnostics, and drug delivery. Improvements in methods of chemical synthesis as well as advances in the tailoring of material properties are of utmost importance for the ever increasing demands of new technologies and educational enlightenment.
Wu, Yabei; Lu, Huanzhang; Zhao, Fei; Zhang, Zhiyong
2016-01-01
Shape serves as an important additional feature for space target classification, complementary to the features already available. Since different shapes lead to different projection functions, the projection property can be regarded as one kind of shape feature. In this work, the problem of estimating the projection function from the infrared signature of the object is addressed. We show that the projection function of any rotationally symmetric object can be approximately represented as a linear combination of some base functions. Based on this fact, the signal model of the emissivity-area product sequence is constructed, which is a particular mathematical function of the linear coefficients and micro-motion parameters. Then, a least squares estimator is proposed to estimate the projection function and micro-motion parameters jointly. Experiments validate the effectiveness of the proposed method. PMID:27763500
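The estimation idea, representing the projection function as a linear combination of base functions and solving for the coefficients by least squares, can be sketched as follows. The basis (a constant plus two cosine harmonics) and the coefficients are invented stand-ins, not the paper's actual base functions.

```python
import numpy as np

# Recover the coefficients of a function expressed as a linear
# combination of known base functions via least squares.
# Basis and true coefficients are invented for illustration.
t = np.linspace(0, 2 * np.pi, 200)            # rotation-angle samples
basis = np.vstack([np.ones_like(t), np.cos(t), np.cos(2 * t)]).T
true_c = np.array([1.0, 0.4, -0.2])           # hypothetical coefficients
signal = basis @ true_c + 0.01 * np.random.default_rng(1).normal(size=t.size)

coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
print(coef)   # close to true_c despite the added noise
```

In the paper the same linear-in-coefficients structure is what makes the joint estimation with micro-motion parameters tractable.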
Observation of Droplet Size Oscillations in a Two Phase Fluid under Shear Flow
NASA Astrophysics Data System (ADS)
Courbin, Laurent; Panizza, Pascal
2004-11-01
It is well known that complex fluids exhibit strong couplings between their microstructure and the flow field. Such couplings may lead to unusual nonlinear rheological behavior. Because energy is constantly brought to the system, richer dynamic behavior such as a nonlinear oscillatory or chaotic response is expected. We report on the observation of droplet size oscillations at fixed shear rate. At low shear rates, we observe two steady states for which the droplet size results from a balance between capillary and viscous stresses. For intermediate shear rates, the droplet size becomes a periodic function of time. We propose a phenomenological model to account for the observed phenomenon and compare numerical results to experimental data.
Neuromuscular Control of Rapid Linear Accelerations in Fish
2016-06-22
Final Report (2014–30-Apr-2015): Neuromuscular Control of Rapid Linear Accelerations in Fish. Tufts University. Approved for public release; distribution unlimited. In this project, we measured muscle activity, body movements, and flow patterns during linear accelerations in fish.
Engaging High School Youth in Paleobiology Research
NASA Astrophysics Data System (ADS)
Saltzman, J.; Heim, N. A.; Payne, J.
2013-12-01
The chasm between classroom science and scientific research is bridged by the History of Life Internships at Stanford University. Nineteen interns recorded more than 25,500 linear body size measurements of fossil echinoderms and ostracods spanning more than 11,000 species. The interns were selected from a large pool of applicants, and well-established relationships with local teachers at schools serving underrepresented groups in STEM fields were leveraged to ensure a diverse mix of applicants. The lead investigator has been hosting interns in his research group for seven years, in the process measuring over 36,000 foraminifera species as well as representatives from many other fossil groups. We (faculty member, researcher, and educators) all find it very valuable to engage youth in novel research projects. We are able to create an environment where high school students can make genuine contributions to important and unsolved scientific problems, not only through data collection but also through original data analysis. Science often involves long intervals of data collection, which can be tedious, and big questions often require big datasets. Body size evolution is ideally suited to this type of program, as the data collection process requires substantial person-power but not deep technical expertise or expensive equipment. Students are therefore able to engage in the full scientific process, posing previously unanswered questions regarding the evolution of animal size, compiling relevant data, and then analyzing the data in order to test their hypotheses. Some of the projects students developed were truly creative and fun to see come together. Communicating results is a critical step in science, yet it is often lost in the science classroom. The interns submitted seven abstracts to this meeting for the youth session entitled Bright STaRS based on their research projects. 
To round out the experience, students also learn about the broad field of earth sciences through traditional lectures, active learning exercises, discussions of primary and secondary literature, guest speakers, lab tours and field trips (including to the UC Museum of Paleontology, Hayward fault, fossiliferous Pliocene outcrops, and tidepools). We will use a survey to assess the impact of the History of Life Internships on participant attitudes toward science and careers in science.
NASA Technical Reports Server (NTRS)
Majda, George
1986-01-01
One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.
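A minimal sketch of the two discretization families on a stiff variable-coefficient test problem y' = λ(t)y: the implicit-midpoint update evaluates the coefficient once at the step midpoint (the one-leg twin of the trapezoidal rule), while the trapezoidal linear multistep update evaluates it at both endpoints. The coefficient function, step size, and interval below are invented for illustration.

```python
import numpy as np

# Stiff variable-coefficient test problem y' = lam(t) * y.
# Coefficient function, step size, and interval are invented.
lam = lambda t: -1000.0 * (2.0 + np.sin(20.0 * t))   # fast, time-varying decay
h, steps = 0.05, 20                                  # large step vs. stiffness

y_ol = 1.0   # implicit midpoint: one-leg twin of the trapezoidal rule
y_ms = 1.0   # trapezoidal rule: linear multistep form
for i in range(steps):
    t0, t1 = i * h, (i + 1) * h
    lm = lam(0.5 * (t0 + t1))                        # one coefficient evaluation
    y_ol *= (1 + h * lm / 2) / (1 - h * lm / 2)
    y_ms *= (1 + h * lam(t0) / 2) / (1 - h * lam(t1) / 2)

print(abs(y_ol), abs(y_ms))   # exact solution decays; both iterates stay bounded
```

For a linear test equation both one-step updates reduce to products of amplification factors, which is why their stability at large step sizes can be compared factor by factor, as the paper does analytically.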
Linear Chord Diagrams with Long Chords
NASA Astrophysics Data System (ADS)
Sullivan, Everett
A linear chord diagram of size n is a partition of the first 2n integers into sets of size two. These diagrams appear in many different contexts in combinatorics and other areas of mathematics, particularly knot theory. We explore various constraints that produce diagrams which have no short chords. A number of patterns appear from the results of these constraints which we can prove using techniques ranging from explicit bijections to non-commutative algebra.
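A brute-force enumeration makes the objects concrete: generate all perfect matchings of {1, …, 2n} and count those whose chords all span at least k (so k = 2 forbids the shortest chords). This is an illustrative sketch, not the paper's bijective or algebraic machinery.

```python
def diagrams(points):
    """Yield all perfect matchings (chord diagrams) of a list of points."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, p in enumerate(rest):
        for m in diagrams(rest[:i] + rest[i + 1:]):
            yield [(first, p)] + m

def count_long(n, k):
    """Count linear chord diagrams of size n whose chords all span >= k."""
    return sum(all(b - a >= k for a, b in m)
               for m in diagrams(list(range(1, 2 * n + 1))))

print(count_long(3, 1), count_long(3, 2))
```

With no constraint (k = 1) the count is the double factorial (2n-1)!!, e.g. 15 for n = 3; forbidding length-1 chords at n = 3 leaves 5 diagrams, which one can confirm by inclusion-exclusion over the adjacent pairs.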
van der Laan, J. D.; Sandia National Lab.; Scrymgeour, D. A.; ...
2015-03-13
We find that for infrared wavelengths there are broad ranges of particle sizes and refractive indices representing fog and rain for which circular polarization can persist to longer ranges than linear polarization. Using polarization-tracking Monte Carlo simulations for varying particle size, wavelength, and refractive index, we show that for specific scene parameters circular polarization outperforms linear polarization in maintaining the intended polarization state at large optical depths. This enhancement can be exploited to improve range and target detection in obscurant environments that are important in many critical sensing applications. Specifically, circular polarization persists better than linear polarization for radiation fog in the short-wave infrared, for advection fog in the short-wave and long-wave infrared, and for large particle sizes of Sahara dust around the 4 micron wavelength.
NASA Astrophysics Data System (ADS)
Neves de Campos, Thiago
This research examines the distortionary effects of a discovered and undeveloped sequential modular offshore project under five different designs for a production-sharing agreement (PSA). The model differs from previous research by looking at the effect of taxation from the perspective of a host government, whose objective is to maximize government utility over government revenue generated by the project and the non-pecuniary benefits to society. This research uses Modern Asset Pricing (MAP) theory, which provides a good measure of the asset value accruing to various stakeholders in the project combined with the optimal decision rule for the development of the investment opportunity. Monte Carlo simulation was also applied to incorporate into the model the most important sources of risk associated with the project and to account for non-linearity in the cash flows. For a complete evaluation of how the fiscal system affects project development, an investor behavioral model was constructed, incorporating three operational decisions: investment timing, capacity size, and early abandonment. The model considers four sources of uncertainty that affect the project value and the firm's optimal decisions: the long-run oil price and short-run deviations from that price, cost escalation, and the reservoir recovery rate. The optimization outcomes show that all fiscal systems evaluated distort the companies' optimal decisions, and companies adjust their choices to avoid taxation in different ways according to the characteristics of the fiscal system. Moreover, fiscal systems with tax provisions that try to capture additional project profits based on production profitability measures lead to stronger distortions in the project's investment and output profile. It is also shown that a model based on a fixed percentage rate is the system that creates the least distortion. This is because companies will be subjected to the same government share of profit oil regardless of any operational decision they make to change the production profile.
Prananingrum, Widyasri; Tomotake, Yoritoki; Naito, Yoshihito; Bae, Jiyoung; Sekine, Kazumitsu; Hamada, Kenichi; Ichikawa, Tetsuo
2016-08-01
The prosthetic applications of titanium have been challenging because titanium does not possess suitable properties for the conventional casting method using the lost-wax technique. We have developed a production method for biomedical applications of porous titanium using a moldless process. This study aimed to evaluate the physical and mechanical properties of porous titanium fabricated using various particle sizes, shapes, and mixing ratios of titanium powder to wax binder for use in prosthesis production. CP Ti powders with different particle sizes, shapes, and mixing ratios were divided into five groups. A 90:10 wt% mixture of titanium powder and wax binder was prepared manually at 70°C. After debinding at 380°C, the specimen was sintered in Ar at 1100°C without a mold for 1 h. The linear shrinkage ratio of sintered specimens ranged from 2.5% to 14.2%, increasing with decreasing particle size. While the linear shrinkage ratio of Groups 3, 4, and 5 was approximately 2%, Group 1 showed the highest shrinkage of all. The bending strength ranged from 106 to 428 MPa under the influence of porosity; Groups 1 and 2 presented low porosity and correspondingly higher strength. The shear bond strength ranged from 32 to 100 MPa and was also particle-size dependent. A decrease in porosity increased the linear shrinkage ratio and bending strength. The shrinkage and mechanical strength required for prostheses were dependent on the particle size and shape of the titanium powders. These findings suggest that this production method can be applied to prosthetic frameworks by selecting the material design. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dependence of Raman Spectral Intensity on Crystal Size in Organic Nano Energetics.
Patel, Rajen B; Stepanov, Victor; Qiu, Hongwei
2016-08-01
Raman spectra of various nitramine energetic compounds were investigated as a function of crystal size in the nanoscale regime. In the case of 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20), there was a linear relationship between Raman spectral intensity and crystal size. Notably, the Raman modes between 120 cm(-1) and 220 cm(-1) were especially affected and, at the smallest crystal size, were completely eliminated. The Raman spectral intensity of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), like that of CL-20, depended linearly on crystal size. The Raman spectral intensity of 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), however, was not observably changed by crystal size. A non-nitramine explosive compound, 2,4,6-triamino-1,3,5-trinitrobenzene (TATB), was also investigated. Its spectral intensity was also found to correlate linearly with crystal size, although substantially less so than that of HMX and CL-20. To explain the observed trends, it is hypothesized that disordered molecular arrangement, originating at the crystal surface, may be responsible. In particular, it appears that the thickness of the disordered surface layer depends on molecular characteristics, including size and conformational flexibility. Furthermore, as the mean crystal size decreases, the volume fraction of disordered molecules within a specimen increases, consequently weakening the Raman intensity. These results could have practical benefit in allowing facile monitoring of crystal size during manufacturing. Finally, these findings could lead to deeper insights into the general structure of crystal surfaces. © The Author(s) 2016.
Axial diffusivity of the corona radiata correlated with ventricular size in adult hydrocephalus.
Cauley, Keith A; Cataltepe, Oguz
2014-07-01
Hydrocephalus causes changes in the diffusion-tensor properties of periventricular white matter. Understanding the nature of these changes may aid in the diagnosis and treatment planning of this relatively common neurologic condition. Because ventricular size is a common measure of the severity of hydrocephalus, we hypothesized that a quantitative correlation could be made between the ventricular size and diffusion-tensor changes in the periventricular corona radiata. In this article, we investigated this relationship in adult patients with hydrocephalus and in healthy adult subjects. Diffusion-tensor imaging metrics of the corona radiata were correlated with ventricular size in 14 adult patients with acute hydrocephalus, 16 patients with long-standing hydrocephalus, and 48 consecutive healthy adult subjects. Regression analysis was performed to investigate the relationship between ventricular size and the diffusion-tensor metrics of the corona radiata. Subject age was analyzed as a covariable. There is a linear correlation between fractional anisotropy of the corona radiata and ventricular size in acute hydrocephalus (r = 0.784, p < 0.001), with positive correlation with axial diffusivity (r = 0.636, p = 0.014) and negative correlation with radial diffusivity (r = 0.668, p = 0.009). In healthy subjects, axial diffusion in the periventricular corona radiata is more strongly correlated with ventricular size than with patient age (r = 0.466, p < 0.001, compared with r = 0.058, p = 0.269). Axial diffusivity of the corona radiata is linearly correlated with ventricular size in healthy adults and in patients with hydrocephalus. Radial diffusivity of the corona radiata decreases linearly with ventricular size in acute hydrocephalus but is not significantly correlated with ventricular size in healthy subjects or in patients with long-standing hydrocephalus.
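The type of regression reported above can be reproduced on synthetic numbers with `scipy.stats.linregress`, which returns the correlation coefficient and p-value directly; the sample size, units, slope, and noise level below are invented and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic example of correlating a DTI metric with ventricular size,
# reporting r and p. All numbers are invented for illustration.
rng = np.random.default_rng(3)
ventricle_size = rng.uniform(20, 80, 14)          # e.g. mL, 14 "patients"
fa = 0.3 + 0.003 * ventricle_size + rng.normal(0, 0.01, 14)

fit = stats.linregress(ventricle_size, fa)
print(fit.slope, fit.rvalue, fit.pvalue)
```

Reporting both r and p, as the abstract does, guards against over-interpreting a strong-looking slope obtained from a small sample.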
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
Cole, Krystal; Roessler, Christian G.; Mulé, Elizabeth A.; Benson-Xu, Emma J.; Mullen, Jeffrey D.; Le, Benjamin A.; Tieman, Alanna M.; Birone, Claire; Brown, Maria; Hernandez, Jesus; Neff, Sherry; Williams, Daniel; Allaire, Marc; Orville, Allen M.; Sweet, Robert M.; Soares, Alexei S.
2014-01-01
High throughput screening technologies such as acoustic droplet ejection (ADE) greatly increase the rate at which X-ray diffraction data can be acquired from crystals. One promising high throughput screening application of ADE is to rapidly combine protein crystals with fragment libraries. In this approach, each fragment soaks into a protein crystal either directly on data collection media or on a moving conveyor belt which then delivers the crystals to the X-ray beam. By simultaneously handling multiple crystals combined with fragment specimens, these techniques relax the automounter duty-cycle bottleneck that currently prevents optimal exploitation of third generation synchrotrons. Two factors limit the speed and scope of projects that are suitable for fragment screening using techniques such as ADE. Firstly, in applications where the high throughput screening apparatus is located inside the X-ray station (such as the conveyor belt system described above), the speed of data acquisition is limited by the time required for each fragment to soak into its protein crystal. Secondly, in applications where crystals are combined with fragments directly on data acquisition media (including both of the ADE methods described above), the maximum time that fragments have to soak into crystals is limited by evaporative dehydration of the protein crystals during the fragment soak. Here we demonstrate that both of these problems can be minimized by using small crystals, because the soak time required for a fragment hit to attain high occupancy depends approximately linearly on crystal size. PMID:24988328
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is being increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects.
The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations.
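The sample-size effect the study reports can be illustrated with a minimal, self-contained sketch. Synthetic features and a closed-form ridge fit stand in for the HCP rsFC features and the paper's six algorithms; the function names and parameter values below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

def prediction_accuracy(n_train, n_features=50, n_test=200, noise=1.0):
    """Train on n_train synthetic 'subjects'; return the Pearson r
    between predicted and true scores on a held-out test set."""
    w_true = rng.standard_normal(n_features)
    X = rng.standard_normal((n_train + n_test, n_features))
    y = X @ w_true + noise * rng.standard_normal(n_train + n_test)
    w = ridge_fit(X[:n_train], y[:n_train])
    pred = X[n_train:] @ w
    return np.corrcoef(pred, y[n_train:])[0, 1]

# Prediction accuracy rises with training sample size, echoing the
# paper's sub-sampling experiment (sizes 20 to 700).
accs = {n: prediction_accuracy(n) for n in (20, 100, 700)}
```

Even this toy setup reproduces the qualitative finding: accuracy climbs steeply at small n and saturates as the training set grows.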
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne R.
2009-01-01
This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a parallel and scalable Krylov solver, based on dual-primal iterative substructuring, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres.
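A concrete instance of the closed-form variance formulae the article draws on: in simple linear regression the slope estimate satisfies Var(β̂) = σ² / (n · sd(x)²), so the sample size needed to hit a target standard error follows by rearrangement. The helper below is an illustrative sketch, not code from the article.

```python
import math

def n_for_slope_se(sigma, sd_x, target_se):
    """Smallest n with SE(beta_hat) = sigma / (sd_x * sqrt(n)) <= target_se,
    rearranged from the closed-form Var(beta_hat) = sigma^2 / (n * sd_x^2)
    for simple linear regression."""
    return math.ceil((sigma / (sd_x * target_se)) ** 2)
```

For example, with residual SD σ = 10 and exposure SD sd(x) = 2, halving the target standard error from 1.0 to 0.5 quadruples the required sample size, the familiar inverse-square behavior behind the heuristics in the article.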
Mniszewski, S M; Cawkwell, M J; Wall, M E; Mohd-Yusof, J; Bock, N; Germann, T C; Niklasson, A M N
2015-10-13
We present an algorithm for the calculation of the density matrix that for insulators scales linearly with system size and parallelizes efficiently on multicore, shared memory platforms with small and controllable numerical errors. The algorithm is based on an implementation of the second-order spectral projection (SP2) algorithm [ Niklasson, A. M. N. Phys. Rev. B 2002 , 66 , 155115 ] in sparse matrix algebra with the ELLPACK-R data format. We illustrate the performance of the algorithm within self-consistent tight binding theory by total energy calculations of gas phase poly(ethylene) molecules and periodic liquid water systems containing up to 15,000 atoms on up to 16 CPU cores. We consider algorithm-specific performance aspects, such as local vs nonlocal memory access and the degree of matrix sparsity. Comparisons to sparse matrix algebra implementations using off-the-shelf libraries on multicore CPUs, graphics processing units (GPUs), and the Intel many integrated core (MIC) architecture are also presented. The accuracy and stability of the algorithm are illustrated with long duration Born-Oppenheimer molecular dynamics simulations of 1000 water molecules and a 303 atom Trp cage protein solvated by 2682 water molecules.
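The SP2 recursion at the heart of the algorithm above is compact enough to sketch. The version below is a dense NumPy stand-in for the cited Niklasson recursion, X ← X² or X ← 2X − X², with the branch chosen to drive trace(X) toward the occupation number; the paper's linear scaling comes from carrying this out in sparse ELLPACK-R algebra with thresholding, which this sketch omits.

```python
import numpy as np

def sp2_density_matrix(H, n_occ, eps_min, eps_max, tol=1e-10, max_iter=100):
    """Dense sketch of second-order spectral projection (SP2).

    H is mapped so its spectrum lies in [0, 1] with occupied states near 1,
    then X is repeatedly replaced by X@X or 2X - X@X, whichever branch
    moves trace(X) toward n_occ. For a gapped (insulating) spectrum the
    eigenvalues are driven to 0 or 1 and X converges to the density matrix.
    """
    I = np.eye(H.shape[0])
    X = (eps_max * I - H) / (eps_max - eps_min)  # spectral map onto [0, 1]
    for _ in range(max_iter):
        X2 = X @ X
        idem_err = np.linalg.norm(X2 - X)  # idempotency error ||X^2 - X||
        # Pick the branch whose trace lands closer to the occupation number.
        if abs(np.trace(X2) - n_occ) <= abs(np.trace(2 * X - X2) - n_occ):
            X = X2
        else:
            X = 2 * X - X2
        if idem_err < tol:
            break
    return X
```

With a toy diagonal Hamiltonian the two lowest states end up occupied and the result is idempotent (D² = D), the defining property of a zero-temperature density matrix.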
Probing the Locality of Excited States with Linear Algebra.
Etienne, Thibaud
2015-04-14
This article reports a novel theoretical approach related to the analysis of molecular excited states. The strategy introduced here involves gathering two pieces of physical information, coming from Hilbert and direct space operations, into a general, unique quantum mechanical descriptor of electronic transitions' locality. Moreover, the projection of Hilbert and direct space-derived indices in an Argand plane delivers a straightforward way to visually probe the ability of a dye to undergo a long- or short-range charge-transfer. This information can be applied, for instance, to the analysis of the electronic response of families of dyes to light absorption by unveiling the trend of a given push-pull chromophore to increase the electronic cloud polarization magnitude of its main transition with respect to the size extension of its conjugated spacer. We finally demonstrate that all the quantities reported in this article can be reliably approximated by a linear algebraic derivation, based on the contraction of detachment/attachment density matrices from canonical to atomic space. This alternative derivation has the remarkable advantage of a very low computational cost with respect to the previously used numerical integrations, making fast and accurate characterization of large molecular systems' excited states easily affordable.
Paix, Alexandre; Wang, Yuemeng; Smith, Harold E.; Lee, Chih-Yung S.; Calidas, Deepika; Lu, Tu; Smith, Jarrett; Schmidt, Helen; Krause, Michael W.; Seydoux, Geraldine
2014-01-01
Homology-directed repair (HDR) of double-strand DNA breaks is a promising method for genome editing, but is thought to be less efficient than error-prone nonhomologous end joining in most cell types. We have investigated HDR of double-strand breaks induced by CRISPR-associated protein 9 (Cas9) in Caenorhabditis elegans. We find that HDR is very robust in the C. elegans germline. Linear repair templates with short (∼30–60 bases) homology arms support the integration of base and gene-sized edits with high efficiency, bypassing the need for selection. Based on these findings, we developed a systematic method to mutate, tag, or delete any gene in the C. elegans genome without the use of co-integrated markers or long homology arms. We generated 23 unique edits at 11 genes, including premature stops, whole-gene deletions, and protein fusions to antigenic peptides and GFP. Whole-genome sequencing of five edited strains revealed the presence of passenger variants, but no mutations at predicted off-target sites. The method is scalable for multi-gene editing projects and could be applied to other animals with an accessible germline. PMID:25249454
Concentrating Solar Power Projects | Concentrating Solar Power | NREL
Listing of CSP projects in operation, under construction, or under development, by technology: parabolic trough, linear Fresnel reflector, power tower, or dish/engine systems.
Concentrating Solar Power Projects - Dacheng Dunhuang 50MW Molten Salt
Status Date: September 29, 2016. Project Name: Dacheng Dunhuang 50MW Molten Salt. Technology: Linear Fresnel reflector. Turbine Capacity: 50.0 MW net / 50.0 MW gross. Status: Under development. Country: China. City: Dunhuang. Region: Gansu Province.
NASA Astrophysics Data System (ADS)
Bijańska, Jolanta; Wodarski, Krzysztof; Wójcik, Janusz
2016-06-01
Efficient and effective preparation of the production of new products is an important requirement for the functioning and development of small and medium-sized enterprises. One of the methods which supports the fulfilment of this condition is project management. This publication presents the results of considerations aimed at developing a project management model for preparing the production of a new product, adapted to the specificity of small and medium-sized enterprises.
Mathematical modelling in engineering: an alternative way to teach Linear Algebra
NASA Astrophysics Data System (ADS)
Domínguez-García, S.; García-Planas, M. I.; Taberna, J.
2016-10-01
Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning approach, giving a dynamic classroom approach in which students model real-world problems and in turn gain a deeper knowledge of the Linear Algebra subject. Considering that most students are digital natives, we use the e-portfolio as a tool of communication between students and teachers, besides being a good place to make the work visible. In this article, we present an overview of the design and implementation of project-based learning for a Linear Algebra course taught during 2014-2015 at the ETSEIB of Universitat Politècnica de Catalunya (UPC).
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 29 June 2004 The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth. Eons of atmospheric dust storm activity has left its mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms. Like yesterday's image, the linear 'ripples' are water-ice clouds. As spring is deepening at the North Pole these clouds are becoming more prevalent. Image information: VIS instrument. Latitude 68.9, Longitude 135.5 East (224.5 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. 
Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
ERIC Educational Resources Information Center
Dolan, Thomas G.
2003-01-01
Describes project delivery methods that are replacing the traditional Design/Bid/Build linear approach to the management, design, and construction of new facilities. These variations can enhance construction management and teamwork. (SLD)
Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Xiong, Y. Y.; Chen, S. Y., E-mail: sychen531@163.com
2016-04-15
The influence of the parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear and nonlinear simulations. In the linear simulations, the growth rate of the edge localized mode (ELM) can be increased by the Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations in the linear phase. However, the ELM size is reduced by the parallel shear flow at the beginning of the turbulence phase, which is recognized as the P-B filaments' structure. Then, during the turbulence phase, the ELM size is decreased by the shear flow.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25 meters resolution.
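The trade-off the study exploits can be sketched numerically: replacing the quadratic focusing phase with a piecewise-linear approximation leaves a residual phase error that shrinks quadratically with the number of segments. The chirp rate and aperture below are arbitrary toy values, not the SEASAT-A parameters.

```python
import numpy as np

def quadratic_phase(t, k):
    """Azimuth focusing phase (radians) for chirp rate k."""
    return np.pi * k * t**2

def segmented_linear(t, k, n_seg):
    """Piecewise-linear interpolation of the quadratic phase over n_seg
    equal segments of the aperture -- the kind of approximation that
    trades complex multiplies for simple adds."""
    knots = np.linspace(t.min(), t.max(), n_seg + 1)
    return np.interp(t, knots, quadratic_phase(knots, k))

t = np.linspace(-0.5, 0.5, 1001)
err4 = np.max(np.abs(quadratic_phase(t, 100.0) - segmented_linear(t, 100.0, 4)))
err16 = np.max(np.abs(quadratic_phase(t, 100.0) - segmented_linear(t, 100.0, 16)))
```

For a quadratic, the worst-case error of linear interpolation on a segment of width h is exactly pi*k*h^2/4 (at the segment midpoint), so quadrupling the segment count cuts the peak phase error by a factor of 16.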
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 2 July 2004 The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth. Eons of atmospheric dust storm activity has left its mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms. This image was acquired during mid-spring near the North Pole. The linear water-ice clouds are now regional in extent and often interact with neighboring cloud systems, as seen in this image. The bottom of the image shows how the interaction can destroy the linear nature. While the surface is still visible through most of the clouds, there is evidence that dust is also starting to enter the atmosphere. Image information: VIS instrument. Latitude 68.4, Longitude 180 East (180 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr.
Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 1 July 2004 The atmosphere of Mars is a dynamic system. Water-ice clouds, fog, and hazes can make imaging the surface from space difficult. Dust storms can grow from local disturbances to global sizes, through which imaging is impossible. Seasonal temperature changes are the usual drivers in cloud and dust storm development and growth. Eons of atmospheric dust storm activity has left its mark on the surface of Mars. Dust carried aloft by the wind has settled out on every available surface; sand dunes have been created and moved by centuries of wind; and the effect of continual sand-blasting has modified many regions of Mars, creating yardangs and other unusual surface forms. This image was acquired during mid-spring near the North Pole. The linear water-ice clouds are now regional in extent and often interact with neighboring cloud systems, as seen in this image. The bottom of the image shows how the interaction can destroy the linear nature. While the surface is still visible through most of the clouds, there is evidence that dust is also starting to enter the atmosphere. Image information: VIS instrument. Latitude 68.4, Longitude 258.8 East (101.2 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing.
The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Ge, Jun; Chan, Heang-Ping; Sahiner, Berkman; Zhang, Yiheng; Wei, Jun; Hadjiiski, Lubomir M.; Zhou, Chuan
2007-03-01
We are developing a computerized technique to reduce intra- and interplane ghosting artifacts caused by high-contrast objects such as dense microcalcifications (MCs) or metal markers on the reconstructed slices of digital tomosynthesis mammography (DTM). In this study, we designed a constrained iterative artifact reduction method based on a priori 3D information of individual MCs. We first segmented individual MCs on projection views (PVs) using an automated MC detection system. The centroid and the contrast profile of the individual MCs in the 3D breast volume were estimated from the backprojection of the segmented individual MCs on high-resolution (0.1 mm isotropic voxel size) reconstructed DTM slices. An isolated volume of interest (VOI) containing one or a few MCs is then modeled as a high-contrast object embedded in a local homogeneous background. A shift-variant 3D impulse response matrix (IRM) of the projection-reconstruction (PR) system for the extracted VOI was calculated using the DTM geometry and the reconstruction algorithm. The PR system for this VOI is characterized by a system of linear equations. A constrained iterative method was used to solve these equations for the effective linear attenuation coefficients (eLACs) within the isolated VOI. Spatial constraint and positivity constraint were used in this method. Finally, the intra- and interplane artifacts on the whole breast volume resulting from the MC were calculated using the corresponding impulse responses and subsequently subtracted from the original reconstructed slices. The performance of our artifact-reduction method was evaluated using a computer-simulated MC phantom, as well as phantom images and patient DTMs obtained with IRB approval. A GE prototype DTM system that acquires 21 PVs in 3° increments over a ±30° range was used for image acquisition in this study.
For the computer-simulated MC phantom, the eLACs can be estimated accurately, thus the interplane artifacts were effectively removed. For MCs in phantom and patient DTMs, our method reduced the artifacts but also created small over-corrected areas in some cases. Potential reasons for this may include: the simplified mathematical modeling of the forward projection process, and the amplified noise in the solution of the system of linear equations.
The Observational and Theoretical Tidal Radii of Globular Clusters in M87
NASA Astrophysics Data System (ADS)
Webb, Jeremy J.; Sills, Alison; Harris, William E.
2012-02-01
Globular clusters have linear sizes (tidal radii) which theory tells us are determined by their masses and by the gravitational potential of their host galaxy. To explore the relationship between observed and expected radii, we utilize the globular cluster population of the Virgo giant M87. Unusually deep, high signal-to-noise images of M87 are used to measure the effective and limiting radii of approximately 2000 globular clusters. To compare with these observations, we simulate a globular cluster population that has the same characteristics as the observed M87 cluster population. Placing these simulated clusters in the well-studied tidal field of M87, the orbit of each cluster is solved and the theoretical tidal radius of each cluster is determined. We compare the predicted relationship between cluster size and projected galactocentric distance to observations. We find that for an isotropic distribution of cluster velocities, theoretical tidal radii are approximately equal to observed limiting radii for R_gc < 10 kpc. However, the isotropic simulation predicts a steep increase in cluster size at larger radii, which is not observed in large galaxies beyond the Milky Way. To minimize the discrepancy between theory and observations, we explore the effects of orbital anisotropy on cluster sizes, and suggest a possible orbital anisotropy profile for M87 which yields a better match between theory and observations. Finally, we suggest future studies which will establish a stronger link between theoretical tidal radii and observed radii.
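For orientation, the theoretical tidal radius reduces, for a cluster on a circular orbit, to the classic King-style Jacobi-radius formula. The sketch below uses that simplified circular-orbit form with illustrative numbers; the paper itself integrates each cluster's orbit in M87's measured tidal field, which this one-liner does not capture.

```python
def tidal_radius_pc(m_cluster, m_galaxy_enclosed, r_gc_kpc):
    """King-style tidal (Jacobi) radius, in parsecs, for a cluster of mass
    m_cluster (solar masses) on a circular orbit at galactocentric radius
    r_gc_kpc (kpc) around an enclosed galaxy mass m_galaxy_enclosed
    (solar masses):  r_t = R_gc * (m_cl / (3 * M_gal))**(1/3).
    (1 kpc = 1000 pc.)"""
    return 1000.0 * r_gc_kpc * (m_cluster / (3.0 * m_galaxy_enclosed)) ** (1.0 / 3.0)
```

A 10^5 solar-mass cluster at 10 kpc inside an enclosed mass of 10^11 solar masses comes out at roughly 70 pc, and the cube-root mass dependence makes the radius only weakly sensitive to cluster mass, consistent with tidal radii tracing the galactic potential more than the cluster itself.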
[Detection of linear chromosomes and plasmids among 15 genera in the Actinomycetales].
Ma, Ning; Ma, Wei; Jiang, Chenglin; Fang, Ping; Qin, Zhongjun
2003-10-01
Bacterial chromosomes and plasmids are commonly circular; however, linear chromosomes and plasmids have been discovered in 5 genera of the Actinomycetales. Here, we use pulsed-field gel electrophoresis to study the genomes of 19 species belonging to 15 genera in the Actinomycetales. The chromosomes of all 19 species are linear DNA, and linear plasmids with different sizes and copy numbers are detected in 5 species. This work provides a basis for investigating the possible novel functions of linear replicons beyond Streptomyces and also helps the development of Actinomycetales artificial linear chromosomes.
Offshore Wind Plant Balance-of-Station Cost Drivers and Sensitivities (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saur, G.; Maples, B.; Meadows, B.
2012-09-01
With Balance of System (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs associated with offshore turbine sizes ranging from 3 MW to 6 MW and offshore wind plant sizes ranging from 100 MW to 1000 MW has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from US offshore wind plants.
Shade response of a full size TESSERA module
NASA Astrophysics Data System (ADS)
Slooff, Lenneke H.; Carr, Anna J.; de Groot, Koen; Jansen, Mark J.; Okel, Lars; Jonkman, Rudi; Bakker, Jan; de Gier, Bart; Harthoorn, Adriaan
2017-08-01
A full size TESSERA shade-tolerant module has been made and was tested under various shadow conditions. The results show that the dedicated electrical interconnection of cells results in an almost linear response under shading. Furthermore, the voltage at the maximum power point is almost independent of the shadow, which decreases the demand on the voltage range of the inverter. The increased shadow linearity results in a calculated increase in annual yield of about 4% for a typical Dutch house.
NASA Technical Reports Server (NTRS)
James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim
2017-01-01
Recently proposed modifications to ASTM E399 would provide a new size-insensitive approach to analyzing the force-displacement test record. The proposed size-insensitive linear-elastic fracture toughness, KIsi, targets a consistent 0.5 mm crack extension for all specimen sizes by using an offset secant that is a function of the specimen ligament length. The KIsi evaluation also removes the Pmax/PQ criterion and increases the allowable specimen deformation. These latter two changes allow more plasticity at the crack tip, prompting the review undertaken in this work to ensure the validity of this new interpretation of the force-displacement curve. This paper provides a brief review of the proposed KIsi methodology and summarizes a finite element study into the effects of increased crack tip plasticity on the method given the allowance for additional specimen deformation. The study has two primary points of investigation: the effect of crack tip plasticity on compliance change in the force-displacement record and the continued validity of linear-elastic fracture mechanics to describe the crack front conditions. The analytical study illustrates that linear-elastic fracture mechanics assumptions remain valid at the increased deformation limit; however, the influence of plasticity on the compliance change in the test record is problematic. A proposed revision to the validity criteria for the KIsi test method is briefly discussed.
Herrera, Javier
2009-05-01
While pollinators may in general select for large, morphologically uniform floral phenotypes, drought stress has been proposed as a destabilizing force that may favour small flowers and/or promote floral variation within species. The general validity of this concept was checked by surveying a taxonomically diverse array of 38 insect-pollinated Mediterranean species. The interplay between fresh biomass investment, linear size and percentage corolla allocation was studied. Allometric relationships between traits were investigated by reduced major-axis regression, and qualitative correlates of floral variation explored using general linear-model MANOVA. Across species, flowers were perfectly isometrical with regard to corolla allocation (i.e. larger flowers were just scaled-up versions of smaller ones and vice versa). In contrast, linear size and biomass varied allometrically (i.e. there were shape variations, in addition to variations in size). Most floral variables correlated positively and significantly across species, except corolla allocation, which was largely determined by family membership and floral symmetry. On average, species with bilateral flowers allocated more to the corolla than those with radial flowers. Plant life-form was immaterial to all of the studied traits. Flower linear size variation was in general low among conspecifics (coefficients of variation around 10 %), whereas biomass was in general less uniform (e.g. 200-400 mg in Cistus salvifolius). Significant among-population differences were detected for all major quantitative floral traits. Flower miniaturization can allow an improved use of reproductive resources under prevailingly stressful conditions. The hypothesis that flower size reflects a compromise between pollinator attraction, water requirements and allometric constraints among floral parts is discussed.
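The reduced major-axis regression used for the allometric fits above can be sketched in a few lines. This is an illustrative implementation of the standard RMA formula (slope = sign(r) · s_y/s_x), applied to hypothetical log-transformed trait data, not the study's code:

```python
import numpy as np

def rma_regression(x, y):
    """Reduced major-axis (standardized major-axis) regression.

    Unlike ordinary least squares, RMA treats both variables as
    subject to error: slope = sign(r) * sd(y) / sd(x).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Isometry check: on log-log axes a slope of 1 means larger flowers
# are just scaled-up versions of smaller ones (hypothetical data).
log_mass = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
log_size = 1.0 * log_mass + 0.3   # exact isometry for illustration
slope, intercept = rma_regression(log_mass, log_size)
```

An RMA slope indistinguishable from 1 on log-log axes corresponds to the isometry reported across species; slopes different from 1 indicate the allometry found between linear size and biomass.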
Bayesian reconstruction of projection reconstruction NMR (PR-NMR).
Yoon, Ji Won
2014-11-01
Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using peak-by-peak based reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm replacing a simple linear model with a linear mixed model to reconstruct close NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of a protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.
Comparison of linear synchronous and induction motors
DOT National Transportation Integrated Search
2004-06-01
A propulsion trade study was conducted as part of the Colorado Maglev Project of FTA's Urban Maglev Technology Development Program to identify and evaluate prospective linear motor designs that could potentially meet the system performance requiremen...
Concentrating Solar Power Projects - Liddell Power Station
Technology: Linear Fresnel reflector. Turbine Capacity: Net: 3.0 MW; Gross: 3.0 MW. Status: Currently Non-Operational. Start Year: 2012.
Plastic strain is a mixture of avalanches and quasireversible deformations: Study of various sizes
NASA Astrophysics Data System (ADS)
Szabó, Péter; Ispánovity, Péter Dusán; Groma, István
2015-02-01
The size dependence of plastic flow is studied by discrete dislocation dynamical simulations of systems with various amounts of interacting dislocations while the stress is slowly increased. The regions between avalanches in the individual stress curves as functions of the plastic strain were found to be nearly linear and reversible where the plastic deformation obeys an effective equation of motion with a nearly linear force. For small plastic deformation, the mean values of the stress-strain curves obey a power law over two decades. Here and for somewhat larger plastic deformations, the mean stress-strain curves converge for larger sizes, while their variances shrink, both indicating the existence of a thermodynamical limit. The converging averages decrease with increasing size, in accordance with size effects from experiments. For large plastic deformations, where steady flow sets in, the thermodynamical limit was not realized in this model system.
Woodworth-Jefcoats, Phoebe A; Polovina, Jeffrey J; Dunne, John P; Blanchard, Julia L
2013-03-01
Output from an earth system model is paired with a size-based food web model to investigate the effects of climate change on the abundance of large fish over the 21st century. The earth system model, forced by the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios A2, combines a coupled climate model with a biogeochemical model including major nutrients, three phytoplankton functional groups, and zooplankton grazing. The size-based food web model includes linkages between two size-structured pelagic communities: primary producers and consumers. Our investigation focuses on seven sites in the North Pacific, each highlighting a specific aspect of projected climate change, and includes top-down ecosystem depletion through fishing. We project declines in large fish abundance ranging from 0 to 75.8% in the central North Pacific and increases of up to 43.0% in the California Current (CC) region over the 21st century, in response to changes in phytoplankton size structure and direct physiological effects. We find that fish abundance is especially sensitive to projected changes in large phytoplankton density, and our model projects changes in the abundance of large fish of the same order of magnitude as changes in the abundance of large phytoplankton. Thus, studies that address only climate-induced impacts to primary production without including changes to phytoplankton size structure may not adequately project ecosystem responses. © 2012 Blackwell Publishing Ltd.
Hydroacoustic Evaluation of Juvenile Salmonid Passage and Distribution at Lookout Point Dam, 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Fenton; Johnson, Gary E.; Royer, Ida M.
Pacific Northwest National Laboratory evaluated juvenile salmonid passage and distribution at Lookout Point Dam (LOP) on the Middle Fork Willamette River for the U.S. Army Corps of Engineers, Portland District (USACE), to provide data to support decisions on long-term measures to enhance downstream passage at LOP and other dams in USACE's Willamette Valley Project. This study was conducted in response to the listing of Upper Willamette River Spring Chinook salmon (Oncorhynchus tshawytscha) and Upper Willamette River steelhead (O. mykiss) as threatened under the Endangered Species Act. We conducted a hydroacoustic evaluation of juvenile salmonid passage and distribution at LOP during February 2010 through January 2011. Findings from this 1 year of study should be applied carefully because annual variation can be expected due to variability in adult salmon escapement, egg-to-fry and fry-to-smolt survival rates, reservoir rearing and predation, dam operations, and weather. Fish passage rates for smolt-size fish (> ~90 mm and < 300 mm) were highest during December-January and lowest in mid-summer through early fall. Passage peaks were also evident in early spring, early summer, and late fall. During the entire study period, an estimated total of 142,463 ± 4,444 (95% confidence interval) smolt-size fish passed through turbine penstock intakes. Of this total, 84% passed during December-January. Run timing for small-size fish (~65-90 mm) peaked (702 fish) on December 18. Diel periodicity with crepuscular peaks was evident in smolt-size fish passage into turbine penstock intakes. Relatively few fish passed into the Regulating Outlets (ROs) when they were open in summer (2 fish/d) and winter (8 fish/d). Overall, when the ROs were open, RO efficiency (RO passage divided by total project passage) was 0.004.
In linear regression analyses, daily fish passage (turbines and ROs combined) for smolt-size fish was significantly and positively related to project discharge (P < 0.001), but there was no relationship between total project passage and forebay elevation (P = 0.48) or forebay elevation delta, i.e., the day-to-day change in forebay elevation (P = 0.16). In multiple regression analyses, a relatively parsimonious model was selected that predicted the observed data well. The multiple regression model indicates a positive trend between expected daily fish passage and each of the three variables in the model: Julian day, log(discharge), and log(abs(forebay delta)); i.e., as any of these environmental variables increases, expected daily fish passage increases. In the vertical distribution of fish at the face of the dam, fish were surface-oriented, with 62%-80% occurring above 10 m depth. The highest percentage of fish (30%-60%) was found between 5 and 10 m deep. During spring and summer, mean target strengths for the analysis periods ranged from -44.2 to -42.1 dB; these values are indicative of yearling-sized juvenile salmon. In contrast, mean target strengths in fall and winter were about -49.0 dB, which is representative of subyearling-sized fish. The high-resolution spatial and temporal data reported herein provide detailed information about vertical, horizontal, diel, daily, and seasonal fish passage rates and distributions at LOP from March 2010 through January 2011. This information will support management decisions on the design and development of surface passage and collection devices to help restore Chinook salmon populations in the Middle Fork Willamette River watershed above LOP.
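The kind of multiple regression described above can be sketched with ordinary least squares on synthetic data. The variable names mirror those named in the study, but the coefficients and data below are hypothetical stand-ins, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors mimicking those named in the study.
julian_day = rng.uniform(1, 365, n)
discharge = rng.uniform(50, 500, n)       # arbitrary units
forebay_delta = rng.normal(0, 0.5, n)     # day-to-day elevation change

# Synthetic response with positive trends in all three predictors.
log_passage = (0.005 * julian_day
               + 1.2 * np.log(discharge)
               + 0.4 * np.log(np.abs(forebay_delta) + 1e-6)
               + rng.normal(0, 0.1, n))

# Design matrix with an intercept column; OLS fit via lstsq.
X = np.column_stack([np.ones(n), julian_day, np.log(discharge),
                     np.log(np.abs(forebay_delta) + 1e-6)])
beta, *_ = np.linalg.lstsq(X, log_passage, rcond=None)
```

Positive fitted coefficients on all three predictors correspond to the positive trends the study reports between expected daily fish passage and Julian day, log(discharge), and log(abs(forebay delta)).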
Low Emittance Tuning Studies for SuperB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liuzzo, Simone (INFN, Pisa); Biagini, Maria
2012-07-06
SuperB [1] is an international project for an asymmetric two-ring collider at the B-meson center-of-mass energy, to be built in the Rome area in Italy. The two rings will have very small beam sizes at the Interaction Point and very small emittances, similar to those of the Linear Collider damping rings. In particular, the ultra-low vertical emittances, 7 pm in the LER and 4 pm in the HER, require a careful study of the effects of misalignment errors on machine performance. Studies of closed orbit, vertical dispersion, and coupling corrections have been carried out in order to specify the maximum allowed errors and to provide a procedure for emittance tuning. A new tool combining MADX and Matlab routines has been developed, allowing for both corrections and tuning. Results of these studies are presented.
Ion Motion Induced Emittance Growth of Matched Electron Beams in Plasma Wakefields.
An, Weiming; Lu, Wei; Huang, Chengkun; Xu, Xinlu; Hogan, Mark J; Joshi, Chan; Mori, Warren B
2017-06-16
Plasma-based acceleration is being considered as the basis for building a future linear collider. Nonlinear plasma wakefields have ideal properties for accelerating and focusing electron beams. Preservation of the emittance of nano-Coulomb beams with nanometer scale matched spot sizes in these wakefields remains a critical issue due to ion motion caused by their large space charge forces. We use fully resolved quasistatic particle-in-cell simulations of electron beams in hydrogen and lithium plasmas, including when the accelerated beam has different emittances in the two transverse planes. The projected emittance initially grows and rapidly saturates with a maximum emittance growth of less than 80% in hydrogen and 20% in lithium. The use of overfocused beams is found to dramatically reduce the emittance growth. The underlying physics that leads to the lower than expected emittance growth is elucidated.
Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods
NASA Astrophysics Data System (ADS)
Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.
2013-04-01
In this paper, we compare the performances of three iterative solvers for large sparse linear systems arising in the numerical computations of incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as Generalized Minimal Residual (GMRES) to solve the Pressure Poisson Equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational times and number of iterations.
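A minimal sketch of the Krylov approach the abstract compares: SciPy's GMRES applied to a one-dimensional Poisson system, as a stand-in for the discretised Pressure Poisson Equation (illustrative only, not the authors' solver):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# 1D Poisson matrix (tridiagonal), a stand-in for the Pressure
# Poisson Equation discretisation mentioned in the abstract.
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Krylov solve via restarted GMRES (SciPy defaults).
x, info = gmres(A, b)          # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```

For larger meshes the Poisson matrix grows but stays sparse, which is the regime where the abstract reports GMRES outpacing Gauss-Seidel and PSOR in iterations and wall-clock time.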
Method of Conjugate Radii for Solving Linear and Nonlinear Systems
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1999-01-01
This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point, an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of the linear system of equations. A quadratic form in N variables requires N projections; that is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients: it can be extended to nonlinear systems without modification. For nonlinear equations, the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
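The projection idea described above is closely related to classic conjugate-direction machinery. As an illustrative sketch (not the paper's exact algorithm), conjugate gradient applied to the normal equations minimizes the same sum-of-squared-residuals quadratic form and reaches the center of the ellipsoid family in at most N steps in exact arithmetic:

```python
import numpy as np

def conjugate_direction_solve(A, b):
    """Solve Ax = b by minimizing the quadratic form ||Ax - b||^2.

    Illustrative sketch: conjugate gradient on the normal equations
    (A^T A) x = A^T b, which reaches the center of the family of
    similar ellipsoids in at most N steps in exact arithmetic.
    Not the paper's exact algorithm.
    """
    n = len(b)
    M, rhs = A.T @ A, A.T @ b
    x = np.zeros(n)
    r = rhs - M @ x          # residual of the normal equations
    p = r.copy()             # first search direction
    for _ in range(n):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < 1e-12:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p     # next M-conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_direction_solve(A, b)
```

For this 3-by-3 system the loop terminates within three projections, matching the "N projections for N variables" property the abstract describes.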
THE EFFECT OF PROJECTION ON DERIVED MASS-SIZE AND LINEWIDTH-SIZE RELATIONSHIPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shetty, Rahul; Kauffmann, Jens; Goodman, Alyssa A.
2010-04-01
Power-law mass-size and linewidth-size correlations, two of 'Larson's laws', are often studied to assess the dynamical state of clumps within molecular clouds. Using the result of a hydrodynamic simulation of a molecular cloud, we investigate how geometric projection may affect the derived Larson relationships. We find that large-scale structures in the column density map have similar masses and sizes to those in the three-dimensional simulation (position-position-position, PPP). Smaller scale clumps in the column density map are measured to be more massive than the PPP clumps, due to the projection of all emitting gas along lines of sight. Further, due to projection effects, structures in a synthetic spectral observation (position-position-velocity, PPV) may not necessarily correlate with physical structures in the simulation. In considering the turbulent velocities only, the linewidth-size relationship in the PPV cube is appreciably different from that measured from the simulation. Including thermal pressure in the simulated line widths imposes a minimum line width, which results in a better agreement in the slopes of the linewidth-size relationships, though there are still discrepancies in the offsets, as well as considerable scatter. Employing commonly used assumptions in a virial analysis, we find similarities in the computed virial parameters of the structures in the PPV and PPP cubes. However, due to the discrepancies in the linewidth-size and mass-size relationships in the PPP and PPV cubes, we caution that applying a virial analysis to observed clouds may be misleading due to geometric projection effects. We speculate that consideration of physical processes beyond kinetic and gravitational pressure would be required for accurately assessing whether complex clouds, such as those with highly filamentary structure, are bound.
Methods to achieve accurate projection of regional and global raster databases
Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.
2002-01-01
This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
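Goal (1), a dynamic projection that adjusts for latitude to maintain equal-sized cells, can be illustrated on a spherical equirectangular grid: the east-west ground distance per degree shrinks as cos(latitude), so the longitude step must widen accordingly. The helper below is a hypothetical sketch under that spherical approximation, not the authors' formula:

```python
import math

def lon_step_for_equal_area(lat_deg, base_step_deg):
    """Longitude step that keeps roughly constant ground area for a
    raster cell at the given latitude (spherical approximation).

    East-west ground distance per degree shrinks as cos(latitude),
    so the step is widened by 1/cos(latitude). Hypothetical helper,
    not the paper's formula.
    """
    scale = 1.0 / math.cos(math.radians(lat_deg))
    return base_step_deg * scale

# At 60 degrees latitude, cos = 0.5, so the step doubles.
step = lon_step_for_equal_area(60.0, 1.0)
```

A practical implementation would also cap the step near the poles, where 1/cos(latitude) diverges; that boundary handling is exactly the kind of per-latitude adjustment the proposed dynamic projection would need to specify.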
Concentrating Solar Power Projects - Puerto Errado 1 Thermosolar Power Plant
Technology: Linear Fresnel reflector. Turbine Capacity: Gross: 1.4 MW. Status: Operational (status date: September 7, 2011). Country: Spain; Region: Murcia; City: Calasparra. Owner(s): Novatec Solar España S.L. (100%).
Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation
NASA Astrophysics Data System (ADS)
Skrzypacz, Piotr
2017-09-01
The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation with high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed-bed type. A detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on equal-order interpolation for the velocity and pressure. The optimal error bounds for the velocity and pressure are confirmed numerically.
ERIC Educational Resources Information Center
Metcalf, Richard M.
Although there has been previous research concerned with image size, brightness, and contrast in projection standards, the work has lacked careful conceptualization. In this study, size was measured in terms of the visual angle subtended by the material, brightness was stated in foot-lamberts, and contrast was defined as the ratio of the…
Nature of bonding and cooperativity in linear DMSO clusters: A DFT, AIM and NCI analysis.
Venkataramanan, Natarajan Sathiyamoorthy; Suvitha, Ambigapathy
2018-05-01
This study aims to cast light on the nature of the interactions and cooperativity that exist in linear dimethyl sulfoxide (DMSO) clusters, using dispersion-corrected density functional theory. In the linear clusters, DMSO molecules in the middle are bound more strongly than those at the terminals. Plots of the total binding energy versus cluster size and of the mean polarizability versus cluster size show excellent linearity, demonstrating the presence of a cooperativity effect. The computed incremental binding energy of the clusters remains nearly constant, implying that DMSO addition at the terminal site can continue to form an infinite chain. In the linear clusters, two σ-holes were found on the terminal DMSO molecules, and their values increase with cluster size. The quantum theory of atoms in molecules topography shows the existence of hydrogen-bond and SO⋯S interactions in the linear tetramer and larger clusters, whereas the SO⋯OS type of interaction exists in the dimer and trimer. In the 2D non-covalent interaction plots, additional peaks were observed in the regions that contribute to the stabilization of the clusters; these split in the trimer and intensify in the larger clusters. In the trimer and larger clusters, in addition to the blue patches due to hydrogen bonds, light-blue patches were seen between the hydrogen atoms of the methyl groups and the sulphur atom of the neighbouring DMSO molecule. Thus, in addition to the strong H-bonds, strong electrostatic interactions between the sulphur atom and the methyl hydrogens exist in the linear clusters. Copyright © 2018 Elsevier Inc. All rights reserved.
The Seismic Tool-Kit (STK): an open source software for seismology and signal processing.
NASA Astrophysics Data System (ADS)
Reymond, Dominique
2016-04-01
We present an open-source software project (GNU public license), named STK: Seismic ToolKit, dedicated mainly to seismology and signal processing. The STK project, started in 2007, is hosted by SourceForge.net and counts more than 19,500 downloads at the time of writing. The STK project is composed of two main branches. First, a graphical interface dedicated to signal processing in the SAC format (SAC_ASCII and SAC_BIN), where the signal can be plotted, zoomed, filtered, integrated, differentiated, etc. (a large variety of IIR and FIR filters is provided). The spectral density of the signal is estimated via the Fourier transform, with visualization of the Power Spectral Density (PSD) in linear or log scale, and also the evolving time-frequency representation (or sonogram). Three-component signals can also be processed to estimate their polarization properties, either for a given window or for evolving windows along the time axis. This polarization analysis is useful for extracting polarized noise and differentiating P waves, Rayleigh waves, Love waves, etc. Secondly, a set of utility programs is provided for working in terminal mode, with basic programs for computing azimuth and distance in spherical geometry, inter/auto-correlation, spectral density, time-frequency analysis for an entire directory of signals, focal planes and main component axes, radiation patterns of P waves, polarization analysis of different waves (including noise), under/over-sampling of signals, cubic-spline smoothing, and linear/nonlinear regression analysis of data sets. A MINimum library of Linear AlGebra (MIN-LINAG) is also provided for the main matrix operations, such as QR/QL decomposition, Cholesky solution of linear systems, finding eigenvalues/eigenvectors, and QR-solve/Eigen-solve of linear equation systems. STK is developed in C/C++, mainly under Linux OS, and has also been partially implemented under MS-Windows.
Useful links: http://sourceforge.net/projects/seismic-toolkit/ http://sourceforge.net/p/seismic-toolkit/wiki/browse_pages/
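The periodogram-style PSD estimate that STK visualizes can be sketched with NumPy. This is an illustrative estimate on a synthetic trace, not STK's implementation; the sampling rate and signal are hypothetical:

```python
import numpy as np

fs = 100.0                      # sampling rate, Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)
# Synthetic "seismic" trace: a 5 Hz sinusoid plus weak noise.
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.normal(size=t.size)

# Periodogram estimate of the power spectral density.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
psd = (np.abs(spectrum) ** 2) / (fs * signal.size)

peak_freq = freqs[np.argmax(psd)]
```

Plotting `psd` against `freqs` on linear or log axes reproduces the two display modes the abstract mentions; repeating the estimate over sliding windows gives the evolving time-frequency (sonogram) view.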
Extraction and Propagation of an Intense Rotating Electron Beam,
1982-10-01
radiochromic foils positioned at z = 25 cm. The equal transmission density contours are ranked in linear order of increasing exposure (increasing current...flux encircled by the cathode Φc = πrc²Bc. Linearizing the equation of motion around the equilibrium, we can find the wavelength of small radial...the beam rotation. The mask which precedes the scintillator is a linear array of dots, while the projection is made up of two disjoint linear arrays
Linear-sweep voltammetry of a soluble redox couple in a cylindrical electrode
NASA Technical Reports Server (NTRS)
Weidner, John W.
1991-01-01
An approach is described for using the linear sweep voltammetry (LSV) technique to study the kinetics of flooded porous electrodes by modeling a porous electrode as a collection of identical, noninterconnected cylindrical pores filled with electrolyte. This assumption makes it possible to study the behavior of this ideal electrode as that of a single pore. Alternatively, for an electrode of a given pore-size distribution, it is possible to predict the performance of different pore sizes and then combine the performance values.
Final Report: CNC Micromachines LDRD No.10793
DOE Office of Scientific and Technical Information (OSTI.GOV)
JOKIEL JR., BERNHARD; BENAVIDES, GILBERT L.; BIEG, LOTHAR F.
2003-04-01
The three-year LDRD ''CNC Micromachines'' was successfully completed at the end of FY02. The project had four major breakthroughs in spatial motion control in MEMS: (1) A unified method for designing scalable planar and spatial on-chip motion control systems was developed. The method relies on the use of parallel kinematic mechanisms (PKMs) that, when properly designed, provide different types of motion on-chip without the need for post-fabrication assembly. (2) A new type of actuator was developed, the linear stepping track drive (LSTD), which provides open-loop linear position control that is scalable in displacement, output force, and step size. Several versions of this actuator were designed, fabricated, and successfully tested. (3) Different versions of XYZ translation-only and PTT motion stages were designed, successfully fabricated, and successfully tested, demonstrating absolutely that on-chip spatial motion control systems are not only possible, but a reality. (4) Control algorithms, software, and infrastructure based on MATLAB were created and successfully implemented to drive the XYZ and PTT motion platforms in a controlled manner. The control software is capable of reading an M/G-code machine tool language file, decoding the instructions, and correctly calculating and applying position and velocity trajectories to the motion device's linear drive inputs to position the device platform along the trajectory specified by the input file. A full and detailed account of the design methodology, theory, and experimental results (failures and successes) is provided.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III; Crossley, Edward A.; Miller, James B.; Jones, Irby W.; Davis, C. Calvin; Behun, Vaughn D.; Goodrich, Lewis R., Sr.
1995-01-01
Linear proof-mass actuator (LPMA) is friction-driven linear mass actuator capable of applying controlled force to structure in outer space to damp out oscillations. Capable of high accelerations and provides smooth, bidirectional travel of mass. Design eliminates gears and belts. LPMA strong enough to be used terrestrially where linear actuators needed to excite or damp out oscillations. High flexibility designed into LPMA by varying size of motors, mass, and length of stroke, and by modifying control software.
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for the detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to algorithm implementations that subjectively appear optimized for the task of interest. PMID:26702408
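The Hotelling observer underlying this method computes a detectability index, SNR² = Δs̄ᵀ K⁻¹ Δs̄, from the signal-induced mean difference Δs̄ and the data covariance K over the region of interest. A minimal sketch on a toy region of interest (the diagonal covariance and numbers are hypothetical):

```python
import numpy as np

def hotelling_snr2(mean_diff, cov):
    """Hotelling observer detectability: SNR^2 = ds^T K^{-1} ds,
    where ds is the signal-present minus signal-absent mean image
    over the ROI and K is the data covariance. Illustrative sketch.
    """
    return float(mean_diff @ np.linalg.solve(cov, mean_diff))

# Toy 4-pixel ROI with independent pixels of equal variance, where
# SNR^2 reduces to ||ds||^2 / sigma^2.
ds = np.array([0.5, 0.5, 0.0, 0.0])   # hypothetical signal profile
K = 0.25 * np.eye(4)                  # hypothetical covariance
snr2 = hotelling_snr2(ds, K)
```

In an optimization loop, `snr2` would be evaluated for each candidate reconstruction parameter setting (filter cutoff, regularization strength, and so on) and the setting maximizing detectability retained.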
Factors Affecting Acoustics and Speech Intelligibility in the Operating Room: Size Matters.
McNeer, Richard R; Bennett, Christopher L; Horn, Danielle Bodzin; Dudaryk, Roman
2017-06-01
Noise in health care settings has increased since 1960 and represents a significant source of dissatisfaction among staff and patients and risk to patient safety. Operating rooms (ORs), in which effective communication is crucial, are particularly noisy. Speech intelligibility is impacted by noise, room architecture, and acoustics. For example, sound reverberation time (RT60) increases with room size, which can negatively impact intelligibility, while room objects are hypothesized to have the opposite effect. We explored these relationships by investigating the room construction and acoustics of the surgical suites at our institution. We studied our ORs during times of nonuse. Room dimensions were measured to calculate room volumes (VR). Room content was assessed by estimating size and assigning items into 5 volume categories to arrive at an adjusted room content volume (VC) metric. Psychoacoustic analyses were performed by playing sweep tones from a speaker and recording the impulse responses (ie, resulting sound fields) from 3 locations in each room. The recordings were used to calculate 6 psychoacoustic indices of intelligibility. Multiple linear regression was performed using VR and VC as predictor variables and each intelligibility index as an outcome variable. A total of 40 ORs were studied. The surgical suites were characterized by a large degree of construction and surface finish heterogeneity and varied in size from 71.2 to 196.4 m³ (average VR = 131.1 [34.2] m³). An insignificant correlation was observed between VR and VC (Pearson correlation = 0.223, P = .166). Multiple linear regression model fits and β coefficients for VR were highly significant for each of the intelligibility indices and were best for RT60 (R² = 0.666, F(2, 37) = 39.9, P < .0001). For Dmax (maximum distance where there is <15% loss of consonant articulation), both VR and VC β coefficients were significant.
For RT60 and Dmax, after controlling for VC, partial correlations were 0.825 (P < .0001) and 0.718 (P < .0001), respectively, while after controlling for VR, partial correlations were -0.322 (P = .169) and 0.381 (P < .05), respectively. Our results suggest that the size and contents of an OR can predict a range of psychoacoustic indices of speech intelligibility. Specifically, increasing OR size correlated with worse speech intelligibility, while increasing amounts of OR contents correlated with improved speech intelligibility. This study provides valuable descriptive data and a predictive method for identifying existing ORs that may benefit from acoustic modifiers (e.g., sound absorption panels). Additionally, it suggests that room dimensions and projected clinical use should be considered during the design phase of OR suites to optimize acoustic performance.
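The partial correlations reported above (the association between an intelligibility index and one room variable after controlling for the other) can be computed by correlating the residuals of two regressions on the controlled variable. A hedged sketch on synthetic data, not the study's analysis:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after controlling for z:
    regress each on z (with intercept), then correlate residuals.
    """
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic example: x and y correlate only through a shared z.
rng = np.random.default_rng(2)
z = rng.normal(size=500)
x = z + rng.normal(scale=0.5, size=500)
y = z + rng.normal(scale=0.5, size=500)
r_xy = np.corrcoef(x, y)[0, 1]        # inflated by the shared z
r_xy_given_z = partial_corr(x, y, z)  # near zero once z is removed
```

In the study's setting, x would be an intelligibility index and y and z the room volume (VR) and content volume (VC) metrics, in either order.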
Determining Dissolved Oxygen Levels
ERIC Educational Resources Information Center
Boucher, Randy
2010-01-01
This project was used in a mathematical modeling and introduction to differential equations course for first-year college students. The students worked in two-person groups and were given three weeks to complete the project. Students were given this project three weeks into the course, after basic first order linear differential equation and…
Schwandt, E F; Wagner, J J; Engle, T E; Bartle, S J; Thomson, D U; Reinhardt, C D
2016-03-01
Crossbred yearling steers (n = 360; 395 ± 33.1 kg initial BW) were used to evaluate the effects of dry-rolled corn (DRC) particle size in diets containing 20% wet distiller's grains plus solubles on feedlot performance, carcass characteristics, and starch digestibility. Steers were used in a randomized complete block design and allocated to 36 pens (9 pens/treatment, with 10 animals/pen). Treatments were coarse DRC (4,882 μm), medium DRC (3,760 μm), fine DRC (2,359 μm), and steam-flaked corn (0.35 kg/L; SFC). Final BW and ADG were not affected by treatment (P > 0.05). Dry matter intake was greater and G:F was lower (P < 0.05) for steers fed DRC vs. steers fed SFC. There was a linear decrease (P < 0.05) in DMI in the final 5 wk on feed with decreasing DRC particle size. Fecal starch decreased (linear, P < 0.01) as DRC particle size decreased. In situ starch disappearance was lower for DRC vs. SFC (P < 0.05) and linearly increased (P < 0.05) with decreasing particle size at 8 and 24 h. Reducing DRC particle size did not influence growth performance but increased starch digestion and influenced DMI of cattle on finishing diets. No differences (P > 0.10) were observed among treatments for any of the carcass traits measured. Results indicate improved ruminal starch digestibility, reduced fecal starch concentration, and reduced DMI with decreasing DRC particle size in feedlot diets containing 20% wet distiller's grains on a DM basis.
Adsorption of Poly(methyl methacrylate) on Concave Al2O3 Surfaces in Nanoporous Membranes
Nunnery, Grady; Hershkovits, Eli; Tannenbaum, Allen; Tannenbaum, Rina
2009-01-01
The objective of this study was to determine the influence of polymer molecular weight and surface curvature on the adsorption of polymers onto concave surfaces. Poly(methyl methacrylate) (PMMA) of various molecular weights was adsorbed onto porous aluminum oxide membranes having various pore sizes, ranging from 32 to 220 nm. The surface coverage, expressed as repeat units per unit surface area, was observed to vary linearly with molecular weight for molecular weights below ~120 000 g/mol. The coverage was independent of molecular weight above this critical molar mass, as was previously reported for the adsorption of PMMA on convex surfaces. Furthermore, the coverage varied linearly with pore size. A theoretical model was developed to describe curvature-dependent adsorption by considering the density gradient that exists between the surface and the edge of the adsorption layer. According to this model, the density gradient of the adsorbed polymer segments scales inversely with particle size, while the total coverage scales linearly with particle size, in good agreement with experiment. These results show that the details of the adsorption of polymers onto concave surfaces with cylindrical geometries can be used to calculate molecular weight (below a critical molecular weight) if pore size is known. Conversely, pore size can also be determined with similar adsorption experiments. Most significantly, for polymers above a critical molecular weight, the precise molecular weight need not be known in order to determine pore size. Moreover, the adsorption model developed and validated in this work can be used to predict coverage on surfaces with different geometries. PMID:19415910
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
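One-step-ahead prediction of a parameter's performance with linear regression, as evaluated above, can be sketched in a few lines. The performance series here is a hypothetical example, not data from the paper.

```python
def linfit(xs, ys):
    """Least-squares slope and intercept for y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def forecast_next(series):
    """Project the next value of a performance time series."""
    xs = list(range(len(series)))
    a, b = linfit(xs, series)
    return a + b * len(series)

# Hypothetical success rates of one parameter value over 5 iterations:
perf = [0.30, 0.34, 0.37, 0.41, 0.45]
print(round(forecast_next(perf), 3))
```

The forecast would then feed the probability used to select that parameter value in the next iteration.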
NASA Technical Reports Server (NTRS)
2003-01-01
[figure removed for brevity, see original site] Just north of the hematite deposit in Meridiani Planum, the remnants of a formerly extensive layer of material remain as isolated knobs and buttes. Note the transition from north to south in the size and frequency of these features, a reflection of the decreasing elevation along this trend. Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena. Image information: VIS instrument. Latitude -0, Longitude 353 East (7 West). 19 meter/pixel resolution.
Supervised orthogonal discriminant subspace projects learning for face recognition.
Chen, Yu; Xu, Xiao-Hong
2014-02-01
In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses high-dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. And in order to model the manifold structure, the class information is incorporated into the weight matrix. Based on the novel weight matrix, the local scatter matrix as well as non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint into a graph-based maximum margin analysis, seeking to find a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, the theoretical analysis shows that LPP is a special instance of SODSP by imposing some constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face database are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP.
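The core idea of maximizing a difference rather than a ratio of scatters reduces, for one projection direction w, to finding the dominant eigenvector of the difference matrix S_nonlocal − S_local. A toy sketch with illustrative 2×2 matrices (not the paper's algorithm in full, which also enforces orthogonality across multiple directions):

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def top_eigvec(M, iters=200):
    """Power iteration for the dominant eigenvector of a symmetric matrix."""
    v = [1.0] * len(M)
    for _ in range(iters):
        w = matvec(M, v)
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Toy symmetric scatter matrices in a 2-D feature space (illustrative):
S_nonlocal = [[4.0, 1.0], [1.0, 2.0]]
S_local = [[1.0, 0.5], [0.5, 3.0]]
D = [[S_nonlocal[i][j] - S_local[i][j] for j in range(2)] for i in range(2)]
w = top_eigvec(D)   # projection direction maximizing w^T (S_n - S_l) w
print([round(c, 3) for c in w])
```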
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
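Lewis's law, one of the linear relations named above, can be checked with an ordinary least-squares fit of mean cell area against number of cell sides. The granule data below are hypothetical, for illustration only.

```python
# Hypothetical granulation measurements: number of cell sides vs mean
# normalized cell area (illustrative values, not the paper's data).
sides = [4, 5, 6, 7, 8]
area = [0.55, 0.78, 1.00, 1.22, 1.47]

n = len(sides)
mx, my = sum(sides) / n, sum(area) / n
slope = sum((x - mx) * (y - my) for x, y in zip(sides, area)) / \
        sum((x - mx) ** 2 for x in sides)
intercept = my - slope * mx
# Lewis's law: mean area grows linearly with side number, so slope > 0
print(round(slope, 3), round(intercept, 3))
```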
Pakes, D; Boulding, E G
2010-08-01
Empirical estimates of selection gradients caused by predators are common, yet no one has quantified how these estimates vary with predator ontogeny. We used logistic regression to investigate how selection on gastropod shell thickness changed with predator size. Only small and medium purple shore crabs (Hemigrapsus nudus) exerted a linear selection gradient for increased shell-thickness within a single population of the intertidal snail (Littorina subrotundata). The shape of the fitness function for shell thickness was confirmed to be linear for small and medium crabs but was humped for large male crabs, suggesting no directional selection. A second experiment using two prey species to amplify shell thickness differences established that the selection differential on adult snails decreased linearly as crab size increased. We observed differences in size distribution and sex ratios among three natural shore crab populations that may cause spatial and temporal variation in predator-mediated selection on local snail populations.
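A logistic-regression fitness function like the one used above relates a trait value to survival probability; the sign and size of the fitted slope indicate the direction and strength of selection. A minimal gradient-ascent sketch on hypothetical snail data (not the study's measurements):

```python
import math

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Simple one-predictor logistic regression by gradient ascent
    on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical data: standardized shell thickness vs survival (1 = survived)
thick = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
alive = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(thick, alive)
print(round(b1, 2))   # positive slope: thicker shells survive more often
```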
Overestimation of the Projected Size of Objects on the Surface of Mirrors and Windows
ERIC Educational Resources Information Center
Lawson, Rebecca; Bertamini, Marco; Liu, Dan
2007-01-01
Four experiments investigated judgments of the size of projections of objects on the glass surface of mirrors and windows. The authors tested different ways of explaining the task to overcome the difficulty that people had in understanding what the projection was, and they varied the distance of the observer and the object to the mirror or window…
The Non-linear Health Consequences of Living in Larger Cities.
Rocha, Luis E C; Thorson, Anna E; Lambiotte, Renaud
2015-10-01
Urbanization promotes economy, mobility, access, and availability of resources, but on the other hand, generates higher levels of pollution, violence, crime, and mental distress. The health consequences of the agglomeration of people living close together are not fully understood. Particularly, it remains unclear how variations in the population size across cities impact the health of the population. We analyze the deviations from linearity of the scaling of several health-related quantities, such as the incidence and mortality of diseases, external causes of death, wellbeing, and health care availability, with respect to the population size of cities in Brazil, Sweden, and the USA. We find that deaths by non-communicable diseases tend to be relatively less common in larger cities, whereas the per capita incidence of infectious diseases is relatively larger for increasing population size. Healthier lifestyle and availability of medical support are disproportionally higher in larger cities. The results are connected with the optimization of human and physical resources and with the non-linear effects of social networks in larger populations. An urban advantage in terms of health is not evident, and using rates as indicators to compare cities with different population sizes may be insufficient.
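Deviations from linear scaling are usually quantified by fitting a power law Y = Y0 · N^β on log-transformed data; β > 1 is superlinear (per capita quantity rises with city size), β < 1 sublinear. A sketch with hypothetical city data:

```python
import math

def scaling_exponent(pop, counts):
    """Least-squares slope beta of log(count) = a + beta * log(pop)."""
    xs = [math.log(p) for p in pop]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Hypothetical data: city population vs annual infectious-disease cases
pop = [1e4, 1e5, 1e6, 1e7]
cases = [50, 700, 9000, 120000]   # grows faster than population
beta = scaling_exponent(pop, cases)
print(round(beta, 2))   # beta > 1 indicates a superlinear trend
```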
Herrera, Javier
2009-01-01
Background and Aims While pollinators may in general select for large, morphologically uniform floral phenotypes, drought stress has been proposed as a destabilizing force that may favour small flowers and/or promote floral variation within species. Methods The general validity of this concept was checked by surveying a taxonomically diverse array of 38 insect-pollinated Mediterranean species. The interplay between fresh biomass investment, linear size and percentage corolla allocation was studied. Allometric relationships between traits were investigated by reduced major-axis regression, and qualitative correlates of floral variation explored using general linear-model MANOVA. Key Results Across species, flowers were perfectly isometrical with regard to corolla allocation (i.e. larger flowers were just scaled-up versions of smaller ones and vice versa). In contrast, linear size and biomass varied allometrically (i.e. there were shape variations, in addition to variations in size). Most floral variables correlated positively and significantly across species, except corolla allocation, which was largely determined by family membership and floral symmetry. On average, species with bilateral flowers allocated more to the corolla than those with radial flowers. Plant life-form was immaterial to all of the studied traits. Flower linear size variation was in general low among conspecifics (coefficients of variation around 10 %), whereas biomass was in general less uniform (e.g. 200–400 mg in Cistus salvifolius). Significant among-population differences were detected for all major quantitative floral traits. Conclusions Flower miniaturization can allow an improved use of reproductive resources under prevailingly stressful conditions. The hypothesis that flower size reflects a compromise between pollinator attraction, water requirements and allometric constraints among floral parts is discussed. PMID:19258340
SU-G-IeP4-06: Feasibility of External Beam Treatment Field Verification Using Cherenkov Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, P; Na, Y; Wuu, C
2016-06-15
Purpose: Cherenkov light emission has been shown to correlate with ionizing radiation (IR) dose delivery in solid tissue. In order to properly correlate Cherenkov light images with real time dose delivery in a patient, we must account for geometric and intensity distortions arising from observation angle, as well as the effect of monitor units (MU) and field size on Cherenkov light emission. To test the feasibility of treatment field verification, we first focused on Cherenkov light emission efficiency based on MU and known field size (FS). Methods: Cherenkov light emission was captured using a PI-MAX4 intensified charge-coupled device (ICCD) system (Princeton Instruments), positioned at a fixed angle of 40° relative to the beam central axis. A Varian TrueBeam linear accelerator (linac) was operated at 6MV and 600MU/min to deliver an Anterior-Posterior beam to a 5cm thick block phantom positioned at 100cm Source-to-Surface Distance (SSD). FS of 10×10, 5×5, and 2×2 cm² were used. Before beam delivery projected light field images were acquired, ensuring that geometric distortions were consistent when measuring Cherenkov field discrepancies. Cherenkov image acquisition was triggered by linac target current. 500 frames were acquired for each FS. Composite images were created through summation of frames and background subtraction. MU per image was calculated based on linac pulse delay of 2.8ms. Cherenkov and projected light FS were evaluated using ImageJ software. Results: Mean Cherenkov FS discrepancies compared to light field were <0.5cm for 5.6, 2.8, and 8.6 MU for 10×10, 5×5, and 2×2 cm² FS, respectively. Discrepancies were reduced with increasing field size and MU. We predict a minimum of 100 frames is needed for reliable confirmation of delivered FS.
Conclusion: Current discrepancies in Cherenkov field sizes are within a usable range to confirm treatment delivery in standard and respiratory gated clinical scenarios at MU levels appropriate to standard MLC position segments.
Pan, Hung-Yin; Chen, Carton W; Huang, Chih-Hung
2018-04-17
Soil bacteria Streptomyces are the most important producers of secondary metabolites, including most known antibiotics. These bacteria and their close relatives are unique in possessing linear chromosomes, which typically harbor 20 to 30 biosynthetic gene clusters of tens to hundreds of kb in length. Many Streptomyces chromosomes are accompanied by linear plasmids with sizes ranging from several to several hundred kb. The large linear plasmids also often contain biosynthetic gene clusters. We have developed a targeted recombination procedure for arm exchanges between a linear plasmid and a linear chromosome. A chromosomal segment inserted in an artificially constructed plasmid allows homologous recombination between the two replicons at the homology. Depending on the design, the recombination may result in two recombinant replicons or a single recombinant chromosome with the loss of the recombinant plasmid that lacks a replication origin. The efficiency of such targeted recombination ranges from 9 to 83% depending on the locations of the homology (and thus the size of the chromosomal arm exchanged), essentially eliminating the necessity of selection. The targeted recombination is useful for the efficient engineering of the Streptomyces genome for large-scale deletion, addition, and shuffling.
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
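Scheme (i), effective-sample-size weighting of per-study Z-scores, can be sketched in a few lines. The study values below are hypothetical; for case-control studies the effective sample size is commonly taken as 4/(1/Ncases + 1/Ncontrols).

```python
import math

def meta_z(zs, ns):
    """Sample-size-weighted Z-score meta-analysis: weights sqrt(Neff)."""
    ws = [math.sqrt(n) for n in ns]
    return sum(w * z for w, z in zip(ws, zs)) / math.sqrt(sum(w * w for w in ws))

# Hypothetical per-study Z-scores and effective sample sizes
zs = [2.1, 1.4, 2.8]
ns = [900, 400, 1600]
print(round(meta_z(zs, ns), 2))
```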
Nonlinear Electrostatic Properties of Lunar Dust
NASA Technical Reports Server (NTRS)
Irwin, Stacy A.
2012-01-01
A laboratory experiment was designed to study the induction charging and charge decay characteristics of small dielectric particles, or glass beads. Initially, the goal of the experiment was further understanding of induction charging of lunar dust particles. However, the mechanism of charging became a point of greater interest as the project continued. Within an environmentally-controlled acrylic glove box was placed a large parallel plate capacitor at high-voltage (HV) power supply with reversible polarity. Spherical 1-mm and 0.5-mm glass beads, singly, were placed between the plates, and their behaviors recorded on video and quantified. Nearly a hundred trials at various humidities were performed. The analysis of the results indicated a non-linear relationship between humidity and particle charge exchange time (CET), for both sizes of beads. Further, a difference in CET for top-resting beads and bottom-resting beads hinted at a different charging mechanism than that of simple induction. Results from the I-mm bead trials were presented at several space science and physics conferences in 2008 and 2009, and were published as a Master's thesis in August 2009. Tangential work stemming from this project resulted in presentations at other international conferences in 2010, and selection to attend workshop on granular matter flow 2011.
Pairs of galaxies in low density regions of a combined redshift catalog
NASA Technical Reports Server (NTRS)
Charlton, Jane C.; Salpeter, Edwin E.
1990-01-01
The distributions of projected separations and radial velocity differences of pairs of galaxies in the CfA and Southern Sky Redshift Survey (SSRS) redshift catalogs are examined. The authors focus on pairs that fall in low density environments rather than in clusters or large groups. The projected separation distribution is nearly flat, while uncorrelated galaxies would have given one rising linearly with the projected separation r_p. There is no break in this curve even below 50 kpc, the minimum halo size consistent with measured galaxy rotation curves. The significant number of pairs at small separations is inconsistent with the N-body result that galaxies with overlapping halos will rapidly merge, unless there are significant amounts of matter distributed out to a few hundred kpc of the galaxies. This dark matter may either be in distinct halos or more loosely distributed. Large halos would allow pairs at initially large separations to head toward merger, replenishing the distribution at small separations. In the context of this model, the authors estimate that roughly 10 to 25 percent of these low density galaxies are the product of a merger, compared with the elliptical/S0 fraction of 18 percent, observed in low density regions of the sample.
Rescia, Alejandro J; Astrada, Elizabeth N; Bono, Julieta; Blasco, Carlos A; Meli, Paula; Adámoli, Jorge M
2006-08-01
A linear engineering project, such as a pipeline, has a potential long- and short-term impact on the environment and on the inhabitants therein. We must find better, less expensive, and less time-consuming ways to obtain information on the environment and on any modifications resulting from anthropic activity. We need scientifically sound, rapid and affordable assessment and monitoring methods. Construction companies, industries and government regulatory agencies lack the resources needed to conduct long-term basic studies of the environment. Thus there is a need to make the necessary adjustments and improvements in the environmental data considered useful for this development project. More effective and less costly methods are generally needed. We characterized the landscape of the study area, situated in the center and north-east of Argentina. Little is known of the ecology of this region and substantial research is required in order to develop sustainable uses and, at the same time, to develop methods for reducing impacts, both primary and secondary, resulting from anthropic activity in this area. Furthermore, we made an assessment of the environmental impact of the planned linear project, applying an ad hoc impact index, and we analyzed the different alternatives for a corridor, each one of these involving different sections of the territory. Among the alternative corridors considered, this study locates the most suitable ones in accordance with a selection criterion based on different environmental and conservation aspects. We selected the corridor that we considered to be the most compatible, i.e. with the least potential environmental impact, for the possible construction and operation of the linear project. This information, along with suitable measures for mitigating possible impacts, should be the basis of an environmental management plan for the design process and location of the project. We point out the objectivity and efficiency of this methodological approach, along with the possibility of integrating the information so that it can be applied in this type of study.
A cosmic web filament revealed in Lyman-α emission around a luminous high-redshift quasar.
Cantalupo, Sebastiano; Arrigoni-Battaia, Fabrizio; Prochaska, J Xavier; Hennawi, Joseph F; Madau, Piero
2014-02-06
Simulations of structure formation in the Universe predict that galaxies are embedded in a 'cosmic web', where most baryons reside as rarefied and highly ionized gas. This material has been studied for decades in absorption against background sources, but the sparseness of these inherently one-dimensional probes precludes direct constraints on the three-dimensional morphology of the underlying web. Here we report observations of a cosmic web filament in Lyman-α emission, discovered during a survey for cosmic gas fluorescently illuminated by bright quasars at redshift z ≈ 2.3. With a linear projected size of approximately 460 physical kiloparsecs, the Lyman-α emission surrounding the radio-quiet quasar UM 287 extends well beyond the virial radius of any plausible associated dark-matter halo and therefore traces intergalactic gas. The estimated cold gas mass of the filament from the observed emission, about 10^(12.0 ± 0.5)/C^(1/2) solar masses, where C is the gas clumping factor, is more than ten times larger than what is typically found in cosmological simulations, suggesting that a population of intergalactic gas clumps with subkiloparsec sizes may be missing in current numerical models.
Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers
NASA Astrophysics Data System (ADS)
Theodorakis, P. E.; Avgeropoulos, A.; Freire, J. J.; Kosmas, M.; Vlahos, C.
2007-11-01
The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as gold standard, errors in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Errors in ablation contact force map gap measurements were significantly smaller than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gap formation in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.
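The ROC-derived voltage thresholds above are typically chosen by maximizing Youden's J statistic (sensitivity + specificity − 1) over candidate cut-offs. A minimal sketch with hypothetical voltage samples (scar is taken as low voltage):

```python
def youden_threshold(values, labels):
    """Cut-off maximizing sensitivity + specificity - 1.
    labels: 1 = scar, 0 = healthy; values <= threshold classified as scar."""
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, l in zip(values, labels) if v <= t and l == 1)
        fn = sum(1 for v, l in zip(values, labels) if v > t and l == 1)
        tn = sum(1 for v, l in zip(values, labels) if v > t and l == 0)
        fp = sum(1 for v, l in zip(values, labels) if v <= t and l == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Hypothetical bipolar voltages (mV) with histology-style scar labels
mv = [0.2, 0.3, 0.5, 0.6, 0.9, 1.2, 1.8, 2.5]
scar = [1, 1, 1, 0, 1, 0, 0, 0]
print(youden_threshold(mv, scar))
```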
Marcar, Valentine L; Baselgia, Silvana; Lüthi-Eisenegger, Barbara; Jäncke, Lutz
2018-03-01
Retinal input processing in the human visual system involves a phasic and tonic neural response. We investigated the role of the magno- and parvocellular systems by comparing the influence of the active neural population size and its discharge activity on the amplitude and latency of four VEP components. We recorded the scalp electric potential of 20 human volunteers viewing a series of dartboard images presented as a pattern reversing and pattern on-/offset stimulus. These patterns were designed to vary both the neural population size coding the temporal- and spatial luminance contrast property and the discharge activity of the population involved in a systematic manner. When the VEP amplitude reflected the size of the neural population coding the temporal luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the parvocellular system. When the VEP amplitude reflected the size of the neural population responding to the spatial luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the magnocellular system. The latencies of the VEP components examined exhibited the same behavior across our stimulus series. This investigation demonstrates the complex interplay of the magno- and parvocellular systems on the neural response as captured by the VEP. It also demonstrates a linear relationship between stimulus property, neural response, and the VEP and reveals the importance of feedback projections in modulating the ongoing neural response. In doing so, it corroborates the conclusions of our previous study.
Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2016-09-01
An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy.
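Tikhonov-regularized deconvolution of the kind used above can be illustrated per frequency component: given a kernel transfer function C(k) and the filtered field F(k) = C(k)·f(k), the regularized estimate is conj(C)·F/(|C|² + λ). The 1-D values below are hypothetical toys, not the SMV kernel of the paper.

```python
# Per-frequency Tikhonov-regularized deconvolution (illustrative 1-D toy).
def tikhonov_deconv(C, F, lam):
    """Recover f(k) from F(k) = C(k) * f(k) with damping lam."""
    return [c.conjugate() * f / (abs(c) ** 2 + lam) for c, f in zip(C, F)]

true_f = [1.0, -0.5, 0.25, 0.0]
C = [1.0, 0.8, 0.05, 0.001]          # kernel nearly vanishes at some k
F = [c * f for c, f in zip(C, true_f)]
est = tikhonov_deconv(C, F, lam=0.01)
# Frequencies with a strong kernel are recovered well; frequencies where
# the kernel nearly vanishes are damped instead of amplifying noise.
print([round(e, 3) for e in est])
```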
U.S. Balance-of-Station Cost Drivers and Sensitivities (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maples, B.
2012-10-01
With balance-of-system (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from U.S. offshore wind plants.
GIS Tools to Estimate Average Annual Daily Traffic
DOT National Transportation Integrated Search
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
Application of laser speckle to randomized numerical linear algebra
NASA Astrophysics Data System (ADS)
Valley, George C.; Shaw, Thomas J.; Stapleton, Andrew D.; Scofield, Adam C.; Sefler, George A.; Johannson, Leif
2018-02-01
We propose and simulate integrated optical devices for accelerating numerical linear algebra (NLA) calculations. Data are modulated onto chirped optical pulses, which propagate through a multimode waveguide where speckle provides the random projections needed for NLA dimensionality reduction.
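The dimensionality-reduction step can be sketched in software, with a Gaussian random matrix standing in for the speckle pattern; all sizes below are illustrative:

```python
import numpy as np

# Random projection for NLA dimensionality reduction. A Gaussian matrix
# plays the role the multimode-waveguide speckle plays in the device.
rng = np.random.default_rng(0)
n, d, k = 100, 2000, 200            # samples, original dim, reduced dim
X = rng.standard_normal((n, d))

# Gaussian projection, scaled so norms are preserved in expectation
# (Johnson-Lindenstrauss lemma).
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P                           # reduced data, shape (n, k)

# Pairwise distances survive the projection to within a small distortion.
orig = np.linalg.norm(X[0] - X[1])
red = np.linalg.norm(Y[0] - Y[1])
print(Y.shape, abs(red - orig) / orig)
```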
NASA Astrophysics Data System (ADS)
Granton, Patrick V.; Dekker, Kurtis H.; Battista, Jerry J.; Jordan, Kevin J.
2016-04-01
Optical cone-beam computed tomographic (CBCT) scanning of 3D radiochromic dosimeters may provide a practical method for 3D dose verification in radiation therapy. However, in cone-beam geometry stray light contaminates the projection images, degrading the accuracy of reconstructed linear attenuation coefficients. Stray light was measured using a beam pass aperture array (BPA) and structured illumination methods. The stray-to-primary ray ratio (SPR) along the central axis was found to be 0.24 for a 5% gelatin hydrogel, representative of radiochromic hydrogels. The scanner was modified by moving the spectral filter from the detector to the source, changing the light's spatial fluence pattern, and lowering the acceptance angle by extending the distance between the source and object. These modifications reduced the SPR significantly, from 0.24 to 0.06. The accuracy of the reconstructed linear attenuation coefficients for uniform carbon black liquids was compared to independent spectrometer measurements. Reducing the stray light increased the range of accurate transmission readings. In order to evaluate scanner performance for the more challenging application to small-field dosimetry, a carbon black finger gel phantom was prepared. Reconstructions of the phantom from CBCT and fan-beam CT scans were compared. The modified source resulted in improved agreement. Subtraction of residual stray light, measured with the BPA or structured illumination, from each projection further improved agreement. Structured illumination was superior to the BPA for measuring stray light for the smaller 1.2 and 0.5 cm diameter phantom fingers. At the cost of doubling the scanner size and tripling the number of scans, CBCT reconstructions of low-scattering hydrogel dosimeters agreed with those of fan-beam CT scans.
LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*
Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.
2014-01-01
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
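A minimal NumPy/SciPy sketch of the preconditioning idea (dense, full-rank A with m ≫ n for simplicity; sizes and γ are illustrative, and this is a serial toy, not LSRN's parallel implementation):

```python
import numpy as np
from scipy.sparse.linalg import lsqr, LinearOperator

# LSRN-style preconditioning: sketch A with a random normal projection,
# take an SVD of the small sketch, precondition, then run LSQR.
rng = np.random.default_rng(1)
m, n, gamma = 500, 20, 2.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# 1. Random normal projection: reduce A to ~gamma*n rows.
s = int(np.ceil(gamma * n))
G = rng.standard_normal((s, m))
A_sketch = G @ A

# 2. SVD of the sketch gives the right preconditioner N = V Sigma^{-1}.
_, sigma, Vt = np.linalg.svd(A_sketch, full_matrices=False)
N = Vt.T / sigma                      # divide columns by singular values

# 3. Solve the well-conditioned problem min ||(A N) y - b|| with LSQR,
#    then recover x = N y.
AN = LinearOperator((m, n), matvec=lambda y: A @ (N @ y),
                    rmatvec=lambda r: N.T @ (A.T @ r))
y = lsqr(AN, b, atol=1e-12, btol=1e-12, iter_lim=200)[0]
x = N @ y

# Compare with a direct least-squares solve.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_ref))
```

Because the preconditioned operator has a small, predictable condition number, LSQR's iteration count is bounded independently of A's original conditioning, which is the point of the method.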
A Progressive Black Top Hat Transformation Algorithm for Estimating Valley Volumes from DEM Data
NASA Astrophysics Data System (ADS)
Luo, W.; Pingel, T.; Heo, J.; Howard, A. D.
2013-12-01
The amount of valley incision and valley volume are important parameters in geomorphology and hydrology research, because they are related to the amount of erosion (and thus the volume of sediments) and the amount of water needed to create the valley. This is the case not only for terrestrial research but also for planetary research, such as determining how much water was once on Mars. With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. However, previous studies typically use a single structuring element size for extracting the valley feature and a single threshold value for removing noise, resulting in finer features such as tributaries not being extracted and in underestimation of valley volume. Inspired by similar algorithms used in LiDAR data analysis to separate above-ground features from bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the structuring element size is progressively increased to extract valleys of different orders. In addition, a slope-based threshold was introduced to automatically adjust the threshold values for structuring elements of different sizes. Connectivity and shape parameters of the masked regions were used to keep the long linear valleys while removing other smaller non-connected regions. Preliminary application of the PBTH to the Grand Canyon and two sites on Mars has produced promising results. More testing and fine-tuning are in progress. The ultimate goal of the project is to apply the algorithm to estimate the volume of valley networks on Mars and the volume of water needed to form the valleys we observe today, and thus to infer the nature of the hydrologic cycle on early Mars. The project is funded by NASA's Mars Data Analysis program.
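The core of the progressive idea can be sketched with SciPy's grayscale morphology; the synthetic DEM, the element sizes, and the fixed threshold standing in for the paper's slope-based one are all illustrative:

```python
import numpy as np
from scipy import ndimage

# Progressive black-top-hat (PBTH) sketch over a toy DEM: apply the black
# top hat with progressively larger structuring elements and keep, per
# cell, the largest extracted depth. Values and sizes are made up.
dem = np.full((50, 50), 100.0)
dem[20:30, :] -= 5.0               # a broad "valley" 10 cells wide
dem[24, :] -= 3.0                  # a narrow inner channel

depth = np.zeros_like(dem)
for size in (3, 7, 15, 31):        # progressively larger elements
    bth = ndimage.black_tophat(dem, size=size)
    # a fixed threshold stands in for the paper's slope-based one
    bth[bth < 0.5] = 0.0
    depth = np.maximum(depth, bth)

# Valley volume = sum of extracted depths times cell area (area = 1 here).
volume = depth.sum()
print(depth.max(), volume)
```

The small elements pick up the narrow channel relative to the valley floor, while the large element recovers the full depth of the broad valley, which is why a single element size underestimates the volume.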
Characterization of operating parameters of an in vivo micro CT system
NASA Astrophysics Data System (ADS)
Ghani, Muhammad U.; Ren, Liqiang; Yang, Kai; Chen, Wei R.; Wu, Xizeng; Liu, Hong
2016-03-01
The objective of this study was to characterize the operating parameters of an in-vivo micro CT system. In-plane spatial resolution, noise, geometric accuracy, CT number uniformity and linearity, and phase effects were evaluated using various phantoms. The system employs a flat panel detector with a 127 μm pixel pitch, and a micro focus x-ray tube with a focal spot size ranging from 5-30 μm. The system accommodates three magnification settings of 1.72, 2.54 and 5.10. The in-plane cutoff frequencies (10% MTF) ranged from 2.31 lp/mm (60 mm FOV, M=1.72, 2×2 binning) to 13 lp/mm (10 mm FOV, M=5.10, 1×1 binning). The results were qualitatively validated with a resolution bar pattern phantom, and the smallest visible lines were in the 30-40 μm range. Noise power spectrum (NPS) curves revealed that the noise peaks increased exponentially as the geometric magnification (M) increased. True in-plane pixel spacing and slice thickness were within 2% of the system's specifications. The CT numbers in cone-beam modality are greatly affected by scattering, and thus they do not remain the same across the three magnifications. A highly linear relationship (R² > 0.999) was found between the measured CT numbers and the hydroxyapatite (HA) loadings of the rods of a water-filled mouse phantom. Projection images of a laser-cut acrylic edge, acquired at a small focal spot size of 5 μm at 1.5 fps, revealed that noticeable phase effects occur at M=5.10 in the form of overshooting at the boundary of air and acrylic. In order to make the CT numbers consistent across all scan settings, scatter correction methods may be a valuable improvement for this system.
NASA Astrophysics Data System (ADS)
Woodworth-Jefcoats, Phoebe A.; Polovina, Jeffrey J.; Howell, Evan A.; Blanchard, Julia L.
2015-11-01
We compare two ecosystem model projections of 21st century climate change and fishing impacts in the central North Pacific. Both a species-based and a size-based ecosystem modeling approach are examined. While both models project a decline in biomass across all sizes in response to climate change and a decline in large fish biomass in response to increased fishing mortality, the models vary significantly in their handling of climate and fishing scenarios. For example, based on the same climate forcing the species-based model projects a 15% decline in catch by the end of the century while the size-based model projects a 30% decline. Disparities in the models' output highlight the limitations of each approach by showing the influence model structure can have on model output. The aspects of bottom-up change to which each model is most sensitive appear linked to model structure, as does the propagation of interannual variability through the food web and the relative impact of combined top-down and bottom-up change. Incorporating integrated size- and species-based ecosystem modeling approaches into future ensemble studies may help separate the influence of model structure from robust projections of ecosystem change.
ERIC Educational Resources Information Center
Bills, Linda G.; Wilford, Valerie
A project was conducted from 1980 to 1982 to determine the costs and benefits of OCLC use in 29 small and medium-sized member libraries of the Illinois Valley Library System (IVLS). Academic, school, public, and special libraries participated in the project. Based on written attitude surveys of and interviews with library directors, staff,…
Dry etching of chrome for photomasks for 100-nm technology using chemically amplified resist
NASA Astrophysics Data System (ADS)
Mueller, Mark; Komarov, Serguie; Baik, Ki-Ho
2002-07-01
Photomask etching for the 100 nm technology node places new requirements on dry etching processes. As the minimum-size features on the mask, such as assist bars and optical proximity correction (OPC) patterns, shrink down to 100 nm, it is necessary to produce etch CD biases below 20 nm in order to reproduce minimum resist features in chrome with good pattern fidelity. In addition, vertical profiles are necessary. In previous generations of photomask technology, footing and sidewall profile slope were tolerated, since the dry etch profile was an improvement over wet etching. However, as feature sizes shrink, it is extremely important to select etch processes which do not generate a foot, because footing affects etch linearity and also limits the smallest etched feature size. Chemically amplified resist (CAR) from TOK is patterned with a 50 keV MEBES eXara e-beam writer, allowing for patterning of small features with vertical resist profiles. This resist is developed for raster-scan 50 kV e-beam systems. It has high contrast, good coating characteristics, good dry etch selectivity, and high environmental stability. Chrome etch process development has been performed using Design of Experiments to optimize parameters such as sidewall profile, etch CD bias, etch CD linearity for varying sizes of line/space patterns, etch CD linearity for varying sizes of isolated lines and spaces, loading effects, and application to contact etching.
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved.
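The kind of sample-size contrast the paper draws can be illustrated with textbook normal-approximation formulas; the variance-reduction factors below are rough stand-ins for the paper's generalized linear mixed-effects calculations, and all numbers are hypothetical:

```python
import math

# Back-of-envelope per-group sample size: pre-post ANCOVA versus a design
# averaging s post-randomization measures (compound-symmetry assumption).
# These are classical approximations, not the paper's GLMM-based formulas.
def n_per_group(delta, sd, rho, s=1, alpha=0.05, power=0.8):
    z_a = 1.959964            # z_{1-alpha/2} for alpha = 0.05
    z_b = 0.841621            # z_{1-beta} for power = 0.8
    # ANCOVA baseline adjustment removes rho^2 of the variance; averaging
    # s correlated post measures further shrinks it by (1 + (s-1)*rho)/s.
    var = sd**2 * (1 - rho**2) * (1 + (s - 1) * rho) / s
    return math.ceil(2 * var * (z_a + z_b) ** 2 / delta**2)

print(n_per_group(delta=0.5, sd=1.0, rho=0.5))       # single post measure
print(n_per_group(delta=0.5, sd=1.0, rho=0.5, s=4))  # four repeated measures
```

Under these assumed parameters the repeated-measures design needs noticeably fewer subjects per group, which is the qualitative point of advantage (2) above.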
Regional Climate Sensitivity- and Historical-Based Projections to 2100
NASA Astrophysics Data System (ADS)
Hébert, Raphaël; Lovejoy, Shaun
2018-05-01
Reliable climate projections at the regional scale are needed in order to evaluate climate change impacts and inform policy. We develop an alternative method for projections based on the transient climate sensitivity (TCS), which relies on a linear relationship between the forced temperature response and the strongly increasing anthropogenic forcing. The TCS is evaluated at the regional scale (5° by 5°), and projections to 2100 are made accordingly using the high and low Representative Concentration Pathway (RCP) emission scenarios. We find that there are large spatial discrepancies between the regional TCS from 5 historical data sets and 32 global climate model (GCM) historical runs, and furthermore that the global mean GCM TCS is about 15% too high. Given that the GCM RCP scenario runs are mostly linear with respect to their (inadequate) TCS, we conclude that historical methods of regional projection are better suited, since they are directly calibrated on the real-world (historical) climate.
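In its simplest form, the linear TCS idea reduces to a regression of temperature on forcing followed by extrapolation along a scenario pathway; the sketch below uses entirely synthetic numbers and is not the paper's calibration:

```python
import numpy as np

# Toy TCS-style projection: regress historical temperature anomalies on
# anthropogenic forcing, then apply the fitted linear relation to a
# scenario forcing value. All numbers are synthetic, for illustration.
rng = np.random.default_rng(2)

forcing_hist = np.linspace(0.5, 2.5, 60)     # W/m^2, synthetic history
true_sens = 0.8                              # K per W/m^2, assumed
temp_hist = true_sens * forcing_hist + rng.normal(0, 0.05, 60)

# Least-squares fit of T = a*F + b; 'a' is the sensitivity analogue.
a, b = np.polyfit(forcing_hist, temp_hist, 1)

# Project under a scenario forcing pathway to 2100.
forcing_2100 = 5.0                           # W/m^2, scenario value
temp_2100 = a * forcing_2100 + b
print(a, temp_2100)
```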
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael Allen; Marker, Bryan
This report summarizes the progress made as part of a one-year Laboratory Directed Research and Development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model-driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.
The Woodworker's Website: A Project Management Case Study
ERIC Educational Resources Information Center
Jance, Marsha
2014-01-01
A case study that focuses on building a website for a woodworking business is discussed. Project management and linear programming techniques can be used to determine the time required to complete the website project discussed in the case. This case can be assigned to students in an undergraduate or graduate decision modeling or management science…
Orthogonal Projection in Teaching Regression and Financial Mathematics
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2010-01-01
Two improvements in teaching linear regression are suggested. The first is to include the population regression model at the beginning of the topic. The second is to use a geometric approach: to interpret the regression estimate as an orthogonal projection and the estimation error as the distance (which is minimized by the projection). Linear…
Concentrating Solar Power Projects - Urat 50MW Fresnel CSP Project
Concentrating Solar Power | NREL. Status Date: September 29, 2016. Technology: Linear Fresnel reflector. Turbine Capacity: 50.0 MW net / 50.0 MW gross. Status: Under development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Rottmann, J; Myronakis, M
2016-06-15
Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, examination of the modeled and measured NPS in the axial plane exhibits good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a subsequent loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate the implementation of image binning for two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, P. H., E-mail: p.charles@qut.edu.au; Crowe, S. B.; Langton, C. M.
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes ≤15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or if field size uncertainties are 0.5 mm, field sizes ≤12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes ≤12 mm. Source occlusion also caused a large change in OPF for field sizes ≤8 mm.
Based on the results of this study, field sizes ≤12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting and also very precise detector alignment, is required at field sizes at least ≤12 mm and more conservatively ≤15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Crowe, S B; Kairn, T; Knight, R T; Kenny, J; Langton, C M; Trapp, J V
2014-04-01
This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. According to the practical definition established in this project, field sizes ≤ 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or if field size uncertainties are 0.5 mm, field sizes ≤ 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes ≤ 12 mm. Source occlusion also caused a large change in OPF for field sizes ≤ 8 mm. Based on the results of this study, field sizes ≤ 12 mm were considered to be theoretically very small for 6 MV beams.
Extremely careful experimental methodology including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting and also very precise detector alignment is required at field sizes at least ≤ 12 mm and more conservatively ≤ 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection. © 2014 American Association of Physicists in Medicine.
NASA Technical Reports Server (NTRS)
Chao, Luen-Yuan; Shetty, Dinesh K.
1992-01-01
Statistical analysis and correlation between pore-size distribution and fracture strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. Fracture strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
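The strength-prediction step can be sketched as follows; the fracture toughness, geometry factor, and pore-size distribution below are assumed values for illustration, not the paper's measurements:

```python
import numpy as np

# Hedged sketch: treat the largest pore in each specimen (an extreme-value
# draw) as a crack-like flaw and convert its size to strength with the
# LEFM relation sigma_f = K_Ic / (Y * sqrt(pi * a)). K_Ic, Y, and the
# pore-size distribution are assumed, not taken from the paper.
rng = np.random.default_rng(3)

K_Ic = 6.0e6        # Pa*sqrt(m), plausible for sintered Si3N4 (assumed)
Y = 1.12            # geometry factor (assumed)

# Largest pore radius per specimen: the max over many draws approximates
# the extreme-value distribution the paper works with.
n_pores, n_specimens = 10_000, 500
radii = rng.lognormal(mean=np.log(2e-6), sigma=0.4,
                      size=(n_specimens, n_pores))
a = radii.max(axis=1)               # critical flaw size per specimen (m)

sigma_f = K_Ic / (Y * np.sqrt(np.pi * a))
print(sigma_f.mean() / 1e6, "MPa (mean predicted strength)")
```

Because the critical flaw is a maximum over many pores, the resulting strength distribution is an extreme-value distribution rather than a simple Weibull, which is the mechanism behind the nonlinear Weibull plot noted above.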
On remote sensing of small aerosol particles with polarized light
NASA Astrophysics Data System (ADS)
Sun, W.
2012-12-01
The CALIPSO satellite mission consistently measures a volume (molecular plus particulate) light depolarization ratio of ~2% for smoke, compared to ~1% for marine aerosols and ~15% for dust. The observed ~2% smoke depolarization ratio comes primarily from the nonspherical habits of particles in the smoke at certain particle sizes. The depolarization of linearly polarized light by small sphere aggregates and irregular Gaussian-shaped particles is studied to reveal the physical relationship between the depolarization of linearly polarized light and aerosol shape and size. It is found that randomly oriented nonspherical particles share common depolarization properties as functions of scattering angle and size parameter. This may be very useful information for active remote sensing of small nonspherical aerosols using polarized light. We also show that the depolarization ratio from the CALIPSO measurements could be used to derive smoke aerosol particle size. The mean particle size of South African smoke is estimated to be about half of the 532 nm wavelength of the CALIPSO lidar.
NASA Astrophysics Data System (ADS)
Barnaś, Dawid; Bieniasz, Lesław K.
2017-07-01
We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
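For reference, a plain (unvectorized) block-Thomas elimination over small dense blocks, the structure the vectorized solver targets, might look like the sketch below; block sizes and values are illustrative:

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block-tridiagonal system by block forward elimination and
    back substitution. B[i] are diagonal blocks, A[i] sub-diagonal blocks
    (A[0] unused), C[i] super-diagonal blocks (C[-1] unused); d holds one
    right-hand-side block per row."""
    n = len(B)
    Bp, dp = [B[0]], [d[0]]
    for i in range(1, n):                       # forward elimination
        m = A[i] @ np.linalg.inv(Bp[i - 1])
        Bp.append(B[i] - m @ C[i - 1])
        dp.append(d[i] - m @ dp[i - 1])
    x = [None] * n
    x[n - 1] = np.linalg.solve(Bp[n - 1], dp[n - 1])
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return np.concatenate(x)

# Demo: 5 block rows with 3x3 blocks, made diagonally dominant.
rng = np.random.default_rng(4)
k, n = 3, 5
B = [rng.standard_normal((k, k)) + 4 * np.eye(k) for _ in range(n)]
A = [rng.standard_normal((k, k)) * 0.1 for _ in range(n)]
C = [rng.standard_normal((k, k)) * 0.1 for _ in range(n)]
d = [rng.standard_normal(k) for _ in range(n)]
x = block_thomas(A, B, C, d)

# Check against a dense solve of the assembled system.
M = np.zeros((n * k, n * k))
for i in range(n):
    M[i*k:(i+1)*k, i*k:(i+1)*k] = B[i]
    if i > 0:
        M[i*k:(i+1)*k, (i-1)*k:i*k] = A[i]
    if i < n - 1:
        M[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = C[i]
x_ref = np.linalg.solve(M, np.concatenate(d))
print(np.linalg.norm(x - x_ref))
```

The inner work is entirely small dense matrix products and solves, which is why vectorizing those block operations (as the paper does with SSE/AVX) dominates performance, especially at small block sizes.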
Infrared laser spectroscopy of the linear C13 carbon cluster
NASA Technical Reports Server (NTRS)
Giesen, T. F.; Van Orden, A.; Hwang, H. J.; Fellers, R. S.; Provencal, R. A.; Saykally, R. J.
1994-01-01
The infrared absorption spectrum of a linear, 13-atom carbon cluster (C13) has been observed by using a supersonic cluster beam-diode laser spectrometer. Seventy-six rovibrational transitions were measured near 1809 cm⁻¹ and assigned to an antisymmetric stretching fundamental in the ¹Σg⁺ ground state of C13. This definitive structural characterization of a carbon cluster in the intermediate size range between C10 and C20 is in apparent conflict with theoretical calculations, which predict that clusters of this size should exist as planar monocyclic rings.
Independent Testing of JWST Detector Prototypes
NASA Technical Reports Server (NTRS)
Figer, D. F.; Rauscher, B. J.; Regan, M. W.; Balleza, J.; Bergeron, L.; Morse, E.; Stockman, H. S.
2003-01-01
The Independent Detector Testing Laboratory (IDTL) is jointly operated by the Space Telescope Science Institute (STScI) and the Johns Hopkins University (JHU), and is assisting the James Webb Space Telescope (JWST) mission in choosing and operating the best near-infrared detectors under a NASA grant. The JWST is the centerpiece of the NASA Office of Space Science theme, the Astronomical Search for Origins, and the highest priority astronomy project for the next decade, according to the National Academy of Science. JWST will need to have the sensitivity to see the first light in the Universe to determine how galaxies formed in the web of dark matter that existed when the Universe was in its infancy (z ≈ 10-20). To achieve this goal, the JWST Project must pursue an aggressive technology program and advance infrared detectors to performance levels beyond what is now possible. As part of this program, NASA has selected the IDTL to verify comparative performance between prototype JWST detectors developed by Rockwell Scientific (HgCdTe) and Raytheon (InSb). The IDTL is charged with obtaining an independent assessment of the ability of these two competing technologies to achieve the demanding specifications of the JWST program within the 0.6-5 μm bandpass and in an ultra-low background (less than 0.01 e⁻/s/pixel) environment. We describe results from the JWST Detector Characterization Project that is being performed in the IDTL. In this project, we are measuring first-order detector parameters, i.e. dark current, read noise, QE, intra-pixel sensitivity, linearity, as functions of temperature, well size, and operational mode.
Independent Testing of JWST Detector Prototypes
NASA Technical Reports Server (NTRS)
Figer, Donald F.; Rauscher, Bernie J.; Regan, Michael W.; Morse, Ernie; Balleza, Jesus; Bergeron, Louis; Stockman, H. S.
2004-01-01
The Independent Detector Testing Laboratory (IDTL) is jointly operated by the Space Telescope Science Institute (STScI) and the Johns Hopkins University (JHU), and is assisting the James Webb Space Telescope (JWST) mission in choosing and operating the best near-infrared detectors. The JWST is the centerpiece of the NASA Office of Space Science theme, the Astronomical Search for Origins, and the highest priority astronomy project for the next decade, according to the National Academy of Science. JWST will need to have the sensitivity to see the first light in the Universe to determine how galaxies formed in the web of dark matter that existed when the Universe was in its infancy (z is approximately 10-20). To achieve this goal, the JWST Project must pursue an aggressive technology program and advance infrared detectors to performance levels beyond what is now possible. As part of this program, NASA has selected the IDTL to verify comparative performance between prototype JWST detectors developed by Rockwell Scientific (HgCdTe) and Raytheon (InSb). The IDTL is charged with obtaining an independent assessment of the ability of these two competing technologies to achieve the demanding specifications of the JWST program within the 0.6-5 micron bandpass and in an ultra-low background (less than 0.01 e(-)/s/pixel) environment. We describe results from the JWST Detector Characterization Project that is being performed in the IDTL. In this project, we are measuring first-order detector parameters, i.e. dark current, read noise, QE, intra-pixel sensitivity, linearity, as functions of temperature, well size, and operational mode.
Creating Effective Type for the Classroom.
ERIC Educational Resources Information Center
Fitzsimons, Dennis
1989-01-01
Defines basic typographic terminology and offers two classroom projects using microcomputers to create and use type. Discusses typeface, type families, type style, type size, and type font. Examples of student projects include the creation of bulletin board displays and page-size maps. (KO)
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.
2000-01-01
DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
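The random-breakage model that this abstract's RLC formalism generalizes can be sketched in a few lines. This is the classical broken-stick simulation, not the authors' DNAbreak code; the genome length and break count below are arbitrary illustrative values.

```python
import random

def random_breakage_fragments(genome_mbp, n_breaks, rng=random):
    """Classical random-breakage model: n_breaks DSBs placed uniformly
    along a 'stick' of length genome_mbp; returns the fragment sizes."""
    cuts = sorted(rng.uniform(0, genome_mbp) for _ in range(n_breaks))
    edges = [0.0] + cuts + [genome_mbp]
    return [b - a for a, b in zip(edges, edges[1:])]

# Example: a 100 Mbp chromosome with 10 random DSBs yields 11 fragments
random.seed(1)
frags = random_breakage_fragments(100.0, 10)
```

High-LET clustering, as modeled in the paper, departs from this by correlating break positions along a track rather than placing them independently.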
Projective formulation of Maggi's method for nonholonomic systems analysis
NASA Astrophysics Data System (ADS)
Blajer, Wojciech
1992-04-01
A projective interpretation of Maggi's approach to the dynamic analysis of nonholonomic systems is presented. Both linear and nonlinear constraint cases are treated in a unified fashion, using the language of vector spaces and tensor algebra.
Linear Collider project database
R&D projects circa 2005: an ordered list of who is thinking of working on what. At present this includes SLAC, FNAL, and Cornell meetings.
Concentrating Solar Power Projects - IRESEN 1 MWe CSP-ORC pilot project |
Start Year: 2016. Technology: Linear. Generation: 1,700 MWh/yr. Break Ground: 2015. Start Production: September 2016.
The Effect of Primary School Size on Academic Achievement
ERIC Educational Resources Information Center
Gershenson, Seth; Langbein, Laura
2015-01-01
Evidence on optimal school size is mixed. We estimate the effect of transitory changes in school size on the academic achievement of fourth- and fifth-grade students in North Carolina using student-level longitudinal administrative data. Estimates of value-added models that condition on school-specific linear time trends and a variety of…
NASA Astrophysics Data System (ADS)
Rutherford, Jeffrey S.; Day, John W.; D'Elia, Christopher F.; Wiegman, Adrian R. H.; Willson, Clinton S.; Caffey, Rex H.; Shaffer, Gary P.; Lane, Robert R.; Batker, David
2018-04-01
Flood control levees cut off the supply of sediment to Mississippi delta coastal wetlands, and contribute to putting much of the delta on a trajectory for continued submergence in the 21st century. River sediment diversions have been proposed as a method to provide a sustainable supply of sediment to the delta, but the frequency and magnitude of these diversions needs further assessment. Previous studies suggested operating river sediment diversions based on the size and frequency of natural crevasse events, which were large (>5000 m3/s) and infrequent (active < once a year) in the last naturally active delta. This study builds on these previous works by quantitatively assessing tradeoffs for a large, infrequent diversion into the forested wetlands of the Maurepas swamp. Land building was estimated for several diversion sizes and years inactive using a delta progradation model. A benefit-cost analysis (BCA) combined model land building results with an ecosystem service valuation and estimated costs. Results demonstrated that land building is proportional to diversion size and inversely proportional to years inactive. Because benefits were assumed to scale linearly with land gain, and costs increase with diversion size, there are disadvantages to operating large diversions less often, compared to smaller diversions more often for the immediate project area. Literature suggests that infrequent operation would provide additional gains (through increased benefits and reduced ecosystem service costs) to the broader Lake Maurepas-Pontchartrain-Borgne ecosystem. Future research should incorporate these additional effects into this type of BCA, to see if this changes the outcome for large, infrequent diversions.
Improved Linear-Ion-Trap Frequency Standard
NASA Technical Reports Server (NTRS)
Prestage, John D.
1995-01-01
Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.
(Where) Is Functional Decline Isolating? Disordered Environments and the Onset of Disability.
Schafer, Markus H
2018-03-01
The onset of disability is believed to undermine social connectedness and raise the risk of social isolation, yet spatial environments are seldom considered in this process. This study examines whether unruly home and neighborhood conditions intensify the association between disability onset and several dimensions of social connectedness. I incorporate longitudinal data from the National Social Life, Health, and Aging Project, which contains environmental evaluations conducted by trained observers (N = 1,558). Results from Poisson, ordinal logistic, and linear regression models reveal heterogeneous consequences of disablement: disability onset was associated with reduced core network size, fewer friends, lower likelihood of social interaction, and less overall social connectedness, though mainly when accompanied by higher levels of household disorder. There was limited evidence that neighborhood disorder moderated consequences of disability. Findings point to the importance of the home as an environmental resource and underscore important contextual contingencies in the isolating consequences of disability.
Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting
NASA Astrophysics Data System (ADS)
Tong, Howell
1995-04-01
The Table of Contents for the full book PDF is as follows: * Preface * Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective * A Theory of Correlation Dimension for Stationary Time Series * On Prediction and Chaos in Stochastic Systems * Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic * A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series * Chaos and Nonlinear Forecastability in Economics and Finance * Paradigm Change in Prediction * Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction * Chaos in Geophysical Fluids * Chaotic Modulation of the Solar Cycle * Fractal Nature in Earthquake Phenomena and its Simple Models * Singular Vectors and the Predictability of Weather and Climate * Prediction as a Criterion for Classifying Natural Time Series * Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies * Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study
Ion Motion Induced Emittance Growth of Matched Electron Beams in Plasma Wakefields
DOE Office of Scientific and Technical Information (OSTI.GOV)
An, Weiming; Lu, Wei; Huang, Chengkun
2017-06-14
Plasma-based acceleration is being considered as the basis for building a future linear collider. Nonlinear plasma wakefields have ideal properties for accelerating and focusing electron beams. Preservation of the emittance of nano-Coulomb beams with nanometer scale matched spot sizes in these wakefields remains a critical issue due to ion motion caused by their large space charge forces. We use fully resolved quasistatic particle-in-cell simulations of electron beams in hydrogen and lithium plasmas, including when the accelerated beam has different emittances in the two transverse planes. The projected emittance initially grows and rapidly saturates with a maximum emittance growth of less than 80% in hydrogen and 20% in lithium. The use of overfocused beams is found to dramatically reduce the emittance growth. In conclusion, the underlying physics that leads to the lower than expected emittance growth is elucidated.
Experimental light scattering by small particles: system design and calibration
NASA Astrophysics Data System (ADS)
Maconi, Göran; Kassamakov, Ivan; Penttilä, Antti; Gritsevich, Maria; Hæggström, Edward; Muinonen, Karri
2017-06-01
We describe a setup for precise multi-angular measurements of light scattered by mm- to μm-sized samples. We present a calibration procedure that ensures accurate measurements. Calibration is done using a spherical sample (d = 5 mm, n = 1.517) fixed on a static holder. The ultimate goal of the project is to allow accurate multi-wavelength measurements (the full Mueller matrix) of single-particle samples which are levitated ultrasonically. The system comprises a tunable multimode Argon-krypton laser, with 12 wavelengths ranging from 465 to 676 nm, a linear polarizer, a reference photomultiplier tube (PMT) monitoring beam intensity, and several PMTs mounted radially towards the sample at an adjustable radius. The current 150 mm radius allows measuring all azimuthal angles except for ±4° around the backward scattering direction. The measurement angle is controlled by a motor-driven rotational stage with an accuracy of 15'.
A generalized public goods game with coupling of individual ability and project benefit
NASA Astrophysics Data System (ADS)
Zhong, Li-Xin; Xu, Wen-Juan; He, Yun-Xin; Zhong, Chen-Yang; Chen, Rong-Da; Qiu, Tian; Shi, Yong-Dong; Ren, Fei
2017-08-01
Facing a heavy task, any single person can only make a limited contribution and team cooperation is needed. As one enjoys the benefit of the public goods, the potential benefits of the project are not always maximized and may be partly wasted. By incorporating individual ability and project benefit into the original public goods game, we study the coupling effect of the four parameters, the upper limit of individual contribution, the upper limit of individual benefit, the needed project cost and the upper limit of project benefit on the evolution of cooperation. Coevolving with the individual-level group size preferences, an increase in the upper limit of individual benefit promotes cooperation while an increase in the upper limit of individual contribution inhibits cooperation. The coupling of the upper limit of individual contribution and the needed project cost determines the critical point of the upper limit of project benefit, where the equilibrium frequency of cooperators reaches its highest level. Above the critical point, an increase in the upper limit of project benefit inhibits cooperation. The evolution of cooperation is closely related to the preferred group-size distribution. A functional relation between the frequency of cooperators and the dominant group size is found.
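The capped payoff structure described above can be sketched in a one-shot public goods game. This is a minimal reading of the abstract's "upper limit of project benefit"; the parameter names and values are ours, not the authors'.

```python
def pgg_payoffs(n_cooperators, group_size, contribution, r, benefit_cap=None):
    """Payoffs in a one-shot public goods game: cooperators contribute,
    the pot is multiplied by r and shared equally; an optional cap models
    an 'upper limit of project benefit' (any excess benefit is wasted)."""
    pot = n_cooperators * contribution * r
    if benefit_cap is not None:
        pot = min(pot, benefit_cap)  # benefit beyond the cap is lost
    share = pot / group_size
    # cooperators pay their contribution, defectors free-ride
    return share - contribution, share

# 3 cooperators in a group of 5, contribution 1, multiplier 3
coop_payoff, defect_payoff = pgg_payoffs(3, 5, 1.0, 3.0)
```

With a binding cap, adding more cooperators stops raising the pot, which is one mechanism by which a low project-benefit ceiling can inhibit cooperation.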
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E.; Patel, Bhargav A.; Ambudkar, Suresh V.; Talele, Tanaji T.
2014-01-01
Multidrug resistance (MDR) caused by ATP-binding cassette (ABC) transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure to cancer chemotherapy. Previously, selenazole containing cyclic peptides were reported as P-gp inhibitors and these were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp, can simultaneously accommodate 2-3 moderate size molecules at the drug binding pocket. Our in-silico analysis based on the homology model of human P-gp spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at drug-binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity and the structural form (linear and cyclic) of valine-derived thiazole peptides that can accommodate well in the P-gp binding pocket and affects its activity, previously an unexplored concept. Among these oligomers, lipophilic linear- (13) and cyclic-trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 = 1.5 μM). Cyclic trimer and linear trimer being equipotent, future studies can be focused on non-cyclic counterparts of cyclic peptides maintaining linear trimer length. Binding model of the linear trimer (13) within the drug-binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the non-cyclic form. PMID:24288265
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E; Patel, Bhargav A; Ambudkar, Suresh V; Talele, Tanaji T
2014-01-03
Multidrug resistance caused by ATP binding cassette transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure in cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors and were also used for co-crystallization with mouse P-gp, which has 87 % homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate two to three moderately sized molecules at the drug binding pocket. Our in silico analysis, based on the homology model of human P-gp, spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity, and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affects its activity, previously an unexplored concept. Among these oligomers, lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 =1.5 μM). As the cyclic trimer and linear trimer compounds are equipotent, future studies should focus on noncyclic counterparts of cyclic peptides maintaining linear trimer length. A binding model of the linear trimer 13 within the drug binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the noncyclic form. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SU-F-207-16: CT Protocols Optimization Using Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, H; Fan, J; Kupinski, M
2015-06-15
Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of size and contrast estimation of different iodine concentration rods inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but use different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head size phantom and a body size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, which output the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulation images with four different sizes and five different contrasts of iodine objects. For each type of object, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which can be applied to estimate the size and the contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance from different CT protocols. Different protocols at the same CTDIvol setting can result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomou, G.; Stratakis, J.; Perisinakis, K.
Purpose: To provide data for estimation of fetal radiation dose (D_F) from prophylactic hypogastric artery balloon occlusion (HABO) procedures. Methods: The Monte-Carlo-N-particle (MCNP) transport code and mathematical phantoms representing a pregnant patient at the ninth month of gestation were employed. PA, RAO 20° and LAO 20° fluoroscopy projections of left and right internal iliac arteries were simulated. Projection-specific normalized fetal dose (NFD) data were produced for various beam qualities. The effects of projection angle, x-ray field location relative to the fetus, field size, maternal body size, and fetal size on NFD were investigated. Presented NFD values were compared to corresponding values derived using a physical anthropomorphic phantom simulating pregnancy at the third trimester and thermoluminescence dosimeters. Results: NFD did not considerably vary when projection angle was altered by ±5°, whereas it was found to markedly depend on tube voltage, filtration, x-ray field location and size, and maternal body size. Differences in NFD < 7.5% were observed for naturally expected variations in fetal size. A difference of less than 13.5% was observed between NFD values estimated by MCNP and direct measurements. Conclusions: Data and methods provided allow for reliable estimation of radiation burden to the fetus from HABO.
Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.
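The linear-regression side of the comparison above can be illustrated with a one-feature ordinary least squares fit. The hand-length and height numbers below are invented for illustration and are not the paper's data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x: one hand measurement
    predicting one demographic, a minimal stand-in for the paper's
    multi-feature regression models."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical hand lengths (mm) and heights (cm)
hand_mm = [170, 180, 190, 200]
height_cm = [160, 168, 176, 184]
intercept, slope = fit_line(hand_mm, height_cm)
```

The paper's finding is that machine learning classifiers over binned targets often beat this kind of linear fit; the fit above just makes the baseline concrete.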
Papa, E; Doucet, J P; Sangion, A; Doucet-Panaye, A
2016-07-01
The understanding of the mechanisms and interactions that occur when nanomaterials enter biological systems is important to improve their future use. The adsorption of proteins from biological fluids in a physiological environment to form a corona on the surface of nanoparticles represents a key step that influences nanoparticle behaviour. In this study, the quantitative description of the composition of the protein corona was used to study the effect on cell association induced by 84 surface-modified gold nanoparticles of different sizes. Quantitative relationships between the protein corona and the activity of the gold nanoparticles were modelled by using several machine learning-based linear and non-linear approaches. Models based on a selection of only six serum proteins had robust and predictive results. The Projection Pursuit Regression method had the best performances (r² = 0.91; Q²loo = 0.81; r²ext = 0.79). The present study confirmed the utility of protein corona composition to predict the bioactivity of gold nanoparticles and identified the main proteins that act as promoters or inhibitors of cell association. In addition, the comparison of several techniques showed which strategies offer the best results in prediction and could be used to support new toxicological studies on gold-based nanomaterials.
Mapping the zone of eye-height utility for seated and standing observers
NASA Technical Reports Server (NTRS)
Wraga, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
2000-01-01
In a series of experiments, we delimited a region within the vertical axis of space in which eye height (EH) information is used maximally to scale object heights, referred to as the "zone of eye height utility" (Wraga, 1999b Journal of Experimental Psychology, Human Perception and Performance 25 518-530). To test the lower limit of the zone, linear perspective (on the floor) was varied via introduction of a false perspective (FP) gradient while all sources of EH information except linear perspective were held constant. For seated (experiment 1a) observers, the FP gradient produced overestimations of height for rectangular objects up to 0.15 EH tall. This value was taken to be just outside the lower limit of the zone. This finding was replicated in a virtual environment, for both seated (experiment 1b) and standing (experiment 2) observers. For the upper limit of the zone, EH information itself was manipulated by lowering observers' center of projection in a virtual scene. Lowering the effective EH of standing (experiment 3) and seated (experiment 4) observers produced corresponding overestimations of height for objects up to about 2.5 EH. This zone of approximately 0.20-2.5 EH suggests that the human visual system weights size information differentially, depending on its efficacy.
Investigation of an optical sensor for small tilt angle detection of a precision linear stage
NASA Astrophysics Data System (ADS)
Saito, Yusuke; Arai, Yoshikazu; Gao, Wei
2010-05-01
This paper presents evaluation results of the characteristics of the angle sensor based on the laser autocollimation method for small tilt angle detection of a precision linear stage. The sensor consists of a laser diode (LD) as the light source, and a quadrant photodiode (QPD) as the position-sensing detector. A small plane mirror is mounted on the moving table of the stage as a target mirror for the sensor. This optical system has advantages of high sensitivity, fast response speed and the ability for two-axis angle detection. On the other hand, the sensitivity of the sensor is determined by the size of the optical spot focused on the QPD, which is a function of the diameter of the laser beam projected onto the target mirror. Because the diameter is influenced by the divergence of the laser beam, this paper focuses on the relationship between the sensor sensitivity and the moving position of the target mirror (sensor working distance) over the moving stroke of the stage. The main error components that influence the sensor sensitivity are discussed and the optimal conditions of the optical system of the sensor are analyzed. The experimental result about evaluation of the effective working distance is also presented.
Low-Gain Circularly Polarized Antenna with Torus-Shaped Pattern
NASA Technical Reports Server (NTRS)
Amaro, Luis R.; Kruid, Ronald C.; Vacchione, Joseph D.; Prata, Aluizio
2012-01-01
The Juno mission to Jupiter requires an antenna with a torus-shaped antenna pattern with approximately 6 dBic gain and circular polarization over the Deep Space Network (DSN) 7-GHz transmit frequency and the 8-GHz receive frequency. Given the large distances that accumulate en route to Jupiter and the limited power afforded by the solar-powered vehicle, this toroidal low-gain antenna requires as much gain as possible while maintaining a beam width that could facilitate a +/-10deg edge of coverage. The natural antenna that produces a toroidal antenna pattern is the dipole, but its limited peak gain of approximately 2.2 dB would be insufficient. Here a shaped variation of the standard bicone antenna is proposed that could achieve the required gains and bandwidths while maintaining a size that was not excessive. The final geometry that was settled on consisted of a corrugated, shaped bicone, which is fed by a WR112 waveguide-to-coaxial-waveguide transition. This toroidal low-gain antenna (TLGA) geometry produced the requisite gain, moderate sidelobes, and the torus-shaped antenna pattern while maintaining a very good match over the entire required frequency range. Its "horn" geometry is also low-loss and capable of handling higher powers with large margins against multipactor breakdown. The final requirement for the antenna was to link with the DSN with circular polarization. A four-layer meander-line array polarizer was implemented; an approach that was fairly well suited to the TLGA geometry. The principal development of this work was to adapt the standard linear bicone such that its aperture could be increased in order to increase the available gain of the antenna. As one increases the aperture of a standard bicone, the phase variation across the aperture begins to increase, so the larger the aperture becomes, the greater the phase variation. In order to maximize the gain from any aperture antenna, the phase should be kept as uniform as possible.
Thus, as the standard bicone's aperture increases, the gain increase becomes less until one reaches a point of diminishing returns. In order to overcome this problem, a shaped aperture is used. Rather than the standard linear bicone, a parabolic bicone was found to reduce the amount of phase variation as the aperture increases. In fact, the phase variation is half of that of the standard linear bicone, which leads to higher gain with smaller aperture sizes. The antenna pattern radiated from this parabolic-shaped bicone antenna has fairly high side lobes. The Juno project requested that these sidelobes be minimized. This was accomplished by adding corrugations to the parabolic shape. This corrugated-shaped bicone antenna had reasonably low sidelobes, and the appropriate gain and beamwidth to meet project requirements.
Intense beams at the micron level for the Next Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seeman, J.T.
1991-08-01
High brightness beams with sub-micron dimensions are needed to produce a high luminosity for electron-positron collisions in the Next Linear Collider (NLC). To generate these small beam sizes, a large number of issues dealing with intense beams have to be resolved. Over the past few years many have been successfully addressed but most need experimental verification. Some of these issues are beam dynamics, emittance control, instrumentation, collimation, and beam-beam interactions. Recently, the Stanford Linear Collider (SLC) has proven the viability of linear collider technology and is an excellent test facility for future linear collider studies.
West, M M
1998-11-01
This meta-analysis of 12 studies assesses the efficacy of projective techniques to discriminate between sexually abused children and nonsexually abused children. A literature search was conducted to identify published studies that used projective instruments with sexually abused children. Those studies that reported statistics that allowed for an effect size to be calculated were then included in the meta-analysis. There were 12 studies that fit the criteria. The projectives reviewed include The Rorschach, The Hand Test, The Thematic Apperception Test (TAT), the Kinetic Family Drawings, Human Figure Drawings, Draw Your Favorite Kind of Day, The Rosebush: A Visualization Strategy, and The House-Tree-Person. The results of this analysis gave an overall effect size of d = .81, which is a large effect. Six studies included only a norm group of nondistressed, nonabused children with the sexual abuse group. The average effect size was d = .87, which is impressive. Six studies did include a clinical group of distressed nonsexually abused subjects, and the effect size lowered to d = .76, which is a medium to large effect. This indicates that projective instruments can discriminate distressed children from nondistressed subjects quite well. In the studies that included a clinical group of distressed children who were not sexually abused, the lower effect size indicates that the instruments were less able to discriminate the type of distress. This meta-analysis gives evidence that projective techniques have the ability to discriminate between children who have been sexually abused and those who were not abused sexually. However, further research that is designed to include clinical groups of distressed children is needed in order to determine how well the projectives can discriminate the type of distress.
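The effect sizes reported above are standardized mean differences (Cohen's d). A minimal computation with a pooled standard deviation, on made-up group scores rather than any study's data, looks like:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference between two groups,
    using the pooled sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# Illustrative scores only: abused vs. comparison group on some index
d = cohens_d([10, 12, 14, 16], [7, 9, 11, 13])
```

By the conventional benchmarks used in the abstract, d around 0.8 is a large effect and d around 0.5 a medium one.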
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
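Maloney-Wandell-style linear recovery can be sketched as a least-squares solve for basis weights from sensor responses: spectra are assumed to lie in a low-dimensional linear basis, so responses satisfy r = M w and w follows from the normal equations. The 3-sensor, 2-basis matrix below is a toy example under that assumption, not real skylight or sensor data.

```python
def recover_weights(responses, M):
    """Recover 2 basis weights w from sensor responses r, where r = M w
    and M is a (sensors x 2) matrix; solves the normal equations
    (M^T M) w = M^T r directly as a 2x2 linear system."""
    a = sum(m[0] * m[0] for m in M)
    b = sum(m[0] * m[1] for m in M)
    d = sum(m[1] * m[1] for m in M)
    p = sum(m[0] * r for m, r in zip(M, responses))
    q = sum(m[1] * r for m, r in zip(M, responses))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

M = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy sensor-by-basis matrix
r = [2.0, 3.0, 5.0]                        # noiseless responses for w = (2, 3)
w = recover_weights(r, M)
```

With noisy responses the same solve gives the least-squares estimate, which is why sensor choice and basis size matter so much in the paper's simulations.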
Mathematical modelling of the growth of human fetus anatomical structures.
Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech
2017-09-01
The goal of this study was to present a procedure enabling mathematical analysis of the growth of linear sizes of human anatomical structures, to estimate mathematical model parameters, and to evaluate their adequacy. Section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The study incorporated the following methods: preparation and anthropologic methods, digital image acquisition, measurements with the ImageJ software, and statistical analysis. We used an anthropologic method of age determination based on crown-rump length, CRL (V-TUB), by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve describing the growth of an anatomical structure's linear size y over subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The size-age interdependence is described by many functions; most often considered are the linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz I and II, and von Bertalanffy functions. Using the procedures described above, model parameters were assessed for the V-PL (total body length) and CRL increases, the total rectus abdominis length h and its segments hI, hII, hIII and hIV, as well as the biceps femoris long-head length and width (LHL, LHW) and short-head length and width (SHL, SHW). The best fits to the measurement results were obtained with the exponential and Gompertz models.
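As an illustration of fitting one of the growth functions listed above, the sketch below fits a Gompertz curve to hypothetical length-versus-gestational-age data with scipy; the ages and lengths are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz growth curve: asymptote a, displacement b, growth rate c."""
    return a * np.exp(-b * np.exp(-c * t))

# Hypothetical measurements: gestational age (weeks) vs. structure length (mm)
t = np.array([12, 14, 16, 18, 20, 22, 24, 26, 28], dtype=float)
y = np.array([8, 14, 22, 31, 40, 48, 55, 60, 64], dtype=float)

# Fit the three parameters by nonlinear least squares
params, _ = curve_fit(gompertz, t, y, p0=(80, 5, 0.1), maxfev=10000)
a, b, c = params

# Goodness of fit via root-mean-square error of the residuals
rmse = np.sqrt(np.mean((y - gompertz(t, *params)) ** 2))
```

Model adequacy can then be compared across the candidate functions (exponential, log-logistic, von Bertalanffy, ...) by fitting each and comparing residual errors.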
Donald McKenzie; John T. Abatzoglou; E. Natasha Stavros; Narasimhan K. Larkin
2014-01-01
Seasonal changes in the climatic potential for very large wildfires (VLWF, >= 50,000 ac ~ 20,234 ha) across the western contiguous United States are projected over the 21st century using generalized linear models and downscaled climate projections for two representative concentration pathways (RCPs). Significant (p
The Next Linear Collider Program
/graphics.htm Snowmass 2001 http://snowmass2001.org/ Electrical Systems Modulators http://www-project.slac.stanford.edu/lc/local/electrical/e_home.htm DC Magnet Power http://www-project.slac.stanford.edu/lc/local/electrical/e_home.htm Global Systems http://www-project.slac.stanford.edu/lc/local/electrical/e_home.htm
The Challenge of Separating Effects of Simultaneous Education Projects on Student Achievement
ERIC Educational Resources Information Center
Ma, Xin; Ma, Lingling
2009-01-01
When multiple education projects operate in an overlapping or rear-ended manner, it is always a challenge to separate unique project effects on schooling outcomes. Our analysis represents a first attempt to address this challenge. A three-level hierarchical linear model (HLM) was presented as a general analytical framework to separate program…
Streicher, Jeffrey W; Cox, Christian L; Birchard, Geoffrey F
2012-04-01
Although well documented in vertebrates, correlated changes between metabolic rate and cardiovascular function of insects have rarely been described. Using the very large cockroach species Gromphadorhina portentosa, we examined oxygen consumption and heart rate across a range of body sizes and temperatures. Metabolic rate scaled positively and heart rate negatively with body size, but neither scaled linearly. The response of these two variables to temperature was similar. This correlated response to endogenous (body mass) and exogenous (temperature) variables is likely explained by a mutual dependence on similar metabolic substrate use and/or coupled regulatory pathways. The intraspecific scaling for oxygen consumption rate showed an apparent plateauing at body masses greater than about 3 g. An examination of cuticle mass across all instars revealed isometric scaling with no evidence of an ontogenetic shift towards proportionally larger cuticles. Published oxygen consumption rates of other Blattodea species were also examined and, as in our intraspecific examination of G. portentosa, the scaling relationship was found to be non-linear with a decreasing slope at larger body masses. The decreasing slope at very large body masses in both intraspecific and interspecific comparisons may have important implications for future investigations of the relationship between oxygen transport and maximum body size in insects.
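The scaling analyses described above can be sketched as log-log regressions; the body masses and oxygen consumption rates below are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical body masses (g) and oxygen consumption rates (mL O2/h)
mass = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 12.0])
vo2  = np.array([0.15, 0.32, 0.55, 0.92, 1.2, 1.5, 1.8, 2.1])

logm, logv = np.log10(mass), np.log10(vo2)

# Single power law VO2 = a * mass^b is linear on log-log axes
b_all, loga = np.polyfit(logm, logv, 1)

# Separate slopes below and above ~3 g, mirroring the reported plateau:
# a shallower slope at large masses indicates the non-linear scaling
small, large = mass <= 3.0, mass >= 3.0
b_small = np.polyfit(logm[small], logv[small], 1)[0]
b_large = np.polyfit(logm[large], logv[large], 1)[0]
```

With data like these, the slope fitted to the large-mass points comes out well below the small-mass slope, which is how a plateauing (non-linear) scaling relationship shows up on log-log axes.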
Panyabut, Teerawat; Sirirat, Natnicha; Siripinyanond, Atitaya
2018-02-13
Electrothermal atomic absorption spectrometry (ETAAS) was applied to investigate the atomization behaviors of gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) in order to relate them to particle size. At atomization temperatures from 1400 °C to 2200 °C, the time-dependent atomic absorption peak profiles of AuNPs and AgNPs with sizes from 5 nm to 100 nm were examined. With increasing particle size, the maximum absorbance was observed at a longer time, and the time at maximum absorbance was found to increase linearly with particle size, suggesting that ETAAS can provide size information on nanoparticles. At an atomization temperature of 1600 °C, mixtures containing two particle sizes, i.e., 5 nm tannic-acid-stabilized AuNPs with 60, 80 or 100 nm citrate-stabilized AuNPs, were investigated and bimodal peaks were observed. These particle-size-dependent atomization behaviors show the potential of ETAAS for providing size information on nanoparticles. The calibration plot between the time at maximum absorbance and the particle size was applied to estimate the particle size of in-house synthesized AuNPs and AgNPs, and the results were in good agreement with those from flow field-flow fractionation (FlFFF) and transmission electron microscopy (TEM). Furthermore, a linear relationship between the activation energy and the particle size was observed. Copyright © 2017 Elsevier B.V. All rights reserved.
Linear Quantum Systems: Non-Classical States and Robust Stability
2016-06-29
quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for systems with non-classical input states, including single photon states, (ii) determination of how linear...history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control
NASA Astrophysics Data System (ADS)
Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu
2009-03-01
A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. The LCAF is implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of the phase error calculation by using an adaptively equalized partial response (PR) signal. The coefficient update of the asynchronously sampled adaptive FIR filter, based on a least-mean-square (LMS) algorithm, is constrained by a projection matrix in order to suppress phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices suitable for Blu-ray disc (BD) drive systems by numerical simulation, and the results show the properties of these matrices. We then designed the read channel system of the ITR PLL with an LCAF model on an FPGA board for experiments. The results show that the LCAF improves the tilt margins of 30 gigabyte (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) discs with sufficient LMS adaptation stability.
Prosthetic Leg Control in the Nullspace of Human Interaction.
Gregg, Robert D; Martin, Anne E
2016-07-01
Recent work has extended the control method of virtual constraints, originally developed for autonomous walking robots, to powered prosthetic legs for lower-limb amputees. Virtual constraints define desired joint patterns as functions of a mechanical phasing variable, which are typically enforced by torque control laws that linearize the output dynamics associated with the virtual constraints. However, the output dynamics of a powered prosthetic leg generally depend on the human interaction forces, which must be measured and canceled by the feedback linearizing control law. This feedback requires expensive multi-axis load cells, and actively canceling the interaction forces may minimize the human's influence over the prosthesis. To address these limitations, this paper proposes a method for projecting virtual constraints into the nullspace of the human interaction terms in the output dynamics. The projected virtual constraints naturally render the output dynamics invariant with respect to the human interaction forces, which instead enter into the internal dynamics of the partially linearized prosthetic system. This method is illustrated with simulations of a transfemoral amputee model walking with a powered knee-ankle prosthesis that is controlled via virtual constraints with and without the proposed projection.
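The nullspace projection at the heart of this approach can be illustrated with a generic linear-algebra sketch; the matrix J standing in for the human-interaction terms in the output dynamics is made up.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical interaction matrix: its rows span the directions in output
# space through which the human interaction forces enter the dynamics.
J = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

N = null_space(J)   # orthonormal basis of the nullspace of J
P = N @ N.T         # orthogonal projector onto that nullspace

# Any vector projected by P is annihilated by J (up to round-off),
# i.e. the projected constraints are invariant to the interaction terms.
v = np.array([3.0, -2.0, 1.0])
v_null = P @ v
```

Projecting the virtual-constraint outputs through such a P is what renders the output dynamics insensitive to the interaction forces, which then appear only in the internal dynamics.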
Thin silicon layer SOI power device with linearly-distance fixed charge islands
NASA Astrophysics Data System (ADS)
Yuan, Zuo; Haiou, Li; Jianghui, Zhai; Ning, Tang; Shuxiang, Song; Qi, Li
2015-05-01
A new high-voltage LDMOS with linearly-distanced fixed charge islands (LFI LDMOS) is proposed. Numerous linearly-distanced fixed charge islands are introduced by implanting Cs or I ions into the buried oxide layer, where dynamic holes are attracted and accumulated; this is crucial for enhancing the electric field in the buried oxide and the vertical breakdown voltage. The surface electric field is improved by increasing the distance between adjacent fixed charge islands from source to drain, which allows a higher drift-region concentration and a lower on-resistance. Numerical results indicate that the proposed device reaches a breakdown voltage of 500 V with Ld = 45 μm, compared with 209 V for a conventional LDMOS, while maintaining low on-resistance. Project supported by the Guangxi Natural Science Foundation of China (No. 2013GXNSFAA019335), the Guangxi Department of Education Project (No. 201202ZD041), the China Postdoctoral Science Foundation Project (Nos. 2012M521127, 2013T60566), and the National Natural Science Foundation of China (Nos. 61361011, 61274077, 61464003).
An improved affine projection algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step-size factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA) that dynamically changes the step size according to certain rules, achieving smaller steady-state error and faster convergence. Simulation results show that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
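For reference, a minimal numpy sketch of the standard fixed-step-size affine projection algorithm, identifying a made-up FIR system; the variable-step-size rule proposed in the paper is not reproduced here.

```python
import numpy as np

def apa_filter(x, d, taps=8, order=4, mu=0.5, eps=1e-6):
    """Affine projection algorithm: reuse the last `order` input vectors per
    update; mu is the step size, eps regularizes the matrix inversion."""
    w = np.zeros(taps)
    e_hist = np.zeros(len(x))
    for n in range(taps + order - 1, len(x)):
        # Columns are the `order` most recent input vectors x(n-k)
        X = np.column_stack([x[n - k - taps + 1 : n - k + 1][::-1]
                             for k in range(order)])
        dn = d[n - order + 1 : n + 1][::-1]
        e = dn - X.T @ w
        # Projection update: w += mu * X (X'X + eps I)^{-1} e
        w += mu * X @ np.linalg.solve(X.T @ X + eps * np.eye(order), e)
        e_hist[n] = e[0]
    return w, e_hist

# Identify a known FIR system from noisy observations
rng = np.random.default_rng(0)
h = np.array([0.6, -0.4, 0.25, -0.1, 0.05, 0.0, 0.0, 0.0])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = apa_filter(x, d)
```

A variable-step-size variant would replace the fixed `mu` with a rule that shrinks the step as the error power decreases, trading early convergence speed against steady-state error.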
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is publicly controversial. An optimal sample size for such projects should therefore be sought from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects; however, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies and formulates some requirements to fundamentally regulate the process of sample size determination for animal experiments.
Determining the effect of grain size and maximum induction upon coercive field of electrical steels
NASA Astrophysics Data System (ADS)
Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel
2011-10-01
Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining the coercive field and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). The coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P
2012-01-01
In a phase III multi-center cancer clinical trial or a large public health study, the sample size is predetermined to achieve the desired power, and study participants are enrolled from tens or hundreds of participating institutions. As accrual approaches the target size, the coordinating data center needs to project the accrual closure date on the basis of the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on crude assessment, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual and an unnecessary financial burden on the study sponsors. Here we proposed a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and subsequently the trial closure date. To ensure that the target size would be reached with high confidence, we also proposed a conservative method for the closure date projection. The proposed method was illustrated through analysis of the accrual data of the National Surgical Adjuvant Breast and Bowel Project trial B-38. The results showed that the proposed method could help save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. Copyright © 2012 John Wiley & Sons, Ltd.
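A rough sketch of the discrete-time Poisson idea: estimate a weekly accrual rate from observed counts, then project both a point and a conservative closure time. The counts, target and confidence level below are hypothetical, and the chi-square lower bound is a standard Poisson-rate confidence bound, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical weekly accrual counts observed so far in a multi-center trial
weekly = np.array([9, 12, 8, 11, 10, 13, 9, 12])
target, enrolled = 600, 480

rate = weekly.mean()            # MLE of the weekly Poisson accrual rate
remaining = target - enrolled
weeks_point = remaining / rate  # point projection of time to closure

# Conservative projection: use a lower confidence bound on the rate, so the
# announced closure date is reached with high probability.
alpha = 0.05
total = weekly.sum()
rate_lower = stats.chi2.ppf(alpha, 2 * total) / (2 * len(weekly))
weeks_conservative = remaining / rate_lower
```

The conservative projection is always later than the point projection, which is the direction a coordinating center wants to err in when notifying sites.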
NASA Astrophysics Data System (ADS)
Divya, S.; Nampoori, V. P. N.; Radhakrishnan, P.; Mujeeb, A.
2014-08-01
TiN nanoparticles of average size 55 nm were investigated for their optical non-linear properties. During the experiment, the laser wavelength coincided with the surface plasmon resonance (SPR) peak of the nanoparticles. The large non-linearity of the nanoparticles was attributed to the plasmon resonance, which greatly enhances the local field within the nanoparticle. Both open- and closed-aperture Z-scan experiments were performed and the corresponding optical constants were extracted. The post-excitation absorption spectra revealed the interesting phenomenon of photofragmentation, leading to a blue shift in the band gap and a red shift in the SPR. The results are discussed in terms of enhanced interparticle interaction simultaneous with size reduction. Here, the optical constants, although intrinsic to a particular sample, change unusually with laser power intensity; the dependence of χ(3) is discussed in terms of the size variation caused by photofragmentation. These studies show that TiN nanoparticles are potential candidates for photonics technology, offering wide scope for further research toward various practical applications.
DOT National Transportation Integrated Search
2016-08-01
This project developed a solid-state welding process based on linear friction welding (LFW) technology. While resistance flash welding or : thermite techniques are tried and true methods for joining rails and performing partial rail replacement repai...
Neural processing of gravity information
NASA Technical Reports Server (NTRS)
Schor, Robert H.
1992-01-01
The goal of this project was to use the linear acceleration capabilities of the NASA Vestibular Research Facility (VRF) at Ames Research Center to directly examine the encoding of linear accelerations in the vestibular system of the cat. Most previous studies, including my own, have utilized tilt stimuli, which at very low frequencies (e.g., 'static tilt') can be considered a reasonably pure linear acceleration (e.g., 'down'); however, the higher frequencies of tilt necessary for understanding the dynamic processing of linear acceleration information necessarily involve rotations, which can stimulate the semicircular canals. The VRF, particularly the Long Linear Sled, promises to provide controlled, pure linear accelerations at a variety of stimulus frequencies with no confounding angular motion.
Conjugate gradient based projection - A new explicit methodology for frictional contact
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Li, Maocheng; Sha, Desong
1993-01-01
With special attention to applicability to parallel computation and vectorization, a new and effective explicit approach for linear complementarity formulations, involving a conjugate gradient based projection methodology, is proposed in this study for contact problems with Coulomb friction. The overall objective is to provide an explicit computational methodology for the complete contact problem with friction. The primary idea for solving the linear complementarity formulations stems from an established search direction which is projected onto the feasible region determined by the non-negativity constraint; this direction is then applied within the Fletcher-Reeves conjugate gradient method, resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics and fast computational speed, and is relatively simple to implement for contact problems involving Coulomb friction.
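A minimal sketch of the projection idea for a linear complementarity problem (LCP): project each iterate onto the nonnegative orthant. For simplicity this uses plain projected gradient descent on the equivalent quadratic program rather than the paper's Fletcher-Reeves scheme; the matrix and vector are made up.

```python
import numpy as np

def projected_lcp(M, q, iters=500, tol=1e-10):
    """Solve the LCP  z >= 0,  Mz + q >= 0,  z'(Mz + q) = 0  for symmetric
    positive-definite M by projected gradient descent on the equivalent
    quadratic program min 0.5 z'Mz + q'z over z >= 0."""
    z = np.zeros(len(q))
    step = 1.0 / np.linalg.norm(M, 2)   # safe step for the convex quadratic
    for _ in range(iters):
        g = M @ z + q                    # gradient of 0.5 z'Mz + q'z
        z_new = np.maximum(z - step * g, 0.0)   # project onto z >= 0
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = projected_lcp(M, q)
w = M @ z + q   # complementary slack variable
```

A conjugate gradient variant accelerates this by replacing the steepest-descent direction with Fletcher-Reeves conjugate directions before projecting, which is the combination the paper advocates for frictional contact.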
Current information technology needs of small to medium sized apparel manufacturers and contractors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wipple, C.; Vosti, E.
1997-11-01
This report documents recent efforts of the American Textile Partnership (AMTEX) Demand Activated Manufacturing Architecture (DAMA) Project to address needs characteristic of small to medium sized apparel manufacturers and contractors. Background on the AMTEX/DAMA project and the objectives of this specific effort are discussed.
Climate-induced lake drying causes heterogeneous reductions in waterfowl species richness
Roach, Jennifer K.; Griffith, Dennis B.
2015-01-01
Context: Lake size has declined on breeding grounds for international populations of waterfowl. Objectives: Our objectives were to (1) model the relationship between waterfowl species richness and lake size; (2) use the model and trends in lake size to project historical, contemporary, and future richness at 2500+ lakes; (3) evaluate mechanisms for the species–area relationship (SAR); and (4) identify species most vulnerable to shrinking lakes. Methods: Monte Carlo simulations of the richness model were used to generate projections. Correlations between richness and both lake size and habitat diversity were compared to identify mechanisms for the SAR. Patterns of nestedness were used to identify vulnerable species. Results: Species richness was greatest at lakes that were larger, closer to rivers, had more wetlands along their perimeters and were within 5 km of a large lake. Average richness per lake was projected to decline by 11 % from 1986 to 2050 but was heterogeneous across sub-regions and lakes. Richness in sub-regions with species-rich lakes was projected to remain stable, while richness in the sub-region with species-poor lakes was projected to decline. Lake size had a greater effect on richness than did habitat diversity, suggesting that large lakes have more species because they provide more habitat but not more habitat types. The vulnerability of species to shrinking lakes was related to species rarity rather than foraging guild. Conclusions: Our maps of projected changes in species richness and rank-ordered list of species most vulnerable to shrinking lakes can be used to identify targets for conservation or monitoring.
NASA Astrophysics Data System (ADS)
Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.
2002-12-01
As part of the ACE-Asia experiment, conducted off the coasts of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths within the 354-1558 nm range using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about the particle size distribution. In this paper, we use sunphotometer measurements to retrieve the size distribution of aerosols during ACE-Asia, focusing on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and a multimodal method. The constrained linear inversion algorithm makes no assumption about the mathematical form of the distribution to be retrieved. Conversely, the multimodal technique represents the aerosol size distribution as a linear combination of a few lognormal modes with predefined mode radii and geometric standard deviations; the mode amplitudes are varied to obtain the best fit of the sum of the optical thicknesses due to the individual modes to the sunphotometer measurements. We compare the results of these two retrieval methods and present comparisons of the retrieved size distributions with in situ measurements taken with an aerodynamic particle sizer and a differential mobility analyzer system aboard the Twin Otter aircraft.
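The multimodal technique can be sketched in simplified form: with mode radii and geometric standard deviations fixed, retrieving the amplitudes is a nonnegative linear least-squares problem. The sketch below skips the optical kernel that maps each mode to optical thickness and fits a synthetic size distribution directly; all mode parameters and amplitudes are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Radius grid (μm)
r = np.logspace(-2, 1, 120)

def lognormal(r, r_m, sigma_g):
    """Lognormal distribution dN/dln r with mode radius r_m and
    geometric standard deviation sigma_g (unit amplitude)."""
    return np.exp(-0.5 * (np.log(r / r_m) / np.log(sigma_g)) ** 2) \
           / (np.sqrt(2 * np.pi) * np.log(sigma_g))

# Predefined modes (fixed mode radii and geometric standard deviations),
# as in the multimodal technique; only the amplitudes are retrieved.
modes = [(0.05, 1.6), (0.2, 1.8), (1.5, 2.0)]
A = np.column_stack([lognormal(r, rm, sg) for rm, sg in modes])

# Synthetic "observation": a known mixture of the modes plus noise
true_amps = np.array([3.0, 1.2, 0.4])
observed = A @ true_amps \
           + 0.01 * np.random.default_rng(1).standard_normal(len(r))

amps, resid = nnls(A, observed)   # nonnegative amplitudes best fitting data
```

In the actual retrieval, each column of A would instead hold the spectral optical thickness contributed by a unit-amplitude mode, computed from Mie theory, and `observed` would be the sunphotometer spectrum.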
24 CFR 266.200 - Eligible projects.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...
24 CFR 266.200 - Eligible projects.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...
24 CFR 266.200 - Eligible projects.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...
24 CFR 266.200 - Eligible projects.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Eligible projects. 266.200 Section... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Program Requirements § 266.200 Eligible projects. (a) Minimum project size. Projects insured under this part must consist of...
Spacecraft-borne long life cryogenic refrigeration: Status and trends
NASA Technical Reports Server (NTRS)
Johnson, A. L.
1983-01-01
The status of cryogenic refrigerator development intended for, or possibly applicable to, long life spacecraft-borne application is reviewed. Based on these efforts, the general development trends are identified. Using currently projected technology needs, the various trends are compared and evaluated. The linear drive, non-contacting bearing Stirling cycle refrigerator concept appears to be the best current approach that will meet the technology projection requirements for spacecraft-borne cryogenic refrigerators. However, a multiply redundant set of lightweight, moderate life, moderate reliability Stirling cycle cryogenic refrigerators using high-speed linear drive and sliding contact bearings may possibly suffice.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low, so random assignment of clusters to treatment conditions is likely to result in covariate imbalance. No studies have quantified the consequences of covariate imbalance in cluster randomized trials for parameter and standard error bias and for power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss of power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed, and power levels based on the unadjusted model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account when calculating the sample size of a cluster randomized trial; otherwise, more sophisticated methods to randomize clusters to treatments, such as stratification or balance algorithms, should be used. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe parameter and standard error bias and insufficient power levels.
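A minimal numpy-only simulation, in the spirit of (but much simpler than) the study's mixed-model setup, illustrates how cluster-level covariate imbalance biases the unadjusted treatment effect estimate while the adjusted estimate stays nearly unbiased. All design numbers are made up, and cluster-mean regression stands in for the linear mixed model (equivalent here because the covariate is cluster-level).

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, cluster_size = 10, 20
icc, cov_effect, trt_effect = 0.1, 1.0, 0.5

# Imbalanced design: the binary cluster-level covariate is present in 4/5
# treatment clusters but only 1/5 control clusters
treatment = np.repeat([0, 1], n_clusters // 2)
covariate = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

est_unadj, est_adj = [], []
for _ in range(500):
    u = rng.normal(0, np.sqrt(icc), n_clusters)             # cluster effects
    e = rng.normal(0, np.sqrt(1 - icc), (n_clusters, cluster_size))
    y = trt_effect * treatment[:, None] \
        + cov_effect * covariate[:, None] + u[:, None] + e
    ym = y.mean(axis=1)                                     # cluster means
    # Unadjusted: difference between treated and control cluster means
    est_unadj.append(ym[treatment == 1].mean() - ym[treatment == 0].mean())
    # Adjusted: regress cluster means on treatment and covariate
    X = np.column_stack([np.ones(n_clusters), treatment, covariate])
    est_adj.append(np.linalg.lstsq(X, ym, rcond=None)[0][1])

bias_unadj = np.mean(est_unadj) - trt_effect
bias_adj = np.mean(est_adj) - trt_effect
```

With this degree of imbalance (0.8 vs 0.2 covariate prevalence) and a unit covariate effect, the unadjusted estimate absorbs roughly 0.6 units of covariate effect as spurious treatment effect, matching the paper's message that the unadjusted model can be badly biased.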
A multiscale filter for noise reduction of low-dose cone beam projections.
Yao, Weiguang; Farr, Jonathan B
2015-08-21
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x²/(2σ_f²)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression for σ_f, the scale of the filter, by minimizing the local noise-to-signal ratio, and analytically derived the variance of the residual noise from the Poisson or compound Poisson process after Gaussian filtering. From the derived analytical form of the residual noise variance, the optimal σ_f² is proved to be proportional to the noiseless fluence, modulated by the local structure strength expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After filtering the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection of 1024 × 768 pixels.
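A heavily simplified, hypothetical sketch of the adaptive idea (not the paper's derivation): smooth strongly where the projection is locally flat and weakly near structure, with gradient magnitude standing in for the paper's local linear fitting error.

```python
import numpy as np
from scipy import ndimage

def adaptive_gaussian(proj, sigma_max=3.0):
    """Simplified noise-adaptive smoothing: blend a strongly smoothed copy
    with the raw projection, smoothing less where the gradient magnitude
    of a lightly smoothed copy indicates structure."""
    smooth = ndimage.gaussian_filter(proj, sigma_max)
    gy, gx = np.gradient(ndimage.gaussian_filter(proj, 1.0))
    structure = np.hypot(gx, gy)
    tau = 2.0 * np.median(structure) + 1e-12
    weight = np.exp(-(structure / tau) ** 2)  # ~1 in flat areas, ~0 at edges
    return weight * smooth + (1.0 - weight) * proj

# Synthetic noisy projection: a flat background with a sharp insert
rng = np.random.default_rng(2)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
noisy = truth + 0.2 * rng.standard_normal(truth.shape)
den = adaptive_gaussian(noisy)
```

The paper's filter goes further: it derives the per-pixel scale σ_f analytically from the Poisson statistics, rather than using the heuristic blend shown here.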
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R = -0.79) as well as between mean grain size and susceptibility (R = -0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in continental shelf systems worldwide.
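The multiple-regression step can be sketched with made-up calibration data: predict mean grain size from conductivity and susceptibility by ordinary least squares. The numbers below are illustrative only, not the survey's measurements.

```python
import numpy as np

# Hypothetical co-located calibration samples: electric conductivity (S/m),
# magnetic susceptibility (1e-5 SI), and mean grain size (phi units; larger
# phi means finer sediment)
cond    = np.array([0.9, 1.1, 1.4, 1.6, 1.9, 2.3, 2.6, 3.0])
susc    = np.array([12., 15., 22., 25., 33., 40., 47., 55.])
mean_gs = np.array([2.1, 2.4, 3.0, 3.3, 3.9, 4.5, 4.9, 5.6])

# Multiple linear regression: mean_gs ≈ b0 + b1*cond + b2*susc
X = np.column_stack([np.ones_like(cond), cond, susc])
beta, *_ = np.linalg.lstsq(X, mean_gs, rcond=None)

# Coefficient of determination of the fitted transfer function
pred = X @ beta
r2 = 1 - np.sum((mean_gs - pred) ** 2) \
       / np.sum((mean_gs - mean_gs.mean()) ** 2)
```

Once `beta` is calibrated from a few reference samples, the same transfer function can be applied to the continuous conductivity/susceptibility profiles along survey lines, which is the extrapolation step the abstract describes.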
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated that the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. Its purpose is to improve the quality of control of parts' linear sizes by the two-point measurement method, and its task is to investigate methodical expanded uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes of theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the lobed (cut) and curvature types, and it arises in measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
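One way to see why the two-point method carries methodical uncertainty for lobed shape deviations is that odd-harmonic lobing leaves every two-point (caliper) reading unchanged. A hedged sketch with an assumed 3-lobe radial profile r(θ) = R + a·cos 3θ (an idealized model, not the article's exact geometry):

```python
from math import cos, pi

R, a = 10.0, 0.05   # nominal radius and 3-lobe form deviation amplitude (mm)

def two_point_size(theta):
    """Caliper reading across the profile: sum of radii at opposite angles."""
    r = lambda t: R + a * cos(3 * t)
    return r(theta) + r(theta + pi)

readings = [two_point_size(k * pi / 36) for k in range(36)]
# cos(3(t + pi)) = -cos(3t), so every reading equals 2R: the lobing is
# invisible to the two-point method even though the profile is not round.
```

The 0.1 mm out-of-roundness never appears in any reading, which is exactly the kind of methodical uncertainty the article attributes to two-point measurement of shape-deviated elements.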
Radke, Wolfgang
2004-03-05
Simulations of the distribution coefficients of linear polymers and regular combs with various spacings between the arms have been performed. The distribution coefficients were plotted as a function of the number of segments in order to compare the size exclusion chromatography (SEC) elution behavior of combs relative to linear molecules. By comparing the simulated SEC calibration curves it is possible to predict the elution behavior of comb-shaped polymers relative to linear ones. In order to compare the results obtained by computer simulations with experimental data, a variety of comb-shaped polymers varying in side chain length, spacing between the side chains and molecular weight of the backbone were analyzed by SEC with light-scattering detection. It was found that the computer simulations could predict the molecular weights of linear molecules having the same retention volume with an accuracy of about 10%; i.e., the error in the molecular weight obtained by calculating the molecular weight of the comb polymer from a calibration curve constructed using linear standards and the results of the computer simulations is of the same magnitude as the experimental error of absolute molecular weight determination.
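The linear-equivalent molecular weight assignment implied above can be sketched with the classical log-linear SEC calibration, log10 M = a + b·Ve, fitted to linear standards. The standards below are hypothetical, and this is the textbook calibration form, not necessarily the exact procedure used in the paper.

```python
from math import log10

# Hypothetical linear standards: (elution volume in mL, molar mass in g/mol)
standards = [(12.0, 1.0e6), (14.0, 1.0e5), (16.0, 1.0e4), (18.0, 1.0e3)]

# Simple least-squares fit of log10(M) = a + b * Ve.
n = len(standards)
sx = sum(v for v, _ in standards)
sy = sum(log10(m) for _, m in standards)
sxx = sum(v * v for v, _ in standards)
sxy = sum(v * log10(m) for v, m in standards)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def linear_equivalent_mass(ve):
    """Molar mass of a linear chain eluting at the same volume ve."""
    return 10 ** (a + b * ve)
```

A comb polymer eluting at, say, 15.0 mL would then be assigned the molar mass of the linear chain eluting there, which is the quantity the simulations aim to predict.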
Tsai, Hsiu-Hui; Huang, Chih-Hung; Tessmer, Ingrid; Erie, Dorothy A.; Chen, Carton W.
2011-01-01
Linear chromosomes and linear plasmids of Streptomyces possess covalently bound terminal proteins (TPs) at the 5′ ends of their telomeres. These TPs are proposed to act as primers for DNA synthesis that patches the single-stranded gaps at the 3′ ends during replication. Most (‘archetypal’) Streptomyces TPs (designated Tpg) are highly conserved in size and sequence. In addition, there are a number of atypical TPs with heterologous sequences and sizes, one of which is Tpc that caps SCP1 plasmid of Streptomyces coelicolor. Interactions between the TPs on the linear Streptomyces replicons have been suggested by electrophoretic behaviors of TP-capped DNA and circular genetic maps of Streptomyces chromosomes. Using chemical cross-linking, we demonstrated intramolecular and intermolecular interactions in vivo between Tpgs, between Tpcs and between Tpg and Tpc. Interactions between the chromosomal and plasmid telomeres were also detected in vivo. The intramolecular telomere interactions produced negative superhelicity in the linear DNA, which was relaxed by topoisomerase I. Such intramolecular association between the TPs poses a post-replicational complication in the formation of a pseudo-dimeric structure that requires resolution by exchanging TPs or DNA. PMID:21109537
Radio Propagation Prediction Software for Complex Mixed Path Physical Channels
2006-08-14
4.4.6. Applied Linear Regression Analysis in the Frequency Range 1-50 MHz; 4.4.7. Projected Scaling to... In order to construct a comprehensive numerical algorithm capable of
Reaction-Infiltration Instabilities in Fractured and Porous Rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Anthony
In this project we are developing a multiscale analysis of the evolution of fracture permeability, using numerical simulations and linear stability analysis. Our simulations include fully three-dimensional simulations of the fracture topography, fluid flow, and reactant transport, two-dimensional simulations based on aperture models, and linear stability analysis.
Mathematical Modelling in Engineering: An Alternative Way to Teach Linear Algebra
ERIC Educational Resources Information Center
Domínguez-García, S.; García-Planas, M. I.; Taberna, J.
2016-01-01
Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning, giving a dynamic…
Electron-Phonon Systems on a Universal Quantum Computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macridin, Alexandru; Spentzouris, Panagiotis; Amundson, James
We present an algorithm that extends existing quantum algorithms for simulating fermion systems in quantum chemistry and condensed matter physics to include phonons. The phonon degrees of freedom are represented with exponential accuracy on a truncated Hilbert space with a size that increases linearly with the cutoff of the maximum phonon number. The additional number of qubits required by the presence of phonons scales linearly with the size of the system. The additional circuit depth is constant for systems with finite-range electron-phonon and phonon-phonon interactions and linear for long-range electron-phonon interactions. Our algorithm for a Holstein polaron problem was implemented on an Atos Quantum Learning Machine (QLM) quantum simulator employing the Quantum Phase Estimation method. The energy and the phonon number distribution of the polaron state agree with exact diagonalization results for weak, intermediate and strong electron-phonon coupling regimes.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
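The Kreisselmeier-Steinhauser (KS) function mentioned above aggregates many constraints g_i(x) ≤ 0 into a single smooth, conservative envelope. A minimal sketch with illustrative constraint values (not the project's implementation):

```python
from math import exp, log

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_i:

        KS(g) = gmax + (1/rho) * ln( sum_i exp(rho * (g_i - gmax)) )

    A smooth upper bound on max(g); the shift by gmax avoids overflow.
    Larger rho tracks the true maximum more tightly."""
    gmax = max(g)
    return gmax + log(sum(exp(rho * (gi - gmax)) for gi in g)) / rho

g = [-0.3, -0.05, -0.6, -0.02]   # illustrative constraint values
agg = ks(g)                       # single aggregated constraint value
```

Enforcing `agg <= 0` in place of the individual constraints is what reduces the constraint count to one, at the cost of some conservatism bounded by ln(n)/rho.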
Small field output factors evaluation with a microDiamond detector over 30 Italian centers.
Russo, Serenella; Reggiori, Giacomo; Cagni, Elisabetta; Clemente, Stefania; Esposito, Marco; Falco, Maria Daniela; Fiandra, Christian; Giglioli, Francesca Romana; Marinelli, Marco; Marino, Carmelo; Masi, Laura; Pimpinella, Maria; Stasi, Michele; Strigari, Lidia; Talamonti, Cinzia; Villaggi, Elena; Mancosu, Pietro
2016-12-01
The aim of the study was a multicenter evaluation of MLC- and jaw-defined small field output factors (OF) for different linear accelerator manufacturers and different beam energies using the latest commercially available synthetic single-crystal diamond detector. The feasibility of providing an experimental OF data set, useful for on-site measurement validation, was also evaluated. This work was performed in the framework of the Italian Association of Medical Physics (AIFM) SBRT working group. The project was subdivided into two phases: in the first phase each center measured OFs using its own routine detector for nominal field sizes ranging from 10×10 cm² to 0.6×0.6 cm². In the second phase, the measurements were repeated in all centers using the PTW 60019 microDiamond detector. The project enrolled 30 Italian centers. Micro-ion chambers and silicon diodes were used for OF measurements in 24 and 6 centers, respectively. Gafchromic films and TLDs were used for very small field OFs in 3 centers and 1 center, respectively. Regarding the measurements performed with the users' detectors, OF standard deviations (SD) for field sizes down to 2×2 cm² were in all cases <2.7%. In the second phase, a reduction in SD of around 50% was obtained using the microDiamond detector. The measured values presented in this multicenter study provide a consistent dataset for OFs that could be a useful tool for improving dosimetric procedures in centers. The microDiamond data present a small variation among the centers, confirming that this detector can contribute to improving overall accuracy in radiotherapy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
1976-06-01
The Effect of Pixel Size on the Accuracy of Orthophoto Production
NASA Astrophysics Data System (ADS)
Kulur, S.; Yildiz, F.; Selcuk, O.; Yildiz, M. A.
2016-06-01
In our country, orthophoto products are used by the public and private sectors for engineering services and infrastructure projects. Orthophotos are particularly preferred because they are faster and more economical to produce than vector-based digital photogrammetric products. Today, digital orthophotos provide the accuracy expected for engineering and infrastructure projects. In this study, the accuracy of orthophotos produced with pixel sizes at different ground sampling intervals is tested against the expectations of engineering and infrastructure projects.
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Clustering determines the dynamics of complex contagions in multiplex networks
NASA Astrophysics Data System (ADS)
Zhuang, Yong; Arenas, Alex; Yaǧan, Osman
2017-01-01
We present the mathematical analysis of generalized complex contagions in a class of clustered multiplex networks. The model is intended to understand the spread of influence, or any other spreading process implying a threshold dynamics, in setups of interconnected networks with significant clustering. The contagion is assumed to be general enough to account for a content-dependent linear threshold model, where each link type has a different weight (for spreading influence) that may depend on the content (e.g., product, rumor, political view) that is being spread. Using the generating-functions formalism, we determine the conditions, probability, and expected size of the emergent global cascades. This analysis provides a generalization of previous approaches and is especially useful in problems related to spreading and percolation. The results present nontrivial dependencies between the clustering coefficient of the networks and their average degree. In particular, several phase transitions are shown to occur depending on these descriptors. Generally speaking, our findings reveal that increasing clustering decreases the probability and size of global cascades; however, this tendency changes with the average degree: above a certain average degree, clustering instead favors the probability and size of the contagion. By comparing the dynamics of complex contagions over multiplex networks and their monoplex projections, we demonstrate that ignoring link types and aggregating network layers may lead to inaccurate conclusions about contagion dynamics, particularly when the correlation of degrees between layers is high.
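The threshold dynamics can be illustrated with a toy single-layer simulation: a Watts-style linear threshold cascade on an Erdős-Rényi graph. The parameters are arbitrary, and this sketch does not reproduce the paper's multiplex generating-function analysis.

```python
import random

def threshold_cascade(n, p_edge, seeds, threshold, rng):
    """Run a linear-threshold cascade: an inactive node activates once the
    fraction of its active neighbours reaches `threshold`; iterate to a
    fixed point and return the final cascade size."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):                      # Erdos-Renyi G(n, p_edge)
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].add(j)
                nbrs[j].add(i)
    active = set(rng.sample(range(n), seeds))
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in active or not nbrs[i]:
                continue
            if len(nbrs[i] & active) / len(nbrs[i]) >= threshold:
                active.add(i)
                changed = True
    return len(active)

rng = random.Random(1)
cascade_size = threshold_cascade(n=200, p_edge=0.03, seeds=5,
                                 threshold=0.18, rng=rng)
```

Sweeping `p_edge` (average degree) and clustering the graph is how one would probe, numerically, the phase transitions the generating-function analysis derives analytically.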
Forecasted range shifts of arid-land fishes in response to climate change
Whitney, James E.; Whittier, Joanna B.; Paukert, Craig P.; Olden, Julian D.; Strecker, Angela L.
2017-01-01
Climate change is poised to alter the distributional limits, center, and size of many species. Traits may influence different aspects of range shifts, with trophic generality facilitating shifts at the leading edge, and greater thermal tolerance limiting contractions at the trailing edge. The generality of relationships between traits and range shifts remains ambiguous however, especially for imperiled fishes residing in xeric riverscapes. Our objectives were to quantify contemporary fish distributions in the Lower Colorado River Basin, forecast climate change by 2085 using two general circulation models, and quantify shifts in the limits, center, and size of fish elevational ranges according to fish traits. We examined relationships among traits and range shift metrics either singly using univariate linear modeling or combined with multivariate redundancy analysis. We found that trophic and dispersal traits were associated with shifts at the leading and trailing edges, respectively, although projected range shifts were largely unexplained by traits. As expected, piscivores and omnivores with broader diets shifted upslope most at the leading edge while more specialized invertivores exhibited minimal changes. Fishes that were more mobile shifted upslope most at the trailing edge, defying predictions. No traits explained changes in range center or size. Finally, current preference explained multivariate range shifts, as fishes with faster current preferences exhibited smaller multivariate changes. Although range shifts were largely unexplained by traits, more specialized invertivorous fishes with lower dispersal propensity or greater current preference may require the greatest conservation efforts because of their limited capacity to shift ranges under climate change.
Crowdsourcing lung nodules detection and annotation
NASA Astrophysics Data System (ADS)
Boorboor, Saeed; Nadeem, Saad; Park, Ji Hwan; Baker, Kevin; Kaufman, Arie
2018-03-01
We present crowdsourcing as an additional modality to aid radiologists in the diagnosis of lung cancer from clinical chest computed tomography (CT) scans. More specifically, a complete workflow is introduced which can help maximize the sensitivity of lung nodule detection by utilizing the collective intelligence of the crowd. We combine the concept of overlapping thin-slab maximum intensity projections (TS-MIPs) and cine viewing to render short videos that can be outsourced as an annotation task to the crowd. These videos are generated by linearly interpolating overlapping TS-MIPs of CT slices through the depth of each quadrant of a patient's lung. The resultant videos are outsourced to an online community of non-expert users who, after a brief tutorial, annotate suspected nodules in these video segments. Using our crowdsourcing workflow, we achieved a lung nodule detection sensitivity of over 90% for 20 patient CT datasets (containing 178 lung nodules with sizes between 1 and 30 mm), and only 47 false positives from a total of 1021 annotations on nodules of all sizes (96% sensitivity for nodules > 4 mm). These results show that crowdsourcing can be a robust and scalable modality to aid radiologists in screening for lung cancer, directly or in combination with computer-aided detection (CAD) algorithms. For CAD algorithms, the presented workflow can provide highly accurate training data to overcome the high false-positive rate (per scan) problem. We also provide, for the first time, analysis on nodule size and position which can help improve CAD algorithms.
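The overlapping thin-slab MIP construction can be sketched directly: each output frame is a voxel-wise maximum over a slab of consecutive slices, with slab starts offset by less than the slab depth so that neighbouring slabs overlap. The toy volume below stands in for real CT data, and the cine interpolation step is omitted.

```python
def ts_mips(volume, slab, stride):
    """Overlapping thin-slab maximum intensity projections.
    volume: list of 2-D slices (each a list of rows of numbers);
    slab:   number of consecutive slices per projection;
    stride: offset between slab starts (stride < slab gives overlap)."""
    frames = []
    for start in range(0, len(volume) - slab + 1, stride):
        stack = volume[start:start + slab]
        rows, cols = len(stack[0]), len(stack[0][0])
        frames.append([[max(s[r][c] for s in stack) for c in range(cols)]
                       for r in range(rows)])
    return frames

# Tiny synthetic volume: 6 slices of 2x2 voxels with one bright "nodule".
vol = [[[0, 0], [0, 0]] for _ in range(6)]
vol[3] = [[0, 0], [9, 0]]            # bright voxel in slice 3, row 1, col 0
frames = ts_mips(vol, slab=4, stride=2)
```

Because the slabs overlap, the bright voxel appears in both output frames, which is what keeps a small nodule visible across consecutive video frames.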
50 Years of ``Scaling'' Jack Kilby's Invention
NASA Astrophysics Data System (ADS)
Doering, Robert
2008-03-01
This year is the 50th anniversary of Jack Kilby's 1958 invention of the integrated circuit (IC), for which he won the 2000 Nobel Prize in Physics. Since that invention in a laboratory at Texas Instruments, IC components have been continuously miniaturized, which has resulted in exponential improvement trends in their performance, energy efficiency, and cost per function. These improvements have created a semiconductor industry that has grown to over $250B in annual sales. The process of reducing integrated-circuit component size and associated parameters in a coordinated fashion is traditionally called ``feature-size scaling.'' Kilby's original circuit had active (transistor) and passive (resistor, capacitor) components with dimensions of a few millimeters. Today, the minimum feature sizes on integrated circuits are less than 30 nanometers for patterned line widths and down to about one nanometer for film thicknesses. Thus, we have achieved about five orders of magnitude in linear-dimension scaling over the past fifty years, which has resulted in about ten orders of magnitude increase in the density of IC components, a representation of ``Moore's Law.'' As IC features are approaching atomic dimensions, increasing emphasis is now being given to the parallel effort of further diversifying the types of components in integrated circuits. This is called ``functional scaling'' and ``more than Moore.'' Of course, the enablers for both types of scaling have been developed at many laboratories around the world. This talk will review a few of the highlights in scaling and its applications from R&D projects at Texas Instruments.
ERIC Educational Resources Information Center
Payton, Spencer D.
2017-01-01
This study aimed to explore how inquiry-oriented teaching could be implemented in an introductory linear algebra course that, due to various constraints, may not lend itself to inquiry-oriented teaching. In particular, the course in question has a traditionally large class size, limited amount of class time, and is often coordinated with other…
Lack of Set Size Effects in Spatial Updating: Evidence for Offline Updating
ERIC Educational Resources Information Center
Hodgson, Eric; Waller, David
2006-01-01
Four experiments required participants to keep track of the locations of (i.e., update) 1, 2, 3, 4, 6, 8, 10, or 15 target objects after rotating. Across all conditions, updating was unaffected by set size. Although some traditional set size effects (i.e., a linear increase of latency with memory load) were observed under some conditions, these…
Ion size effects on the electrokinetics of spherical particles in salt-free concentrated suspensions
NASA Astrophysics Data System (ADS)
Roa, Rafael; Carrique, Felix; Ruiz-Reina, Emilio
2012-02-01
In this work we study the influence of counterion size on the electrophoretic mobility and on the dynamic mobility of a suspended spherical particle in a salt-free concentrated colloidal suspension. Salt-free suspensions contain charged particles and the added counterions that counterbalance their surface charge. A spherical cell model approach is used to take into account particle-particle electro-hydrodynamic interactions in concentrated suspensions. The finite size of the counterions is considered by including an entropic contribution, related to the excluded volume of the ions, in the free energy of the suspension, giving rise to a modified counterion concentration profile. We are interested in the linear response of the system to an electric field, so we solve the different electrokinetic equations using a linear perturbation scheme. We find that the ionic size effect is quite important for moderate to high particle charges at a given particle volume fraction. In addition, for such particle surface charges, both the electrophoretic mobility and the dynamic mobility change more strongly the larger the particle volume fraction, for each ion size. These effects are more pronounced the larger the ionic size.
Size- and shape-dependent surface thermodynamic properties of nanocrystals
NASA Astrophysics Data System (ADS)
Fu, Qingshan; Xue, Yongqiang; Cui, Zixiang
2018-05-01
As fundamental properties, the surface thermodynamic properties of nanocrystals play a key role in physical and chemical changes. However, it remains unclear how size and shape quantitatively influence the surface thermodynamic properties of nanocrystals. Thus, by introducing interface variables into the Gibbs energy and combining the Young-Laplace equation, relations between the surface thermodynamic properties (surface Gibbs energy, surface enthalpy, surface entropy, surface energy and surface heat capacity) and the size of nanocrystals with different shapes were derived. Theoretical estimates of the orders of magnitude of the surface thermodynamic properties of nanocrystals agree with available experimental values. Calculated results for Au, Bi and Al nanocrystals suggest that when r > 10 nm, the surface thermodynamic properties vary linearly with the reciprocal of particle size, while when r < 10 nm, the effect of particle size on the surface thermodynamic properties becomes greater and deviates from linear variation. For nanocrystals with identical equivalent diameter, the more the shape deviates from a sphere, the larger the surface thermodynamic properties (absolute values) are.
Evaluation of an LED Retrofit Project at Princeton University’s Carl Icahn Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Robert G.; Murphy, Arthur L.; Perrin, Tess E.
The LED lighting retrofit at the Carl Icahn Laboratory of the Lewis-Sigler Institute for Integrative Genomics was the first building-wide interior LED project at Princeton University, following the University's experiences from several years of exterior and small-scale interior LED implementation projects. The project addressed three luminaire types – recessed 2x2 troffers, cove and other luminaires using linear T8 fluorescent lamps, and CFL downlights – which combined accounted for over 564,000 kWh of annual energy, over 90% of the lighting energy used in the facility. The Princeton Facilities Engineering staff used a thorough process of evaluating product alternatives before selecting an acceptable LED retrofit solution for each luminaire type. Overall, 815 2x2 luminaires, 550 linear fluorescent luminaires, and 240 downlights were converted to LED as part of this project. Based solely on the reductions in wattage in converting from the incumbent fluorescent lamps to LED retrofit kits, the annual energy savings from the project was over 190,000 kWh, a savings of 37%. An additional 125,000 kWh of energy savings is expected from the implementation of occupancy and task-tuning control solutions, which will bring the total savings for the project to 62%.
Factors Affecting Acoustics and Speech Intelligibility in the Operating Room: Size Matters
Bennett, Christopher L.; Horn, Danielle Bodzin; Dudaryk, Roman
2017-01-01
INTRODUCTION: Noise in health care settings has increased since 1960 and represents a significant source of dissatisfaction among staff and patients and risk to patient safety. Operating rooms (ORs) in which effective communication is crucial are particularly noisy. Speech intelligibility is impacted by noise, room architecture, and acoustics. For example, sound reverberation time (RT60) increases with room size, which can negatively impact intelligibility, while room objects are hypothesized to have the opposite effect. We explored these relationships by investigating room construction and acoustics of the surgical suites at our institution. METHODS: We studied our ORs during times of nonuse. Room dimensions were measured to calculate room volumes (VR). Room content was assessed by estimating size and assigning items into 5 volume categories to arrive at an adjusted room content volume (VC) metric. Psychoacoustic analyses were performed by playing sweep tones from a speaker and recording the impulse responses (ie, resulting sound fields) from 3 locations in each room. The recordings were used to calculate 6 psychoacoustic indices of intelligibility. Multiple linear regression was performed using VR and VC as predictor variables and each intelligibility index as an outcome variable. RESULTS: A total of 40 ORs were studied. The surgical suites were characterized by a large degree of construction and surface finish heterogeneity and varied in size from 71.2 to 196.4 m³ (average VR = 131.1 [34.2] m³). An insignificant correlation was observed between VR and VC (Pearson correlation = 0.223, P = .166). Multiple linear regression model fits and β coefficients for VR were highly significant for each of the intelligibility indices and were best for RT60 (R² = 0.666, F(2, 37) = 39.9, P < .0001). For Dmax (maximum distance where there is <15% loss of consonant articulation), both VR and VC β coefficients were significant.
For RT60 and Dmax, after controlling for VC, partial correlations were 0.825 (P < .0001) and 0.718 (P < .0001), respectively, while after controlling for VR, partial correlations were −0.322 (P = .169) and 0.381 (P < .05), respectively. CONCLUSIONS: Our results suggest that the size and contents of an OR can predict a range of psychoacoustic indices of speech intelligibility. Specifically, increasing OR size correlated with worse speech intelligibility, while increasing amounts of OR contents correlated with improved speech intelligibility. This study provides valuable descriptive data and a predictive method for identifying existing ORs that may benefit from acoustic modifiers (eg, sound absorption panels). Additionally, it suggests that room dimensions and projected clinical use should be considered during the design phase of OR suites to optimize acoustic performance. PMID:28525511
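The first-order partial correlations reported above follow the standard formula r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz²)(1 - r_yz²)). A sketch with hypothetical pairwise correlations, not the study's data:

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r_xy.z: the correlation between
    x and y after removing the linear effect of z from both."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical pairwise correlations between an intelligibility index (x),
# room volume VR (y), and contents volume VC (z):
r = partial_corr(r_xy=0.80, r_xz=0.22, r_yz=0.22)
```

When z is nearly uncorrelated with x and y (as VR and VC were here, r = 0.223), the partial correlation stays close to the raw correlation, consistent with the large VR partial correlations reported.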
What Have Researchers Learned from Project STAR?
ERIC Educational Resources Information Center
Schanzenbach, Diane Whitmore
2007-01-01
Project STAR (Student/Teacher Achievement Ratio) was a large-scale randomized trial of reduced class sizes in kindergarten through the third grade. Because of the scope of the experiment, it has been used in many policy discussions. For example, the California statewide class-size-reduction policy was justified, in part, by the successes of…
7 CFR 1822.271 - Processing applications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...
7 CFR 1822.271 - Processing applications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...
7 CFR 1822.271 - Processing applications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...
7 CFR 1822.271 - Processing applications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... specific provisions of State law under which the applicant is organized; a copy of the applicant's articles... project. (i) Location and size of tract or tracts to be bought and/or developed. (ii) Number and size of... contribution to the project. (8) A map showing the location of and other supporting information on neighborhood...
Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision
ERIC Educational Resources Information Center
Prull, Matthew W.; Banks, William P.
2005-01-01
We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…
A characterization of linearly repetitive cut and project sets
NASA Astrophysics Data System (ADS)
Haynes, Alan; Koivusalo, Henna; Walton, James
2018-02-01
For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
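For reference, the linear repetitivity property being characterized is usually stated as follows. This is a sketch of the standard definition from the aperiodic order literature; conventions differ by constants.

```latex
% Linear repetitivity (sketch of the standard definition):
\textbf{Definition.} A Delone set $Y \subset \mathbb{R}^d$ is
\emph{linearly repetitive} if there exists a constant $C > 0$ such that,
for every $r \ge 1$, every ball of radius $C r$ in $\mathbb{R}^d$
contains a translated copy of every patch of radius $r$ occurring in $Y$.
```

Linear repetitivity forces patches to recur at a uniformly bounded linear scale, which is what guarantees the subadditive ergodic theorems discussed in the abstract.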
Song, Kang-Ho; Fan, Alexander C; Hinkle, Joshua J; Newman, Joshua; Borden, Mark A; Harvey, Brandon K
2017-01-01
Focused ultrasound with microbubbles is being developed to transiently, locally and noninvasively open the blood-brain barrier (BBB) for improved pharmaceutical delivery. Prior work has demonstrated that, for a given concentration dose, microbubble size affects both the intravascular circulation persistence and extent of BBB opening. When matched to gas volume dose, however, the circulation half-life was found to be independent of microbubble size. In order to determine whether this holds true for BBB opening as well, we independently measured the effects of microbubble size (2 vs. 6 µm diameter) and concentration, covering a range of overlapping gas volume doses (1-40 µL/kg). We first demonstrated precise targeting and a linear dose-response of Evans Blue dye extravasation to the rat striatum for a set of constant microbubble and ultrasound parameters. We found that dye extravasation increased linearly with gas volume dose, with data points from both microbubble sizes collapsing to a single line. A linear trend was observed for both the initial sonication (R² = 0.90) and a second sonication on the contralateral side (R² = 0.68). Based on these results, we conclude that microbubble gas volume dose, not size, determines the extent of BBB opening by focused ultrasound (1 MHz, ~0.5 MPa at the focus). This result may simplify planning for focused ultrasound treatments by constraining the protocol to a single microbubble parameter - gas volume dose - which gives equivalent results for varying size distributions. Finally, using optimal parameters determined for Evans Blue, we demonstrated gene delivery and expression using a viral vector, dsAAV1-CMV-EGFP, one week after BBB disruption, which allowed us to qualitatively evaluate neuronal health.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
A double B1-mode 4-layer laminated piezoelectric linear motor.
Li, Xiaotian; Chen, Zhijiang; Dong, Shuxiang
2012-12-01
We report a miniature piezoelectric ultrasonic linear motor that is made of four Pb(Zr,Ti)O(3) (PZT) piezoelectric ceramic layers for low-voltage work. The 4-layer piezoelectric laminate works in two orthogonal first-bending modes for producing elliptical oscillations, which are then used to drive a contacting slider into continuous linear motion. Experimental results show that the miniature linear motor (size: 4 × 4 × 12 mm, weight: 1.7 g) can generate a large driving force of 0.48 N and a linear motion speed of up to 160 mm/s, using a 40 V(pp)/mm voltage drive at its resonance frequency of 64.5 kHz. The maximum efficiency of the linear motor is 30%.
Cheng, Rebecca Wing-yi; Lam, Shui-fong; Chan, Joanne Chung-yan
2008-06-01
There has been an ongoing debate about the inconsistent effects of heterogeneous ability grouping on students in small group work such as project-based learning. The present research investigated the roles of group heterogeneity and processes in project-based learning. At the student level, we examined the interaction effect between students' within-group achievement and group processes on their self- and collective efficacy. At the group level, we examined how group heterogeneity was associated with the average self- and collective efficacy reported by the groups. The participants were 1,921 Hong Kong secondary students in 367 project-based learning groups. Student achievement was determined by school examination marks. Group processes, self-efficacy and collective efficacy were measured by a student-report questionnaire. Hierarchical linear modelling was used to analyse the nested data. When individual students in each group were taken as the unit of analysis, results indicated an interaction effect of group processes and students' within-group achievement on the discrepancy between collective- and self-efficacy. When compared with low achievers, high achievers reported lower collective efficacy than self-efficacy when group processes were of low quality. However, both low and high achievers reported higher collective efficacy than self-efficacy when group processes were of high quality. With 367 groups taken as the unit of analysis, the results showed that group heterogeneity, group gender composition and group size were not related to the discrepancy between collective- and self-efficacy reported by the students. Group heterogeneity was not a determinant factor in students' learning efficacy. Instead, the quality of group processes played a pivotal role because both high and low achievers were able to benefit when group processes were of high quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method based on linear least-squares fitting, which is very simple in principle and in calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tim Roney; Robert Seifert; Bob Pink
2011-09-01
The field-portable Digital Radiography and Computed Tomography (DRCT) x-ray inspection systems developed for the Project Manager for NonStockpile Chemical Materiel (PMNSCM) over the past 13 years have used linear diode detector arrays from two manufacturers, Thomson and Thales. These two manufacturers no longer produce this type of detector. To ensure the long-term viability of the portable DRCT single-munitions inspection systems and to improve their imaging capabilities, this project has been investigating improved, commercially available detectors. During FY-10, detectors were evaluated, and one in particular, manufactured by Detection Technologies (DT), Inc., was acquired for possible integration into the DRCT systems. The remainder of this report describes the work performed in FY-11 to complete the evaluations and fully integrate the detector onto a representative DRCT platform.
Sediment laboratory quality-assurance project: studies of methods and materials
Gordon, J.D.; Newland, C.A.; Gray, J.R.
2001-01-01
In August 1996 the U.S. Geological Survey initiated the Sediment Laboratory Quality-Assurance project, part of the National Sediment Laboratory Quality-Assurance program. This paper addresses the findings of the sand/fine separation analysis completed for the single-blind reference sediment-sample project and differences in reported results between two different analytical procedures. From the results it is evident that an incomplete separation of fine- and sand-size material commonly occurs, resulting in the classification of some of the fine-size material as sand-size material. Electron microscopy analysis supported the hypothesis that the negative bias for fine-size material and the positive bias for sand-size material are largely due to aggregation of some of the fine-size material into sand-size particles and adherence of fine-size material to the sand-size grains. Electron microscopy analysis showed that preserved river water that was low in dissolved solids and specific conductance, with neutral pH, showed less aggregation and adhesion than preserved river water that was higher in dissolved solids and specific conductance, with basic pH. Bacteria were also found growing in the matrix, which may enhance fine-size material aggregation through their adhesive properties. Differences between sediment-analysis methods were also investigated as part of this study. Suspended-sediment concentration results obtained from one participating laboratory that used a total-suspended-solids (TSS) method had greater variability and larger negative biases than results obtained when this laboratory used a suspended-sediment concentration method. When TSS methods were used to analyze the reference samples, the median suspended-sediment concentration percent difference was -18.04 percent. When the laboratory used a suspended-sediment concentration method, the median suspended-sediment concentration percent difference was -2.74 percent.
The percent difference was calculated as follows: percent difference = ((reported mass - known mass) / known mass) × 100.
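As a minimal illustration, the percent-difference calculation defined above can be expressed directly in code (the masses used here are hypothetical):

```python
def percent_difference(reported_mass: float, known_mass: float) -> float:
    """Percent difference as defined above: ((reported - known) / known) * 100."""
    return (reported_mass - known_mass) / known_mass * 100.0

# Hypothetical masses (mg): a reported mass 2.74% below the known mass
print(round(percent_difference(97.26, 100.0), 2))  # -2.74
```

A negative result indicates the laboratory under-reported mass relative to the known reference value, matching the sign convention of the biases discussed above.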
On Generalizations of Cochran’s Theorem and Projection Matrices.
1980-08-01
"Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model," F. Pukelsheim and George P. H. Styan, May 1978. TECHNICAL REPORTS... "with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191. Graybill, F. A. and Marsaglia, G. (1957). "Idempotent matrices and quadratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686. Greub, W. (1975). Linear Algebra (4th ed.).
Jelenkovic, Aline; Yokoyama, Yoshie; Sund, Reijo; Hur, Yoon-Mi; Harris, Jennifer R; Brandt, Ingunn; Nilsen, Thomas Sevenius; Ooki, Syuichi; Ullemar, Vilhelmina; Almqvist, Catarina; Magnusson, Patrik K E; Saudino, Kimberly J; Stazi, Maria A; Fagnani, Corrado; Brescianini, Sonia; Nelson, Tracy L; Whitfield, Keith E; Knafo-Noam, Ariel; Mankuta, David; Abramson, Lior; Cutler, Tessa L; Hopper, John L; Llewellyn, Clare H; Fisher, Abigail; Corley, Robin P; Huibregtse, Brooke M; Derom, Catherine A; Vlietinck, Robert F; Bjerregaard-Andersen, Morten; Beck-Nielsen, Henning; Sodemann, Morten; Krueger, Robert F; McGue, Matt; Pahlen, Shandell; Alexandra Burt, S; Klump, Kelly L; Dubois, Lise; Boivin, Michel; Brendgen, Mara; Dionne, Ginette; Vitaro, Frank; Willemsen, Gonneke; Bartels, Meike; van Beijsterveld, Catharina E M; Craig, Jeffrey M; Saffery, Richard; Rasmussen, Finn; Tynelius, Per; Heikkilä, Kauko; Pietiläinen, Kirsi H; Bayasgalan, Gombojav; Narandalai, Danshiitsoodol; Haworth, Claire M A; Plomin, Robert; Ji, Fuling; Ning, Feng; Pang, Zengchang; Rebato, Esther; Tarnoki, Adam D; Tarnoki, David L; Kim, Jina; Lee, Jooyeon; Lee, Sooji; Sung, Joohon; Loos, Ruth J F; Boomsma, Dorret I; Sørensen, Thorkild I A; Kaprio, Jaakko; Silventoinen, Karri
2018-05-01
There is evidence that birth size is positively associated with height in later life, but it remains unclear whether this is explained by genetic factors or the intrauterine environment. To analyze the associations of birth weight, length and ponderal index with height from infancy through adulthood within mono- and dizygotic twin pairs, which provides insights into the role of genetic and environmental individual-specific factors. This study is based on the data from 28 twin cohorts in 17 countries. The pooled data included 41,852 complete twin pairs (55% monozygotic and 45% same-sex dizygotic) with information on birth weight and a total of 112,409 paired height measurements at ages ranging from 1 to 69 years. Birth length was available for 19,881 complete twin pairs, with a total of 72,692 paired height measurements. The association between birth size and later height was analyzed at both the individual and within-pair level by linear regression analyses. Within twin pairs, regression coefficients showed that a 1-kg increase in birth weight and a 1-cm increase in birth length were associated with 1.14-4.25 cm and 0.18-0.90 cm taller height, respectively. The magnitude of the associations was generally greater within dizygotic than within monozygotic twin pairs, and this difference between zygosities was more pronounced for birth length. Both genetic and individual-specific environmental factors play a role in the association between birth size and later height from infancy to adulthood, with a larger role for genetics in the association with birth length than with birth weight. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
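The within-pair analysis described above can be sketched with a toy simulation: regressing within-pair height differences on within-pair birth-weight differences removes factors shared by co-twins. All values below (effect size, sample size, noise level) are hypothetical, chosen only to fall in the range reported above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 500
d_bw = rng.normal(0.0, 0.4, n_pairs)                   # within-pair birth-weight differences (kg)
d_height = 2.5 * d_bw + rng.normal(0.0, 2.0, n_pairs)  # within-pair height differences (cm)

# Within-pair regression slope: cm of height per kg of birth weight
slope = np.cov(d_bw, d_height)[0, 1] / np.var(d_bw, ddof=1)
print(round(slope, 2))  # recovers roughly the simulated 2.5 cm/kg
```

Comparing this slope between monozygotic and dizygotic pairs, as the study does, is what separates genetic from individual-specific environmental contributions.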
The non-linear relationship between body size and function in parrotfishes
NASA Astrophysics Data System (ADS)
Lokrantz, J.; Nyström, M.; Thyresson, M.; Johansson, C.
2008-12-01
Parrotfishes are a group of herbivores that play an important functional role in structuring benthic communities on coral reefs. Increasingly, these fish are being targeted by fishermen, and resultant declines in biomass and abundance may have severe consequences for the dynamics and regeneration of coral reefs. However, the impact of overfishing extends beyond declining fish stocks. It can also lead to demographic changes within species populations where mean body size is reduced. The effect of reduced mean body size on population dynamics is well described in the literature, but virtually no information exists on how this may influence important ecological functions. This study investigated how one important function, scraping (i.e., the capacity to remove algae and open up bare substratum for coral larval settlement), by three common species of parrotfishes (Scarus niger, Chlorurus sordidus, and Chlorurus strongylocephalus) on coral reefs at Zanzibar (Tanzania) was influenced by the size of individual fishes. There was a non-linear relationship between body size and scraping function for all species examined, and impact through scraping was found to increase markedly when fish reached a size of 15-20 cm. Thus, coral reefs that have a high abundance and biomass of parrotfish may nonetheless be functionally impaired if dominated by small-sized individuals. Reductions in mean body size within parrotfish populations could, therefore, have functional impacts on coral reefs that have previously been overlooked.
Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes
Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.
2015-01-01
Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529
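The near-linear cost scaling suggested above can be sketched as a toy projection. The per-sample figure comes from the abstract; the larger cohort size is hypothetical, and real costs would vary with cloud pricing and pipeline details:

```python
COST_PER_SAMPLE_USD = 7.33  # reported Phase 3 compute cost per sample

def projected_cost(n_samples: int) -> float:
    # Assumes the near-linear scalability described above
    return n_samples * COST_PER_SAMPLE_USD

print(projected_cost(2535))   # ~18581.55 USD, in line with the quoted total for completed jobs
print(projected_cost(10000))  # hypothetical larger cohort
```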
Design study of a re-bunching RFQ for the SPES project
NASA Astrophysics Data System (ADS)
Shin, Seung Wook; Palmieri, A.; Comunian, M.; Grespan, F.; Chai, Jong Seo
2014-05-01
An upgrade to the 2nd generation of the Selective Production of Exotic Species (SPES) project to produce a radioactive ion beam (RIB) has been studied at the Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Legnaro (INFN-LNL). Because of the long distance between the isotope separator online (ISOL) facility and the superconducting quarter-wave resonator (QWR) linac ALPI (Acceleratore Lineare Per Ioni), a new re-buncher cavity must be introduced to maintain high beam quality during beam transport. A particular radio frequency quadrupole (RFQ) structure has been suggested to meet the requirements of this project. A window-type RFQ, which has a high mode separation, lower power dissipation and a compact size compared to the conventional 4-vane RFQ, has been introduced. The RF design has been studied against the requirements of the re-bunching machine for high figures of merit, such as a proper operating frequency, a high shunt impedance, a high quality factor and low power dissipation. A sensitivity analysis of the fabrication and misalignment errors has been conducted. A micro-movement slug tuner has been introduced to compensate for the frequency variations that may occur due to beam loading, thermal instability, the microphonic effect, etc.
Drivers of seabird population recovery on New Zealand islands after predator eradication.
Buxton, Rachel T; Jones, Christopher; Moller, Henrik; Towns, David R
2014-04-01
Eradication of introduced mammalian predators from islands has become increasingly common, with over 800 successful projects around the world. Historically, introduced predators extirpated or reduced the size of many seabird populations, changing the dynamics of entire island ecosystems. Although the primary outcome of many eradication projects is the restoration of affected seabird populations, natural population responses are rarely documented and mechanisms are poorly understood. We used a generic model of seabird colony growth to identify key predictor variables relevant to recovery or recolonization. We used generalized linear mixed models to test the importance of these variables in driving seabird population responses after predator eradication on islands around New Zealand. The most influential variable affecting recolonization of seabirds around New Zealand was the distance to a source population, with few cases of recolonization without a source population ≤25 km away. Colony growth was most affected by metapopulation status; there was little colony growth in species with a declining status. These characteristics may facilitate the prioritization of newly predator-free islands for active management. Although we found some evidence documenting natural recovery, generally this topic was understudied. Our results suggest that in order to guide management strategies, more effort should be allocated to monitoring wildlife response after eradication. © 2014 Society for Conservation Biology.
Sun, Hao; Dul, Mitchell W; Swanson, William H
2006-07-01
The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli.
Hydraulic Jumps on Superhydrophobic Surfaces Exhibiting Ribs and Cavities
NASA Astrophysics Data System (ADS)
Johnson, Michael; Russell, Benton; Maynes, Daniel; Webb, Brent
2009-11-01
We report experimental results characterizing the dynamics of a liquid jet impinging normally on superhydrophobic surfaces spanning the Weber number (based on the jet velocity and diameter) range from 100 to 1400. The superhydrophobic surfaces are fabricated with both silicon and PDMS surfaces and exhibit micro-ribs and cavities coated with a hydrophobic coating. In general, the hydraulic jump exhibits an elliptical shape with the major axis being aligned parallel to the ribs, concomitant with the frictional resistance being smaller in the parallel direction than in the transverse direction. When the water depth downstream of the jump was imposed at a predetermined value, the major and minor axis of the jump increased with decreasing water depth, following classical hydraulic jump behavior. When no water depth was imposed, however, the total projected area of the ellipse exhibited a nearly linear dependence on the jet Weber number, and was nominally invariant with varying hydrophobicity and relative size of the ribs and cavities. For this scenario the Weber number (based on the local radial velocity and water depth prior to the jump) was of order unity at the jump location. The results also reveal that for increasing relative size of the cavities, the ratio of the ellipse axis (major-to-minor) increases.
Zhang, Feng; Liao, Xiangke; Peng, Shaoliang; Cui, Yingbo; Wang, Bingqiang; Zhu, Xiaoqian; Liu, Jie
2016-06-01
The de novo assembly of DNA sequences is increasingly important for biological research in the genomic era. More than a decade after the Human Genome Project, some challenges still exist and new solutions are being explored to improve de novo assembly of genomes. String graph assembler (SGA), based on string graph theory, is a new method/tool developed to address these challenges. In this paper, based on an in-depth analysis of SGA, we prove that SGA-based sequence de novo assembly is an NP-complete problem. According to our analysis, SGA outperforms other similar methods/tools in memory consumption, but costs much more time, of which 60-70% is spent on index construction. Building on this analysis, we introduce a hybrid parallel optimization algorithm and implement it in the TianHe-2 parallel framework. Simulations are performed with different datasets. For data of small size the optimized solution is 3.06 times faster than before, and for data of middle size it is 1.60 times faster. The results demonstrate an evident performance improvement, with linear scalability for parallel FM-index construction. These results thus contribute significantly to improving the efficiency of de novo assembly of DNA sequences.
Exact solution of corner-modified banded block-Toeplitz eigensystems
NASA Astrophysics Data System (ADS)
Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza
2017-05-01
Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
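For readers wanting a concrete starting point for such a project, a minimal PCA via the eigendecomposition of the sample covariance matrix (the application of the Spectral Theorem the abstract alludes to) might look like this; the toy data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])  # correlated toy data

Xc = X - X.mean(axis=0)                  # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)          # sample covariance matrix (symmetric)
eigvals, eigvecs = np.linalg.eigh(cov)   # Spectral Theorem: orthonormal eigenbasis
order = np.argsort(eigvals)[::-1]        # sort by explained variance, descending
components = eigvecs[:, order]           # principal directions as columns
scores = Xc @ components                 # data expressed in the PC basis
```

The orthonormality of `components` and the variance-ordering of `eigvals` are exactly the properties the Spectral Theorem guarantees for a symmetric matrix.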
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
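A minimal sketch of the idea described above, fitting the major axis as the first principal direction of the centered data (variable names are illustrative):

```python
import numpy as np

def major_axis_slope(x, y):
    """Slope of the line minimizing the sum of squared orthogonal
    projections of the data points onto it (the major axis)."""
    pts = np.column_stack([x, y])
    pts = pts - pts.mean(axis=0)                 # center the data
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    vx, vy = vt[0]                               # first right singular vector = major-axis direction
    return vy / vx

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                 # points exactly on y = 2x + 1
print(major_axis_slope(x, y))     # 2.0 (up to floating-point error)
```

Unlike ordinary least squares, which minimizes only vertical residuals, this treats x and y symmetrically, which is the pedagogical point the abstract raises.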
A procedure to determine the radiation isocenter size in a linear accelerator.
González, A; Castro, I; Martínez, J A
2004-06-01
Measurement of the radiation isocenter is a fundamental part of commissioning and quality assurance (QA) for a linear accelerator (linac). In this work we present an automated procedure for the analysis of the star-shots employed in radiation isocenter determination. Once the star-shot film has been developed and digitized, the resulting image is analyzed by scanning concentric circles centered on the intersection of the lasers that had been previously marked on the film. The center and the radius of the minimum circle intersecting the central rays are determined with an accuracy and precision better than 1% of the pixel size. The procedure is applied to determining the position and size of the radiation isocenter by means of the analysis of star-shots placed in different planes with respect to the gantry, couch and collimator rotation axes.
NASA Astrophysics Data System (ADS)
Perino, E. J.; Matoz-Fernandez, D. A.; Pasinetti, P. M.; Ramirez-Pastor, A. J.
2017-07-01
Monte Carlo simulations and finite-size scaling analysis have been performed to study the jamming and percolation behavior of linear k-mers (also known as rods or needles) on a two-dimensional triangular lattice of linear dimension L, considering an isotropic RSA process and periodic boundary conditions. Extensive numerical work has been done to extend previous studies to larger system sizes and longer k-mers, which enables the confirmation of a nonmonotonic size dependence of the percolation threshold and the estimation of a maximum value of k from which percolation would no longer occur. Finally, a complete analysis of critical exponents and universality has been done, showing that the percolation phase transition involved in the system is not affected, having the same universality class of the ordinary random percolation.
Betti, O O; Munari, C
1992-01-01
This study deals with 43 patients with cerebral arteriovenous malformations (AVMs) of a maximum of 20 mm in diameter. All of them were radiosurgically treated with a linear accelerator in stereotactic conditions (UMIC). The delivered doses varied from 20 Gy to 50 Gy. Thirty-seven patients were controlled angiographically, and 35 of them showed disappearance of the AVM. Different parameters can modify the results: delivered dose, the size and shape of the lesion, target volume, peripheral lesion isodose (75%), location, and underestimation of the size or dose. These results show that small lesions are easier to treat than larger ones, particularly because their volume enables us to encompass them more easily. The uniformity of this series is related to the homogeneous size of the treated AVMs, thus avoiding the discussion of global, unclear results.
Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian
2017-04-11
A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented, that is competitive to canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, which are independently useful: First, a Cholesky decomposition of density matrices that reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements. Second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
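As a small illustration, the mean absolute percent error used above to compare observed and forecasted values can be computed as follows (the glucose values are hypothetical):

```python
def mape(observed, forecast):
    """Mean absolute percent error between observed and forecasted series."""
    return sum(abs((o - f) / o) for o, f in zip(observed, forecast)) / len(observed) * 100

# Hypothetical weekly mean glucose values (mg/dL) and their forecasts
obs = [180.0, 175.0, 172.0, 170.0]
fc = [178.0, 177.0, 170.0, 168.0]
print(round(mape(obs, fc), 2))  # 1.15, well under the 10% threshold cited above
```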
Dynamics of attitudes and genetic processes.
Guastello, Stephen J; Guastello, Denise D
2008-01-01
Relatively new discoveries of a genetic component to attitudes have challenged the traditional viewpoint that attitudes are primarily learned ideas and behaviors. Attitudes that are regarded by respondents as "more important" tend to have greater genetic components to them, and tend to be more closely associated with authoritarianism. Nonlinear theories, nonetheless, have also been introduced to study attitude change. The objective of this study was to determine whether change in authoritarian attitudes across two generations would be more aptly described by a linear or a nonlinear model. Participants were 372 college students, their mothers, and their fathers who completed an attitude questionnaire. Results indicated that the nonlinear model (R2 = .09) was slightly better than the linear model (R2 = .08), but the two models offered very different forecasts for future generations of US society. The linear model projected a gradual and continuing bifurcation between authoritarians and non-authoritarians. The nonlinear model projected a stabilization of authoritarian attitudes.
ERIC Educational Resources Information Center
Ng, Thomas W. H.; Feldman, Daniel C.
2011-01-01
Utilizing a meta-analytical approach for testing moderating effects, the current study investigated organizational tenure as a moderator in the relation between affective organizational commitment and organizational citizenship behavior (OCB). We observed that, across 40 studies (N = 11,416 respondents), the effect size for the relation between…
Linear and ring polymers in confined geometries
NASA Astrophysics Data System (ADS)
Usatenko, Zoryana; Kuterba, Piotr; Chamati, Hassan; Romeis, Dirk
2017-03-01
A short overview of theoretical and experimental work on polymer-colloid mixtures is given. The behaviour of a dilute solution of linear and ring polymers in confined geometries, such as a slit between two parallel walls or a solution of large mesoscopic colloidal particles with different adsorbing or repelling properties with respect to the polymers, is discussed. In addition, we consider the massive field theory approach in fixed space dimension d = 3 for investigating the interaction between long flexible polymers and large mesoscopic colloidal particles and for calculating the corresponding depletion interaction potentials and depletion forces between confining walls. The presented results indicate interesting and nontrivial behaviour of linear and ring polymers in confined geometries and make it possible to better understand the complexity of physical effects arising from confinement and chain topology, which plays a significant role in the shaping of individual chromosomes and in their segregation, especially in the case of elongated bacterial cells. The possibility of using linear and ring polymers for the production of new types of nano- and micro-electromechanical devices is analyzed.
Control method for physical systems and devices
Guckenheimer, John
1997-01-01
A control method for stabilizing systems or devices that are outside the control domain of a linear controller is provided. When applied to nonlinear systems, the effectiveness of this method depends upon the size of the domain of stability that is produced for the stabilized equilibrium. If this domain is small compared to the accuracy of measurements or the size of disturbances within the system, then the linear controller is likely to fail within a short period. Failure of the system or device can be catastrophic: the system or device can wander far from the desired equilibrium. The method of the invention presents a general procedure to recapture the stability of a linear controller, when the trajectory of a system or device leaves its region of stability. By using a hybrid strategy based upon discrete switching events within the state space of the system or device, the system or device will return from a much larger domain to the region of stability utilized by the linear controller. The control procedure is robust and remains effective under large classes of perturbations of a given underlying system or device.
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained through the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, there is an optimal block size that minimizes the CPU time by balancing these two effects. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters, the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
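The blocking idea can be sketched in a few lines. The toy below stores only non-negligible blocks and multiplies block-by-block, so each small product is a dense (BLAS-backed) call; the block size, threshold, and dictionary representation are illustrative, not the paper's data structure.

```python
import numpy as np

def to_blocks(A, bs, thresh=1e-12):
    """Partition a square matrix into bs x bs blocks, dropping blocks
    whose largest entry is negligible (the source of the sparsity)."""
    n = A.shape[0]
    blocks = {}
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            blk = A[i:i + bs, j:j + bs]
            if np.abs(blk).max() > thresh:
                blocks[(i // bs, j // bs)] = blk
    return blocks

def block_sparse_matmul(Ab, Bb):
    """C = A @ B over stored blocks only.  Each small product is a dense
    matrix multiply, which is where the BLAS speedup over element-wise
    sparse schemes comes from."""
    Cb = {}
    for (i, k), a in Ab.items():
        for (k2, j), b in Bb.items():
            if k == k2:
                Cb[(i, j)] = Cb.get((i, j), 0) + a @ b
    return Cb
```

Larger blocks mean faster dense multiplies but more stored zeros; the optimum the paper reports (40-100 basis functions per block) balances exactly these two effects.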
LINEAR POLYMER CHAIN AND BIOENGINEERED CHELATORS FOR METALS REMEDIATION
The 3-year GCHSRC grant of $150,000 levers financial assistance from the University ($94,500 match) as well as collaborative assistance from LANL and TCEQ in the project. Similarly, a related project supported by the Welch Foundation will likely contribute to the k...
Laser Bioeffects Resulting from Non-Linear Interactions of Ultrashort Pulses with Biological Systems
2004-07-01
Project personnel: Saher Maswadi, Ph.D. (postdoctoral fellow), 100% on project. Manuscripts submitted/published: Glickman RD, Phototoxicity to the retina... With Dr. Saher Maswadi, the AFOSR-supported postdoctoral fellow in my laboratory, we have implemented a non-invasive method for measuring absolute
Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2016-01-01
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
NASA Astrophysics Data System (ADS)
Abozeed, Amina A.; Kadono, Toshiharu; Sekiyama, Akira; Fujiwara, Hidenori; Higashiya, Atsushi; Yamasaki, Atsushi; Kanai, Yuina; Yamagami, Kohei; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya; Andreev, Alexander V.; Wada, Hirofumi; Imada, Shin
2018-03-01
We developed a method to experimentally quantify the fourth-order multipole moment of the rare-earth 4f orbital. Linear dichroism (LD) in the Er 3d5/2 core-level photoemission spectra of cubic ErCo2 was measured using bulk-sensitive hard X-ray photoemission spectroscopy. Theoretical calculation reproduced the observed LD, and the result showed that the observed result does not contradict the suggested Γ8(3) ground state. Theoretical calculation further showed a linear relationship between the LD size and the size of the fourth-order multipole moment of the Er3+ ion, which is proportional to the expectation value ⟨O_4^0 + 5 O_4^4⟩, where O_n^m are the Stevens operators. These analyses indicate that the LD in 3d photoemission spectra can be used to quantify the average fourth-order multipole moment of rare-earth atoms in a cubic crystal electric field.
The Assessment of Distortion in Neurosurgical Image Overlay Projection.
Vakharia, Nilesh N; Paraskevopoulos, Dimitris; Lang, Jozsef; Vakharia, Vejay N
2016-02-01
Numerous studies have demonstrated the superiority of neuronavigation during neurosurgical procedures compared to non-neuronavigation-based procedures. Limitations to neuronavigation systems include the need for the surgeons to avert their gaze from the surgical field and the cost of the systems, especially for hospitals in developing countries. Overlay projection of imaging directly onto the patient allows localization of intracranial structures. A previous study using overlay projection demonstrated the accuracy of image coregistration for a lesion in the temporal region but did not assess image distortion when projecting onto other anatomical locations. Our aim is to quantify this distortion and establish which regions of the skull would be most suitable for overlay projection. Using the difference in size of a square grid when projected onto an anatomically accurate model skull and a flat surface, from the same distance, we were able to calculate the degree of image distortion when projecting onto the skull from the anterior, posterior, superior, and lateral aspects. Measuring the size of a square when projected onto a flat surface from different distances allowed us to model change in lesion size when projecting a deep structure onto the skull surface. Using 2 mm as the upper limit for distortion, our results show that images can be accurately projected onto the majority (81.4%) of the surface of the skull. Our results support the use of image overlay projection in regions with ≤2 mm distortion to assist with localization of intracranial lesions at a fraction of the cost of existing methods. © The Author(s) 2015.
DOT National Transportation Integrated Search
2013-01-01
This project consisted of the development of a revision of the SAE J2735 Dedicated Short Range Communications (DSRC) Message Set Dictionary, published 2009-11-19. This revision will be submitted, at the end of this project to the Society of Automotiv...
Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.
Cawkwell, M J; Niklasson, Anders M N
2012-10-07
Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
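Density matrix purification, the core of the linear-scaling step, can be illustrated with the classic McWeeny iteration; this is one common purification scheme, and the paper's exact variant (with sparse algebra and thresholding) may differ.

```python
import numpy as np

def mcweeny_purify(P, n_iter=30):
    """McWeeny purification: P <- 3P^2 - 2P^3.

    The polynomial 3x^2 - 2x^3 drives eigenvalues below 1/2 toward 0 and
    those above 1/2 toward 1, so P converges to an idempotent density
    matrix (P^2 = P).  In a linear-scaling code, each product would be a
    thresholded sparse matrix multiply.
    """
    for _ in range(n_iter):
        P2 = P @ P
        P = 3 * P2 - 2 * P2 @ P
    return P
```

Applied to a trial matrix with fractional occupations, the iteration rounds each occupation to 0 or 1, which is exactly the idempotency condition a zero-temperature density matrix must satisfy.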
Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains
Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph
2014-01-01
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially convergent hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an example of an enzymatic futile cycle, and a stochastic switch model.
The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage savings over direct approaches. PMID:24626049
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
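The idea of converting a nonlinear design problem into a sequence of linear ones can be shown with a deliberately tiny one-dimensional sketch: linearize the objective at the current iterate and minimize the linear model subject to a step-size bound. In one dimension the linear program's solution is just a signed full step, so no LP solver is needed here; a real structural design problem would solve an LP over many inequality constraints on the design vector.

```python
def sequential_linear_minimize(f, grad, x0, delta=1.0, shrink=0.5, n_iter=60):
    """Toy sequential linear programming in 1-D (illustrative names).

    At each iterate, the linear model f(x) + g*d is minimized over the
    trust region |d| <= delta, giving d = -delta * sign(g).  Rejected
    steps tighten the trust region, mimicking the continuation procedure
    that keeps the linearization valid.
    """
    x = x0
    for _ in range(n_iter):
        g = grad(x)
        d = -delta if g > 0 else delta
        if f(x + d) < f(x):
            x += d               # step accepted
        else:
            delta *= shrink      # step rejected: shrink the trust region
    return x

# usage: minimize (x - 2)^2 starting from x = 0
x_opt = sequential_linear_minimize(lambda x: (x - 2) ** 2,
                                   lambda x: 2 * (x - 2), x0=0.0)
```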
Linear micromechanical stepping drive for pinhole array positioning
NASA Astrophysics Data System (ADS)
Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin
2015-05-01
A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm2 optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload 50% of its own weight. The stepping drive movement, step sizes and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.
Emittance Growth in the DARHT-II Linear Induction Accelerator
NASA Astrophysics Data System (ADS)
Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; McCuistian, B. Trent; Mostrom, Christopher B.; Schulze, Martin E.; Thoma, Carsten H.
2017-11-01
The Dual-Axis Radiographic Hydrotest (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. Some of the possible causes for the emittance growth in the DARHT LIA have been investigated using particle-in-cell (PIC) codes, and are discussed in this article. The results suggest that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.
Anzehaee, Mohammad Mousavi; Haeri, Mohammad
2011-07-01
New estimators are designed based on the modified force balance model to estimate the detaching droplet size, detached droplet size, and mean value of droplet detachment frequency in a gas metal arc welding process. The proper droplet size for the process to be in the projected spray transfer mode is determined based on the modified force balance model and the designed estimators. Finally, the droplet size and the melting rate are controlled using two proportional-integral (PI) controllers to achieve high weld quality by retaining the transfer mode and generating appropriate signals as inputs of the weld geometry control loop. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.
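A toy stochastic simulation of the linear BDIM can make the model concrete. The rates below are illustrative, not fitted genome values: each family member duplicates at rate `birth` and is deleted at rate `death` (balanced, as in the abstract), while `innovation` introduces new single-member families.

```python
import random

def simulate_bdim(steps=20000, birth=1.0, death=1.0, innovation=0.1, seed=1):
    """Discrete-event sketch of a linear Birth, Death and Innovation Model.

    Families are sampled proportionally to their size (linear rates);
    the returned list holds the sizes of the surviving families, whose
    distribution develops a heavy tail under balanced birth/death.
    """
    rng = random.Random(seed)
    families = [1]
    for _ in range(steps):
        total = (birth + death) * sum(families) + innovation
        r = rng.random() * total
        if r < innovation:
            families.append(1)        # innovation: new single-member family
            continue
        r -= innovation
        # pick a family with probability proportional to its size
        for i, size in enumerate(families):
            if r < (birth + death) * size:
                if r < birth * size:
                    families[i] += 1  # duplication
                else:
                    families[i] -= 1  # deletion
                break
            r -= (birth + death) * size
    return [s for s in families if s > 0]
```

Plotting a histogram of the returned sizes on log-log axes is the usual way to eyeball the power-law-like tail the abstract describes.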
Lu, Guangtao; Feng, Qian; Li, Yourong; Wang, Hao; Song, Gangbing
2017-01-01
During the propagation of ultrasonic waves in structures, there is usually energy loss due to ultrasound energy diffusion and dissipation. The aim of this research is to characterize the ultrasound energy diffusion that occurs due to small-size damage on an aluminum plate using piezoceramic transducers, for the future purpose of developing a damage detection algorithm. The ultrasonic energy diffusion coefficient is related to the damage distributed in the medium, while the ultrasonic energy dissipation coefficient is related to the inhomogeneity of the medium; both are usually employed to describe the characteristics of ultrasound energy diffusion. The existence of multiple modes of Lamb waves in metallic plate structures results in asynchronous energy transport among the different modes. The mode of the Lamb waves therefore has a great influence on ultrasound energy diffusion and has to be chosen appropriately. In order to study the characteristics of ultrasound energy diffusion in metallic plate structures, an aluminum plate with a through-hole, whose diameter varies from 0.6 mm to 1.2 mm, is used as the test specimen, instrumented with piezoceramic transducers. The experimental results for two categories of damage at different locations reveal that the existence of damage changes the energy transport between the actuator and the sensor. Also, when only one dominant mode of Lamb wave is excited in the structure, the ultrasound energy diffusion coefficient decreases approximately linearly with the diameter of the simulated damage, while the ultrasonic energy dissipation coefficient increases approximately linearly with the diameter of the simulated damage.
However, when two or more modes of Lamb waves are excited, due to the existence of different group velocities between the different modes, the energy transport of the different modes is asynchronous, and the ultrasonic energy diffusion is not strictly linear with the size of the damage. Therefore, it is recommended that only one dominant mode of Lamb wave should be excited during the characterization process, in order to ensure that the linear relationship between the damage size and the characteristic parameters is maintained. In addition, the findings from this paper demonstrate the potential of developing future damage detection algorithms using the linear relationships between damage size and the ultrasound energy diffusion coefficient or ultrasonic energy dissipation coefficient when a single dominant mode is excited. PMID:29207530
Optimizing Seismic Monitoring Networks for EGS and Conventional Geothermal Projects
NASA Astrophysics Data System (ADS)
Kraft, Toni; Herrmann, Marcus; Bethmann, Falko; Stefan, Wiemer
2013-04-01
In the past several years, geological energy technologies have received growing attention and have been initiated in or close to urban areas. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, now pose a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential for the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring at an early stage of a project. This is because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for the establishment of advanced traffic light systems and is therefore an important confidence-building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows one to design and evaluate seismic network geometries for arbitrary geotechnical operation layouts.
The algorithm is based on D-optimal experimental design, which aims to minimize the error ellipsoid of the linearized location problem. Optimization for additional criteria (e.g., focal mechanism determination or installation costs) can be included. We consider a 3D seismic velocity model, a European ambient seismic noise model derived from high-resolution land-use data, and existing seismic stations in the vicinity of the geotechnical site. Additionally, we account for the attenuation of the seismic signal with travel time and of the ambient seismic noise with depth, to be able to correctly deal with borehole station networks. Using this algorithm, we are able to find the optimal geometry and size of the seismic monitoring network that meets the predefined application-oriented performance criteria. This talk will focus on optimal network geometries for deep geothermal projects of the EGS and hydrothermal type, and discuss the requirements for basic seismic surveillance and for high-resolution reservoir monitoring and characterization.
Graphite grain-size spectrum and molecules from core-collapse supernovae
NASA Astrophysics Data System (ADS)
Clayton, Donald D.; Meyer, Bradley S.
2018-01-01
Our goal is to compute the abundances of carbon atomic complexes that emerge from the C + O cores of core-collapse supernovae. We utilize our chemical reaction network in which every atomic step of growth employs a quantum-mechanically guided reaction rate. This tool follows step-by-step the growth of linear carbon chain molecules from C atoms in the oxygen-rich C + O cores. We postulate that once linear chain molecules reach a sufficiently large size, they isomerize to ringed molecules, which serve as seeds for graphite grain growth. We demonstrate our technique for merging the molecular reaction network with a parallel program that can follow 10^17 steps of C addition onto the rare seed species. Due to radioactivity within the C + O core, abundant ambient oxygen is unable to convert C to CO, except to a limited degree that actually facilitates carbon molecular ejecta. But oxygen severely minimizes the linear-carbon-chain abundances. Despite the tiny abundances of these linear-carbon-chain molecules, they can give rise to a small abundance of ringed-carbon molecules that serve as the nucleations on which graphite grain growth builds. We expand the C + O-core gas adiabatically from 6000 K for 10^9 s, when reactions have essentially stopped. These adiabatic tracks emulate the actual expansions of the supernova cores. Using a standard model of 10^56 atoms of C + O core ejecta having O/C = 3, we calculate standard ejection yields of graphite grains of all sizes produced, of the CO molecular abundance, of the abundances of linear-carbon molecules, and of Buckminsterfullerene. None of these except CO was expected from the C + O cores just a few years past.
Zenitani, Masahiro; Ueno, Takehisa; Nara, Keigo; Nakahata, Kengo; Uehara, Shuichiro; Soh, Hideki; Oue, Takaharu; Kondo, Hiroki; Nagano, Hiroaki; Usui, Noriaki
2014-09-01
In pediatric LDLT, graft reduction is sometimes required because of the graft size mismatch. Dividing the portal triad and hepatic veins with a linear stapler is a rapid and safe method of reduction. We herein present a case with a left lateral segment reduction achieved using a linear stapler after reperfusion in pediatric LDLT. The patient was a male who had previously undergone Kasai procedure for biliary atresia. We performed the LDLT with his father's lateral segment. According to the pre-operative volumetry, the GV/SLV ratio was 102.5%. As the patient's PV was narrow, sclerotic and thick, we decided to put an interposition with the IMV graft of the donor between the confluence and the graft PV. The graft PV was anastomosed to the IMV graft. The warm ischemic time was 34 min, and the cold ischemic time was 82 min. The ratio of the graft size to the recipient weight (G/R ratio) was 4.2%. After reperfusion, we found that the graft had poor perfusion and decided to reduce the graft size. We noted good perfusion in the residual area after the lateral edge was clamped with an intestinal clamp. The liver tissue was sufficiently fractured with an intestinal clamp and then was divided with a linear stapler. The final G/R ratio was 3.6%. The total length of the operation was 12 h and 20 min. The amount of blood lost was 430 mL. No surgical complications, including post-operative hemorrhage and bile leakage, were encountered. We believe that using the linear stapler decreased the duration of the operation and was an acceptable technique for reducing the graft after reperfusion. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The effect of reducing alfalfa haylage particle size on cows in early lactation.
Kononoff, P J; Heinrichs, A J
2003-04-01
The objective of this experiment was to evaluate the effects of reducing forage particle size on cows in early lactation, based on measurements with the Penn State Particle Separator (PSPS). Eight cannulated, multiparous cows averaging 19 ± 4 d in milk and 642 ± 45 kg BW were assigned to one of two 4 × 4 Latin squares. During each of the 23-d periods, animals were offered one of four diets, which were chemically identical but included alfalfa haylage of different particle size: short (SH), mostly short (MSH), mostly long (MLG), and long (LG). Physically effective neutral detergent fiber (peNDF) was determined by measuring the amount of neutral detergent fiber retained on a 1.18-mm screen and was similar across diets (25.7, 26.2, 26.4, 26.7%), but the amount of particles >19.0 mm significantly decreased with decreasing particle size. Reducing haylage particle size increased dry matter intake linearly (23.3, 22.0, 20.9, 20.8 kg for SH, MSH, MLG, LG, respectively). Milk production and percentage fat did not differ across treatments, averaging 35.5 ± 0.68 kg milk and 3.32 ± 0.67% fat, while a quadratic effect was observed for percent milk protein, with the lowest values observed for LG. A quadratic effect was observed for mean rumen pH (6.04, 6.15, 6.13, 6.09), while the A:P ratio decreased linearly (2.75, 2.86, 2.88, 2.92) with decreasing particle size. Total time ruminating increased quadratically (467, 498, 486, 468 min/d), while time eating decreased linearly (262, 253, 298, 287 min/d) with decreasing particle size. Both eating and ruminating per unit of neutral detergent fiber intake decreased with reducing particle size (35.8, 36.7, 44.9, 45.6 min/kg; 19.9, 23.6, 23.5, 23.5 min/kg). Although chewing activity was closely related to forage particle size, effects on rumen pH were small, indicating that factors other than particle size are critical in regulating pH when ration neutral detergent fiber meets recommended levels. Feeding alfalfa haylage-based rations of reduced particle size resulted in animals consuming more feed but did not affect milk production.
Anomaly General Circulation Models.
NASA Astrophysics Data System (ADS)
Navarra, Antonio
The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear model is considered in the baroclinic case. Results from the barotropic model indicate that a relation exists between the stationary solution and the time-averaged non-linear solution. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding solutions of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (nonzonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetric linear systems of order 10,000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case.
However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to north Japan, the Pole and Greenland regions. A limited set of higher resolution (R15) experiments indicate that this situation is still present and enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.).
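The Krylov projection idea described above can be sketched as a bare-bones Arnoldi-plus-least-squares solver (essentially GMRES); the thesis's damped iterative variant will differ in detail, and the function name and subspace size here are illustrative.

```python
import numpy as np

def krylov_solve(A, b, m=30):
    """Solve Ax = b by projection onto the m-dimensional Krylov subspace
    span{b, Ab, ..., A^(m-1) b}.

    Arnoldi builds an orthonormal basis Q and a small Hessenberg matrix H
    with A Q_m = Q_{m+1} H; the big system is then replaced by a small
    least-squares problem in the subspace.
    """
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:           # happy breakdown: subspace is invariant
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    # minimize ||beta e1 - H y|| in the small subspace, then lift back
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y
```

The dimensionality reduction is exactly the point made in the abstract: a system of order 10,000 is replaced by a least-squares problem of order m, with m an order of magnitude smaller.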
Electrophoresis in strong electric fields.
Barany, Sandor
2009-01-01
Two kinds of non-linear electrophoresis (ef) that can be detected in strong electric fields (several hundred V/cm) are considered. The first ("classical" non-linear ef) is due to the interaction of the outer field with field-induced ionic charges in the electric double layer (EDL), under conditions in which field-induced variations of the electrolyte concentration remain small compared with its equilibrium value. According to the Shilov theory, the non-linear component of the electrophoretic velocity for dielectric particles is proportional to the third power of the applied field strength (cubic electrophoresis) and to the second power of the particle radius; it is independent of the zeta-potential but is determined by the surface conductivity of the particles. The second, so-called "superfast electrophoresis", is connected with the interaction of a strong outer field with a secondary diffuse layer of counterions (space charge) that is induced outside the primary (classical) diffuse EDL by the external field itself, because of concentration polarization. The Dukhin-Mishchuk theory of superfast electrophoresis predicts a quadratic dependence of the electrophoretic velocity of unipolar (ionically or electronically) conducting particles on the external field strength and a linear dependence on particle size in strong electric fields. These are in sharp contrast to the laws of classical electrophoresis (no dependence of V(ef) on particle size and linear dependence on the field strength). A new method to measure the ef velocity of particles in strong electric fields is developed, based on separating the effects of sedimentation and electrophoresis using video imaging, a new flow cell, and short electric pulses.
To test "classical" non-linear electrophoresis, we measured the ef velocity of non-conducting polystyrene, aluminium-oxide and (semiconductor) graphite particles, as well as Saccharomyces cerevisiae yeast cells, as a function of electric field strength, particle size, electrolyte concentration and adsorbed polymer amount. The electrophoretic velocity of the particles/cells increases linearly with field strength up to about 100 V/cm (200 V/cm for cells), without and with adsorbed polymers, both in pure water and in electrolyte solutions. In line with the theoretical predictions, substantial non-linear effects were recorded in stronger fields (V(ef)~E(3)). The ef velocity of unipolar ion-conducting (ion-exchanger particles and fibres), electron-conducting (magnesium and Mg/Al alloy) and semiconductor particles (graphite, activated carbon, pyrite, molybdenite) increases significantly with the electric field (V(ef)~E(2)) and with particle size, but is almost independent of ionic strength. These trends are inconsistent with Smoluchowski's equation for dielectric particles, but are consistent with the Dukhin-Mishchuk theory of superfast electrophoresis.
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown, by a singular perturbation argument, to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions for the phase and time-delay margins of the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy the stability margins.
A Learning Combination: Coaching with CLASS and the Project Approach
ERIC Educational Resources Information Center
Vartuli, Sue; Bolz, Carol; Wilson, Catherine
2014-01-01
The focus of this ongoing research is the effectiveness of coaching in improving the quality of teacher-child instructional interactions in Head Start classrooms. This study examines the relationship between two measures: Classroom Assessment Scoring System (CLASS) and a Project Approach Fidelity form developed by the authors. Linear regressions…
Predictors of Adolescent Breakfast Consumption: Longitudinal Findings from Project EAT
ERIC Educational Resources Information Center
Bruening, Meg; Larson, Nicole; Story, Mary; Neumark-Sztainer, Dianne; Hannan, Peter
2011-01-01
Objective: To identify predictors of breakfast consumption among adolescents. Methods: Five-year longitudinal study Project EAT (Eating Among Teens). Baseline surveys were completed in Minneapolis-St. Paul schools and by mail at follow-up by youth (n = 800) transitioning from middle to high school. Linear regression models examined associations…
DOT National Transportation Integrated Search
2016-06-01
The purpose of this project is to study the optimal scheduling of work zones so that they have minimum negative impact (e.g., travel delay, gas consumption, accidents, etc.) on transport service vehicle flows. In this project, a mixed integer linear ...
We project the change in ozone-related mortality burden attributable to changes in climate between a historical (1995-2005) and near-future (2025-2035) time period while incorporating a non-linear and synergistic effect of ozone and temperature on mortality. We simulate air quali...
Effective population size of korean populations.
Park, Leeyoung
2014-12-01
Recently, new methods have been developed for estimating current and recent changes in effective population sizes. Based on these methods, the effective population sizes of Korean populations were estimated using data from the Korean Association Resource (KARE) project. The overall changes in the population sizes of the total populations were similar to those of CHB (Han Chinese in Beijing, China) and JPT (Japanese in Tokyo, Japan) in the HapMap project. There were no differences in past changes in population sizes between an urban area and a rural area. Age-dependent current and recent effective population sizes reflect the modern history of Korean populations, including the effects of World War II, the Korean War, and urbanization. The oldest age group showed that the population growth of Koreans had already been substantial at least since the end of the 19th century.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
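The constraint-folding idea behind a QUBO mapping can be illustrated at toy scale: a linear equality constraint is absorbed into the objective as a quadratic penalty, giving an unconstrained binary quadratic problem that a brute-force solver (standing in for the AQO) can minimise. The problem data below are random and illustrative; the paper's constructive mapping additionally eliminates the continuous PDE variables, which this sketch omits.

```python
import itertools
import numpy as np

# Toy QUBO: minimise x^T Q x + q^T x over x in {0,1}^n, subject to the
# linear constraint a^T x = c, folded in as the quadratic penalty
# lam * (a^T x - c)^2 (standard penalty trick for QUBO formulations).
n = 6
rng = np.random.default_rng(1)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2
q = rng.standard_normal(n)
a = np.ones(n)
c = 3                                   # constraint: exactly 3 bits set
lam = 10.0 * (np.abs(Q).sum() + np.abs(q).sum())   # penalty dominates objective

def energy(x):
    x = np.asarray(x, float)
    return x @ Q @ x + q @ x + lam * (a @ x - c) ** 2

# brute force over all 2^n binary strings (the AQO would do this natively)
best = min(itertools.product([0, 1], repeat=n), key=energy)
print(best, sum(best))
```

Because the penalty weight exceeds the full range of the unconstrained objective, the minimiser is guaranteed to satisfy the constraint.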
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. A conventional linear model with intercept need not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept for modelling large African carnivore densities and track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and the Mean Square Residual. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas.
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores from track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
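A through-origin fit has the closed form slope = Σxy/Σx², in contrast to the two-parameter fit with intercept. A minimal numpy comparison on hypothetical track-count data; the 3.26 slope is seeded into the synthetic data to mirror the reported calibration, not re-derived from the paper's measurements.

```python
import numpy as np

# Hypothetical data: carnivores/100 km^2 vs observed tracks/100 km.
carnivore_density = np.array([0.3, 0.8, 1.5, 2.2, 3.0, 4.1])
track_density = 3.26 * carnivore_density + np.array(
    [0.1, -0.2, 0.15, -0.1, 0.2, -0.15])     # small measurement noise

# Regression through the origin: slope = sum(xy) / sum(x^2)
slope_origin = (carnivore_density @ track_density) / (
    carnivore_density @ carnivore_density)

# Ordinary least-squares regression with intercept
X = np.column_stack([carnivore_density, np.ones_like(carnivore_density)])
(slope_int, intercept), *_ = np.linalg.lstsq(X, track_density, rcond=None)

print(round(slope_origin, 2), round(slope_int, 2), round(intercept, 2))
```

When the true relation really does pass through zero, both fits recover nearly the same slope and the fitted intercept hovers near zero, which is exactly the pattern the paper's confidence-interval test for β detects.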
Osemlak, Paweł
2011-01-01
1. Determination of the size of the testes and epididymes on the right and left sides in healthy boys of various age groups, using the non-invasive ultrasound examination method and the method of external linear measurements. 2. Determination of the age at which intensive growth of testicular and epididymal size starts. 3. Determination of whether there are statistically significant differences between the sizes of the right and left testis, as well as between the right and left epididymis. 4. Evaluation of the ultrasound method and the method of external linear measurements for use in scientific investigations. The study examined 309 boys, aged from 1 day to 17 years, treated in the Clinical Department of Paediatric Surgery and Traumatology of the Medical University in Lublin from 2009 to 2010 for diseases requiring surgical treatment but not involving the scrotum. No pathologies influencing the development of the genital organs were found in these boys. Dimensions of the testes were measured with the ultrasound method and with the method of external linear measurements. Dimensions of the epididymes were measured only with the ultrasound method. In every age group the author calculated arithmetic mean values for testicular length, thickness, width and volume, as well as epididymal depth and base. Taking the standard deviation into account (X ± 1 SD), it was possible to define the range of dimensions of healthy testes and epididymes and their change with age. Final dimensions of the right and left testis, as well as of the right and left epididymis, were compared. Dimensions of the testis on the same side of the body acquired with the ultrasound method and with the method of external linear measurements were compared. Statistical analysis used the Wilcoxon test for two dependent groups. Ultrasound evaluation showed an intensive 2.5-fold increase in testicular length and width, and a 2-fold increase in testicular thickness, in boys aged 10 to 17 years.
The mean volume of the neonatal testis is 0.35 ml. From the 10th year of life, testicular volume increases roughly 10-fold, from 1.36 ml to 12.83 ml by the 17th year of life. The depth of the epididymis measured with the ultrasound method is always greater than its base. Both of these dimensions increase quickly from the 10th year of life. Compared with the ultrasound method, measurements made with the caliper on average overestimate testicular length by 5.7 mm, thickness by 2.9 mm and width by 1.4 mm. There were no statistically significant differences between the dimensions of the right and left testis. Differences between the dimensions of the right and left epididymis are statistically significant. 1. Age is the main factor influencing testicular size in boys. 2. Intensive growth of the testes starts in the 10th year of life, and of the epididymes in the 12th year of life. 3. Testicular volume is the most precise description of its size. There are no statistically significant differences between the volumes of the right and left testis. Differences between the dimensions, described by the depth and base, of the right and left epididymis are statistically significant. 4. The ultrasound method and the method of external linear measurements with the caliper have similar diagnostic value for comparing the sizes of both testes. 5. Measurements of testicular size with the ultrasound method have much greater value for detailed evaluation than the method of external linear measurements with the caliper, which does not account for the thickness of the skin and testicular coats, or for the epididymal head, which is often situated on the upper end of the testis.
Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seedahmed, Gamal H.
2006-09-01
Direct solutions are very attractive because they obviate the need for the initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) has established itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank-deficient model. This rank-deficient model leaves the DLT defined only up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using a homogeneous coordinate representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.
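The matrix correspondence described above can be sketched as follows: for a planar scene, the homography factors as H = K[r1 r2 t] up to scale, so normalising K⁻¹H by the length of its first column recovers the EOPs directly. The camera matrix, rotation, and translation below are made-up test values, and this is the generic planar-homography decomposition rather than the paper's exact algorithm.

```python
import numpy as np

# Synthetic ground truth: intrinsics K, a rotation about z, a translation.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 2.0])

# Plane-induced 2-D projective transformation, known only up to scale.
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
H *= 7.3                                   # arbitrary projective scale

# Normalisation step: K^-1 H = lam * [r1 r2 t]; ||r1|| = 1 fixes lam.
M = np.linalg.inv(K) @ H
lam = np.linalg.norm(M[:, 0])
r1, r2, t = M[:, 0] / lam, M[:, 1] / lam, M[:, 2] / lam
r3 = np.cross(r1, r2)                      # complete the rotation matrix
R = np.column_stack([r1, r2, r3])
print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```

In practice a noisy H makes [r1 r2 r3] only approximately orthonormal, so implementations typically re-project it onto SO(3) via an SVD; that refinement is omitted here.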
Small size transformer provides high power regulation with low ripple and maximum control
NASA Technical Reports Server (NTRS)
Manoli, R.; Ulrich, B. R.
1971-01-01
Single, variable transformer/choke device does the work of several. Technique reduces drawer assembly physical size and design and manufacturing cost. Device provides power, voltage, current, and impedance regulation while maintaining maximum control of linearity and ensuring extremely low ripple. Nulling is controlled to a very fine degree.
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…
LASER DESORPTION IONIZATION OF ULTRAFINE AEROSOL PARTICLES. (R823980)
On-line analysis of ultrafine aerosol particles in the 12 to 150 nm size range is performed by laser desorption/ionization. Particles are size selected with a differential mobility analyzer and then sent into a linear time-of-flight mass spectrometer where they are ablated w...
Intrinsic to the myriad of nano-enabled products are atomic-size multifunctional engineered nanomaterials, which upon release contaminate the environments, raising considerable health and safety concerns. Despite global research efforts, mechanism underlying nanotoxicity has rema...
A regression analysis of filler particle content to predict composite wear.
Jaarda, M J; Wang, R F; Lang, B R
1997-01-01
It has been hypothesized that composite wear is correlated with filler particle content, but there is a paucity of research substantiating this theory despite numerous projects evaluating the correlation. The purpose of this study was to determine whether a linear relationship existed between composite wear and the filler particle content of 12 composites. In vivo wear data had been previously collected for the 12 composites and served as the basis for this study. Scanning electron microscopy and backscatter electron imaging were combined with digital image analysis to develop "profile maps" of the filler particle composition of the composites. These profile maps included eight parameters: (1) total number of filler particles per 28,742.6 µm²; (2) percent of area occupied by all of the filler particles; (3) mean filler particle size; (4) percent of area occupied by the matrix; (5) percent of area occupied by filler particles with radius r ≤ 1.0 µm; (6) percent of area occupied by filler particles with 1.0 < r ≤ 4.5 µm; (7) percent of area occupied by filler particles with 4.5 < r ≤ 10 µm; and (8) percent of area occupied by filler particles with r > 10 µm. Forward stepwise regression analyses were used with composite wear as the dependent variable and the eight parameters as independent variables. The results revealed a linear relationship between composite wear and filler particle content. A mathematical formula was developed to predict composite wear.
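Forward stepwise regression of the kind used here greedily adds whichever predictor most reduces the residual sum of squares. A minimal sketch on synthetic stand-in data (eight fake "filler parameters", of which two truly drive wear), not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 8
X = rng.standard_normal((n, p))          # 8 hypothetical filler parameters
# synthetic "wear" driven by parameters 2 and 5 plus small noise
wear = 2.0 * X[:, 2] - 1.5 * X[:, 5] + 0.1 * rng.standard_normal(n)

def rss(cols):
    """Residual sum of squares of an OLS fit on the chosen columns."""
    A = np.column_stack([X[:, list(cols)], np.ones(n)])
    beta, *_ = np.linalg.lstsq(A, wear, rcond=None)
    return ((wear - A @ beta) ** 2).sum()

selected = []
for _ in range(2):                       # greedily pick the two best predictors
    best = min((j for j in range(p) if j not in selected),
               key=lambda j: rss(selected + [j]))
    selected.append(best)
print(sorted(selected))
```

With a strong signal the procedure recovers exactly the two truly predictive parameters; real stepwise packages add an entry/exit significance criterion on top of this greedy loop.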
Managing Risk and Uncertainty in Large-Scale University Research Projects
ERIC Educational Resources Information Center
Moore, Sharlissa; Shangraw, R. F., Jr.
2011-01-01
Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…
Gartner, Thomas E; Jayaraman, Arthi
2018-01-17
In this paper, we apply molecular simulation and liquid state theory to uncover the structure and thermodynamics of homopolymer blends of the same chemistry and varying chain architecture in the presence of explicit solvent species. We use hybrid Monte Carlo (MC)/molecular dynamics (MD) simulations in the Gibbs ensemble to study the swelling of ~12,000 g mol⁻¹ linear, cyclic, and 4-arm star polystyrene chains in toluene. Our simulations show that the macroscopic swelling response is indistinguishable between the various architectures and matches published experimental data for the solvent annealing of linear polystyrene by toluene vapor. We then use standard MD simulations in the NPT ensemble along with polymer reference interaction site model (PRISM) theory to calculate effective polymer-solvent and polymer-polymer Flory-Huggins interaction parameters (χeff) in these systems. As seen in the macroscopic swelling results, there are no significant differences in the polymer-solvent and polymer-polymer χeff between the various architectures. Despite similar macroscopic swelling and effective interaction parameters between various architectures, the pair correlation function between chain centers-of-mass indicates stronger correlations between cyclic or star chains in the linear-cyclic blends and linear-star blends, compared to linear chain-linear chain correlations. Furthermore, we note striking similarities in the chain-level correlations and the radius of gyration of cyclic and 4-arm star architectures of identical molecular weight. Our results indicate that the cyclic and star chains are 'smaller' and 'harder' than their linear counterparts, and through comparison with MD simulations of blends of soft spheres with varying hardness and size we suggest that these macromolecular characteristics are the source of the stronger cyclic-cyclic and star-star correlations.
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R² = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R² = 0.99; P < .05), as well as for the entire procedure (R² = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-08
... Section 1605 of ARRA. This action permits the purchase of the selected vertical linear motion mixers not...: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Regional Administrator of EPA Region 6 is... purchase of ten (10) vertical linear motion mixers for the Clean Water State Revolving Fund (CWSRF) Hornsby...
DEGAS: Dynamic Exascale Global Address Space Programming Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James
The Dynamic Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds and/or communication-avoiding algorithms (that either meet the lower bound or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning, and genomics.
Understanding the role of pore size homogeneity in the water transport through graphene layers.
Su, Jiaye; Zhao, Yunzhen; Fang, Chang
2018-06-01
Graphene is a versatile 2D material and attracts an increasing amount of attention from a broad scientific community, including novel nanofluidic devices. In this work, we use molecular dynamics simulations to study the pressure driven water transport through graphene layers, focusing on the pore size homogeneity, realized by the arrangement of two pore sizes. For a given layer number, we find that water flux exhibits an excellent linear behavior with pressure, in agreement with the prediction of the Hagen-Poiseuille equation. Interestingly, the flux for concentrated pore size distribution is around two times larger than that of a uniform distribution. More surprisingly, under a given pressure, the water flux changes in an opposite way for these two distributions, where the flux ratio almost increases linearly with the layer number. For the largest layer number, more distributions suggest the same conclusion that higher water flux can be attained for more concentrated pore size distributions. Similar differences for the water translocation time and occupancy are also identified. The major reason for these results should clearly be due to the hydrogen bond and density profile distributions. Our results are helpful to delineate the exquisite role of pore size homogeneity, and should have great implications for the design of high flux nanofluidic devices and inversely the detection of pore structures.
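The linear flux-pressure behaviour invoked above follows directly from the Hagen-Poiseuille relation Q = πr⁴ΔP/(8μL). A quick check with illustrative nanopore numbers (not values from the simulations in the abstract):

```python
import math

def hp_flux(r, dP, mu, L):
    """Hagen-Poiseuille volumetric flow through one cylindrical pore."""
    return math.pi * r**4 * dP / (8.0 * mu * L)

mu = 8.9e-4        # viscosity of water at room temperature, Pa*s
r, L = 1e-9, 2e-9  # illustrative nm-scale pore radius and channel length

q1 = hp_flux(r, 1e7, mu, L)
q2 = hp_flux(r, 2e7, mu, L)
print(q2 / q1)     # flux doubles when the pressure doubles: linear response
```

The r⁴ factor also explains why the arrangement of two pore sizes matters so much: a few larger pores dominate the total flux far out of proportion to their area.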
Morris, Katrina A; Parry, Allyson; Pretorius, Pieter M
2016-09-01
To compare the sensitivity of linear and volumetric measurements on MRI in detecting schwannoma progression in patients with neurofibromatosis type 2 on bevacizumab treatment, as well as the extent to which this depends on the size of the tumour. We retrospectively compared changes in linear tumour dimensions at a range of thresholds with volumetric tumour measurements performed using Brainlab iPlan® software (Feldkirchen, Germany), classifying tumour progression according to the Response Evaluation in Neurofibromatosis and Schwannomatosis (REiNS) criteria. Assessment of 61 schwannomas in 46 patients with a median follow-up of 20 months (range 3-43 months) was performed, with a mean of 7 time points per tumour (range 2-12). Using the volumetric REiNS criteria as the gold standard, a sensitivity of 86% was achieved for linear measurement using a 2-mm threshold to define progression. We propose that a change in linear measurement of 2 mm (particularly in tumours with starting diameters of 20-30 mm, the majority of this cohort) could be used as a filter to identify cases of possible progression requiring volumetric analysis. This pragmatic approach can be used if stabilization of a previously growing schwannoma is sufficient for a patient to continue treatment. We demonstrate the real-world limitations of linear vs volumetric measurement in tumour response assessment and identify the limited circumstances in which linear measurements can be used to determine which patients require the more resource-intensive volumetric measurements.
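For intuition on why a 2-mm linear threshold works as a volumetric filter: for an idealised spherical tumour, a 2-mm diameter change implies a fractional volume change of ((d+2)/d)³ − 1, which is largest for the smaller tumours. Illustrative geometry only, not patient data:

```python
# Fractional volume change implied by a +2 mm diameter change for a
# sphere-like tumour, over the 20-30 mm starting diameters mentioned above.
for d in (20.0, 25.0, 30.0):
    growth = ((d + 2.0) / d) ** 3 - 1.0
    print(f"{d:.0f} mm: +{100.0 * growth:.0f}% volume")
```

Even at 30 mm the implied volume change is large relative to typical volumetric progression criteria, which is consistent with a 2-mm linear change being a sensitive trigger for full volumetric analysis.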
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent detector bins do not work. Ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, aiming to estimate the missing projection data accurately and thus remove the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel line in the projection sinogram; 2) linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data for restoring missing projection data and reconstructing ring artifact-free CT images. Results: We studied the impact of the number of dead detector bins on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 × 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the fraction of dead bins is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the fraction of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
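Steps 1-2 of the scheme above (flag dead detector bins, then interpolate across them along the detector axis for each projection angle) can be sketched with numpy on a toy sinogram; the FBP/reprojection iterations (steps 3-10) need a reconstruction backend and are omitted here.

```python
import numpy as np

# Toy sinogram: a smooth detector profile modulated over projection angle.
n_ang, n_bins = 180, 64
u = np.arange(n_bins)
profile = 1.0 + np.sin(np.pi * u / n_bins)
angle_mod = 1.0 + 0.1 * np.cos(np.linspace(0.0, np.pi, n_ang))
sino = profile[None, :] * angle_mod[:, None]

dead = [20, 21, 40]                 # hypothetical dead detector bins
corrupt = sino.copy()
corrupt[:, dead] = 0.0              # dead bins record nothing

# Step 1: identify abnormal pixel lines (all-zero detector columns).
bad = np.where(corrupt.sum(axis=0) == 0)[0]
good = np.setdiff1d(u, bad)

# Step 2: linear interpolation across the bad bins, per projection angle.
fixed = corrupt.copy()
for a in range(n_ang):
    fixed[a, bad] = np.interp(bad, good, corrupt[a, good])

print(np.abs(fixed - sino).max())   # residual error of the interpolation
```

On smooth data the interpolation alone recovers the missing columns well; the paper's reprojection loop exists to handle the sharper structures a real phantom produces, where plain interpolation leaves visible residuals.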
Angular Size Test on the Expansion of the Universe
NASA Astrophysics Data System (ADS)
López-Corredoira, Martín
Assuming the standard cosmological model to be correct, the average linear size of galaxies with the same luminosity is six times smaller at z = 3.2 than at z = 0, and their average angular size for a given luminosity is approximately proportional to z⁻¹. Neither the hypothesis that galaxies which formed earlier have much higher densities, nor their luminosity evolution, merger ratio, and massive outflows due to a quasar feedback mechanism, are enough to justify such a strong size evolution. Also, at high redshift, the intrinsic ultraviolet surface brightness would be prohibitively high with this evolution, and the velocity dispersion much higher than observed. We explore here another possibility of overcoming this problem: considering different cosmological scenarios that might make the observed angular sizes compatible with a weaker evolution. One of the explored models, a very simple phenomenological extrapolation of the linear Hubble law in a Euclidean static universe, fits the angular size versus redshift dependence quite well, also approximately proportional to z⁻¹ in this cosmological model. There are no free parameters derived ad hoc, although the error bars allow a slight size/luminosity evolution. The Type Ia supernova Hubble diagram can also be explained in terms of this model without any ad-hoc-fitted parameter. NB: I do not argue here that the true universe is static. My intention is only to discuss which theoretical models better fit some data of observational cosmology.
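The phenomenological extrapolation of the linear Hubble law in a Euclidean static universe makes the angular size directly proportional to z⁻¹, which is easy to verify numerically. A minimal sketch, where the H0 value and galaxy size are illustrative assumptions:

```python
import numpy as np

C_KM_S = 2.998e5   # speed of light [km/s]
H0 = 70.0          # assumed Hubble constant [km/s/Mpc] (illustrative value)

def angular_size_arcsec(linear_size_kpc, z):
    """Angular size under a linear-Hubble-law Euclidean static extrapolation:
    distance d = c*z/H0, so theta = s/d, i.e. theta is proportional to 1/z."""
    d_kpc = (C_KM_S * z / H0) * 1e3       # Mpc -> kpc
    theta_rad = linear_size_kpc / d_kpc
    return np.degrees(theta_rad) * 3600.0  # radians -> arcseconds

z = np.array([0.5, 1.0, 2.0, 3.2])
theta = angular_size_arcsec(10.0, z)       # a hypothetical 10 kpc galaxy
```

The product theta × z is constant across redshift, reproducing the z⁻¹ scaling discussed in the abstract.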
Varol, H. Samet; Meng, Fanlong; Hosseinkhani, Babak; Malm, Christian; Bonn, Daniel; Bonn, Mischa; Zaccone, Alessio
2017-01-01
Polymer nanocomposites—materials in which a polymer matrix is blended with nanoparticles (or fillers)—strengthen under sufficiently large strains. Such strain hardening is critical to their function, especially for materials that bear large cyclic loads such as car tires or bearing sealants. Although the reinforcement (i.e., the increase in the linear elasticity) by the addition of filler particles is phenomenologically understood, considerably less is known about strain hardening (the nonlinear elasticity). Here, we elucidate the molecular origin of strain hardening using uniaxial tensile loading, microspectroscopy of polymer chain alignment, and theory. The strain-hardening behavior and chain alignment are found to depend on the volume fraction, but not on the size of nanofillers. This contrasts with reinforcement, which depends on both volume fraction and size of nanofillers, potentially allowing linear and nonlinear elasticity of nanocomposites to be tuned independently. PMID:28377517
NASA Astrophysics Data System (ADS)
Salkin, Louis; Courbin, Laurent; Panizza, Pascal
2012-09-01
Combining experiments and theory, we investigate the break-up dynamics of deformable objects, such as drops and bubbles, against a linear micro-obstacle. Our experiments bring the role of the viscosity contrast Δη between dispersed and continuous phases to light: the evolution of the critical capillary number to break a drop as a function of its size is either nonmonotonic (Δη>0) or monotonic (Δη≤0). In the case of positive viscosity contrasts, experiments and modeling reveal the existence of an unexpected critical object size for which the critical capillary number for breakup is minimum. Using simple physical arguments, we derive a model that well describes observations, provides diagrams mapping the four hydrodynamic regimes identified experimentally, and demonstrates that the critical size originating from confinement solely depends on geometrical parameters of the obstacle.
Importance of Resolving the Spectral Support of Beam-plasma Instabilities in Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip
2017-10-20
Many astrophysical plasmas are prone to beam-plasma instabilities. For relativistic and dilute beams, the spectral support of the beam-plasma instabilities is narrow, i.e., the linearly unstable modes that grow at rates comparable to the maximum growth rate occupy a narrow range of wavenumbers. This places stringent requirements on the box sizes when simulating the evolution of the instabilities. We identify the implied lower limits on the box size imposed by the longitudinal beam-plasma instability, typically the most stringent condition required to correctly capture the linear evolution of the instabilities in multidimensional simulations. We find that box sizes many orders of magnitude larger than the resonant wavelength are typically required. Using one-dimensional particle-in-cell simulations, we show that failure to sufficiently resolve the spectral support of the longitudinal instability yields slower growth and lower levels of saturation, potentially leading to erroneous physical conclusions.
Complex messages regarding a thin ideal appearing in teenage girls' magazines from 1956 to 2005.
Luff, Gina M; Gray, James J
2009-03-01
Seventeen and YM were assessed from 1956 through 2005 (n=312) to examine changes in the messages about thinness sent to teenage women. Trends were analyzed through an investigation of written, internal content focused on dieting, exercise, or both, while cover models were examined to explore fluctuations in body size. Pearson product-moment correlations and weighted-least-squares linear regression models were used to demonstrate changes over time. The frequency of written content related to exercise and combined plans increased in Seventeen, while a curvilinear relationship between time and content relating to dieting appeared. YM showed a linear increase in content related to dieting, exercise, and combined plans. Average cover model body size increased over time in YM while demonstrating no significant changes in Seventeen. Overall, more written messages about dieting and exercise appeared in teen magazines in 2005 than before, while the average cover model body size increased.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
Kim, Young Baek; Choi, Bum Ho; Lim, Yong Hwan; Yoo, Ha Na; Lee, Jong Ho; Kim, Jin Hyeok
2011-02-01
In this study, pentacene organic thin films were prepared and characterized using a newly developed organic-material auto-feeding system integrated with a linear cell. The newly developed auto-feeding system consists of four major parts: reservoir, micro auto-feeder, vaporizer, and linear cell. The deposition of the organic thin film can be precisely controlled by adjusting the feeding rate, the main tube size, and the position and size of the nozzle. A 10 nm thick pentacene thin film prepared on a glass substrate exhibited a uniformity of 3.46%, better than that of the conventional evaporation method using a point cell. Continuous deposition without replenishment of organic material was performed for over 144 hours with regulated deposition control. The grain size of the pentacene film, which affects the mobility of the OTFT, was controlled as a function of temperature.
Designing pinhole vacancies in graphene towards functionalization: Effects on critical buckling load
NASA Astrophysics Data System (ADS)
Georgantzinos, S. K.; Markolefas, S.; Giannopoulos, G. I.; Katsareas, D. E.; Anifantis, N. K.
2017-03-01
The effect of size and placement of pinhole-type atom vacancies on Euler's critical load on free-standing, monolayer graphene, is investigated. The graphene is modeled by a structural spring-based finite element approach, in which every interatomic interaction is approached as a linear spring. The geometry of graphene and the pinhole size lead to the assembly of the stiffness matrix of the nanostructure. Definition of the boundary conditions of the problem leads to the solution of the eigenvalue problem and consequently to the critical buckling load. Comparison to results found in the literature illustrates the validity and accuracy of the proposed method. Parametric analysis regarding the placement and size of the pinhole-type vacancy, as well as the graphene geometry, depicts the effects on critical buckling load. Non-linear regression analysis leads to empirical-analytical equations for predicting the buckling behavior of graphene, with engineered pinhole-type atom vacancies.
A multiscale filter for noise reduction of low-dose cone beam projections
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Farr, Jonathan B.
2015-08-01
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(−x²/2σ_f²), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression for σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f² is proved to be proportional to the noiseless fluence, modulated by the local structure strength expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
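The "linear fitting error" used above to quantify local structure strength can be illustrated with a simple sliding-window estimator. This is a schematic proxy, not the paper's exact procedure; the window size and function name are assumptions:

```python
import numpy as np

def local_linear_fit_error(signal, half_width=2):
    """Residual RMS of a local first-order (linear) fit in a sliding window.
    Flat or ramp-like regions give ~0 error; edges give large error, which in
    the paper's scheme would shrink the filter scale to preserve structure."""
    n = len(signal)
    err = np.zeros(n)
    x = np.arange(-half_width, half_width + 1)
    for i in range(half_width, n - half_width):
        y = signal[i - half_width:i + half_width + 1]
        coef = np.polyfit(x, y, 1)                       # local linear fit
        err[i] = np.sqrt(np.mean((y - np.polyval(coef, x)) ** 2))
    return err

ramp = np.arange(20.0)                     # perfectly linear: zero fitting error
step = np.r_[np.zeros(10), np.ones(10)]    # an edge: nonzero error near the jump
```

A ramp yields essentially zero error everywhere, while the step produces a localized error peak at the edge, exactly the behavior needed to modulate σ_f² spatially.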
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suhara, Tadahiro; Kanada-En'yo, Yoshiko
We investigate the linear-chain structures in highly excited states of ¹⁴C using a generalized molecular-orbital model, by which we incorporate an asymmetric configuration of three α clusters in the linear-chain states. By applying this model to the ¹⁴C system, we study the ¹⁰Be+α correlation in the linear-chain state of ¹⁴C. To clarify the origin of the ¹⁰Be+α correlation in the ¹⁴C linear-chain state, we analyze linear 3α and 3α + n systems in a similar way. We find that a linear 3α system prefers the asymmetric 2α + α configuration, whose origin is the many-body correlation incorporated by the parity projection. This configuration causes an asymmetric mean field for the two valence neutrons, which induces the concentration of the valence-neutron wave functions around the correlating 2α. A linear-chain structure of ¹⁶C is also discussed.
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geological projects. To date, analyses of the economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at the Fore-Sudetic Monocline under the assumption of a constant, averaged content of useful elements. The research presented in this article verifies the value of production from copper and silver ore, for the same economic background, using variable cash flows resulting from the local variability of useful-element content. Furthermore, the ore economic model is investigated for a significant difference between the model value estimated using a linear correlation between useful-element content and mine-face height, and an approach in which the correlation of model parameters is based on the copula best matching the information capacity criterion. The use of a copula allows the simulation to take multivariate dependencies into account simultaneously, giving a better reflection of the dependency structure than linear correlation does. Calculation results of the economic model used for deposit value estimation indicate that the copper-silver correlation estimated with a copula generates higher variation in possible project value than modelling the correlation linearly. The average deposit value remains unchanged.
Permeability-porosity relationships in sedimentary rocks
Nelson, Philip H.
1994-01-01
In many consolidated sandstone and carbonate formations, plots of core data show that the logarithm of permeability (k) is often linearly proportional to porosity (φ). The slope, intercept, and degree of scatter of these log(k)-φ trends vary from formation to formation, and these variations are attributed to differences in initial grain size and sorting, diagenetic history, and compaction history. In unconsolidated sands, better sorting systematically increases both permeability and porosity. In sands and sandstones, an increase in gravel and coarse-grain content causes k to increase even while decreasing φ. Diagenetic minerals in the pore space of sandstones, such as cement and some clay types, tend to decrease log(k) proportionately as φ decreases. Models to predict permeability from porosity and other measurable rock parameters fall into three classes, based on grain, surface-area, or pore-dimension considerations. (Models that directly incorporate well-log measurements but have no particular theoretical underpinnings form a fourth class.) Grain-based models show permeability proportional to the square of grain size times porosity raised to (roughly) the fifth power, with grain sorting as an additional parameter. Surface-area models show permeability proportional to the inverse square of pore surface area times porosity raised to (roughly) the fourth power; measures of surface area include irreducible water saturation and nuclear magnetic resonance. Pore-dimension models show permeability proportional to the square of a pore dimension times porosity raised to a power of (roughly) two, and produce curves of constant pore size that transgress the linear data trends on a log(k)-φ plot. The pore dimension is obtained from mercury injection measurements and is interpreted as the pore-opening size of some interconnected fraction of the pore system. The linear log(k)-φ data trends cut the curves of constant pore size from the pore-dimension models, which shows that porosity reduction is always accompanied by a reduction in characteristic pore size. The high powers of porosity in the grain-based and surface-area models are required to compensate for the inclusion of the small end of the pore-size spectrum.
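The grain-based scaling described above (k proportional to grain size squared times roughly the fifth power of porosity) can be sketched numerically. The prefactor and units below are illustrative placeholders, not calibrated values from the paper:

```python
import numpy as np

def perm_grain_model(grain_size_um, porosity, c=1.0e-3):
    """Grain-based permeability estimate: k ~ c * d^2 * phi^5.
    c is an arbitrary illustrative prefactor (assumption), so only the
    *relative* behavior, not the absolute values, is meaningful here."""
    return c * grain_size_um**2 * porosity**5

phi = np.array([0.10, 0.20, 0.30])
k = perm_grain_model(100.0, phi)   # permeability trend for a 100 um grain size
```

Doubling the grain size quadruples k at fixed porosity, and k rises steeply with φ, reproducing the strong porosity exponent that the abstract attributes to the shrinking small-pore fraction.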
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis evaluating the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal that criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to its higher upper sensitivity limit.
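The censoring analysis described above can be sketched in a few lines: set threshold estimates below the criterion to the criterion value, then compare regression slopes before and after. A minimal NumPy illustration with synthetic data (the dB values and visit schedule are assumptions, not the study's data):

```python
import numpy as np

def censor(thresholds_db, criterion=20.0):
    """Set threshold estimates below the criterion to the criterion value."""
    return np.maximum(thresholds_db, criterion)

def slope_db_per_year(years, values_db):
    """Rate of change (dB/year) from ordinary linear regression on time."""
    return np.polyfit(years, values_db, 1)[0]

years = np.arange(0, 4.5, 0.5)          # visits every 6 months for 4 years
series = 24.0 - 2.0 * years             # a point worsening at -2 dB/year
raw_slope = slope_db_per_year(years, series)
cens_slope = slope_db_per_year(years, censor(series))
```

Here the uncensored slope is -2 dB/year, while censoring at 20 dB flattens the tail of the series and halves the measured rate, illustrating why censoring attenuates (but, per the study, does not abolish) detected progression.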
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of prime importance to speed up the computational procedures. Such reduction can be performed for the parts of the structure that behave linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
Microphysical and Optical Properties of Saharan Dust Measured during the ICE-D Aircraft Campaign
NASA Astrophysics Data System (ADS)
Ryder, Claire; Marenco, Franco; Brooke, Jennifer; Cotton, Richard; Taylor, Jonathan
2017-04-01
During August 2015, the UK FAAM BAe146 research aircraft was stationed in Cape Verde off the coast of West Africa. Measurements of Saharan dust, and ice and liquid water clouds, were taken for the ICE-D (Ice in Clouds Experiment - Dust) project - a multidisciplinary project aimed at further understanding aerosol-cloud interactions. Six flights formed part of a sub-project, AER-D, solely focussing on measurements of Saharan dust within the African dust plume. Dust loadings observed during these flights varied (aerosol optical depths of 0.2 to 1.3), as did the vertical structure of the dust, the size distributions and the optical properties. The BAe146 was fully equipped to measure size distributions covering aerosol accumulation, coarse and giant modes. Initial results of size distribution and optical properties of dust from the AER-D flights will be presented, showing that a substantial coarse mode was present, in agreement with previous airborne measurements. Optical properties of dust relating to the measured size distributions will also be presented.
Sirin, Selma; de Jong, Marcus C; Galluzzi, Paolo; Maeder, Philippe; Brisse, Hervé J; Castelijns, Jonas A; de Graaf, Pim; Goericke, Sophia L
2016-07-01
Pineal cysts are a common incidental finding on brain MRI, with resulting difficulties in differentiating between normal glands and pineal pathologies. The aim of this study was to assess the size and morphology of the cystic pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. In this retrospective multicenter study, 257 MR examinations (232 children, 0-5 years) were evaluated regarding pineal gland size (width, height, planimetric area, maximal cyst size) and morphology. We performed linear regression analysis with 99% prediction intervals of gland size versus age for the size parameters. Results were compared with a recent meta-analysis of pineoblastoma by de Jong et al. Follow-up was available in 25 children, showing stable cystic findings in 48%, cyst size increase in 36%, and decrease in 16%. Linear regression analysis gave 99% upper prediction bounds of 10.8 mm, 10.9 mm, 7.7 mm, and 66.9 mm², respectively, for cyst size, width, height, and area. The slopes (size increase per month) of these parameters were 0.030, 0.046, 0.021, and 0.25, respectively. Most of the pineoblastomas showed a size larger than the 99% upper prediction margin, but with considerable overlap between the groups. We present age-adapted normal values for the size and morphology of the cystic pineal gland in children aged 0 to 5 years. Analysis of size is helpful in discriminating normal glands from cystic pineal pathologies such as pineoblastoma. We also present guidelines for the approach to a solid or cystic pineal gland in hereditary retinoblastoma patients.
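An upper prediction bound from simple linear regression, of the kind used above, can be sketched as follows. This uses a normal-quantile approximation to the t quantile and synthetic data; it is a schematic of the analysis, not the study's code or data:

```python
import numpy as np

Z99 = 2.576   # two-sided 99% normal quantile (approximation to the t quantile)

def upper_prediction_bound(age_months, size_mm, query_age):
    """Approximate upper 99% prediction bound of size at query_age from
    simple linear regression (normal approximation; illustrative sketch)."""
    slope, intercept = np.polyfit(age_months, size_mm, 1)
    resid = size_mm - (slope * age_months + intercept)
    s = np.sqrt(np.sum(resid**2) / (len(age_months) - 2))  # residual std
    return slope * query_age + intercept + Z99 * s

rng = np.random.default_rng(0)
age = rng.uniform(0, 60, 200)                        # ages 0-60 months
size = 4.0 + 0.03 * age + rng.normal(0, 0.5, 200)    # synthetic widths [mm]
bound = upper_prediction_bound(age, size, 36.0)
```

By construction, almost all normal measurements fall below the bound, so an observation well above it (as for most pineoblastomas in the meta-analysis) flags a possible pathology.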
NASA Astrophysics Data System (ADS)
Wu, Yingchun; Crua, Cyril; Li, Haipeng; Saengkaew, Sawitree; Mädler, Lutz; Wu, Xuecheng; Gréhan, Gérard
2018-07-01
Accurate measurements of droplet temperature, size, and evaporation rate are of great importance for characterizing the heat and mass transfer during evaporation/condensation processes. The nanoscale size change of a micron-sized droplet exactly describes its transient mass transfer, but is difficult to measure because it is smaller than the resolution of current size measurement techniques. The Phase Rainbow Refractometry (PRR) technique is developed and applied to measure droplet temperature, size, transient size change, and thereby evaporation rate simultaneously. The measurement principle of PRR is theoretically derived, revealing that the phase shift of the time-resolved ripple structures depends linearly on, and can directly yield, nanoscale size changes of droplets. The PRR technique is first verified through the simulation of rainbows of droplets with changing size; the results show that PRR can precisely measure droplet refractive index, absolute size, and size change, with absolute and relative errors within several nanometers and 0.6%, respectively, and thus PRR permits accurate measurement of transient droplet evaporation rates. The evaporation of a flowing single n-nonane droplet and of a monodispersed n-heptane droplet stream is investigated by two PRR systems, one with a high-speed linear CCD and one with a low-speed array CCD. Their transient evaporation rates are experimentally determined and agree quantitatively with the theoretical values predicted by the classical Maxwell and Stefan-Fuchs models. With the demonstration of evaporation rate measurement of a monocomponent droplet in this work, PRR is an ideal tool for measuring transient droplet evaporation/condensation processes, and can be extended to multicomponent droplets in a wide range of industrially relevant applications.
PROCEDURE FOR DETERMINATION OF SEDIMENT PARTICLE SIZE (GRAIN SIZE)
Sediment quality and sediment remediation projects have become a high priority for USEPA. Sediment particle size determinations are used in environmental assessments for habitat characterization, chemical normalization, and partitioning potential of chemicals. The accepted met...
Intravital Microscopy Imaging Approaches for Image-Guided Drug Delivery Systems
Kirui, Dickson K.; Ferrari, Mauro
2016-01-01
Rapid technical advances in the field of non-linear microscopy have made intravital microscopy a vital pre-clinical tool for the research and development of image-guided drug delivery systems. The ability to dynamically monitor the fate of macromolecules in live animals provides invaluable information regarding the properties of drug carriers (size, charge, and surface coating) and the physiological and pathological processes that exist between the point of injection and the projected site of delivery, all of which influence the delivery and effectiveness of drug delivery systems. In this Review, we highlight how integrating intravital microscopy imaging with experimental designs (in vitro analyses and mathematical modeling) can provide unique information critical to the design of novel disease-relevant drug delivery platforms with improved diagnostic and therapeutic indexes. The Review provides the reader an overview of the various applications for which intravital microscopy has been used to monitor the delivery of diagnostic and therapeutic agents and discusses some of their potential clinical applications. PMID:25901526
Multivariate Heteroscedasticity Models for Functional Brain Connectivity.
Seiler, Christof; Holmes, Susan
2017-01-01
Functional brain connectivity is the co-occurrence of brain activity in different areas during resting and while doing tasks. The data of interest are multivariate timeseries measured simultaneously across brain parcels using resting-state fMRI (rfMRI). We analyze functional connectivity using two heteroscedasticity models. Our first model is low-dimensional and scales linearly in the number of brain parcels. Our second model scales quadratically. We apply both models to data from the Human Connectome Project (HCP) comparing connectivity between short and conventional sleepers. We find stronger functional connectivity in short than conventional sleepers in brain areas consistent with previous findings. This might be due to subjects falling asleep in the scanner. Consequently, we recommend the inclusion of average sleep duration as a covariate to remove unwanted variation in rfMRI studies. A power analysis using the HCP data shows that a sample size of 40 detects 50% of the connectivity at a false discovery rate of 20%. We provide implementations using R and the probabilistic programming language Stan.
Ultra-thin high-efficiency mid-infrared transmissive Huygens meta-optics.
Zhang, Li; Ding, Jun; Zheng, Hanyu; An, Sensong; Lin, Hongtao; Zheng, Bowen; Du, Qingyang; Yin, Gufan; Michon, Jerome; Zhang, Yifei; Fang, Zhuoran; Shalaginov, Mikhail Y; Deng, Longjiang; Gu, Tian; Zhang, Hualiang; Hu, Juejun
2018-04-16
The mid-infrared (mid-IR) is a strategically important band for numerous applications ranging from night vision to biochemical sensing. Here we theoretically analyzed and experimentally realized a Huygens metasurface platform capable of fulfilling a diverse cross-section of optical functions in the mid-IR. The meta-optical elements were constructed using high-index chalcogenide films deposited on fluoride substrates: the choice of wide-band transparent materials allows the design to be scaled across a broad infrared spectrum. Capitalizing on a two-component Huygens' meta-atom design, the meta-optical devices feature an ultra-thin profile (λ₀/8 in thickness) and measured optical efficiencies up to 75% in transmissive mode for linearly polarized light, representing major improvements over the state of the art. We have also demonstrated mid-IR transmissive meta-lenses with diffraction-limited focusing and imaging performance. The projected size, weight, and power advantages, coupled with manufacturing scalability leveraging standard microfabrication technologies, make Huygens meta-optical devices promising for next-generation mid-IR system applications.
Persistence versus extinction for a class of discrete-time structured population models.
Jin, Wen; Smith, Hal L; Thieme, Horst R
2016-03-01
We provide sharp conditions distinguishing persistence and extinction for a class of discrete-time dynamical systems on the positive cone of an ordered Banach space generated by a map which is the sum of a positive linear contraction A and a nonlinear perturbation G that is compact and differentiable at zero in the direction of the cone. Such maps arise as year-to-year projections of population age-, stage-, or size-structure distributions in population biology, where typically A has to do with survival and individual development and G captures the effects of reproduction. The threshold distinguishing persistence and extinction is the principal eigenvalue of (I−A)⁻¹G'(0) provided by the Krein-Rutman Theorem, and persistence is described in terms of associated eigenfunctionals. Our results significantly extend earlier persistence results of the last two authors, which required more restrictive conditions on G. They are illustrated by applying the results to a plant model with a seed bank.
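In finite dimensions the persistence threshold reduces to the spectral radius of (I−A)⁻¹B, where B = G'(0) is the low-density fertility matrix. A minimal sketch for a hypothetical two-stage (juvenile/adult) population; all matrix entries are illustrative assumptions:

```python
import numpy as np

# Toy 2-stage population: column j gives transitions out of stage j per year.
A = np.array([[0.3, 0.0],     # juveniles: 30% stay juvenile, 40% mature
              [0.4, 0.8]])    # adults survive with probability 0.8
B = np.array([[0.0, 0.9],     # low-density reproduction G'(0): each adult
              [0.0, 0.0]])    #   produces 0.9 juveniles per year

def persistence_threshold(A, B):
    """Spectral radius of (I - A)^{-1} B; > 1 means persistence, < 1 extinction."""
    M = np.linalg.solve(np.eye(len(A)) - A, B)   # (I-A)^{-1} B without inverting
    return max(abs(np.linalg.eigvals(M)))

r = persistence_threshold(A, B)   # here r = 0.9*0.4/(0.7*0.2) ≈ 2.57 > 1
```

Since (I−A)⁻¹ = I + A + A² + … sums survival over a lifetime, r counts expected lifetime offspring per newborn, so r > 1 signals persistence for this toy model.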
Tensor methodology and computational geometry in direct computational experiments in fluid mechanics
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia
2017-07-01
The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used to construct numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as the elementary computational object. The tensor representation of computational objects yields a straightforward, linear, and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the algorithms, with numerical procedures exhibiting natural parallelism. It is shown that the advantages of the proposed approach are achieved, among other ways, by representing the motion of large particles of a continuous medium in dual coordinate systems and performing computing operations in the projections of these two coordinate systems with direct and inverse transformations. A new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is thus proposed.
OPTICON: Pro-Matlab software for large order controlled structure design
NASA Technical Reports Server (NTRS)
Peterson, Lee D.
1989-01-01
A software package for large order controlled structure design is described and demonstrated. The primary program, called OPTICON, uses both Pro-Matlab M-file routines and selected compiled FORTRAN routines linked into the Pro-Matlab structure. The program accepts structural model information in the form of state-space matrices and performs three basic design functions on the model: (1) open loop analyses; (2) closed loop reduced order controller synthesis; and (3) closed loop stability and performance assessment. The controller synthesis methods currently implemented in this software are based on the Generalized Linear Quadratic Gaussian theory of Bernstein. In particular, a reduced order Optimal Projection synthesis algorithm based on a homotopy solution method was successfully applied to an experimental truss structure using a 58-state dynamic model. These results are presented and discussed. Current plans to expand the practical size of the design model to several hundred states and the intention to interface Pro-Matlab to a supercomputing environment are discussed.
Quantum carpets in one-dimensional tilted optical lattices
NASA Astrophysics Data System (ADS)
Parra Murillo, Carlos Alberto; Muñoz Arias, Manuel Humberto; Madroñero, Javier
A unit filling Bose-Hubbard Hamiltonian embedded in a strong Stark field is studied in the off-resonant regime inhibiting single- and many-particle first-order tunneling resonances. We investigate the occurrence of coherent dipole wavelike propagation along an optical lattice by means of an effective Hamiltonian accounting for second-order tunneling processes. It is shown that dipole wave function evolution in the short-time limit is ballistic and that finite-size effects induce dynamical self-interference patterns known as quantum carpets. We also present the effects of the border right after the first reflection, showing that the wave function diffuses normally with the variance changing linearly in time. This work extends the rich physical phenomenology of tilted one-dimensional lattice systems in a scenario of many interacting quantum particles, the so-called many-body Wannier-Stark system. The authors acknowledge the financial support of the Universidad del Valle (project CI 7996). C. A. Parra-Murillo gratefully acknowledges the financial support of COLCIENCIAS (Grant 656).
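The ballistic short-time spreading mentioned above can be illustrated with a single-particle tight-binding chain, a stand-in for the effective dipole-wave Hamiltonian; this sketch is not the interacting Bose-Hubbard model itself, and the lattice size and hopping strength are arbitrary choices.

```python
import numpy as np

# Single-particle tight-binding chain (illustrative surrogate for the
# effective second-order dipole Hamiltonian described in the abstract)
L = 101
J = 1.0
H = -J * (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))

psi0 = np.zeros(L, complex)
psi0[L // 2] = 1.0                      # wave packet localized at the center

evals, evecs = np.linalg.eigh(H)
sites = np.arange(L)

def variance(t):
    """Position variance of the evolved wave packet at time t."""
    psi = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
    p = np.abs(psi) ** 2
    mean = p @ sites
    return p @ (sites - mean) ** 2

# Before the boundary is reached the spreading is ballistic: var ~ t^2,
# so doubling t should roughly quadruple the variance.
v1, v2 = variance(2.0), variance(4.0)
assert 3.5 < v2 / v1 < 4.5
```

After the wavefront reflects from the chain's edge this quadratic growth breaks down, which is the regime where the abstract reports variance growing only linearly in time.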
Triple galaxies and a hidden mass problem
NASA Technical Reports Server (NTRS)
Karachentsev, I. D.; Karachentseva, V. E.; Lebedev, V. S.
1990-01-01
The authors consider a homogeneous sample of 84 triple systems of galaxies with components brighter than m = 15.7, located in the northern sky and satisfying an isolation criterion with respect to neighboring galaxies in projection. The distributions of the basic dynamical parameters of the triplets have the following median values: radial velocity dispersion 133 km/s, mean harmonic radius 63 kpc, absolute magnitude of galaxies M_B = -20.38, and crossing time τ = 0.04 H⁻¹. For different methods of estimation, the median mass-to-luminosity ratio is 20-30. A comparison of the last value with those for single and binary galaxies shows the presence of a virial mass excess for triplets by a factor of 4. The mass-to-luminosity ratio is practically uncorrelated with the linear size of the triplets or with the morphological types of their components. We note that a significant part of the virial excess may be explained by the presence of nonisolated triple configurations in the sample, produced by debris of more populous groups of galaxies.
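A back-of-the-envelope virial estimate using the median values above can be sketched as follows. The virial prefactor and the solar absolute magnitude are illustrative assumptions, not values taken from the paper.

```python
# Order-of-magnitude virial mass-to-light estimate from the sample medians
G = 4.301e-6      # gravitational constant in kpc (km/s)^2 / Msun

sigma_v = 133.0   # km/s, median radial velocity dispersion
R_h = 63.0        # kpc, median mean harmonic radius
M_B = -20.38      # median absolute magnitude per galaxy

# Simple virial estimator M ~ A * sigma^2 * R / G; the dimensionless
# prefactor A = 3 (isotropic orbits) is an assumption for illustration.
A = 3.0
M_vir = A * sigma_v**2 * R_h / G          # Msun

# Blue luminosity of three median galaxies (M_B,sun = 5.48 assumed)
L_gal = 10 ** (-0.4 * (M_B - 5.48))       # Lsun
L_tot = 3 * L_gal

M_over_L = M_vir / L_tot
# Should land within an order of magnitude of the paper's median of 20-30
assert 1 < M_over_L < 100
```

The exact value depends strongly on the chosen prefactor and mass estimator, which is why the abstract quotes a range (20-30) over different estimation methods.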
Dul, Mitchell W.; Swanson, William H.
2006-01-01
Purposes The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Methods Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. Results The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Conclusions Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli. PMID:16840860
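The two analysis steps described above, converting dB sensitivities to the reciprocal of Weber contrast and comparing tests with the Bland-Altman method, can be sketched as follows. The dB-to-contrast convention and the sample data are illustrative assumptions, not the instruments' exact calibrations or the study's measurements.

```python
import numpy as np

# Convert perimetric sensitivity in dB to the reciprocal of Weber contrast.
# The 10*log10 convention here is an assumption for illustration; perimeters
# differ in their dB calibration.
def inverse_weber_contrast(sensitivity_db):
    return 10.0 ** (np.asarray(sensitivity_db, float) / 10.0)

# Bland-Altman agreement statistics for two tests at the same locations
def bland_altman(x, y):
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)    # 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired sensitivities (dB) from two different tests
a = inverse_weber_contrast([30, 28, 25, 27])
b = inverse_weber_contrast([29, 28, 26, 27])
bias, (lo, hi) = bland_altman(a, b)
assert lo <= bias <= hi    # the mean difference lies within its own limits
```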
Depth Of Modulation And Spot Size Selection In Bar-Code Laser Scanners
NASA Astrophysics Data System (ADS)
Barkan, Eric; Swartz, Jerome
1982-04-01
Many optical and electronic considerations enter into the selection of optical spot size in flying spot laser scanners of the type used in modern industrial and commercial environments. These include the scale of the symbols to be read, optical background noise present in the symbol substrate, and factors relating to the characteristics of the signal processor. Many 'front ends' consist of a linear signal conditioner followed by nonlinear conditioning and digitizing circuitry. Although the nonlinear portions of the circuit can be difficult to characterize mathematically, it is frequently possible to at least give a minimum depth of modulation measure that yields a worst-case guarantee of adequate performance with respect to digitization accuracy. The depth of modulation actually delivered to the nonlinear circuitry will depend on the scale, contrast, and noise content of the scanned symbol, as well as the characteristics of the linear conditioning circuitry (e.g., transfer function and electronic noise). Time and frequency domain techniques are applied in order to estimate the effects of these factors in selecting a spot size for a given system environment. Results obtained include estimates of the effects of the linear front end transfer function on effective spot size and asymmetries which can affect digitization accuracy. Plots of convolution-computed modulation patterns and other important system properties are presented. Considerations are limited primarily to Gaussian spot profiles but also apply to more general cases. Attention is paid to realistic symbol models and to implications with respect to printing tolerances.
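The convolution-computed modulation patterns mentioned above can be sketched numerically: convolve an idealized bar-code reflectance profile with a Gaussian spot and measure the resulting depth of modulation. The bar widths and spot sizes below are hypothetical, chosen only to show that a larger spot reduces modulation.

```python
import numpy as np

# Idealized bar-code reflectance: alternating unit-width bars (0) and spaces (1)
x = np.linspace(0, 10, 2000)                 # position in minimum-bar-width units
pattern = (np.floor(x) % 2).astype(float)

def depth_of_modulation(spot_sigma):
    """Peak-to-trough signal swing after scanning with a Gaussian spot."""
    dx = x[1] - x[0]
    s = np.arange(-5 * spot_sigma, 5 * spot_sigma, dx)
    spot = np.exp(-s**2 / (2 * spot_sigma**2))
    spot /= spot.sum()                       # unit-area spot profile
    signal = np.convolve(pattern, spot, mode="same")
    mid = signal[len(signal) // 4 : 3 * len(signal) // 4]   # skip edge effects
    return mid.max() - mid.min()

# A spot much smaller than the bar width preserves modulation;
# a spot comparable to the bar width blurs the bars together.
assert depth_of_modulation(0.2) > depth_of_modulation(0.8)
```

In a real scanner design the linear front-end transfer function would be convolved in as well, which is exactly the "effective spot size" effect the abstract describes.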
Hong, In-Seok; Kim, Yong-Hwan; Choi, Bong-Hyuk; Choi, Suk-Jin; Park, Bum-Sik; Jin, Hyun-Chang; Kim, Hye-Jin; Heo, Jeong-Il; Kim, Deok-Min; Jang, Ji-Ho
2016-02-01
The injector for the main driver linear accelerator of the Rare Isotope Science Project in Korea has been developed to allow heavy ions up to uranium to be delivered to the in-flight fragmentation system. The critical components of the injector are the superconducting electron cyclotron resonance (ECR) ion sources, the radio frequency quadrupole (RFQ), and the matching systems for low- and medium-energy beams. We have built superconducting magnets for the ECR ion source and a prototype of one segment of the RFQ structure, with the aim of developing a design that satisfies our specifications, demonstrates stable operation, and provides results for comparison with the design simulations.
Why Teach? A Project-Ive Life-World Approach to Understanding What Teaching Means for Teachers
ERIC Educational Resources Information Center
Landrum, Brittany; Guilbeau, Catherine; Garza, Gilbert
2017-01-01
Previous literature has examined teachers' motivations to teach in terms of intrinsic and extrinsic motives, personality dimensions, and teacher burnout. These findings have been cast in the rubric of differences between teachers and non-teachers and the linear relations between these measures among teachers. Utilizing a phenomenological approach…
Nikazad, T; Davidi, R; Herman, G. T.
2013-01-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
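The row-action projection iterations studied above can be sketched in their simplest (unaccelerated, single-row) form, a cyclic Kaczmarz sweep with an optional summable perturbation of the iterates. This is a minimal illustration of the class of methods, not the authors' accelerated block-iterative algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic consistent linear system Ax = b
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
b = A @ x_true

def kaczmarz(A, b, n_sweeps=200, perturb=0.0):
    """Cyclic row projections; `perturb` adds a summable perturbation
    e_k / k^2 to the iterates, mimicking perturbation resilience."""
    x = np.zeros(A.shape[1])
    k = 1
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection of x onto the hyperplane a_i . x = b_i
            x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
            x = x + (perturb / k**2) * rng.normal(size=x.size)
            k += 1
    return x

# For a consistent system the unperturbed iteration converges to a solution
x_hat = kaczmarz(A, b)
assert np.linalg.norm(x_hat - x_true) < 1e-6
```

Perturbation resilience means convergence survives the `perturb > 0` case as well, which is what lets such perturbations be repurposed to steer the limit toward the minimizer of a convex functional (superiorization).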
Al Jabbari, Youssef S.; Tsakiridis, Peter; Eliades, George; Al-Hadlaq, Solaiman M.; Zinelis, Spiros
2012-01-01
Objective The aim of this study was to quantify the surface area, volume and specific surface area of endodontic files employing quantitative X-ray micro computed tomography (μXCT). Material and Methods Three sets (six files each) of the Flex-Master Ni-Ti system (Nº 20, 25 and 30, taper .04) were utilized in this study. The files were scanned by μXCT. The surface area and volume of all files were determined from the cutting tip up to 16 mm. The data from the surface area, volume and specific area were statistically evaluated using the one-way ANOVA and SNK multiple comparison tests at α=0.05, employing the file size as a discriminating variable. The correlation between the surface area and volume with nominal ISO sizes were tested employing linear regression analysis. Results The surface area and volume of Nº 30 files showed the highest values, followed by Nº 25 and Nº 20, and the differences were statistically significant. The Nº 20 files showed a significantly higher specific surface area compared to Nº 25 and Nº 30. The increase in surface area and volume towards higher file sizes follows a linear relationship with the nominal ISO sizes (r²=0.930 for surface area and r²=0.974 for volume, respectively). Results indicated that the surface area and volume demonstrated an almost linear increase while the specific surface area exhibited an abrupt decrease towards higher sizes. Conclusions This study demonstrates that μXCT can be effectively applied to discriminate very small differences in the geometrical features of endodontic micro-instruments, while providing quantitative information for their geometrical properties. PMID:23329248
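The linear regression against nominal ISO size described above can be sketched as follows. The surface-area values are hypothetical stand-ins, not the study's μXCT measurements; only the fitting procedure is illustrated.

```python
import numpy as np

# Hypothetical mean surface areas (mm^2) for ISO sizes 20, 25, 30 --
# illustrative numbers, not the paper's data
iso = np.array([20.0, 25.0, 30.0])
area = np.array([4.1, 5.0, 6.1])

# Least-squares straight line: area = slope * iso + intercept
slope, intercept = np.polyfit(iso, area, 1)

# Coefficient of determination r^2, as quoted in the abstract
pred = slope * iso + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
assert 0.9 < r2 <= 1.0    # a near-linear trend, as the study reports
```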
Wave-induced hydraulic forces on submerged aquatic plants in shallow lakes.
Schutten, J; Dainty, J; Davy, A J
2004-03-01
Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot-size descriptors (plan-form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0.2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1.5 and shoot size). The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan-form area performed similarly well as shoot-size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species.
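The force predictor described above, an integral over the shoot of its size per unit length times the vertical orbital velocity raised to the power 1.5, can be sketched under deep-water linear wave theory. The drag coefficient, shoot dimensions, and cylindrical shoot model below are illustrative assumptions.

```python
import numpy as np

g = 9.81  # m/s^2

# Deep-water linear wave theory: vertical orbital velocity amplitude at
# elevation z (z = 0 at the still-water surface, negative downward)
def vertical_orbital_velocity(H, T, z):
    k = (2 * np.pi / T) ** 2 / g          # deep-water dispersion: omega^2 = g*k
    return (np.pi * H / T) * np.exp(k * z)

# Force predictor: integral over the shoot of (size per unit length) * w^1.5,
# here for a cylindrical distribution of shoot material
def pulling_force(H, T, top_depth, shoot_length, area_per_m, coeff=1.0):
    z = np.linspace(top_depth - shoot_length, top_depth, 400)
    dz = z[1] - z[0]
    w = vertical_orbital_velocity(H, T, z)
    return coeff * np.sum(area_per_m * w ** 1.5) * dz

# Taller waves at the same submergence depth exert a larger pulling force
f_low = pulling_force(H=0.1, T=1.0, top_depth=-0.3,
                      shoot_length=0.5, area_per_m=0.02)
f_high = pulling_force(H=0.2, T=1.0, top_depth=-0.3,
                       shoot_length=0.5, area_per_m=0.02)
assert f_high > f_low
```

The exponential depth decay in the velocity term also reproduces the abstract's finding that force falls off with plant submergence depth.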