The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the pressing problem of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the coordinating linear sizes that determine the location of part elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates form the dimensional basis of the elements' linear sizes. To standardize the accuracy of coordinating linear sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The remaining intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals to the limit deviation corresponding to the material limit of the element: EI, the lower deviation, for the sizes of internal elements (holes), and es, the upper deviation, for the sizes of external elements (shafts). It is the maximum-material sizes that participate in the mating of shafts and holes and determine the type of fit.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable-coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s, when the step size resolves the slow time scale but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable-coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
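For readers comparing the two discretization families named above, the standard forms (textbook definitions, not taken from this report) differ only in where the right-hand side f is evaluated:

\[
\text{linear multistep:}\quad \sum_{j=0}^{k} \alpha_j\, y_{n+j} = h \sum_{j=0}^{k} \beta_j\, f\!\left(t_{n+j},\, y_{n+j}\right),
\qquad
\text{one-leg:}\quad \sum_{j=0}^{k} \alpha_j\, y_{n+j} = h\, f\!\Big(\sum_{j=0}^{k} \beta_j\, t_{n+j},\ \sum_{j=0}^{k} \beta_j\, y_{n+j}\Big).
\]

For constant-coefficient linear problems the two coincide; the stability differences discussed in the abstract arise for variable-coefficient systems.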
Switching times of nanoscale FePt: Finite size effects on the linear reversal mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, M. O. A.; Chantrell, R. W.
2015-04-20
The linear reversal mechanism in FePt grains ranging from 2.316 nm to 5.404 nm has been simulated using atomistic spin dynamics, parametrized from ab-initio calculations. The Curie temperature and the critical temperature (T*), at which the linear reversal mechanism occurs, are observed to decrease with system size whilst the temperature window T*
Controlling the Size and Shape of the Elastin-Like Polypeptide based Micelles
NASA Astrophysics Data System (ADS)
Streletzky, Kiril; Shuman, Hannah; Maraschky, Adam; Holland, Nolan
Elastin-like polypeptide (ELP) trimer constructs make reliable environmentally responsive micellar systems because they exhibit a controllable transition from being water-soluble at low temperatures to aggregating at high temperatures. It has been shown that, depending on the specific details of the ELP design (length of the ELP chain, pH, and salt concentration), micelles can vary in size and shape between spherical micelles with diameters of 30-100 nm and elongated particles with an aspect ratio of about 10. This makes ELP trimers a convenient platform for developing potential drug delivery and bio-sensing applications as well as for understanding micelle formation in ELP systems. Since, at a given salt concentration, the headgroup area for each foldon should be constant, the size of the micelles is expected to be proportional to the volume of linear ELP available per foldon headgroup. Therefore, adding linear ELPs to a system of ELP-foldon should change the micelle volume, allowing control of micelle size and possibly shape. The effects of addition of linear ELPs on the size, shape, and molecular weight of micelles at different salt concentrations were studied by a combination of dynamic light scattering and static light scattering. The initial results on 50 µM ELP-foldon samples (at low salt) show that the Rh of mixed micelles increases more than 5-fold as the amount of linear ELP is raised from 0 to 50 µM. It was also found that a given mixture of linear and trimer constructs has two temperature-based transitions and therefore displays three predominant size regimes.
Control method for physical systems and devices
Guckenheimer, John
1997-01-01
A control method for stabilizing systems or devices that are outside the control domain of a linear controller is provided. When applied to nonlinear systems, the effectiveness of this method depends upon the size of the domain of stability that is produced for the stabilized equilibrium. If this domain is small compared to the accuracy of measurements or the size of disturbances within the system, then the linear controller is likely to fail within a short period. Failure of the system or device can be catastrophic: the system or device can wander far from the desired equilibrium. The method of the invention presents a general procedure to recapture the stability of a linear controller, when the trajectory of a system or device leaves its region of stability. By using a hybrid strategy based upon discrete switching events within the state space of the system or device, the system or device will return from a much larger domain to the region of stability utilized by the linear controller. The control procedure is robust and remains effective under large classes of perturbations of a given underlying system or device.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
NASA Astrophysics Data System (ADS)
Barnaś, Dawid; Bieniasz, Lesław K.
2017-07-01
We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
Plastic strain is a mixture of avalanches and quasireversible deformations: Study of various sizes
NASA Astrophysics Data System (ADS)
Szabó, Péter; Ispánovity, Péter Dusán; Groma, István
2015-02-01
The size dependence of plastic flow is studied by discrete dislocation dynamical simulations of systems with various amounts of interacting dislocations while the stress is slowly increased. The regions between avalanches in the individual stress curves as functions of the plastic strain were found to be nearly linear and reversible where the plastic deformation obeys an effective equation of motion with a nearly linear force. For small plastic deformation, the mean values of the stress-strain curves obey a power law over two decades. Here and for somewhat larger plastic deformations, the mean stress-strain curves converge for larger sizes, while their variances shrink, both indicating the existence of a thermodynamical limit. The converging averages decrease with increasing size, in accordance with size effects from experiments. For large plastic deformations, where steady flow sets in, the thermodynamical limit was not realized in this model system.
Electron-Phonon Systems on a Universal Quantum Computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macridin, Alexandru; Spentzouris, Panagiotis; Amundson, James
We present an algorithm that extends existing quantum algorithms for simulating fermion systems in quantum chemistry and condensed matter physics to include phonons. The phonon degrees of freedom are represented with exponential accuracy on a truncated Hilbert space with a size that increases linearly with the cutoff of the maximum phonon number. The additional number of qubits required by the presence of phonons scales linearly with the size of the system. The additional circuit depth is constant for systems with finite-range electron-phonon and phonon-phonon interactions and linear for long-range electron-phonon interactions. Our algorithm for a Holstein polaron problem was implemented on an Atos Quantum Learning Machine (QLM) quantum simulator employing the Quantum Phase Estimation method. The energy and the phonon number distribution of the polaron state agree with exact diagonalization results for weak, intermediate and strong electron-phonon coupling regimes.
Effect of wire size on maxillary arch force/couple systems for a simulated high canine malocclusion.
Major, Paul W; Toogood, Roger W; Badawi, Hisham M; Carey, Jason P; Seru, Surbhi
2014-12-01
To better understand the effects of copper nickel titanium (CuNiTi) archwire size on bracket-archwire mechanics through the analysis of force/couple distributions along the maxillary arch. The hypothesis is that wire size is linearly related to the forces and moments produced along the arch. An Orthodontic Simulator was utilized to study a simplified high canine malocclusion. Force/couple distributions produced by passive and elastic ligation using two wire sizes (Damon 0.014 and 0.018 inch) were measured with a sample size of 144. The distribution and variation in force/couple loading around the arch is a complicated function of wire size. The use of a thicker wire increases the force/couple magnitudes regardless of ligation method. Owing to the non-linear material behaviour of CuNiTi, this increase is less than would occur based on linear theory as would apply for stainless steel wires. The results demonstrate that an increase in wire size does not result in a proportional increase of applied force/moment. This discrepancy is explained in terms of the non-linear properties of CuNiTi wires. This non-proportional force response in relation to increased wire size warrants careful consideration when selecting wires in a clinical setting. © 2014 British Orthodontic Society.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
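As a rough illustration of the idea described above (periodic Anderson extrapolation layered on weighted Jacobi sweeps), the following Python sketch implements a generic Anderson-accelerated Jacobi iteration; the parameter names (omega, m, p) and their default values are our assumptions, not values from the paper.

import numpy as np

def alternating_anderson_jacobi(A, b, x0=None, omega=2.0/3.0, m=5, p=6,
                                tol=1e-10, max_iter=10000):
    # Weighted Jacobi sweeps with an Anderson extrapolation step every
    # p-th iteration, built from the last m iterate/residual pairs.
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    Dinv = 1.0 / np.diag(A)
    X_hist, F_hist = [], []
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        f = Dinv * r                              # Jacobi-preconditioned residual
        X_hist.append(x.copy()); F_hist.append(f.copy())
        X_hist, F_hist = X_hist[-(m + 1):], F_hist[-(m + 1):]
        if (k + 1) % p or len(F_hist) < 2:
            x = x + omega * f                     # plain weighted Jacobi update
        else:                                     # Anderson extrapolation update
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma
    return x, max_iter

# Example on a diagonally dominant random test system (illustrative only).
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
x, iters = alternating_anderson_jacobi(A, b)
print(iters, np.linalg.norm(A @ x - b))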
Correction for spatial averaging in laser speckle contrast analysis
Thompson, Oliver; Andrews, Michael; Hirst, Evan
2011-01-01
Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
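A minimal sketch of the quantities involved: raw speckle contrast is the local standard deviation over the local mean of the time-integrated image, and the system-factor correction discussed above then amounts to dividing by a calibration constant. The window size and the factor value below are placeholders, not values from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def local_speckle_contrast(frame, win=7):
    # K = sigma / mean over a sliding window of the time-integrated image
    img = frame.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# Hypothetical linear system-factor correction for pixel-size spatial
# averaging, calibrated beforehand (e.g. on a static scatterer).
SYSTEM_FACTOR = 0.85          # illustrative value only
def corrected_contrast(frame, win=7):
    return local_speckle_contrast(frame, win) / SYSTEM_FACTOR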
NASA Technical Reports Server (NTRS)
Majda, George
1986-01-01
One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.
A variable-step-size robust delta modulator.
NASA Technical Reports Server (NTRS)
Song, C. L.; Garodnick, J.; Schilling, D. L.
1971-01-01
Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
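As a loose, generic illustration of the adaptive step-size idea (not the authors' optimal rule), a delta modulator that adapts its step from agreement of the last two output bits can be sketched as follows; all constants are chosen arbitrarily.

import numpy as np

def adaptive_delta_modulator(x, step0=0.01, k_up=1.5, k_dn=0.66):
    # Step size grows when the last two output bits agree (slope overload)
    # and shrinks when they alternate (granular noise).
    bits = np.empty(len(x), dtype=int)
    est = np.empty(len(x))
    estimate, step, prev_bit = 0.0, step0, 1
    for i, sample in enumerate(x):
        bit = 1 if sample >= estimate else -1
        step *= k_up if bit == prev_bit else k_dn
        estimate += bit * step
        bits[i], est[i], prev_bit = bit, estimate, bit
    return bits, est

# Example: track a slow sinusoid (illustrative only).
t = np.linspace(0, 1, 2000)
bits, est = adaptive_delta_modulator(np.sin(2 * np.pi * 3 * t))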
Nature of size effects in compact models of field effect transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050
Investigations have shown that, in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of the so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established for the linear model parameters of the equivalent circuit elements of internal transistors with the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that the size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects on levels below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved on designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models of not only field effect transistors but also arbitrary semiconductor devices with nonlinear instrumental characteristics.
Kim, Young Baek; Choi, Bum Ho; Lim, Yong Hwan; Yoo, Ha Na; Lee, Jong Ho; Kim, Jin Hyeok
2011-02-01
In this study, a pentacene organic thin film was prepared and characterized using a newly developed organic material auto-feeding system integrated with a linear cell. The newly developed organic material auto-feeding system consists of four major parts: reservoir, micro auto-feeder, vaporizer, and linear cell. The deposition of the organic thin film can be precisely controlled by adjusting the feeding rate, main tube size, and the position and size of the nozzle. A 10 nm thick pentacene thin film prepared on a glass substrate exhibited a thickness uniformity of 3.46%, better than that of the conventional evaporation method using a point cell. Continuous deposition without replenishment of organic material can be performed for over 144 hours with regulated deposition control. The grain size of the pentacene film, which affects the mobility of the OTFT, was controlled as a function of the temperature.
Kikugawa, Gota; Ando, Shotaro; Suzuki, Jo; Naruke, Yoichi; Nakano, Takeo; Ohara, Taku
2015-01-14
In the present study, molecular dynamics (MD) simulations on the monatomic Lennard-Jones liquid in a periodic boundary system were performed in order to elucidate the effect of the computational domain size and shape on the self-diffusion coefficient measured by the system. So far, the system size dependence in cubic computational domains has been intensively investigated and these studies showed that the diffusion coefficient depends linearly on the inverse of the system size, which is theoretically predicted based on the hydrodynamic interaction. We examined the system size effect not only in the cubic cell systems but also in rectangular cell systems which were created by changing one side length of the cubic cell with the system density kept constant. As a result, the diffusion coefficient in the direction perpendicular to the long side of the rectangular cell significantly increases more or less linearly with the side length. On the other hand, the diffusion coefficient in the direction along the long side is almost constant or slightly decreases. Consequently, anisotropy of the diffusion coefficient emerges in a rectangular cell with periodic boundary conditions even in a bulk liquid simulation. This unexpected result is of critical importance because rectangular fluid systems confined in nanospace, which are present in realistic nanoscale technologies, have been widely studied in recent MD simulations. In order to elucidate the underlying mechanism for this serious system shape effect on the diffusion property, the correlation structures of particle velocities were examined.
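The linear dependence on the inverse box length mentioned above is commonly handled by fitting D against 1/L and extrapolating to infinite system size (the hydrodynamic, Yeh-Hummer-type argument for cubic periodic boxes; the function below is a generic sketch under that assumption, not code from the paper, and it does not capture the direction-dependent corrections found for rectangular cells).

import numpy as np

def extrapolate_diffusion(L, D):
    # Fit D(L) = D_inf + slope * (1/L) and return the infinite-size value.
    slope, D_inf = np.polyfit(1.0 / np.asarray(L, float), np.asarray(D, float), 1)
    return D_inf, slope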
Linear micromechanical stepping drive for pinhole array positioning
NASA Astrophysics Data System (ADS)
Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin
2015-05-01
A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm2 optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload 50% of its own weight. The stepping drive movement, step sizes and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
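The linear recovery framework referenced above (in the Maloney-Wandell spirit) estimates basis weights from the sensor responses and rebuilds the spectrum from the linear basis. A generic sketch follows; all array names are ours rather than the paper's.

import numpy as np

def recover_spectrum(responses, S, B):
    # Model: r = S^T B w, with
    #   responses : (n_sensors,) measured sensor outputs
    #   S         : (n_wavelengths, n_sensors) sensor sensitivities
    #   B         : (n_wavelengths, n_basis) linear basis of skylight SPDs
    # Estimate the basis weights w in the least-squares sense, then rebuild.
    M = S.T @ B                              # maps basis weights to responses
    w, *_ = np.linalg.lstsq(M, responses, rcond=None)
    return B @ w                             # reconstructed spectral power distribution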
NASA Astrophysics Data System (ADS)
Perino, E. J.; Matoz-Fernandez, D. A.; Pasinetti, P. M.; Ramirez-Pastor, A. J.
2017-07-01
Monte Carlo simulations and finite-size scaling analysis have been performed to study the jamming and percolation behavior of linear k-mers (also known as rods or needles) on a two-dimensional triangular lattice of linear dimension L, considering an isotropic RSA process and periodic boundary conditions. Extensive numerical work has been done to extend previous studies to larger system sizes and longer k-mers, which enables the confirmation of a nonmonotonic size dependence of the percolation threshold and the estimation of a maximum value of k from which percolation would no longer occur. Finally, a complete analysis of critical exponents and universality has been done, showing that the percolation phase transition involved in the system is not affected, having the same universality class of the ordinary random percolation.
Investigating parameters participating in the infant respiratory control system attractor.
Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn
2008-01-01
Theoretically, any participating parameter in a non-linear system represents the dynamics of the whole system. Takens' time delay embedding theory provides the fundamental basis for allowing non-linear analysis to be performed on physiological time-series data. In practice, only one measurable parameter needs to be measured to convey an accurate representation of the system dynamics. In this paper, the infant respiratory control system is represented using three variables: a digitally sampled respiratory inductive plethysmography waveform, and the derived parameters tidal volume and inter-breath interval (IBI) time series. For 14 healthy infants, these data streams were analysed using recurrence plot analysis across one night of sleep. The measured attractor size of these variables followed the same qualitative trends across the night's study. The results suggest that the attractor size measures of the derived IBI and tidal volume are representative surrogates for the raw respiratory waveform. The extent to which the relative attractor sizes of IBI and tidal volume remain constant through changing sleep state could potentially be used to quantify pathology or maturation of breathing control.
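As background to the embedding step described above, a time-delay (Takens) reconstruction and a crude attractor-size surrogate can be sketched as follows; the embedding dimension, delay, and size measure are illustrative choices, not those used in the study.

import numpy as np

def delay_embed(x, dim=3, tau=10):
    # Embed a scalar series into vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def attractor_size(x, dim=3, tau=10):
    # Mean distance of the embedded points from their centroid (a simple size surrogate).
    E = delay_embed(np.asarray(x, float), dim, tau)
    c = E.mean(axis=0)
    return np.sqrt(((E - c) ** 2).sum(axis=1)).mean()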
Observation of Droplet Size Oscillations in a Two Phase Fluid under Shear Flow
NASA Astrophysics Data System (ADS)
Courbin, Laurent; Panizza, Pascal
2004-11-01
It is well known that complex fluids exhibit strong couplings between their microstructure and the flow field. Such couplings may lead to unusual non-linear rheological behavior. Because energy is constantly brought to the system, richer dynamic behavior such as a non-linear oscillatory or chaotic response is expected. We report on the observation of droplet size oscillations at fixed shear rate. At low shear rates, we observe two steady states for which the droplet size results from a balance between capillary and viscous stresses. For intermediate shear rates, the droplet size becomes a periodic function of time. We propose a phenomenological model to account for the observed phenomenon and compare numerical results to experimental data.
Comparison-based optical study on a point-line-coupling-focus system with linear Fresnel heliostats.
Dai, Yanjun; Li, Xian; Zhou, Lingyu; Ma, Xuan; Wang, Ruzhu
2016-05-16
The point-line-coupling-focus (PLCF) concept, a beam-down solar tower with linear Fresnel heliostats, is one of the feasible choices and has great potential for reducing spot size and improving optical efficiency. The optical characteristics of a PLCF system with a hyperboloid reflector are introduced and investigated theoretically. Taking into account solar position and optical surface errors, a Monte Carlo ray-tracing (MCRT) analysis model for a PLCF system is developed and applied in a comparison-based study of the optical performance between the PLCF system and the conventional beam-down solar tower system with flat and spherical heliostats. The optimal square facet of the linear Fresnel heliostat is also proposed for matching with the 3D-CPC receiver.
A linear shift-invariant image preprocessing technique for multispectral scanner systems
NASA Technical Reports Server (NTRS)
Mcgillem, C. D.; Riemer, T. E.
1973-01-01
A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
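The spanning-tree-preconditioned chain method itself is not reproduced here, but the baseline comparison described above (a direct LU factorization versus an iterative Krylov solve on a symmetric diagonally dominant system) can be sketched with standard sparse tools; the test matrix is an illustrative stand-in, not a power-flow Jacobian.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative SDD test system.
n = 2000
A = sp.diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spla.splu(A).solve(b)        # direct LU factorization (the power-flow standard)
x_iter, info = spla.cg(A, b)            # unpreconditioned conjugate gradient
assert info == 0
print(np.linalg.norm(x_direct - x_iter))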
ERIC Educational Resources Information Center
Payton, Spencer D.
2017-01-01
This study aimed to explore how inquiry-oriented teaching could be implemented in an introductory linear algebra course that, due to various constraints, may not lend itself to inquiry-oriented teaching. In particular, the course in question has a traditionally large class size, limited amount of class time, and is often coordinated with other…
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear system of equations are discussed.
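To make the projection idea concrete, the Arnoldi process below builds an orthonormal basis of the Krylov subspace span{b, Ab, ..., A^(m-1) b} and the small Hessenberg matrix onto which the size-N problem is projected; this is a generic textbook sketch, not code from the review.

import numpy as np

def arnoldi(A, b, m):
    # Returns V (orthonormal Krylov basis) and H (Hessenberg) with A V_m ≈ V_{m+1} H,
    # so the original size-N problem is replaced by one of size m << N.
    n = len(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                # happy breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H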
Space-Time Adaptive Processing for Airborne Radar
1994-12-13
Assumptions for the radar system and signal model include a uniform linear antenna array (possibly the columns of a planar array) with identical element patterns. Fully adaptive STAP requires the solution to a system of linear equations of size MN, where N is the number of array elements and M is the number of pulses.
Thermal effects in nano-sized adsorbate islands growth processes at vapor deposition
NASA Astrophysics Data System (ADS)
Kharchenko, Vasyl O.; Kharchenko, Dmitrii O.; Dvornichenko, Alina V.
2016-02-01
We study a model of pattern formation in adsorptive systems with a local change in the surface temperature due to adsorption/desorption processes. It is found that thermal effects shrink the domain of the main system parameters where pattern formation is possible. It is shown that an increase in the surface reheating efficiency delays the ordering processes. We have found that the distribution of adsorbate islands over sizes depends on the relaxation and reheating processes. We have shown that the mean linear size of stationary adsorbate islands is in the nanometer range.
The correlation between the sizes of globular cluster systems and their host dark matter haloes
NASA Astrophysics Data System (ADS)
Hudson, Michael J.; Robison, Bailey
2018-07-01
The sizes of entire systems of globular clusters (GCs) depend not only on the formation and destruction histories of the GCs themselves but also on the assembly, merger, and accretion history of the dark matter (DM) haloes that they inhabit. Recent work has shown a linear relation between the total mass of GCs in a GC system and the mass of its host DM halo, calibrated from weak lensing. Here, we extend this to GC system sizes by studying the radial density profiles of GCs around galaxies in nearby galaxy groups. We find that the radial density profiles of the GC systems are well fit by a de Vaucouleurs profile. Combining our results with those from the literature, we find a tight relationship (~0.2 dex scatter) between the effective radius of the GC system and the virial radius (or mass) of its host DM halo, for haloes with masses greater than ~10^12 M⊙. The steep non-linear dependence of this relationship (R_e,GCS ∝ R_200^(2.5-3) ∝ M_200^(0.8-1)) is currently not well understood, but is an important clue regarding the assembly history of DM haloes and of the GC systems that they host.
Plug-in nanoliter pneumatic liquid dispenser with nozzle design flexibility
Choi, In Ho; Kim, Hojin; Lee, Sanghyun; Baek, Seungbum; Kim, Joonwon
2015-01-01
This paper presents a novel plug-in nanoliter liquid dispensing system with a plug-and-play interface for simple and reversible, yet robust integration of the dispenser. A plug-in type dispenser was developed to facilitate assembly and disassembly with an actuating part through efficient modularization. The entire process for assembly and operation of the plug-in dispenser is performed via the plug-and-play interface in less than a minute without loss of dispensing quality. The minimum volume of droplets pneumatically dispensed using the plug-in dispenser was 124 nl with a coefficient of variation of 1.6%. The dispensed volume increased linearly with the nozzle size. Utilizing this linear relationship, two types of multinozzle dispensers consisting of six parallel channels (emerging from an inlet) and six nozzles were developed to demonstrate a novel strategy for volume gradient dispensing at a single operating condition. The droplet volume dispensed from each nozzle also increased linearly with nozzle size, demonstrating that nozzle size is a dominant factor on dispensed volume, even for multinozzle dispensing. Therefore, the proposed plug-in dispenser enables flexible design of nozzles and reversible integration to dispense droplets with different volumes, depending on the application. Furthermore, to demonstrate the practicality of the proposed dispensing system, we developed a pencil-type dispensing system as an alternative to a conventional pipette for rapid and reliable dispensing of minute volume droplets. PMID:26594263
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muthukumaran, M; Manigandan, D; Murali, V
Purpose: The aim of the study is to characterize a two-dimensional liquid-filled detector array, SRS 1000, for routine QA in the CyberKnife Robotic Radiosurgery system. Methods: The SRS 1000 consists of 977 liquid-filled ionization chambers and is designed for small-field SRS/SBRT techniques. The detector array has two different spatial resolutions: up to a field size of 5.5 × 5.5 cm the spatial resolution is 2.5 mm (center to center), and beyond that, up to a field size of 11 × 11 cm, it is 5 mm. The size of each detector is 2.3 × 2.3 × 0.5 mm with a volume of 0.003 cc. The CyberKnife Robotic Radiosurgery System is a frameless stereotactic radiosurgery system in which a LINAC is mounted on a robotic manipulator to deliver beams with high sub-millimeter accuracy. The MU linearity, stability, and reproducibility of the SRS 1000 in the CyberKnife Robotic Radiosurgery system were measured and investigated. The output factors for the fixed and IRIS collimators, for all available collimator sizes (5 mm to 60 mm), were measured and compared with measurements made with a PTW pinpoint ionization chamber. Results: The MU linearity was measured from 2 MU to 1000 MU for dose rates in the range of 700 cGy/min to 780 cGy/min and compared with the pinpoint-chamber measurement; the MU linearity was within 3%. The stability and reproducibility of the detector array were excellent and within 0.5%. The measured output factors showed an agreement of better than 2% with the pinpoint-chamber measurements for both fixed and IRIS collimators for all available field sizes. Conclusion: We have characterized the PTW SRS 1000 as a precise and accurate measurement tool for routine QA of the CyberKnife Robotic Radiosurgery system.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large-order flexible system implemented with a linear, time-invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency-domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
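A minimal sketch of why the cost per frequency point can be linear in the number of modes: in normal mode coordinates the plant is modally decoupled, so the transfer matrix is assembled from scalar modal gains instead of a full (jwI - A) solve. The second-order modal form and all variable names below are our assumptions, not the paper's formulation.

import numpy as np

def freq_response_modal(omega_n, zeta, B, C, D, w):
    # omega_n, zeta : (n_modes,) modal frequencies and damping ratios
    # B : (n_modes, n_inputs), C : (n_outputs, n_modes), D : (n_outputs, n_inputs)
    # w : (n_freq,) evaluation frequencies [rad/s]
    H = np.empty((len(w), C.shape[0], B.shape[1]), dtype=complex)
    for k, wk in enumerate(w):
        g = 1.0 / (omega_n**2 - wk**2 + 2j * zeta * omega_n * wk)   # per-mode gains
        H[k] = (C * g) @ B + D      # O(n_modes) work per frequency point
    return H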
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
Linear response approach to active Brownian particles in time-varying activity fields
NASA Astrophysics Data System (ADS)
Merlitz, Holger; Vuijk, Hidde D.; Brader, Joseph; Sharma, Abhinav; Sommer, Jens-Uwe
2018-05-01
In a theoretical and simulation study, active Brownian particles (ABPs) in three-dimensional bulk systems are exposed to time-varying sinusoidal activity waves that are running through the system. A linear response (Green-Kubo) formalism is applied to derive fully analytical expressions for the torque-free polarization profiles of non-interacting particles. The activity waves induce fluxes that strongly depend on the particle size and may be employed to de-mix mixtures of ABPs or to drive the particles into selected areas of the system. Three-dimensional Langevin dynamics simulations are carried out to verify the accuracy of the linear response formalism, which is shown to work best when the particles are small (i.e., highly Brownian) or operating at low activity levels.
NASA Astrophysics Data System (ADS)
Lee, G. H.; Arnold, S. T.; Eaton, J. G.; Sarkas, H. W.; Bowen, K. H.; Ludewigt, C.; Haberland, H.
1991-03-01
The photodetachment spectra of (H2O)n− (n = 2-69) and (NH3)n− (n = 41-1100) have been recorded, and vertical detachment energies (VDEs) were obtained from the spectra. For both systems, the cluster anion VDEs increase smoothly with increasing size and most species plot linearly with n^(-1/3), extrapolating to a VDE (n = ∞) value which is very close to the photoelectric threshold energy for the corresponding condensed-phase solvated electron system. The linear extrapolation of this data to the analogous condensed-phase property suggests that these cluster anions are gas-phase counterparts to solvated electrons, i.e. they are embryonic forms of hydrated and ammoniated electrons which mature with increasing cluster size toward condensed-phase solvated electrons.
Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.
Cawkwell, M J; Niklasson, Anders M N
2012-10-07
Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
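A minimal sketch, assuming a chemical potential mu in the HOMO-LUMO gap is known, of density-matrix purification with sparse algebra and a numerical drop threshold; the McWeeny polynomial used here is a generic choice and not necessarily the exact purification scheme of the paper.

import numpy as np
import scipy.sparse as sp

def purified_density(F, mu, n_iter=40, tau=1e-7):
    # F  : sparse Hamiltonian/Fockian, mu : chemical potential in the gap (assumed known)
    # tau: numerical threshold on matrix elements, the source of sparsity (and linear scaling)
    F = sp.csr_matrix(F)
    n = F.shape[0]
    lam = abs(F - mu * sp.identity(n)).sum(axis=1).max()     # Gershgorin bound on |F - mu*I|
    P = 0.5 * (sp.identity(n, format="csr") - (F - mu * sp.identity(n)) / lam)
    for _ in range(n_iter):
        P2 = P @ P
        P = 3 * P2 - 2 * (P2 @ P)                            # McWeeny step: eigenvalues -> 0 or 1
        P.data[np.abs(P.data) < tau] = 0.0                   # drop small elements to keep P sparse
        P.eliminate_zeros()
    return P                                                 # trace(P) ~ number of occupied orbitals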
Cellular Manufacturing System with Dynamic Lot Size Material Handling
NASA Astrophysics Data System (ADS)
Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.
2016-02-01
Material handling plays an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed to be per piece or with a constant lot size. In real industrial practice, lot size may change over the rolling period to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer linear programming is used to solve the problem. The objective function of this model is to minimize the total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. The model determines the optimum cell formation and optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.
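As a loose illustration of how an integer linear program couples cell assignment with lot-size-weighted inter-cell handling (a toy model, far smaller and simpler than the paper's, with every number and name invented here), using the PuLP modeling library:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Toy data: which machines each part visits, and an assumed per-period lot size.
machines, parts, cells = range(3), range(4), range(2)
visits = {(p, m) for p in parts for m in machines if (p + m) % 2 == 0}
lot_size = {p: 10 + 5 * p for p in parts}           # illustrative dynamic lot sizes

prob = LpProblem("toy_cell_formation", LpMinimize)
x = LpVariable.dicts("machine_in_cell", (machines, cells), cat=LpBinary)
y = LpVariable.dicts("part_in_cell", (parts, cells), cat=LpBinary)
z = LpVariable.dicts("intercell_move", (parts, machines), cat=LpBinary)

for m in machines:                                  # each machine sits in exactly one cell
    prob += lpSum(x[m][c] for c in cells) == 1
for p in parts:                                     # each part is assigned to exactly one cell
    prob += lpSum(y[p][c] for c in cells) == 1
for c in cells:                                     # cap cell size so the problem is non-trivial
    prob += lpSum(x[m][c] for m in machines) <= 2
for (p, m) in visits:                               # z = 1 when a needed machine is outside the part's cell
    for c in cells:
        prob += z[p][m] >= y[p][c] - x[m][c]

# Objective: inter-cell handling weighted by lot size (one term of the paper's full cost).
prob += lpSum(lot_size[p] * z[p][m] for (p, m) in visits)
prob.solve()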
Resolution performance of a 0.60-NA, 364-nm laser direct writer
NASA Astrophysics Data System (ADS)
Allen, Paul C.; Buck, Peter D.
1990-06-01
ATEQ has developed a high-resolution laser scanning printing engine based on the 8-beam architecture of the CORE-2000. This printing engine has been incorporated into two systems: the CORE-2500 for the production of advanced masks and reticles, and a prototype system for direct write on wafers. The laser direct writer incorporates a through-the-lens alignment system and a rotary chuck for theta alignment. Its resolution performance is delivered by a 0.60 NA laser scan lens and a novel air-jet focus system. The short-focal-length, high-resolution lens also reduces beam position errors, thereby improving overall pattern accuracy. In order to take advantage of the high-NA optics, a high-performance focus servo was developed, capable of dynamic focus with a maximum error of 0.15 µm. The focus system uses a hot-wire anemometer to measure air flow through an orifice abutting the wafer, providing a direct measurement to the top surface of the resist independent of substrate properties. Lens specifications are presented and compared with the previous design. Bench data of spot size vs. entrance pupil filling show spot size performance down to 0.35 µm FWHM. The lens has a linearity specification of 0.05 µm; system measurements of lens linearity indicate system performance substantially below this. The aerial image of the scanned beams is measured using resist as a threshold detector. An effective spot size is
Foundation stiffness in the linear modeling of wind turbines
NASA Astrophysics Data System (ADS)
Chiang, Chih-Hung; Yu, Chih-Peng; Chen, Yan-Hao; Lai, Jiunnren; Hsu, Keng-Tsang; Cheng, Chia-Chi
2017-04-01
Effects of foundation stiffness on the linear vibrations of wind turbine systems are of concern for both the planning and construction of wind turbine systems. The current study performed numerical modeling for such a problem using linear spectral finite elements. The effects of foundation stiffness were investigated for various combinations of shear wave velocity of soil, size of tower base plate, and pile length. Multiple piles are also included in the models so that the foundation stiffness can be analyzed more realistically. The results indicate that the shear wave velocity of soil and the size of the tower base plate have notable effects on the dominant frequency of the turbine-tower system. The larger the lateral dimension, the stiffer the foundation. A large pile cap and multiple spaced piles result in higher stiffness than a small pile cap and a mono-pile. The lateral stiffness of a mono-pile mainly depends on the shear wave velocity of soil, with the exception of a very short pile, for which the end constraints may affect the lateral vibration of the superstructure. Effective pile length may be determined by comparing the simulation results of the frictional pile to those of the end-bearing pile.
Depth Of Modulation And Spot Size Selection In Bar-Code Laser Scanners
NASA Astrophysics Data System (ADS)
Barkan, Eric; Swartz, Jerome
1982-04-01
Many optical and electronic considerations enter into the selection of optical spot size in flying-spot laser scanners of the type used in modern industrial and commercial environments. These include the scale of the symbols to be read, optical background noise present in the symbol substrate, and factors relating to the characteristics of the signal processor. Many 'front ends' consist of a linear signal conditioner followed by nonlinear conditioning and digitizing circuitry. Although the nonlinear portions of the circuit can be difficult to characterize mathematically, it is frequently possible to at least give a minimum depth-of-modulation measure to yield a worst-case guarantee of adequate performance with respect to digitization accuracy. The depth of modulation actually delivered to the nonlinear circuitry will depend on the scale, contrast, and noise content of the scanned symbol, as well as the characteristics of the linear conditioning circuitry (e.g., transfer function and electronic noise). Time- and frequency-domain techniques are applied in order to estimate the effects of these factors in selecting a spot size for a given system environment. Results obtained include estimates of the effects of the linear front-end transfer function on effective spot size and asymmetries which can affect digitization accuracy. Plots of convolution-computed modulation patterns and other important system properties are presented. Considerations are limited primarily to Gaussian spot profiles but also apply to more general cases. Attention is paid to realistic symbol models and to implications with respect to printing tolerances.
Pan, Hung-Yin; Chen, Carton W; Huang, Chih-Hung
2018-04-17
Soil bacteria Streptomyces are the most important producers of secondary metabolites, including most known antibiotics. These bacteria and their close relatives are unique in possessing linear chromosomes, which typically harbor 20 to 30 biosynthetic gene clusters of tens to hundreds of kb in length. Many Streptomyces chromosomes are accompanied by linear plasmids with sizes ranging from several to several hundred kb. The large linear plasmids also often contain biosynthetic gene clusters. We have developed a targeted recombination procedure for arm exchanges between a linear plasmid and a linear chromosome. A chromosomal segment inserted in an artificially constructed plasmid allows homologous recombination between the two replicons at the homology. Depending on the design, the recombination may result in two recombinant replicons or a single recombinant chromosome with the loss of the recombinant plasmid that lacks a replication origin. The efficiency of such targeted recombination ranges from 9 to 83% depending on the locations of the homology (and thus the size of the chromosomal arm exchanged), essentially eliminating the necessity of selection. The targeted recombination is useful for the efficient engineering of the Streptomyces genome for large-scale deletion, addition, and shuffling.
NASA Technical Reports Server (NTRS)
Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.
1992-01-01
The present numerical method for accurate and efficient solution of systems of linear equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small size of the scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.
Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.
Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young
2017-03-14
Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is reaction diffusion master equation (RDME). Previous studies have discovered that for the RDME, when discretization size approaches zero, reaction time for bimolecular reactions in high dimensional domains tends to infinity. In this paper, we demonstrate that in the 1D domain, highly nonlinear reaction dynamics given by Hill function may also have dramatic change when discretization size is smaller than a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed length smoothing over space and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of discretization size when the discretization size is small enough. The three proposed methods could correctly (under certain precision) simulate Hill function dynamics in the microscopic RDME system.
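The reduction of switch-like Hill dynamics to a linear function of the compartment size can be seen directly from a per-compartment propensity: with n molecules in a compartment of size h, the local concentration is n/h, which saturates the Hill term as h shrinks. The constants below are purely illustrative, not values from the paper.

import numpy as np

def hill_propensity(n, h, vmax=1.0, K=5.0, m=4):
    # Hill-type production propensity evaluated in one compartment of size h.
    c = n / h                                   # local concentration
    return h * vmax * c**m / (K**m + c**m)      # scaled by compartment size

# As h -> 0 with even a single molecule present, c = 1/h grows without bound,
# the Hill term saturates at 1, and the propensity approaches vmax * h:
# it becomes linear in the discretization size rather than switch-like.
print([hill_propensity(1, h) for h in (1.0, 0.1, 0.01, 0.001)])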
The amazing evolutionary dynamics of non-linear optical systems with feedback
NASA Astrophysics Data System (ADS)
Yaroslavsky, Leonid
2013-09-01
Optical systems with feedback are, generally, non-linear dynamic systems. As such, they exhibit evolutionary behavior. In the paper we present results of an experimental investigation of the evolutionary dynamics of several models of such systems. The models are modifications of the famous mathematical "Game of Life". The modifications are two-fold: the "Game of Life" rules are made stochastic, and the mutual influence of cells is made spatially non-uniform. A number of new phenomena in the evolutionary dynamics of the models are revealed: - "Ordering of chaos": formation, from seed patterns, of stable maze-like patterns with chaotic "dislocations" that resemble natural patterns, such as the skin patterns of some animals and fishes, seashells, fingerprints, magnetic domain patterns and the like, which one can frequently find in nature. These patterns and their fragments exhibit a remarkable capability of unlimited growth. - "Self-controlled growth" of chaotic "live" formations into "communities" bounded, depending on the model, by a square, hexagon or octagon, until they reach a certain critical size, after which the growth stops. - "Eternal life in a bounded space" of "communities" after reaching a certain size and shape. - "Coherent shrinkage" of "mature" "communities", after reaching a certain size, into one of stable or oscillating patterns, preserving in this process the isomorphism of their bounding shapes until the very end.
Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian
2017-04-11
A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented that is competitive with canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems, drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, which are independently useful: first, a Cholesky decomposition of density matrices that reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements; second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho
2008-03-01
The purpose of this study was to find the optimal binning method and ROI size for an automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of textural analysis at HRCT. Six hundred circular regions of interest (ROIs) with 10, 20, and 30 pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find the optimal binning, variable-bin-size linear binning (LB; bin size Q: 4-30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4-30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a five-fold method was used. Each test was repeated twenty times. Overall accuracies for every combination of ROI and binning sizes were statistically compared. For small binning sizes (Q <= 10), NLB showed significantly better accuracy than LB, and K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30x30 ROI size and most binning sizes, the K-means method performed better than the other NLB and LB methods. When the optimal binning and other parameters were set, the overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We determined the optimal binning method and ROI size for the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
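A generic sketch of the K-means style non-linear binning (NLB) referenced above: cluster the ROI gray levels into Q groups and use rank-ordered cluster labels as bin indices before computing histogram and co-occurrence features. Parameter values and names are illustrative only.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_binning(roi_values, Q=26):
    # Non-linear binning: bin edges adapt to the intensity distribution of the ROI.
    v = np.asarray(roi_values, float).reshape(-1, 1)
    km = KMeans(n_clusters=Q, n_init=10, random_state=0).fit(v)
    order = np.argsort(km.cluster_centers_.ravel())      # make bin index monotone in intensity
    rank = np.empty(Q, dtype=int)
    rank[order] = np.arange(Q)
    return rank[km.labels_].reshape(np.shape(roi_values))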
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, J.H.; Ellis, J.R.; Montague, S.
1997-03-01
One of the principal applications of monolithically integrated micromechanical/microelectronic systems has been accelerometers for automotive applications. As integrated MEMS/CMOS technologies such as those developed by U.C. Berkeley, Analog Devices, and Sandia National Laboratories mature, additional systems for more sensitive inertial measurements will enter the commercial marketplace. In this paper, the authors will examine key technology design rules which impact the performance and cost of inertial measurement devices manufactured in integrated MEMS/CMOS technologies. These design parameters include: (1) minimum MEMS feature size, (2) minimum CMOS feature size, (3) maximum MEMS linear dimension, (4) number of mechanical MEMS layers, (5) MEMS/CMOS spacing. In particular, the embedded approach to integration developed at Sandia will be examined in the context of these technology features. Presently, this technology offers MEMS feature sizes as small as 1 µm, CMOS critical dimensions of 1.25 µm, MEMS linear dimensions of 1,000 µm, a single mechanical level of polysilicon, and a 100 µm space between MEMS and CMOS. This is applicable to modern precision guided munitions.
Dry etching of chrome for photomasks for 100-nm technology using chemically amplified resist
NASA Astrophysics Data System (ADS)
Mueller, Mark; Komarov, Serguie; Baik, Ki-Ho
2002-07-01
Photomask etching for the 100 nm technology node places new requirements on dry etching processes. As the minimum-size features on the mask, such as assist bars and optical proximity correction (OPC) patterns, shrink down to 100 nm, it is necessary to produce etch CD biases below 20 nm in order to reproduce minimum resist features into chrome with good pattern fidelity. In addition, vertical profiles are necessary. In previous generations of photomask technology, footing and sidewall profile slope were tolerated, since this dry etch profile was an improvement over wet etching. However, as feature sizes shrink, it is extremely important to select etch processes which do not generate a foot, because this will affect etch linearity and also limit the smallest etched feature size. Chemically amplified resist (CAR) from TOK is patterned with a 50 keV MEBES eXara e-beam writer, allowing for patterning of small features with vertical resist profiles. This resist is developed for raster-scan 50 kV e-beam systems. It has high contrast, good coating characteristics, good dry etch selectivity, and high environmental stability. Chrome etch process development has been performed using Design of Experiments to optimize parameters such as sidewall profile, etch CD bias, etch CD linearity for varying sizes of line/space patterns, etch CD linearity for varying sizes of isolated lines and spaces, loading effects, and application to contact etching.
NASA Astrophysics Data System (ADS)
Kairn, T.; Crowe, S. B.; Charles, P. H.; Trapp, J. V.
2014-03-01
This study investigates the variation of photon field penumbra shape with initial electron beam diameter, for very narrow beams. A Varian Millenium MLC (Varian Medical Systems, Palo Alto, USA) and a Brainlab m3 microMLC (Brainlab AB, Feldkirchen, Germany) were used, with one Varian iX linear accelerator, to produce fields that were (nominally) 0.20 cm across. Dose profiles for these fields were measured using radiochromic film and compared with the results of simulations completed using BEAMnrc and DOSXYZnrc, where the initial electron beam was set to FWHM = 0.02, 0.10, 0.12, 0.15, 0.20 and 0.50 cm. Increasing the electron-beam FWHM produced increasing occlusion of the photon source by the closely spaced collimator leaves and resulted in blurring of the simulated profile widths from 0.24 to 0.58 cm for the MLC and from 0.11 to 0.40 cm for the microMLC. Comparison with measurement data suggested that the electron spot size in the clinical linear accelerator was between FWHM = 0.10 and 0.15 cm, encompassing the result of our previous output-factor based work, which identified a FWHM of 0.12 cm. Investigation of narrow-beam penumbra variation has been found to be a useful procedure, with results varying noticeably with linear accelerator spot size and allowing FWHM estimates obtained using other methods to be verified.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
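The linearized design step can be illustrated with a small linear program. The sketch below is only a schematic analogue (the sensitivity matrix, damping targets and bounds are invented numbers, not the paper's truss-beam model): it asks for the smallest total member sizing whose first-order effect on modal damping meets prescribed targets.

```python
# Minimal sketch (hypothetical numbers, not the paper's truss model): one step
# of the linearized design problem.  S[i, j] is the first-order sensitivity of
# the i-th eigenvalue's damping to design parameter j; the LP finds the
# smallest total member sizing that meets the target damping increments.
import numpy as np
from scipy.optimize import linprog

S = np.array([[0.8, 0.1, 0.3],      # assumed eigenvalue-damping sensitivities
              [0.2, 0.9, 0.1],
              [0.1, 0.2, 0.7]])
target = np.array([0.5, 0.4, 0.3])  # required damping increase per mode

# minimize sum(x)  subject to  S @ x >= target,  0 <= x <= 1
res = linprog(c=np.ones(3),
              A_ub=-S, b_ub=-target,          # flip sign: S x >= target
              bounds=[(0.0, 1.0)] * 3,
              method="highs")
print("member sizing:", res.x, "achieved damping:", S @ res.x)
```

In the continuation procedure described above, such an LP would be re-solved repeatedly as the eigenvalue sensitivities are updated at each new design point.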
Solving Graph Laplacian Systems Through Recursive Bisections and Two-Grid Preconditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, Colin; Vassilevski, Panayot S.
2016-02-18
We present a parallelizable direct method for computing the solution to graph Laplacian-based linear systems derived from graphs that can be hierarchically bipartitioned with small edge cuts. For a graph of size n with constant-size edge cuts, our method decomposes a graph Laplacian in time O(n log n), and then uses that decomposition to perform a linear solve in time O(n log n). We then use the developed technique to design a preconditioner for graph Laplacians that do not have this property. Finally, we augment this preconditioner with a two-grid method that accounts for much of the preconditioner's weaknesses. We present an analysis of this method, as well as a general theorem for the condition number of a general class of two-grid support graph-based preconditioners. Numerical experiments illustrate the performance of the studied methods.
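For orientation, the snippet below shows only the baseline operation the paper accelerates: assembling a graph Laplacian from an edge list and solving a grounded system with a generic sparse factorization. The recursive-bisection decomposition and two-grid preconditioner themselves are not reproduced here, and the example graph is arbitrary.

```python
# Baseline sketch (not the paper's recursive-bisection solver): assemble a
# graph Laplacian from an edge list and solve a grounded linear system with a
# generic sparse factorization.  Node 0 is grounded to remove the null space.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # small example graph
n = 4
rows, cols, vals = [], [], []
for i, j in edges:
    rows += [i, j, i, j]
    cols += [j, i, i, j]
    vals += [-1.0, -1.0, 1.0, 1.0]
L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))  # duplicates are summed

b = np.array([0.0, 1.0, 0.0, -1.0])                # net current injections
Lg = L[1:, 1:].tocsc()                             # grounded Laplacian (SPD)
x = np.zeros(n)
x[1:] = spla.spsolve(Lg, b[1:])
print("potentials:", x)
```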
Spacecraft configuration study for second generation mobile satellite system
NASA Technical Reports Server (NTRS)
Louie, M.; Vonstentzsch, W.; Zanella, F.; Hayes, R.; Mcgovern, F.; Tyner, R.
1985-01-01
A high power, high performance communications satellite bus being developed is designed to satisfy a broad range of multimission payload requirements in a cost effective manner and is compatible with both STS and expendable launchers. Results are presented of tradeoff studies conducted to optimize the second generation mobile satellite system for its mass, power, and physical size. Investigations of the 20-meter antenna configuration, transponder linearization techniques, needed spacecraft modifications, and spacecraft power, dissipation, mass, and physical size indicate that the advanced spacecraft bus is capable of supporting the required payload for the satellite.
Deposition of Nanostructured Thin Film from Size-Classified Nanoparticles
NASA Technical Reports Server (NTRS)
Camata, Renato P.; Cunningham, Nicholas C.; Seol, Kwang Soo; Okada, Yoshiki; Takeuchi, Kazuo
2003-01-01
Materials comprising nanometer-sized grains (approximately 1-50 nm) exhibit properties dramatically different from those of their homogeneous and uniform counterparts. These properties vary with size, shape, and composition of nanoscale grains. Thus, nanoparticles may be used as building blocks to engineer tailor-made artificial materials with desired properties, such as non-linear optical absorption, tunable light emission, charge-storage behavior, selective catalytic activity, and countless other characteristics. This bottom-up engineering approach requires exquisite control over nanoparticle size, shape, and composition. We describe the design and characterization of an aerosol system conceived for the deposition of size-classified nanoparticles whose performance is consistent with these strict demands. A nanoparticle aerosol is generated by laser ablation and sorted according to size using a differential mobility analyzer. Nanoparticles within a chosen window of sizes (e.g., (8.0 ± 0.6) nm) are deposited electrostatically on a surface forming a film of the desired material. The system allows the assembly and engineering of thin films using size-classified nanoparticles as building blocks.
Time-delay control of a magnetic levitated linear positioning system
NASA Technical Reports Server (NTRS)
Tarn, J. H.; Juang, K. Y.; Lin, C. E.
1994-01-01
In this paper, a high accuracy linear positioning system with a linear force actuator and magnetic levitation is proposed. By locating a permanently magnetized rod inside a current-carrying solenoid, an axial force is produced by the boundary effect of the magnet poles and utilized to drive the linear motion, while the levitation force is governed by Ampere's law and supplied by the same solenoid. With levitation in the radial direction, there is hardly any friction between the rod and the solenoid, so high speed motion can be achieved. Moreover, the axial force acting on the rod is a smooth function of rod position, so the system can provide nanometer-resolution linear positioning, down to the molecular scale. Since the force-position relation is highly nonlinear and the mathematical model is derived under certain assumptions, such as an equivalent solenoid for the permanently magnetized rod, unknown dynamics exist in practical applications; thus 'robustness' is an important issue in controller design. Meanwhile, the load effect acts directly on the servo system without transmission elements, so the capability of 'disturbance rejection' is also required. With the above considerations, a time-delay control scheme is chosen and applied. By comparing the input-output relation with the mathematical model, the time-delay controller calculates an estimate of the unmodeled dynamics and disturbances and then injects the desired compensation into the system. The effectiveness of the linear positioning system and control scheme is illustrated with simulation results.
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions.
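A minimal numerical sketch of the discretized functional regression idea follows (synthetic grain-size curves and pollutant concentrations, not the study's sediment data); the second-difference penalty stands in for the regularization mentioned in the abstract, and the integral is approximated by a simple quadrature over the grain-size grid.

```python
# Minimal sketch (synthetic data): functional linear regression
# y = a + integral of beta(s) X(s) ds, discretized on a grain-size grid, with a
# second-difference roughness penalty on the coefficient function beta(s).
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(0, 1, 50)                 # grain-size axis (arbitrary units)
ds = s[1] - s[0]
n = 40
X = np.array([np.exp(-(s - c) ** 2 / 0.02) for c in rng.uniform(0.2, 0.8, n)])
beta_true = np.sin(2 * np.pi * s)
y = X @ beta_true * ds + 0.05 * rng.normal(size=n)

# Penalized least squares:  min ||y - A w||^2 + lam * ||D2 beta||^2
A = np.hstack([np.ones((n, 1)), X * ds])  # intercept + quadrature weights
D2 = np.diff(np.eye(len(s)), n=2, axis=0) # second-difference operator
P = np.zeros((A.shape[1], A.shape[1]))
P[1:, 1:] = D2.T @ D2
lam = 1e-4
w = np.linalg.solve(A.T @ A + lam * P, A.T @ y)
beta_hat = w[1:]                          # estimated coefficient function beta(s)
print("in-sample R^2:",
      1 - np.sum((y - A @ w) ** 2) / np.sum((y - y.mean()) ** 2))
```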
Ion size effects on the electrokinetics of spherical particles in salt-free concentrated suspensions
NASA Astrophysics Data System (ADS)
Roa, Rafael; Carrique, Felix; Ruiz-Reina, Emilio
2012-02-01
In this work we study the influence of the counterion size on the electrophoretic mobility and the dynamic mobility of a suspended spherical particle in a salt-free concentrated colloidal suspension. Salt-free suspensions contain charged particles and the added counterions that counterbalance their surface charge. A spherical cell model approach is used to take into account particle-particle electro-hydrodynamic interactions in concentrated suspensions. The finite size of the counterions is considered by including an entropic contribution, related to the excluded volume of the ions, in the free energy of the suspension, giving rise to a modified counterion concentration profile. We are interested in studying the linear response of the system to an electric field, thus we solve the different electrokinetic equations using a linear perturbation scheme. We find that the ionic size effect is quite important for moderate to high particle charges at a given particle volume fraction. In addition, for such particle surface charges, both the electrophoretic mobility and the dynamic mobility change more markedly the larger the particle volume fraction, for each ion size. The latter effects are more relevant the larger the ionic size.
Finite-size scaling above the upper critical dimension in Ising models with long-range interactions
NASA Astrophysics Data System (ADS)
Flores-Sola, Emilio J.; Berche, Bertrand; Kenna, Ralph; Weigel, Martin
2015-01-01
The correlation length plays a pivotal role in finite-size scaling and hyperscaling at continuous phase transitions. Below the upper critical dimension, where the correlation length is proportional to the system length, both finite-size scaling and hyperscaling take conventional forms. Above the upper critical dimension these forms break down and a new scaling scenario appears. Here we investigate this scaling behaviour by simulating one-dimensional Ising ferromagnets with long-range interactions. We show that the correlation length scales as a non-trivial power of the linear system size and investigate the scaling forms. For interactions of sufficiently long range, the disparity between the correlation length and the system length can be made arbitrarily large, while maintaining the new scaling scenarios. We also investigate the behavior of the correlation function above the upper critical dimension and the modifications imposed by the new scaling scenario onto the associated Fisher relation.
System-size and beam energy dependence of the space-time extent of the pion emission source
NASA Astrophysics Data System (ADS)
Pak, Robert; Phenix Collaboration
2014-09-01
Two-pion interferometry measurements are used to extract the Gaussian source radii Rout, Rside and Rlong of the pion emission sources produced in d+Au, Cu+Cu and Au+Au collisions for several beam collision energies in the PHENIX experiment. The extracted radii, which are compared to recent STAR and ALICE data, show characteristic scaling patterns as a function of the initial transverse geometric size of the collision system and the transverse mass of the emitted pion pairs. These scaling patterns indicate a linear dependence of Rside on the initial transverse size, as well as a smaller freeze-out size for the d+Au system. Mathematical combinations of the extracted radii generally associated with the emission source duration and expansion rate exhibit non-monotonic behavior, suggesting a change in the expansion dynamics over this beam energy range.
Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.
In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphics Processing Units (GPUs), evaluating the various alternatives depending on the size of the systems to solve. We discuss three different solutions that operate with different levels of parallelization and GPU features. The first, exploiting the CUBLAS library, manages matrices of size up to 32x32 and employs Warp-level (one matrix, one Warp) parallelism and shared memory. The second works at Thread-block-level parallelism (one matrix, one Thread-block), still exploiting shared memory but managing matrices up to 76x76. The third is Thread-level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and relies only on the high memory bandwidth of the GPU. The first and second solutions support only partial pivoting; the third easily supports partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst case errors being many orders of magnitude times the correct values.
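The advantage of normal-mode coordinates can be sketched as follows: because the plant decouples into independent second-order modes, the open-loop frequency response is a sum over modes and the cost per frequency point grows linearly with the number of modes. The modal data below are randomly generated placeholders, not the paper's 703-mode model, and only the single-input single-output open-loop case is shown.

```python
# Minimal sketch (hypothetical modal data): frequency response of a flexible
# structure in normal-mode coordinates.  Because the plant is block-diagonal
# (one 2x2 block per mode), the response at each frequency is a sum over
# modes, so the cost per frequency point grows linearly with model size.
import numpy as np

n_modes = 703                                  # same order as the paper's example
rng = np.random.default_rng(0)
omega_n = np.sort(rng.uniform(1.0, 500.0, n_modes))   # natural freqs (rad/s)
zeta = 0.01 * np.ones(n_modes)                 # modal damping ratios (assumed)
b = rng.normal(size=n_modes)                   # modal input participation
c = rng.normal(size=n_modes)                   # modal output participation

def freq_response(omega):
    """H(j*omega) summed over decoupled second-order modes."""
    denom = -omega ** 2 + 2j * zeta * omega_n * omega + omega_n ** 2
    return np.sum(c * b / denom)

for w in np.linspace(0.1, 600.0, 5):
    print(f"|H(j{w:7.1f})| = {abs(freq_response(w)):.3e}")
```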
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear when compared to the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, namely the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Radialenes are minimally conjugated cyclic π-systems
NASA Astrophysics Data System (ADS)
Dias, Jerry Ray
2017-03-01
Conjugation energy (CE) in benzene is larger than its aromatic stabilisation energy (ASE). A far-reaching conclusion offered by this work is that, per π-electron, CE is energetically larger than aromaticity. If a diene has a doubly degenerate HOMO, then its Diels-Alder reaction will be kinetically faster than that of a similar diene with a nondegenerate HOMO. The topological conjugation energy (TCE) for the radialene, monocyclic, dendralene, and linear polyene series has quite different trends. Radialenes are minimally conjugated cyclic systems with TCE/No. π-bond = 0.432 β; the members of the dendralene series approach this same value from smaller values with increasing size. With increasing size, the members of the monocyclic and linear polyene series have, respectively, decreasing and increasing TCE/No. π-bond values approaching 0.547 β. The topological resonance energy (TRE) for radialenes, dendralenes, and linear polyenes is TRE = 0, and the TRE/π-electron for monocyclic polyenes has alternating declining values between antiaromatic (-0.3066 β, -0.07435 β, -0.03287 β, …) and aromatic (0.04543 β, 0.01594 β, 0.00807 β, …). For benzene, TRE/No. π-bond = 0.0909 β and TCE/No. π-bond = 0.576 β.
Radiation Field Forming for Industrial Electron Accelerators Using Rare-Earth Magnetic Materials
NASA Astrophysics Data System (ADS)
Ermakov, A. N.; Khankin, V. V.; Shvedunov, N. V.; Shvedunov, V. I.; Yurov, D. S.
2016-09-01
The article describes a radiation field forming system for industrial electron accelerators which provides a uniform distribution of linear charge density on the surface of the irradiated item, perpendicular to the direction of its motion. Its main element is a non-linear quadrupole lens made using rare-earth magnetic materials. The proposed system has a number of advantages over traditional beam scanning systems that use electromagnets, including easier product irradiation planning, lower instantaneous local dose rate, smaller size, and lower cost. Calculation results are provided for a 10 MeV industrial electron accelerator, as well as measurement results for the current distribution in the prototype built on the basis of these calculations.
Thermally Induced Depolarization of the Photoluminescence of Carbon Nanodots in a Colloidal Matrix
NASA Astrophysics Data System (ADS)
Starukhin, A. N.; Nelson, D. K.; Kurdyukov, D. A.; Eurov, D. A.; Stovpiaga, E. Yu.; Golubev, V. G.
2018-02-01
The effect of temperature on fluorescence polarization in a colloidal system of carbon nanodots in glycerol under linearly polarized excitation is investigated for the first time. It is found that the experimentally obtained temperature dependence of the degree of linear polarization of fluorescence can be described by the Levshin-Perrin equation, taking into account the rotational diffusion of luminescent particles (fluorophores) in the liquid matrix. The fluorophore size determined in the context of the Levshin-Perrin model is significantly smaller than the size of carbon nanodots. This discrepancy gives evidence that small atomic groups responsible for nanodot luminescence are characterized by high segmental mobility with a large amplitude of motion with respect to the nanodot core.
Resonant mode controllers for launch vehicle applications
NASA Technical Reports Server (NTRS)
Schreiner, Ken E.; Roth, Mary Ellen
1992-01-01
Electro-mechanical actuator (EMA) systems are currently being investigated for the National Launch System (NLS) as a replacement for hydraulic actuators due to the large amount of manpower and support hardware required to maintain the hydraulic systems. EMA systems in weight sensitive applications, such as launch vehicles, have been limited to around 5 hp due to system size, controller efficiency, thermal management, and battery size. Presented here are design and test data for an EMA system that competes favorably in weight and is superior in maintainability to the hydraulic system. An EMA system uses dc power provided by a high energy density bipolar lithium thionyl chloride battery, with power conversion performed by low loss resonant topologies, and a high efficiency induction motor controlled with a high performance field oriented controller to drive a linear actuator.
Jeong, Bongwon; Cho, Hanna; Keum, Hohyun; Kim, Seok; Michael McFarland, D; Bergman, Lawrence A; King, William P; Vakakis, Alexander F
2014-11-21
Intentional utilization of geometric nonlinearity in micro/nanomechanical resonators provides a breakthrough to overcome the narrow bandwidth limitation of linear dynamic systems. In past works, implementation of intentional geometric nonlinearity to an otherwise linear nano/micromechanical resonator has been successfully achieved by local modification of the system through nonlinear attachments of nanoscale size, such as nanotubes and nanowires. However, the conventional fabrication method involving manual integration of nanoscale components produced a low yield rate in these systems. In the present work, we employed a transfer-printing assembly technique to reliably integrate a silicon nanomembrane as a nonlinear coupling component onto a linear dynamic system with two discrete microcantilevers. The dynamics of the developed system was modeled analytically and investigated experimentally as the coupling strength was finely tuned via FIB post-processing. The transition from the linear to the nonlinear dynamic regime with gradual change in the coupling strength was experimentally studied. In addition, we observed for the weakly coupled system that oscillation was asynchronous in the vicinity of the resonance, thus exhibiting a nonlinear complex mode. We conjectured that the emergence of this nonlinear complex mode could be attributed to the nonlinear damping arising from the attached nanomembrane.
Fame emerges as a result of small memory
NASA Astrophysics Data System (ADS)
Bingol, Haluk
2008-03-01
A dynamic memory model is proposed in which an agent “learns” a new agent by means of recommendation. The agents can also “remember” and “forget.” The memory size is decreased while the population size is kept constant. “Fame” emerges as a few agents become very well known at the expense of the majority being completely forgotten. The minimum and the maximum of fame change linearly with the relative memory size. The network properties of the who-knows-who graph, which represents the state of the system, are investigated.
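A toy re-implementation conveys the flavour of such a model; the specific recommendation and forgetting rules below are guesses for illustration, not the paper's exact dynamics. Agents hold a bounded memory of other agents, learn new agents by recommendation from an agent they already know, and forget the oldest entry when the memory is full; the fame of an agent is then the number of agents that remember it.

```python
# Toy sketch (the recommendation and forgetting rules are assumptions, not the
# paper's exact model): bounded agent memories, learning by recommendation,
# forgetting the oldest entry, and a simple fame count at the end.
import numpy as np
from collections import deque, Counter

n_agents, memory_size, steps = 200, 5, 20000
rng = np.random.default_rng(7)
memory = [deque(rng.choice(n_agents, memory_size, replace=False),
                maxlen=memory_size) for _ in range(n_agents)]

for _ in range(steps):
    a = rng.integers(n_agents)
    b = memory[a][rng.integers(len(memory[a]))]        # ask a known agent b
    c = memory[b][rng.integers(len(memory[b]))]        # b recommends c
    if c != a and c not in memory[a]:
        memory[a].append(c)                            # oldest entry drops out

fame = Counter(x for m in memory for x in set(m))
counts = [fame.get(i, 0) for i in range(n_agents)]
print("max fame:", max(counts),
      "agents completely forgotten:", sum(1 for c in counts if c == 0))
```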
An analytical approach to top predator interference on the dynamics of a food chain model
NASA Astrophysics Data System (ADS)
Senthamarai, R.; Vijayalakshmi, T.
2018-04-01
In this paper, a nonlinear mathematical model is proposed and analyzed to study the effect of top predator interference on the dynamics of a food chain. The model is formulated as a system of non-linear ordinary differential equations. There are three dimensionless state variables: the size of the prey population x, the size of the intermediate predator population y, and the size of the top predator population z. The analytical results are compared with numerical simulations performed in MATLAB, and satisfactory agreement is observed.
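A generic three-species food chain can be integrated numerically as sketched below. The particular functional responses (logistic prey growth, Holling type-II predation, and an interference term in the top predator's response) and all parameter values are assumptions chosen for demonstration; they are not the equations analyzed in the paper.

```python
# Illustrative sketch only: a three-species food chain (prey x, intermediate
# predator y, top predator z) with an assumed interference term m*z in the
# top predator's functional response.  Parameters are hypothetical.
from scipy.integrate import solve_ivp

def food_chain(t, u, r=1.0, K=1.0, a1=5.0, b1=3.0, a2=0.1, b2=2.0,
               d1=0.4, d2=0.01, m=0.5):
    x, y, z = u
    f1 = a1 * x / (1.0 + b1 * x)              # prey -> intermediate predator
    f2 = a2 * y / (1.0 + b2 * y + m * z)      # top-predator interference term
    dx = r * x * (1.0 - x / K) - f1 * y
    dy = f1 * y - f2 * z - d1 * y
    dz = f2 * z - d2 * z
    return [dx, dy, dz]

sol = solve_ivp(food_chain, (0.0, 200.0), [0.8, 0.2, 8.0],
                rtol=1e-8, atol=1e-10)
print("final populations (x, y, z):", sol.y[:, -1])
```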
A Flexible CUDA LU-based Solver for Small, Batched Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste
This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of non-linear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory, and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster under certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers that need batched linear solvers to choose whichever implementation is more appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that choose the best implementation depending on the input data.
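The numerical kernel assigned to each CUDA thread can be illustrated in plain NumPy: LU factorization with complete pivoting, PAQ = LU, followed by forward and back substitution. The sketch below shows only this per-system algorithm, not the GPU batching, shared-memory choices or dynamic allocation discussed in the chapter.

```python
# Per-system sketch in NumPy: LU decomposition with complete pivoting,
# PAQ = LU, followed by a triangular solve.  Illustrates the numerical kernel
# only, not the GPU batching.
import numpy as np

def lu_complete_pivoting(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)          # row permutation
    q = np.arange(n)          # column permutation
    for k in range(n - 1):
        # pick the largest remaining entry as pivot (complete pivoting)
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i += k; j += k
        A[[k, i], :] = A[[i, k], :]; p[[k, i]] = p[[i, k]]
        A[:, [k, j]] = A[:, [j, k]]; q[[k, j]] = q[[j, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A, p, q            # unit-lower L and U packed into A

def solve(A, b):
    LU, p, q = lu_complete_pivoting(A)
    n = len(b)
    y = b[p].astype(float)
    for k in range(1, n):                      # forward substitution (unit L)
        y[k] -= LU[k, :k] @ y[:k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):             # back substitution
        x[k] = (y[k] - LU[k, k+1:] @ x[k+1:]) / LU[k, k]
    out = np.zeros(n)
    out[q] = x                                 # undo the column permutation
    return out

A = np.random.default_rng(0).normal(size=(8, 8))
b = np.arange(8.0)
print("max residual:", np.max(np.abs(A @ solve(A, b) - b)))
```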
NASA Astrophysics Data System (ADS)
Baldysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; Kroszczynski, Krzysztof; Araszkiewicz, Andrzej
2015-04-01
In recent years, GNSS has begun to play an increasingly important role in research related to climate monitoring. Based on the GPS system, which has the longest operational record in comparison with the other systems, and a common computational strategy applied to all observations, long and homogeneous ZTD (Zenith Tropospheric Delay) time series were derived. This paper presents results of an analysis of 16-year ZTD time series obtained from the EPN (EUREF Permanent Network) reprocessing performed by the Military University of Technology. To maintain the uniformity of the data, the analyzed period (1998-2013) is exactly the same for all stations: observations carried out before 1998 were removed from the time series, and observations processed using a different strategy were recalculated according to the MUT LAC approach. For all 16-year time series (59 stations), Lomb-Scargle periodograms were created to obtain information about the oscillations in the ZTD time series. Because strong annual oscillations disturb the character of oscillations with smaller amplitudes and thus hinder their investigation, Lomb-Scargle periodograms were also created for time series with the annual oscillations removed, in order to verify the presence of semi-annual, ter-annual and quarto-annual oscillations. The linear trend and seasonal components were estimated using LSE (Least Squares Estimation), and the Mann-Kendall trend test was used to confirm the presence of the linear trend determined by the LSE method. In order to verify the effect of the length of the time series on the estimated size of the linear trend, a comparison between two different lengths of ZTD time series was performed. For this comparative analysis, 30 stations which have been operating since 1996 were selected, and two periods were analyzed for each: a shortened 16-year period (1998-2013) and the full 18-year period (1996-2013). For some stations the additional two years of observations had a significant impact on the size of the linear trend; for only 4 stations was the linear trend exactly the same for the two periods. In one case, the trend changed from negative (16-year time series) to positive (18-year time series). The average value of the linear trends for the 16-year time series is 1.5 mm/decade, but their spatial distribution is not uniform. The average value of the linear trends for the 18-year time series is 2.0 mm/decade, with a better spatial distribution and smaller discrepancies.
Mathematical modelling of the growth of human fetus anatomical structures.
Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech
2017-09-01
The goal of this study was to present a procedure enabling mathematical analysis of the increase of linear sizes of human anatomical structures, to estimate mathematical model parameters and to evaluate their adequacy. The section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, measurements in the ImageJ computer system, and statistical analysis. We used an anthropologic method based on age determination with the use of the crown-rump length, CRL (V-TUB), by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting the growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The size-age interdependence is described by many functions; however, the following are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz I and II, and the von Bertalanffy function. With the use of the procedures described above, mathematical model parameters were assessed for the V-PL (total body length) and CRL body length increases, for the rectus abdominis total length h and its segments hI, hII, hIII, hIV, as well as for the biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best fits to the measurement results were observed for the exponential and Gompertz models.
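As an illustration of fitting one of the candidate growth models listed above, the sketch below fits a Gompertz curve y(t) = A·exp(-exp(b - c·t)) to synthetic size-versus-age measurements with non-linear least squares; the data, parameter values and age range are invented, not the study's foetal measurements.

```python
# Minimal sketch (synthetic measurements): fitting a Gompertz growth curve
# y(t) = A * exp(-exp(b - c*t)) to size-vs-age data with non-linear least
# squares, one of the candidate models listed in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, c):
    return A * np.exp(-np.exp(b - c * t))

rng = np.random.default_rng(2)
t = np.linspace(10, 30, 30)                  # gestational age in weeks (assumed)
y = gompertz(t, 120.0, 2.5, 0.15) + rng.normal(0.0, 2.0, t.size)  # size in mm

popt, pcov = curve_fit(gompertz, t, y, p0=(100.0, 2.0, 0.1))
perr = np.sqrt(np.diag(pcov))
print("A, b, c =", popt)
print("std errors =", perr)
```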
NASA Astrophysics Data System (ADS)
Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.
2002-12-01
As part of the ACE-Asia experiment, conducted off the coasts of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths within the 354-1558 nm range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about the particle size distribution. In this paper, we use sunphotometer measurements to retrieve the size distribution of aerosols during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and a multimodal approach. The constrained linear inversion algorithm makes no assumption about the mathematical form of the distribution to be retrieved. Conversely, the multimodal technique assumes that the aerosol size distribution is represented as a linear combination of a few lognormal modes with predefined values of mode radii and geometric standard deviations. The mode amplitudes are varied to obtain the best fit of the sum of the optical thicknesses of the individual modes to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of the retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and a differential mobility analyzer system aboard the Twin Otter aircraft.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...
2018-04-17
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
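The mechanism that IMEX methods exploit can be seen already in a first-order implicit-explicit Euler step on a scalar split problem: the stiff ("acoustic") term is advanced implicitly and the slow term explicitly, so the step size is limited only by the slow dynamics. The sketch below uses assumed stiff and non-stiff rates and is not one of the ARS/ARK schemes evaluated in the paper.

```python
# Conceptual sketch (first-order IMEX Euler, hypothetical rates): the stiff
# term is treated implicitly and the slow term explicitly, so large step
# sizes remain stable even though explicit Euler alone would blow up.
import numpy as np

lam_fast, lam_slow = -1000.0, -1.0      # assumed stiff / non-stiff rates

def imex_euler(u0, dt, nsteps):
    u = u0
    for _ in range(nsteps):
        # explicit contribution from the slow term, implicit solve for the fast one:
        #   u_new = u + dt*lam_slow*u + dt*lam_fast*u_new
        u = (u + dt * lam_slow * u) / (1.0 - dt * lam_fast)
    return u

u0, T = 1.0, 1.0
for dt in (0.1, 0.01, 0.001):
    approx = imex_euler(u0, dt, int(T / dt))
    print(f"dt={dt:6.3f}  u(T)={approx: .6e}  "
          f"exact={np.exp((lam_fast + lam_slow) * T): .3e}")
```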
Controlling Flexible Manipulators, an Experimental Investigation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hastings, Gordon Greene
1986-01-01
Lightweight, slender manipulators offer faster response and/or greater workspace range for the same size actuators than traditional manipulators. Lightweight construction of manipulator links results in increased structural flexibility. This increased flexibility must be considered in the design of control systems to properly account for dynamic flexible vibrations and static deflections. Real-time control of flexible manipulator vibrations is experimentally investigated. Models intended for real-time control of distributed parameter systems such as flexible manipulators rely on model approximation schemes. A linear model based on the application of Lagrangian dynamics to a rigid body mode and a series of separable flexible modes is examined with respect to model order requirements and modal candidate selection. Balanced realizations are applied to the linear flexible model to obtain an estimate of the appropriate order for a selected model. Describing the flexible deflections as a linear combination of modes results in measurements of beam state which yield information about several modes. To realize the potential of linear systems theory, knowledge of each state must be available. State estimation is accomplished by implementation of a Kalman filter. State feedback control laws are implemented based upon linear quadratic regulator design.
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles
NASA Astrophysics Data System (ADS)
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.
2017-09-01
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79) as well as between mean grain size and susceptibility (R=-0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in worldwide continental shelf systems.
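The two-predictor regression described above can be sketched as follows; the numbers are synthetic stand-ins for the conductivity, susceptibility and grain-size values of the 33 calibration samples, and the assumed ranges and relation are illustrative only.

```python
# Minimal sketch (synthetic values, not the NW Iberian data set): multiple
# linear regression predicting mean grain size from electric conductivity and
# magnetic susceptibility, mirroring the two-predictor models described above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 33                                       # number of calibration samples
conductivity = rng.uniform(0.1, 1.5, n)      # S/m (assumed range)
susceptibility = rng.uniform(5, 200, n)      # 1e-5 SI (assumed range)
# assumed relation: finer sediments -> higher conductivity and susceptibility
mean_grain_size = (500 - 150 * conductivity - 1.2 * susceptibility
                   + rng.normal(0, 20, n))   # micrometres

X = np.column_stack([conductivity, susceptibility])
model = LinearRegression().fit(X, mean_grain_size)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 on calibration samples:", model.score(X, mean_grain_size))

# extrapolate along a survey line (hypothetical new profiler readings)
new = np.array([[0.9, 120.0], [0.3, 30.0]])
print("predicted mean grain size [µm]:", model.predict(new))
```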
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has recently attracted a great deal of attention in the computational biology community. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution (a particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (a general solution of any linear system is denoted by the span of a basis in the solution space of its associated homogeneous system, offset from the origin by a vector, namely by any particular solution; a general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling) in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). The key ingredients of our construction are (i) a fast consistency-checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
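For context, the snippet below shows only the generic baseline the paper improves upon: finding a particular solution of a linear system over GF(2) by ordinary Gaussian elimination. The paper's linear-time method specifically avoids this kind of elimination by exploiting the loop-free pedigree structure, which is not reproduced here.

```python
# Baseline sketch only: one particular solution of A x = b over GF(2) by
# ordinary Gaussian elimination (free variables are set to 0).  The paper's
# contribution is precisely to avoid this generic elimination.
import numpy as np

def solve_gf2(A, b):
    """Return one particular solution of A x = b over GF(2), or None."""
    A = A.copy() % 2
    b = b.copy() % 2
    m, n = A.shape
    pivots = []
    row = 0
    for col in range(n):
        pivot_rows = np.nonzero(A[row:, col])[0]
        if pivot_rows.size == 0:
            continue
        r = row + pivot_rows[0]
        A[[row, r]] = A[[r, row]]
        b[[row, r]] = b[[r, row]]
        for i in range(m):
            if i != row and A[i, col]:
                A[i] ^= A[row]
                b[i] ^= b[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    if np.any(b[row:] == 1):       # inconsistent rows left over
        return None
    x = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=int)
b = np.array([1, 0, 1], dtype=int)
x = solve_gf2(A, b)
print("particular solution:", x, "check:", (A @ x) % 2)
```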
Evaluating linear response in active systems with no perturbing field
NASA Astrophysics Data System (ADS)
Szamel, Grzegorz
2017-03-01
We present a method for the evaluation of time-dependent linear response functions for systems of active particles propelled by a persistent (colored) noise from unperturbed simulations. The method is inspired by the Malliavin weights sampling method proposed by Warren and Allen (Phys. Rev. Lett., 109 (2012) 250601) for out-of-equilibrium systems of passive Brownian particles. We illustrate our method by evaluating two linear response functions for a single active particle in an external harmonic potential. As an application, we calculate the time-dependent mobility function and an effective temperature, defined through the Einstein relation between the self-diffusion and mobility coefficients, for a system of many active particles interacting via a screened Coulomb potential. We find that this effective temperature decreases with increasing persistence time of the self-propulsion. Initially, for not too large persistence times, it changes rather slowly, but then it decreases markedly when the persistence length of the self-propelled motion becomes comparable with the particle size.
NASA Astrophysics Data System (ADS)
Szamel, Grzegorz
We present a method for the evaluation of time-dependent linear response functions for systems of active particles propelled by a persistent (colored) noise from unperturbed simulations. The method is inspired by the Malliavin weights sampling method proposed earlier for systems of (passive) Brownian particles. We illustrate our method by evaluating a linear response function for a single active particle in an external harmonic potential. As an application, we calculate the time-dependent mobility function and an effective temperature, defined through the Einstein relation between the self-diffusion and mobility coefficients, for a system of active particles interacting via a screened-Coulomb potential. We find that this effective temperature decreases with increasing persistence time of the self-propulsion. Initially, for not too large persistence times, it changes rather slowly, but then it decreases markedly when the persistence length of the self-propelled motion becomes comparable with the particle size. Supported by NSF and ERC.
NASA Astrophysics Data System (ADS)
Ke, Rihuan; Ng, Michael K.; Sun, Hai-Wei
2015-12-01
In this paper, we study block lower triangular Toeplitz-like systems with tridiagonal blocks, which arise from time-fractional partial differential equations. Existing fast numerical solvers (e.g., the fast approximate inversion method) cannot handle such linear systems because the main diagonal blocks differ. The main contribution of this paper is to propose a fast direct method for solving this linear system and to illustrate that the proposed method is much faster than the classical block forward substitution method. Our idea is based on a divide-and-conquer strategy together with fast Fourier transforms for calculating Toeplitz matrix-vector products. The complexity is O(MN log² M) arithmetic operations, where M is the number of blocks (the number of time steps) in the system and N is the size (number of spatial grid points) of each block. Numerical examples from the finite difference discretization of time-fractional partial differential equations are given to demonstrate the efficiency of the proposed method.
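The classical baseline mentioned above, block forward substitution, is easy to sketch; the blocks below come from a hypothetical discretization (not the paper's time-fractional scheme), and the fast O(MN log² M) divide-and-conquer solver itself is not shown.

```python
# Baseline sketch (classical block forward substitution, not the paper's fast
# method): solve a block lower triangular Toeplitz system with a tridiagonal
# diagonal block.  T[k] is the block on the k-th sub-diagonal (hypothetical).
import numpy as np
from scipy.linalg import solve_banded

M, N = 6, 8                                   # number of time steps, grid size
rng = np.random.default_rng(4)

main = 2.0 + rng.uniform(0.1, 0.2, N)         # tridiagonal diagonal block T[0]
off = -0.5 * np.ones(N - 1)
T0_banded = np.vstack([np.r_[0.0, off], main, np.r_[off, 0.0]])
T0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
T = [T0] + [(-0.5 / k) * np.eye(N) for k in range(1, M)]   # sub-diagonal blocks

b = rng.normal(size=(M, N))
x = np.zeros((M, N))
for i in range(M):                            # block forward substitution
    rhs = b[i].copy()
    for j in range(i):
        rhs -= T[i - j] @ x[j]
    x[i] = solve_banded((1, 1), T0_banded, rhs)

# residual check against the assembled full system
A = np.zeros((M * N, M * N))
for i in range(M):
    for j in range(i + 1):
        A[i*N:(i+1)*N, j*N:(j+1)*N] = T[i - j]
print("max residual:", np.max(np.abs(A @ x.ravel() - b.ravel())))
```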
Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems
2006-01-01
Data flows of finite size arrive at the system randomly; for such a system, a modified dual scheduling algorithm is proposed that stabilizes the system. The efficiency of the controller is computed over finite and infinite time intervals, and since the controller is optimal, this yields hard limits.
NASA Astrophysics Data System (ADS)
Chen, Jiahui; Zhou, Hui; Duan, Changkui; Peng, Xinhua
2017-03-01
Entanglement, a unique quantum resource with no classical counterpart, remains at the heart of quantum information. The Greenberger-Horne-Zeilinger (GHZ) and W states are two inequivalent classes of multipartite entangled states which cannot be transformed into each other by means of local operations and classical communication. In this paper, we present methods to prepare the GHZ and W states via global controls on a long-range Ising spin model. For the GHZ state, general solutions are analytically obtained for an arbitrary-size spin system, while for the W state, we find a standard way to prepare it that is analytically illustrated in three- and four-spin systems and numerically demonstrated for larger systems. The number of parameters required in the numerical search increases only linearly with the size of the system.
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of control of the linear sizes of parts by the two-point measurement method. The task of the article is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, corresponding to the classes of kinematic pairs in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types. Methodical uncertainty also arises when measuring the element's average size, for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
Investigating the unification of LOFAR-detected powerful AGN in the Boötes field
NASA Astrophysics Data System (ADS)
Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.
2017-08-01
Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 10^25.5 W Hz^-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metric(s), such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of our different sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
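As an aside, the reduction of a linearly constrained binary quadratic problem to QUBO can be illustrated with the standard penalty method. The sketch below is not the paper's constructive mapping (which eliminates the continuous PDE-derived variables); the matrices Q0 and A, the right-hand side b and the penalty weight are illustrative assumptions.

```python
"""Minimal sketch (not the paper's construction): fold a linear equality
constraint into a binary quadratic objective as a penalty, producing a QUBO."""
import numpy as np

def to_qubo(Q0, A, b, penalty):
    """Return Q such that x^T Q x ~ x^T Q0 x + penalty * ||A x - b||^2 for binary x."""
    n = Q0.shape[0]
    Q = Q0.astype(float).copy()
    # ||Ax - b||^2 = x^T (A^T A) x - 2 b^T A x + const (constant dropped)
    Q += penalty * (A.T @ A)
    lin = -2.0 * penalty * (A.T @ b)       # linear coefficients
    Q[np.arange(n), np.arange(n)] += lin   # absorb onto the diagonal using x_i^2 = x_i
    return Q

def brute_force_min(Q):
    """Enumerate all binary vectors (fine for tiny n) to check the mapping."""
    n = Q.shape[0]
    best = None
    for k in range(2 ** n):
        x = np.array([(k >> i) & 1 for i in range(n)], dtype=float)
        val = x @ Q @ x
        if best is None or val < best[0]:
            best = (val, x)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q0 = rng.normal(size=(6, 6))
    Q0 = 0.5 * (Q0 + Q0.T)
    A = np.ones((1, 6))            # illustrative constraint: sum(x) = 3
    b = np.array([3.0])
    val, x = brute_force_min(to_qubo(Q0, A, b, penalty=10.0))
    print("minimizer:", x, "constraint satisfied:", bool((A @ x == b).all()))
```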
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect attracts the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
Thermodynamics of a lattice gas with linear attractive potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirjol, Dan; Schat, Carlos
We study the equilibrium thermodynamics of a one-dimensional lattice gas with interaction V(|i−j|) = −(1/(μn))(ξ − |i−j|/n), given by the superposition of a universal attractive interaction of strength −ξ/(μn) < 0 and a linear attractive potential |i−j|/(μn²). The interaction is rescaled with the lattice size n, such that the thermodynamical limit n → ∞ is well behaved. The thermodynamical properties of the system can be found exactly, both for a finite-size lattice and in the thermodynamical limit n → ∞. The lattice gas can be mapped to a system of non-interacting bosons which are placed on known energy levels. The exact solution shows that the system has a liquid-gas phase transition for ξ > 0. In the large temperature limit T ≫ T₀(ρ) = ρ²/(4μ), with ρ the density, the system becomes spatially homogeneous, and the equation of state is given to a good approximation by a lattice version of the van der Waals equation, with critical temperature T_c^(vdW) = (3ξ − 1)/(12μ).
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D
2013-08-28
We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.
NASA Astrophysics Data System (ADS)
Corsini, Niccolò R. C.; Greco, Andrea; Hine, Nicholas D. M.; Molteni, Carla; Haynes, Peter D.
2013-08-01
We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], 10.1103/PhysRevLett.94.145501, it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.
Complexation of Polyelectrolyte Micelles with Oppositely Charged Linear Chains.
Kalogirou, Andreas; Gergidis, Leonidas N; Miliou, Kalliopi; Vlahos, Costas
2017-03-02
The formation of interpolyelectrolyte complexes (IPECs) from linear AB diblock copolymer precursor micelles and oppositely charged linear homopolymers is studied by means of molecular dynamics simulations. All beads of the linear polyelectrolyte (C) are charged with elementary quenched charge +1e, whereas in the diblock copolymer only the solvophilic (A) type beads have quenched charge -1e. For the same Bjerrum length, the ratio of positive to negative charges, Z±, of the mixture and the relative length of charged moieties r determine the size of IPECs. We found a nonmonotonic variation of the size of the IPECs with Z±. For small Z± values, the IPECs retain the size of the precursor micelle, whereas at larger Z± values the IPECs decrease in size due to the contraction of the corona and then increase as the aggregation number of the micelle increases. The minimum size of the IPECs is obtained at lower Z± values when the length of the hydrophilic block of the linear diblock copolymer decreases. The aforementioned findings are in agreement with experimental results. At a smaller Bjerrum length, we obtain the same trends but at even smaller Z± values. The linear homopolymer charged units are distributed throughout the corona.
Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.
Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G
2010-04-16
We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.
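For readers who want to check the quantity being estimated, the sketch below computes the Renyi-2 entanglement entropy of a small open Heisenberg chain by exact diagonalization, not by the quantum Monte Carlo Swap-operator estimator described above; the chain length and block size are illustrative choices.

```python
"""Exact-diagonalization cross-check of the Renyi-2 entropy for a small
open Heisenberg chain (illustrative; not the QMC Swap estimator)."""
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2.0
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg(n):
    """H = sum_i S_i . S_{i+1} with open boundaries."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n - 1):
        for op in (sx, sy, sz):
            H += site_op(op, i, n) @ site_op(op, i + 1, n)
    return H

def renyi2(psi, n, block):
    """S_2 = -ln Tr(rho_A^2) for the first `block` sites."""
    psi = psi.reshape(2 ** block, 2 ** (n - block))
    rho = psi @ psi.conj().T
    return -np.log(np.real(np.trace(rho @ rho)))

if __name__ == "__main__":
    n = 8
    w, v = np.linalg.eigh(heisenberg(n))
    print("ground-state energy:", w[0].real)
    print("Renyi-2 entropy of half chain:", renyi2(v[:, 0], n, n // 2))
```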
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers is developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.
Relating Linear and Volumetric Variables Through Body Scanning to Improve Human Interfaces in Space
NASA Technical Reports Server (NTRS)
Margerum, Sarah E.; Ferrer, Mike A.; Young, Karen S.; Rajulu, Sudhakar
2010-01-01
Designing space suits and vehicles for the diverse human population presents unique challenges for the methods of traditional anthropometry. Space suits are bulky, allow the operator to shift position within the suit, and inhibit the ability to identify body landmarks. Limited suit sizing options also cause variability in fit and performance between similarly sized individuals. Space vehicles are restrictive in volume, limiting both fit and the ability to collect data. NASA's Anthropometric and Biomechanics Facility (ABF) has utilized 3D scanning to shift from traditional linear anthropometry and to explore volumetric capabilities that provide anthropometric solutions for design. Overall, the key goals are to improve human-system performance and develop new processes to aid in the design and evaluation of space systems. Four case studies are presented that illustrate the shift from purely linear analyses to an augmented volumetric toolset to predict and analyze the human within the space suit and vehicle. The first case study involves the calculation of maximal head volume to estimate total free volume in the helmet for proper air exchange. Traditional linear measurements resulted in an inaccurate representation of the head shape, yet limited data exist for the determination of a large head volume. Steps were first taken to identify and classify a maximum head volume, and the resulting comparisons to the estimate are presented in this paper. This study illustrates the gap between linear components of anthropometry and the need for overall volume metrics in order to provide solutions. A second case study examines the overlay of space suit scans and components onto scanned individuals to quantify fit and clearance and to aid in sizing the suit to the individual. Restrictions in space suit size availability present unique challenges to optimally fitting the individual within a limited sizing range while maintaining performance. Quantification of the clearance and fit between similarly sized individuals is critical in providing a greater understanding of the human body's function within the suit. The third case study explores the development of a conformal seat pan using scanning techniques, and details the challenges of volumetric analyses that were overcome in order to develop a universal seat pan that can be utilized across the entire user population. The final case study explores expanding volumetric capabilities through the generation of boundary manikins. Boundary manikins are representative individuals from the population of interest that represent the extremes of the population spectrum. The ABF developed a technique to take three-dimensional scans of individuals and manipulate the scans to reflect the boundary manikins' anthropometry. In essence, this process generates a representative three-dimensional scan of an individual from anthropometry, using another individual's scanned image. The results from this process can be used in design process modeling and initial suit sizing work as a three-dimensional, realistic example of individuals from the population, maintaining the variability between, and correlation to, the relevant dimensions of interest.
Huzak, M; Deleuze, M S; Hajgató, B
2011-09-14
An analysis using the formalism of crystalline orbitals for extended systems with periodicity in one dimension demonstrates that any antiferromagnetic and half-metallic spin-polarization of the edge states in n-acenes, and more generally in zigzag graphene nanoislands and nanoribbons of finite width, would imply a spin contamination
Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
NASA Astrophysics Data System (ADS)
Jeyakumar, S.
2016-06-01
The dependence of the turnover frequency on the linear size is presented for a sample of Giga-hertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency with the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolve strongly with the linear size. Optical depth effects have been included to the 3D model for the radio source of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.
Riis, J
2014-01-01
This paper uses the frameworks and evidence from marketing and behavioral economics to highlight the opportunities and barriers for portion control in food service environments. Applying Kahneman's 'thinking fast and slow' concepts, it describes 10 strategies that can be effective in 'tricking' the consumer's fast cognitive system to make better decisions and in triggering the slow cognitive system to help prevent the fast system from making bad decisions. These strategies include shrinking defaults, elongating packages, increasing the visibility of small portions, offering more mixed virtue options, adding more small sizes, offering 'right-sized' standard portions, using meaningful size labels, adopting linear pricing, using temporal landmarks to push smaller portions and facilitating pre-commitment. For each of these strategies, I discuss the specific cost and revenue barriers that a food service operator would face if the strategy were adopted. PMID:25033960
Riis, J
2014-07-01
This paper uses the frameworks and evidence from marketing and behavioral economics to highlight the opportunities and barriers for portion control in food service environments. Applying Kahneman's 'thinking fast and slow' concepts, it describes 10 strategies that can be effective in 'tricking' the consumer's fast cognitive system to make better decisions and in triggering the slow cognitive system to help prevent the fast system from making bad decisions. These strategies include shrinking defaults, elongating packages, increasing the visibility of small portions, offering more mixed virtue options, adding more small sizes, offering 'right-sized' standard portions, using meaningful size labels, adopting linear pricing, using temporal landmarks to push smaller portions and facilitating pre-commitment. For each of these strategies, I discuss the specific cost and revenue barriers that a food service operator would face if the strategy were adopted.
Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui
2016-01-01
Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, as it affects the effectiveness of diagnosis, treatment planning, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups: NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by a 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the superimposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for the NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for the NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for the VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference was found in geometric measurement between the NewTom VG 0.15 mm and NewTom VG 0.30 mm groups (P = 0.999), while a significant difference was found between the VATECH DCTPRO 0.30 mm and NewTom VG 0.30 mm groups (P = 0.006). Conclusions: 3D reconstruction from CBCT data can achieve high linear, volumetric, and geometric accuracy. Increasing voxel resolution from 0.30 to 0.15 mm does not result in increased accuracy of 3D tooth reconstruction, while different systems can affect the accuracy. PMID:27270544
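The statistical workflow described above (paired t-test against physical measurements, one-way ANOVA across scanner/voxel-size groups) can be sketched as follows; the numbers are synthetic placeholders drawn from the reported means and standard deviations, not the study data.

```python
"""Illustrative sketch of the comparisons described above; synthetic numbers."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic paired measurements for one group: reconstructed vs physical length (mm)
physical = rng.normal(20.0, 2.0, size=17)
reconstructed = physical + rng.normal(-0.03, 0.48, size=17)   # placeholder bias/spread
t_paired, p_paired = stats.ttest_rel(reconstructed, physical)
print(f"paired t-test: t = {t_paired:.2f}, p = {p_paired:.3f}")

# Synthetic linear deviations (mm) for three groups, compared with one-way ANOVA
group_a = rng.normal(-0.03, 0.48, size=17)
group_b = rng.normal(-0.45, 0.42, size=16)
group_c = rng.normal(-0.93, 0.40, size=17)
f_val, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"one-way ANOVA: F = {f_val:.2f}, p = {p_anova:.4f}")
```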
Design and Analysis of a Navigation System Using the Federated Filter
1995-12-01
There are a number of different sizes for INS states in each Kalman filter. In DKFSIM 3.3, the largest available is the so-called ABIAS model. [Table fragment: INS state representation parameters for the ABIAS model — 3 position drifts (linearized propagation driven by ECEF velocity drifts), 3 velocity drifts.]
Structural performance analysis and redesign
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1978-01-01
Program performs stress, buckling, and vibrational analysis of large, linear finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through the use of sparse matrix solution techniques and other computational and data management procedures designed for problems of very large size.
Estimating Linear Size and Scale: Body Rulers
ERIC Educational Resources Information Center
Jones, Gail; Taylor, Amy; Broadwell, Bethany
2009-01-01
The National Science Education Standards emphasise the use of concepts and skills that cut across the science domains. One of these cross-cutting areas is measurement. Students should know measurement systems, units of measurement, tools and error in measurement as well as the importance of measurement to scientific endeavours. Even though…
Monotone Approximation for a Nonlinear Size and Class Age Structured Epidemic Model
2006-02-22
…follows from standard results, given that they are all linear problems with local boundary conditions for Sinko-Streifer type systems.
FBX aqueous chemical dosimeter for measurement of virtual wedge profiles.
Semwal, Manoj K; Bansal, Anil K; Thakur, Pradeep K; Vidyasagar, Pandit B
2008-10-24
We investigated the ferrous sulfate-benzoic acid-xylenol orange (FBX) aqueous chemical dosimeter for measurement of virtual (dynamic) wedge profiles on a linear accelerator. The layout for irradiation of the FBX-filled tubes mimicked a conventional linear detector array geometry. A comparison of the resulting measurements with film-measured profiles showed that, in the main beam region, the difference between the FBX system and the film system was within +/-2% and that, in the penumbra region, the difference varied from +/-1 mm to +/-2.5 mm in terms of positional equivalence, depending on the size of the dosimeter tubes. We thus believe that the energy-independent FBX dosimetry system can measure virtual wedge profiles with reasonable accuracy at reasonable cost. However, efficiency improvement is required before this dosimetry system can be accepted into routine practice.
Kalinina, Elizabeth A
2013-08-01
The explicit Euler method is known to be very easy and effective to implement for many applications. This article extends results previously obtained for systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. The optimal (minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
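A minimal sketch of an adaptive explicit Euler integrator is shown below. It adapts the step from a local error estimate obtained by step doubling, which is a standard stand-in and not necessarily the article's analytically optimal step-size formula; the tolerance and the test system are assumptions.

```python
"""Adaptive explicit Euler sketch (step-doubling error estimate, illustrative)."""
import numpy as np

def euler_adaptive(f, t0, y0, t_end, h0=1e-2, tol=1e-4):
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        y_full = y + h * f(t, y)                            # one Euler step
        y_half = y + 0.5 * h * f(t, y)
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)   # two half steps
        err = np.linalg.norm(y_two - y_full)
        if err <= tol or h < 1e-12:
            t, y = t + h, y_two
            ts.append(t)
            ys.append(y.copy())
        # local error of Euler is O(h^2): rescale the step towards the target error
        h *= min(2.0, max(0.1, 0.9 * np.sqrt(tol / (err + 1e-16))))
    return np.array(ts), np.array(ys)

if __name__ == "__main__":
    # Mildly stiff linear test system y' = A y (assumed example)
    A = np.array([[-50.0, 1.0], [0.0, -1.0]])
    ts, ys = euler_adaptive(lambda t, y: A @ y, 0.0, [1.0, 1.0], 1.0)
    print("steps taken:", len(ts) - 1, "final state:", ys[-1])
```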
High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.
Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D
2018-05-30
NeuroQuant ® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
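A sketch of the kind of per-region linear transformation described above, fitted by ordinary least squares, is given below; the volumes are synthetic placeholders, and the study's Bayesian variant would instead place priors on the slope and intercept.

```python
"""Sketch: fit NQ_volume ~ a * FS_volume + b and check the effect-size reduction.
All data below are synthetic placeholders, not NeuroQuant or FreeSurfer output."""
import numpy as np

def fit_linear_transform(fs, nq):
    """Slope and intercept mapping FreeSurfer-like volumes to NeuroQuant-like volumes."""
    X = np.column_stack([fs, np.ones_like(fs)])
    (a, b), *_ = np.linalg.lstsq(X, nq, rcond=None)
    return a, b

def cohens_d(x, y):
    """Paired effect size between two sets of measurements."""
    diff = x - y
    return diff.mean() / diff.std(ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = rng.normal(7.5, 0.8, size=60)             # e.g. regional volume in mL (synthetic)
    nq = 0.9 * fs + 0.5 + rng.normal(0, 0.1, 60)   # systematically offset counterpart (synthetic)
    a, b = fit_linear_transform(fs, nq)
    fs_transformed = a * fs + b
    print("effect size before transform:", round(cohens_d(fs, nq), 3))
    print("effect size after transform: ", round(cohens_d(fs_transformed, nq), 3))
```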
Similarity law and critical properties in ionic systems.
NASA Astrophysics Data System (ADS)
Desgranges, Caroline; Delhommelle, Jerome
2017-11-01
Using molecular simulations, we determine the locus of ideal compressibility, or Zeno line, for a series of ionic compounds. We find that the shape of this thermodynamic contour follows a linear law, leading to the determination of the Boyle parameters. We also show that a similarity law, based on the Boyle parameters, yields accurate critical data when compared to the experiment. Furthermore, we show that the Boyle density scales linearly with the size-asymmetry, providing a direct route to establish a correspondence between the thermodynamic properties of different ionic compounds.
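The extraction of Boyle parameters from a linear Zeno contour can be sketched as follows, assuming the contour obeys T/T_B + ρ/ρ_B = 1; the Zeno-line points below are synthetic placeholders, not simulation data.

```python
"""Sketch: Boyle parameters from a straight-line fit to Zeno-line (Z = 1) points."""
import numpy as np

def boyle_parameters(rho, T):
    """Fit T = T_B * (1 - rho/rho_B) and return (T_B, rho_B) from the intercepts."""
    slope, intercept = np.polyfit(rho, T, 1)
    T_B = intercept               # temperature as rho -> 0
    rho_B = -intercept / slope    # density as T -> 0
    return T_B, rho_B

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    rho = np.linspace(0.1, 0.6, 8)                          # reduced density (synthetic)
    T = 2500.0 * (1.0 - rho / 1.1) + rng.normal(0, 10, 8)   # synthetic Zeno points, K
    T_B, rho_B = boyle_parameters(rho, T)
    print(f"Boyle temperature ~ {T_B:.0f} K, Boyle density ~ {rho_B:.2f}")
```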
Nested plasmonic resonances: extraordinary enhancement of linear and nonlinear interactions.
de Ceglia, Domenico; Vincenti, Maria Antonietta; Akozbek, Neset; Bloemer, Mark J; Scalora, Michael
2017-02-20
Plasmonic resonators can provide large local electric fields when the gap between metal components is filled with an ordinary dielectric. We consider a new concept consisting of a hybrid nanoantenna obtained by introducing a resonant, plasmonic nanoparticle strategically placed inside the gap of an aptly sized metallic antenna. The system exhibits two nested, nearly overlapping plasmonic resonances whose signature is a large field enhancement at the surface and within the bulk of the plasmonic nanoparticle that leads to unusually strong, linear and nonlinear light-matter coupling.
Optimal design of neural stimulation current waveforms.
Halpern, Mark
2009-01-01
This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
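The stepped-waveform design can be posed as a small linear program, as sketched below. The series R-C electrode model and all parameter values are assumptions for illustration, not the paper's electrode model.

```python
"""Sketch: choose current step sizes delivering a target charge in n steps while
minimizing peak electrode voltage, as a linear program (assumed series R-C model)."""
import numpy as np
from scipy.optimize import linprog

def design_waveform(n_steps, dt, charge, R, C):
    n = n_steps
    c = np.zeros(n + 1)          # variables: [i_1 .. i_n, v_peak]; minimize v_peak
    c[-1] = 1.0
    # Voltage at step k: v_k = R*i_k + (dt/C) * sum_{j<=k} i_j <= v_peak
    A_ub = np.zeros((n, n + 1))
    for k in range(n):
        A_ub[k, :k + 1] = dt / C
        A_ub[k, k] += R
        A_ub[k, -1] = -1.0
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, n + 1))  # charge constraint: dt * sum_k i_k = charge
    A_eq[0, :n] = dt
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[charge], bounds=bounds)
    return res.x[:n], res.x[-1]

if __name__ == "__main__":
    # 100 us phase in 10 steps, 100 nC charge, R = 1 kOhm, C = 100 nF (all assumed values)
    currents, v_peak = design_waveform(10, 10e-6, 100e-9, 1e3, 100e-9)
    print("step currents (mA):", np.round(currents * 1e3, 3))
    print("peak voltage (V):", round(v_peak, 3))
```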
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-03-19
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form.
NASA Astrophysics Data System (ADS)
Elkatlawy, Saeid; Gomariz, María.; Soto-Sánchez, Cristina; Martínez Navarrete, Gema; Fernández, Eduardo; Fimia, Antonio
2014-05-01
In this paper, we report on the use of digital holographic microscopy for 3D real-time imaging of cultured neurons and neural networks, in vitro. Digital holographic microscopy is employed as an assessment tool to study the biophysical origin of neurodegenerative diseases. Our study consists of the morphological characterization of the axon, dendrites and cell bodies. The average size and thickness of the soma were 21 and 13 μm, respectively. Furthermore, the average size and diameter of some randomly selected neurites were 4.8 and 0.89 μm, respectively. In addition, the spatiotemporal growth process of cellular bodies and extensions was fitted by a non-linear model of nerve system behavior. Remarkably, this non-linear process represents the relationship between the growth of the cell body and that of the axon and dendrites of the neurons.
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
Mathematical model of alternative mechanism of telomere length maintenance
NASA Astrophysics Data System (ADS)
Kollár, Richard; Bod'ová, Katarína; Nosek, Jozef; Tomáška, L'ubomír
2014-03-01
Biopolymer length regulation is a complex process that involves a large number of biological, chemical, and physical subprocesses acting simultaneously across multiple spatial and temporal scales. An illustrative example important for genomic stability is the length regulation of telomeres—nucleoprotein structures at the ends of linear chromosomes consisting of tandemly repeated DNA sequences and a specialized set of proteins. Maintenance of telomeres is often facilitated by the enzyme telomerase but, particularly in telomerase-free systems, the maintenance of chromosomal termini depends on alternative lengthening of telomeres (ALT) mechanisms mediated by recombination. Various linear and circular DNA structures were identified to participate in ALT, however, dynamics of the whole process is still poorly understood. We propose a chemical kinetics model of ALT with kinetic rates systematically derived from the biophysics of DNA diffusion and looping. The reaction system is reduced to a coagulation-fragmentation system by quasi-steady-state approximation. The detailed treatment of kinetic rates yields explicit formulas for expected size distributions of telomeres that demonstrate the key role played by the J factor, a quantitative measure of bending of polymers. The results are in agreement with experimental data and point out interesting phenomena: an appearance of very long telomeric circles if the total telomere density exceeds a critical value (excess mass) and a nonlinear response of the telomere size distributions to the amount of telomeric DNA in the system. The results can be of general importance for understanding dynamics of telomeres in telomerase-independent systems as this mode of telomere maintenance is similar to the situation in tumor cells lacking telomerase activity. Furthermore, due to its universality, the model may also serve as a prototype of an interaction between linear and circular DNA structures in various settings.
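A toy coagulation-fragmentation system in the spirit of the reduced model can be integrated numerically as sketched below; the constant coagulation kernel, the uniform fragmentation rate and the initial condition are illustrative assumptions rather than the J-factor-derived rates of the paper.

```python
"""Toy coagulation-fragmentation sketch: linear fragments of integer size coagulate
with a constant kernel and break uniformly at internal bonds (illustrative rates)."""
import numpy as np
from scipy.integrate import solve_ivp

N_MAX = 60      # largest size tracked (truncation)
K = 1.0         # constant coagulation kernel (assumed)
F = 0.02        # fragmentation rate per internal bond (assumed)

def rhs(t, c):
    dc = np.zeros_like(c)
    total = c.sum()
    for k in range(N_MAX):                 # c[k] is the concentration of size k+1
        size = k + 1
        # coagulation gain from pairs (i, size - i)
        gain = 0.5 * K * sum(c[i - 1] * c[size - i - 1] for i in range(1, size))
        # coagulation loss + fragmentation loss; fragmentation gain from larger chains
        loss = K * c[k] * total + F * (size - 1) * c[k]
        frag_gain = 2.0 * F * c[k + 1:].sum()
        dc[k] = gain - loss + frag_gain
    return dc

if __name__ == "__main__":
    c0 = np.zeros(N_MAX)
    c0[9] = 1.0                            # start monodisperse at size 10 (assumed)
    sol = solve_ivp(rhs, (0.0, 50.0), c0, t_eval=[0, 10, 50], rtol=1e-6)
    sizes = np.arange(1, N_MAX + 1)
    for t, c in zip(sol.t, sol.y.T):
        print(f"t = {t:5.1f}   mean fragment size = {(sizes * c).sum() / c.sum():5.2f}")
```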
Gartner, Thomas E; Jayaraman, Arthi
2018-01-17
In this paper, we apply molecular simulation and liquid state theory to uncover the structure and thermodynamics of homopolymer blends of the same chemistry and varying chain architecture in the presence of explicit solvent species. We use hybrid Monte Carlo (MC)/molecular dynamics (MD) simulations in the Gibbs ensemble to study the swelling of ~12,000 g mol−1 linear, cyclic, and 4-arm star polystyrene chains in toluene. Our simulations show that the macroscopic swelling response is indistinguishable between the various architectures and matches published experimental data for the solvent annealing of linear polystyrene by toluene vapor. We then use standard MD simulations in the NPT ensemble along with polymer reference interaction site model (PRISM) theory to calculate effective polymer-solvent and polymer-polymer Flory-Huggins interaction parameters (χeff) in these systems. As seen in the macroscopic swelling results, there are no significant differences in the polymer-solvent and polymer-polymer χeff between the various architectures. Despite similar macroscopic swelling and effective interaction parameters between various architectures, the pair correlation function between chain centers-of-mass indicates stronger correlations between cyclic or star chains in the linear-cyclic blends and linear-star blends, compared to linear chain-linear chain correlations. Furthermore, we note striking similarities in the chain-level correlations and the radius of gyration of cyclic and 4-arm star architectures of identical molecular weight. Our results indicate that the cyclic and star chains are 'smaller' and 'harder' than their linear counterparts, and through comparison with MD simulations of blends of soft spheres with varying hardness and size we suggest that these macromolecular characteristics are the source of the stronger cyclic-cyclic and star-star correlations.
Effect of thermal cycling on composites reinforced with two differently sized silica-glass fibers.
Meriç, Gökçe; Ruyter, I Eystein
2007-09-01
To evaluate the effects of thermal cycling on the flexural properties of composites reinforced with two differently sized fibers. Acid-washed, woven, fused silica-glass fibers were heat-treated at 500 degrees C, silanized and sized with one of two sizing resins: linear poly(butyl methacrylate) (PBMA) or cross-linked poly(methyl methacrylate) (PMMA). Subsequently, the fibers were incorporated into a polymer matrix. Two test groups with fibers and one control group without fibers were prepared. The flexural properties of the composite reinforced with linear PBMA-sized fibers were evaluated by 3-point bend testing before thermal cycling. The specimens from all three groups were thermally cycled in water (12,000 cycles, 5/55 degrees C, dwell time 30 s), and afterwards tested by 3-point bending. SEM micrographs were taken of the fibers and of the fractured fiber reinforced composites (FRC). The reduction of ultimate flexural strength after thermal cycling was less than 20% of that prior to thermal cycling for composites reinforced with linear PBMA-sized silica-glass fibers. The flexural strength of the composite reinforced with cross-linked PMMA-sized fibers was reduced to less than half of the initial value. This study demonstrated that thermal cycling differently influences the flexural properties of composites reinforced with different sized silica-glass fibers. The interfacial linear PBMA-sizing polymer acts as a stress-bearing component for the high interfacial stresses during thermal cycling due to the flexible structure of the linear PBMA above Tg. The cross-linked PMMA-sizing, however, acts as a rigid component and therefore causes adhesive fracture between the fibers and matrix after the fatigue process of thermal cycling and flexural fracture.
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Abstract. Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metric(s), such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of our different sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
Small Stirling dynamic isotope power system for multihundred-watt robotic missions
NASA Technical Reports Server (NTRS)
Bents, David J.
1991-01-01
Free Piston Stirling Engine (FPSE) and linear alternator (LA) technology is combined with radioisotope heat sources to produce a compact dynamic isotope power system (DIPS) suitable for multihundred watt space application which appears competitive with advanced radioisotope thermoelectric generators (RTGs). The small Stirling DIPS is scalable to multihundred watt power levels or lower. The FPSE/LA is a high efficiency convertor in sizes ranging from tens of kilowatts down to only a few watts. At multihundred watt unit size, the FPSE can be directly integrated with the General Purpose Heat Source (GPHS) via radiative coupling; the resulting dynamic isotope power system has a size and weight that compares favorably with the advanced modular (Mod) RTG, but requires less than a third the amount of isotope fuel. Thus the FPSE extends the high efficiency advantage of dynamic systems into a power range never previously considered competitive for DIPS. This results in lower fuel cost and reduced radiological hazard per delivered electrical watt.
Small Stirling dynamic isotope power system for multihundred-watt robotic missions
NASA Technical Reports Server (NTRS)
Bents, David J.
1991-01-01
Free piston Stirling Engine (FPSE) and linear alternator (LA) technology is combined with radioisotope heat sources to produce a compact dynamic isotope power system (DIPS) suitable for multihundred watt space application which appears competitive with advanced radioisotope thermoelectric generators (RTGs). The small Stirling DIPS is scalable to multihundred watt power levels or lower. The FPSE/LA is a high efficiency convertor in sizes ranging from tens of kilowatts down to only a few watts. At multihundred watt unit size, the FPSE can be directly integrated with the General Purpose Heat Source (GPHS) via radiative coupling; the resulting dynamic isotope power system has a size and weight that compares favorably with the advanced modular (Mod) RTG, but requires less than a third the amount of isotope fuel. Thus the FPSE extends the high efficiency advantage of dynamic systems into a power range never previously considered competitive for DIPS. This results in lower fuel cost and reduced radiological hazard per delivered electrical watt.
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems with sufficiently small scale. The existence of heat flux needs the sizes of heat conduction larger than their corresponding critical sizes, which are determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges for these non-linear heat conduction models with flux-limited behaviors. For sufficiently small scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models can also predict the theoretical possibility of violating the second law and multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction in the type of fast diffusion, which can also predict flux-limited behaviors.
Design, construction, and evaluation of new high resolution medical imaging detector/systems
NASA Astrophysics Data System (ADS)
Jain, Amit
Increasing need of minimally invasive endovascular image guided interventional procedures (EIGI) for accurate and successful treatment of vascular disease has set a quest for better image quality. Current state of the art detectors are not up to the mark for these complex procedures due to their inherent limitations. Our group has been actively working on the design and construction of a high resolution, region of interest CCD-based X-ray imager for some time. As a part of that endeavor, a Micro-angiographic fluoroscope (MAF) was developed to serve as a high resolution, ROI X-ray imaging detector in conjunction with large lower resolution full field of view (FOV) state-of-the-art x-ray detectors. The newly developed MAF is an indirect x-ray imaging detector capable of providing real-time images with high resolution, high sensitivity, no lag and low instrumentation noise. It consists of a CCD camera coupled to a light image intensifier (LII) through a fiber optic taper. The CsI(Tl) phosphor serving as the front end is coupled to the LII. For this work, the MAF was designed and constructed. The linear system cascade theory was used to evaluate the performance theoretically. Linear system metrics such as MTF and DQE were used to gauge the detector performance experimentally. The capabilities of the MAF as a complete system were tested using generalized linear system metrics. With generalized linear system metrics the effects of finite size focal spot, geometric magnification and the presence of scatter are included in the analysis and study. To minimize the effect of scatter, an anti-scatter grid specially designed for the MAF was also studied. The MAF was compared with the flat panel detector using signal-to-noise ratio and the two dimensional linear system metrics. The signal-to-noise comparison was carried out to point out the effect of pixel size and Point Spread Function of the detector. The two dimensional linear system metrics were used to investigate the comparative performance of both the detectors in similar simulated clinical neuro-vascular conditions. The last part of this work presents a unique quality of the MAF: operation in single photon mode. The successful operation of the MAF was demonstrated with considerable improvement in spatial and contrast resolution over conventional energy integrating mode. The work presented shows the evolution of a high resolution, high sensitivity, and region of interest x-ray imaging detector as an attractive and capable x-ray imager for the betterment of complex EIGI procedures. The capability of single photon counting mode imaging provides the potential for additional uses of the MAF including the possibility of use in dual modality imaging with radionuclide sources as well as x-rays.
NASA Astrophysics Data System (ADS)
Wu, Yingchun; Crua, Cyril; Li, Haipeng; Saengkaew, Sawitree; Mädler, Lutz; Wu, Xuecheng; Gréhan, Gérard
2018-07-01
The accurate measurements of droplet temperature, size and evaporation rate are of great importance to characterize the heat and mass transfer during evaporation/condensation processes. The nanoscale size change of a micron-sized droplet exactly describes its transient mass transfer, but is difficult to measure because it is smaller than the resolutions of current size measurement techniques. The Phase Rainbow Refractometry (PRR) technique is developed and applied to measure droplet temperature, size and transient size changes and thereafter evaporation rate simultaneously. The measurement principle of PRR is theoretically derived, and it reveals that the phase shift of the time-resolved ripple structures linearly depends on, and can directly yield, nano-scale size changes of droplets. The PRR technique is first verified through the simulation of rainbows of droplets with changing size, and results show that PRR can precisely measure droplet refractive index, absolute size, as well as size change with absolute and relative errors within several nanometers and 0.6%, respectively, and thus PRR permits accurate measurements of transient droplet evaporation rates. The evaporations of flowing single n-nonane droplet and mono-dispersed n-heptane droplet stream are investigated by two PRR systems with a high speed linear CCD and a low speed array CCD, respectively. Their transient evaporation rates are experimentally determined and quantitatively agree well with the theoretical values predicted by classical Maxwell and Stefan-Fuchs models. With the demonstration of evaporation rate measurement of monocomponent droplet in this work, PRR is an ideal tool for measurements of transient droplet evaporation/condensation processes, and can be extended to multicomponent droplets in a wide range of industrially-relevant applications.
Al JABBARI, Youssef S.; TSAKIRIDIS, Peter; ELIADES, George; AL-HADLAQ, Solaiman M.; ZINELIS, Spiros
2012-01-01
Objective The aim of this study was to quantify the surface area, volume and specific surface area of endodontic files employing quantitative X-ray micro computed tomography (mXCT). Material and Methods Three sets (six files each) of the Flex-Master Ni-Ti system (Nº 20, 25 and 30, taper .04) were utilized in this study. The files were scanned by mXCT. The surface area and volume of all files were determined from the cutting tip up to 16 mm. The data from the surface area, volume and specific area were statistically evaluated using the one-way ANOVA and SNK multiple comparison tests at α=0.05, employing the file size as a discriminating variable. The correlation between the surface area and volume with nominal ISO sizes were tested employing linear regression analysis. Results The surface area and volume of Nº 30 files showed the highest value followed by Nº 25 and Nº 20 and the differences were statistically significant. The Nº 20 files showed a significantly higher specific surface area compared to Nº 25 and Nº 30. The increase in surface and volume towards higher file sizes follows a linear relationship with the nominal ISO sizes (r2=0.930 for surface area and r2=0.974 for volume respectively). Results indicated that the surface area and volume demonstrated an almost linear increase while the specific surface area exhibited an abrupt decrease towards higher sizes. Conclusions This study demonstrates that mXCT can be effectively applied to discriminate very small differences in the geometrical features of endodontic micro-instruments, while providing quantitative information for their geometrical properties. PMID:23329248
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer may either rejoin the queue (feedback) with probability q, or leave the system definitively with probability 1 - q. The system of integral equations for the transient queue-size distribution, conditioned by the initial level of buffer saturation, is built. The solution of the corresponding system written for Laplace transforms is found using the linear algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be considered as a typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
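A simulation cross-check of the transient queue-size distribution can be sketched as follows. For brevity the sketch keeps only Poisson arrivals, exponential service, Bernoulli feedback with probability q and a finite buffer, and omits the breakdown/repair cycle; all rates are assumed values.

```python
"""Monte Carlo sketch of the transient queue-size distribution of a finite-buffer
queue with Bernoulli feedback (server breakdowns omitted for simplicity)."""
import numpy as np

def transient_queue_size(lam, mu, q, N, t_obs, n0=0, reps=20000, seed=0):
    """Estimate P(queue length = k at time t_obs | initial level n0) by simulation."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(N + 1)
    for _ in range(reps):
        t, n = 0.0, n0
        while True:
            arr_rate = lam if n < N else 0.0
            srv_rate = mu if n > 0 else 0.0
            rate = arr_rate + srv_rate
            dt = rng.exponential(1.0 / rate)
            if t + dt > t_obs:
                break                      # record the state occupied at t_obs
            t += dt
            if rng.random() < arr_rate / rate:
                n += 1                     # arrival joins the buffer
            elif rng.random() >= q:
                n -= 1                     # completion, customer leaves for good
            # otherwise: completion followed by feedback, queue length unchanged
        counts[n] += 1
    return counts / reps

if __name__ == "__main__":
    dist = transient_queue_size(lam=0.8, mu=1.0, q=0.2, N=10, t_obs=5.0, n0=0)
    print("P(queue size = k at t = 5):", np.round(dist, 3))
```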
Minimizing energy dissipation of matrix multiplication kernel on Virtex-II
NASA Astrophysics Data System (ADS)
Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook
2002-07-01
In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total energy (system-wide energy) dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.
Short-Term Memory in Orthogonal Neural Networks
NASA Astrophysics Data System (ADS)
White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim
2004-04-01
We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
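A minimal numerical sketch of the quantity studied above: it simulates a linear recurrent network with a random orthogonal connectivity matrix and estimates the memory curve m(k) as the R² of the best linear readout of the input delayed by k steps. The network size, decay factor and other settings are illustrative assumptions, not the paper's.

```python
import numpy as np

def memory_curve(N=100, T=10000, lam=0.95, max_delay=150, seed=0):
    """Empirical short-term memory curve of a linear recurrent network
    x(t+1) = lam * Q x(t) + v s(t) with a random orthogonal matrix Q.
    m(k) is the R^2 of the best linear readout of the input delayed k steps."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthogonal matrix
    W = lam * Q
    v = rng.standard_normal(N) / np.sqrt(N)
    s = rng.standard_normal(T)
    X = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T - 1):
        x = W @ x + v * s[t]
        X[t + 1] = x
    m = []
    for k in range(1, max_delay + 1):
        y = s[:T - k]                      # input k steps in the past
        Z = X[k:]                          # network state at the later time
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)
        pred = Z @ w
        m.append(1.0 - np.mean((y - pred) ** 2) / np.var(y))
    return np.array(m)

curve = memory_curve()
print("total capacity ~", curve.sum())     # scales with network size N (cf. the paper)
```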
The Origins of Systemic Reform in American Higher Education, 1895-1920
ERIC Educational Resources Information Center
Ris, Ethan W.
2018-01-01
Background/Context: The traditional literature on the history of higher education in the United States focuses on linear explanations of the inexorable growth of the size, mission, and importance of colleges and universities. That approach ignores or minimizes a recurrent strain of discontent with the higher education sector, especially from…
Effects of pole flux distribution in a homopolar linear synchronous machine
NASA Astrophysics Data System (ADS)
Balchin, M. J.; Eastham, J. F.; Coles, P. C.
1994-05-01
Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a design of machine intended for driving a 100 passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combination of the results with those from a cylindrical version of the machine makes it possible to infer the performance of the full-sized traction machine. This gives 0.8 pf and 0.9 efficiency at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (0.52 pf and 0.82 efficiency). It is therefore concluded that in any projected high-speed Maglev systems, a linear synchronous machine should be the first choice as the propulsion means.
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
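For concreteness, a hedged sketch of the conventional computation the paper seeks to accelerate: the steady-state Kalman gain obtained from the discrete algebraic Riccati equation, here via SciPy's solver on a toy model. All matrices are illustrative placeholders, not an actual AO turbulence model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def kalman_gain(A, C, Q, R):
    """Steady-state Kalman gain for x_{k+1} = A x_k + w, y_k = C x_k + v,
    with Cov(w) = Q and Cov(v) = R, obtained from the discrete algebraic
    Riccati equation (the step whose cost grows quickly with aperture size)."""
    P = solve_discrete_are(A.T, C.T, Q, R)       # a-priori error covariance
    return P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# toy example standing in for a (tiny) turbulence-phase model
n, m = 6, 3
rng = np.random.default_rng(1)
A = 0.95 * np.eye(n)                             # near-Markov phase dynamics
C = rng.standard_normal((m, n))                  # wavefront-sensor map
Q = 0.1 * np.eye(n)
R = 0.01 * np.eye(m)
print(kalman_gain(A, C, Q, R).shape)             # gain has shape (n, m)
```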
3D Wavelet-Based Filter and Method
Moss, William C.; Haase, Sebastian; Sedat, John W.
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin
2014-01-01
In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain larger array aperture. We focus on the complex source distribution in the practical applications and classify the sources into common and innovation parts according to whether a signal of source can impinge on all the SLAs or a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create the relationship of random linear map between the signals respectively observed by these two arrays. The signal ensembles including the common/innovation sources for different SLAs are abstracted as a joint spatial sparsity model. And we use the minimization of concatenated atomic norm via semidefinite programming to solve the problem of joint DOA estimation. Joint calculation of the signals observed by all the SLAs exploits their redundancy caused by the common sources and decreases the requirement of array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150
NASA Astrophysics Data System (ADS)
Eleftheriou, E.; Karatasos, K.
2012-10-01
Models of mixtures of peripherally charged dendrimers with oppositely charged linear polyelectrolytes in the presence of explicit solvent are studied by means of molecular dynamics simulations. Under the influence of varying strength of electrostatic interactions, these systems appear to form dynamically arrested film-like interconnected structures in the polymer-rich phase. Acting like a pseudo-thermodynamic inverse temperature, the increase of the strength of the Coulombic interactions drives the polymeric constituents of the mixture to a gradual dynamic freezing-in. The timescale of the average density fluctuations of the formed complexes initially increases in the weak electrostatic regime, reaching a finite limit as the strength of electrostatic interactions grows. Although the models are overall electrically neutral, during this process the dendrimer/linear complexes develop a polar character with an excess charge mainly close to the periphery of the dendrimers. The morphological characteristics of the resulting pattern are found to depend on the size of the polymer chains on account of the distinct conformational features assumed by the complexed linear polyelectrolytes of different length. In addition, the length of the polymer chain appears to affect the dynamics of the counterions, thus affecting the ionic transport properties of the system. It appears, therefore, that the strength of electrostatic interactions together with the length of the linear polyelectrolytes are parameters to which these systems are particularly responsive, thus offering the possibility of better control of the resulting structure and the electric properties of these soft-colloidal systems.
New trends in Taylor series based applications
NASA Astrophysics Data System (ADS)
Kocina, Filip; Šátek, Václav; Veigend, Petr; Nečasová, Gabriela; Valenta, Václav; Kunovský, Jiří
2016-06-01
The paper deals with the solution of large systems of linear ODEs when minimal communication among parallel processors is required. The Modern Taylor Series Method (MTSM) is used. The MTSM allows using a higher order during the computation, which means a larger integration step size while keeping the desired accuracy. As an example of complex systems we can take the Telegraph Equation Model. Symbolic and numeric solutions are compared when a harmonic input signal is used.
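A minimal sketch of the Taylor-series idea for a linear ODE system y' = Ay: terms are added until they drop below a tolerance, so the order adapts automatically and large integration steps remain accurate. This illustrates the general principle only, not the MTSM implementation discussed in the paper.

```python
import numpy as np

def taylor_step_linear(A, y, h, tol=1e-12, max_order=60):
    """One step of a Taylor-series integrator for y' = A y.
    Terms h^k A^k y / k! are accumulated until they fall below tol, so the
    order adapts to the step size h (the MTSM-style idea in spirit)."""
    term = y.copy()
    y_next = y.copy()
    for k in range(1, max_order + 1):
        term = (h / k) * (A @ term)        # next Taylor term
        y_next = y_next + term
        if np.linalg.norm(term) < tol:
            break
    return y_next

# toy example: harmonic oscillator written as a linear system
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = np.array([1.0, 0.0])
h = 0.5                                    # a fairly large step
for _ in range(20):
    y = taylor_step_linear(A, y, h)
print(y, "vs exact", [np.cos(20 * h), -np.sin(20 * h)])
```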
Scalable Heuristics for Planning, Placement and Sizing of Flexible AC Transmission System Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, Vladmir; Backhaus, Scott N.; Chertkov, Michael
Aiming to relieve transmission grid congestion and improve or extend the feasibility domain of operations, we build optimization heuristics, generalizing standard AC Optimal Power Flow (OPF), for placement and sizing of Flexible Alternating Current Transmission System (FACTS) devices of the Series Compensation (SC) and Static VAR Compensation (SVC) type. One use of these devices is in resolving the case when the AC OPF solution does not exist because of congestion. Another application is developing a long-term investment strategy for placement and sizing of the SC and SVC devices to reduce operational cost and improve power system operation. SC and SVC devices are represented by modification of the transmission line inductances and reactive power nodal corrections, respectively. We find one placement and sizing of FACTS devices for multiple scenarios and optimal settings for each scenario simultaneously. Our solution of the nonlinear and nonconvex generalized AC OPF consists of building a convergent sequence of convex optimizations containing only linear constraints and shows good computational scaling to larger systems. The approach is illustrated on single- and multi-scenario examples of the Matpower case-30 model.
Form features provide a cue to the angular velocity of rotating objects
Blair, Christopher David; Goold, Jessica; Killebrew, Kyle; Caplovitz, Gideon Paul
2013-01-01
As an object rotates, each location on the object moves with an instantaneous linear velocity dependent upon its distance from the center of rotation, while the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different sized objects, as changing the size of an object changes the linear velocity of each location on the object’s surface, while maintaining the object’s angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object. PMID:23750970
Form features provide a cue to the angular velocity of rotating objects.
Blair, Christopher David; Goold, Jessica; Killebrew, Kyle; Caplovitz, Gideon Paul
2014-02-01
As an object rotates, each location on the object moves with an instantaneous linear velocity, dependent upon its distance from the center of rotation, whereas the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different-sized objects, as changing the size of an object changes the linear velocity of each location on the object's surface, while maintaining the object's angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high-contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Risk Factors for Bovine Tuberculosis (bTB) in Cattle in Ethiopia.
Dejene, Sintayehu W; Heitkönig, Ignas M A; Prins, Herbert H T; Lemma, Fitsum A; Mekonnen, Daniel A; Alemu, Zelalem E; Kelkay, Tessema Z; de Boer, Willem F
2016-01-01
Bovine tuberculosis (bTB) infection is generally correlated with individual cattle's age, sex, body condition, and with husbandry practices such as herd composition, cattle movement, herd size, production system and proximity to wildlife-including bTB maintenance hosts. We tested the correlation between those factors and the prevalence of bTB, which is endemic in Ethiopia's highland cattle, in the Afar Region and Awash National Park between November 2013 and April 2015. A total of 2550 cattle from 102 herds were tested for bTB presence using the comparative intradermal tuberculin test (CITT). Data on herd structure, herd movement, management and production system, livestock transfer, and contact with wildlife were collected using semi-structured interviews with cattle herders and herd owners. The individual overall prevalence of cattle bTB was 5.5%, with a herd prevalence of 46%. Generalized Linear Mixed Models with a random herd-effect were used to analyse risk factors of cattle reactors within each herd. The older the age of the cattle and the lower the body condition the higher the chance of a positive bTB test result, but sex, lactation status and reproductive status were not correlated with bTB status. At herd level, General Linear Models showed that pastoral production systems with transhumant herds had a higher bTB prevalence than sedentary herds. A model averaging analysis identified herd size, contact with wildlife, and the interaction of herd size and contact with wildlife as significant risk factors for bTB prevalence in cattle. A subsequent Structural Equation Model showed that the probability of contact with wildlife was influenced by herd size, through herd movement. Larger herds moved more and grazed in larger areas, hence the probability of grazing in an area with wildlife and contact with either infected cattle or infected wildlife hosts increased, enhancing the chances for bTB infection. Therefore, future bTB control strategies in cattle in pastoral areas should consider herd size and movement as important risk factors.
A low-cost and portable realization on fringe projection three-dimensional measurement
NASA Astrophysics Data System (ADS)
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2015-12-01
Fringe projection three-dimensional measurement is widely applied across a range of industrial applications. Traditional fringe projection systems have the disadvantages of high cost, large size, and complicated calibration requirements. In this paper we introduce a low-cost and portable realization of three-dimensional measurement with a Pico projector. It has the advantages of low cost, compact physical size, and flexible configuration. For the proposed fringe projection system, there is no restriction on the relative alignment of the camera and projector in terms of parallelism and perpendicularity during installation. Moreover, a plane-based calibration method is adopted in this paper that avoids critical requirements on the calibration system such as an additional gauge block or a precise linear z stage. Furthermore, the error sources present in the proposed system are discussed in this paper. The experimental results demonstrate the feasibility of the proposed low-cost and portable fringe projection system.
Modeling startup and shutdown transient of the microlinear piezo drive via ANSYS
NASA Astrophysics Data System (ADS)
Azin, A. V.; Bogdanov, E. P.; Rikkonen, S. V.; Ponomarev, S. V.; Khramtsov, A. M.
2017-02-01
The article describes the design of a micro linear piezo drive intended for a peripheral cord tensioner in the reflecting-surface shape regulation system of large-sized transformable spacecraft antenna reflectors. The research target is the development of a method for modeling the startup and shutdown transients of the micro linear piezo drive. The method is based on the ANSYS software package and includes a detailed description of the calculation stages needed to determine the operating characteristics of the designed piezo drive. Based on the numerical solutions, the time characteristics of the designed piezo drive are determined.
Matsumoto, Yuji; Takaki, Yasuhiro
2014-06-15
Horizontally scanning holography can enlarge both screen size and viewing zone angle. A microelectromechanical-system spatial light modulator, which can generate only binary images, is used to generate hologram patterns. Thus, techniques to improve gray-scale representation in reconstructed images should be developed. In this study, the error diffusion technique was used for the binarization of holograms. When the Floyd-Steinberg error diffusion coefficients were used, gray-scale representation was improved. However, the linearity in the gray-scale representation was not satisfactory. We proposed the use of a correction table and showed that the linearity was greatly improved.
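A small sketch of Floyd-Steinberg error diffusion applied to a gray-scale pattern, the binarization step mentioned above; the test image and threshold are illustrative, and the correction table proposed in the paper is not included.

```python
import numpy as np

def floyd_steinberg_binarize(img):
    """Binarize a gray-scale pattern (values in [0, 1]) with Floyd-Steinberg
    error diffusion: each pixel's quantization error is pushed onto its
    unprocessed neighbours, so local mean intensity (gray scale) is preserved
    far better than with plain thresholding."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

# a horizontal gray ramp: local means of the binary output track the ramp
ramp = np.tile(np.linspace(0, 1, 256), (64, 1))
print(floyd_steinberg_binarize(ramp).mean())
```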
Auxiliary basis expansions for large-scale electronic structure calculations.
Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin
2005-05-10
One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
Permeability-porosity relationships in sedimentary rocks
Nelson, Philip H.
1994-01-01
In many consolidated sandstone and carbonate formations, plots of core data show that the logarithm of permeability (k) is often linearly proportional to porosity (φ). The slope, intercept, and degree of scatter of these log(k)-φ trends vary from formation to formation, and these variations are attributed to differences in initial grain size and sorting, diagenetic history, and compaction history. In unconsolidated sands, better sorting systematically increases both permeability and porosity. In sands and sandstones, an increase in gravel and coarse grain size content causes k to increase even while decreasing φ. Diagenetic minerals in the pore space of sandstones, such as cement and some clay types, tend to decrease log(k) proportionately as φ decreases. Models to predict permeability from porosity and other measurable rock parameters fall into three classes based on either grain, surface area, or pore dimension considerations. (Models that directly incorporate well log measurements but have no particular theoretical underpinnings form a fourth class.) Grain-based models show permeability proportional to the square of grain size times porosity raised to (roughly) the fifth power, with grain sorting as an additional parameter. Surface-area models show permeability proportional to the inverse square of pore surface area times porosity raised to (roughly) the fourth power; measures of surface area include irreducible water saturation and nuclear magnetic resonance. Pore-dimension models show permeability proportional to the square of a pore dimension times porosity raised to a power of (roughly) two and produce curves of constant pore size that transgress the linear data trends on a log(k)-φ plot. The pore dimension is obtained from mercury injection measurements and is interpreted as the pore opening size of some interconnected fraction of the pore system. The linear log(k)-φ data trends cut the curves of constant pore size from the pore-dimension models, which shows that porosity reduction is always accompanied by a reduction in characteristic pore size. The high powers of porosity of the grain-based and surface-area models are required to compensate for the inclusion of the small end of the pore size spectrum.
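As a toy illustration of the linear log(k)-porosity trends described above, the snippet below fits log10(k) = aφ + b to a handful of hypothetical core-plug values; the data are invented for illustration only.

```python
import numpy as np

# Hypothetical core-plug data: porosity (fraction) and permeability (mD).
phi = np.array([0.08, 0.12, 0.15, 0.18, 0.22, 0.25])
k = np.array([0.5, 3.0, 12.0, 40.0, 180.0, 600.0])

# Fit the linear log(k)-phi trend described above: log10(k) = a*phi + b.
a, b = np.polyfit(phi, np.log10(k), 1)
print(f"log10(k) = {a:.1f}*phi + {b:.1f}")

# Predicted permeability at 20% porosity for this (illustrative) trend.
print("k(phi=0.20) ≈", 10 ** (a * 0.20 + b), "mD")
```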
NASA Astrophysics Data System (ADS)
Dai, Shengyun; Pan, Xiaoning; Ma, Lijuan; Huang, Xingguo; Du, Chenzhao; Qiao, Yanjiang; Wu, Zhisheng
2018-05-01
Particle size is of great importance for quantitative modeling of NIR diffuse reflectance. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near-infrared (NIR) diffuse reflectance spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as a reference method to construct the quantitative particle size model. Several spectral preprocessing methods were compared, and particle size models obtained with the different preprocessing methods were used to establish partial least-squares (PLS) models of harpagoside. Data showed that the particle size distribution of 125-150 μm for Radix Scrophulariae exhibited the best prediction ability with R²pre = 0.9513, RMSEP = 0.1029 mg·g⁻¹, and RPD = 4.78. For the hybrid granularity calibration model, the particle size distribution of 90-180 μm exhibited the best prediction ability with R²pre = 0.8919, RMSEP = 0.1632 mg·g⁻¹, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scattering coefficient s (particle size-dependent). The scattering coefficient s was calculated based on the Kubelka-Munk theory to study the changes of s after mathematical preprocessing. A linear relationship was observed between k/s and absorbance A within a certain range, and the value of k/s was greater than 4. According to this relationship, the model was more accurately constructed with the particle size distribution of 90-180 μm when s was kept constant or within a small linear region. This region provides a good reference for the linear modeling of diffuse reflectance spectroscopy. To establish a diffuse reflectance NIR model, further accurate assessment should be obtained in advance for a precise linear model.
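For reference, a small sketch of the Kubelka-Munk remission function that links the absorption and scattering coefficients to diffuse reflectance, k/s = (1 − R)²/(2R); the reflectance values below are illustrative, not measured spectra.

```python
import numpy as np

def kubelka_munk_ratio(R):
    """Kubelka-Munk remission function F(R) = k/s = (1 - R)^2 / (2R),
    with R the diffuse reflectance of an 'infinitely thick' powder layer."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

# illustrative reflectance values and the corresponding apparent absorbance
R = np.linspace(0.05, 0.6, 6)
A = np.log10(1.0 / R)
for r, a, ks in zip(R, A, kubelka_munk_ratio(R)):
    print(f"R={r:.2f}  A={a:.2f}  k/s={ks:.2f}")
```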
Haws, Kelly L; Liu, Peggy J
2016-02-01
Many restaurants are increasingly required to display calorie information on their menus. We present a study examining how consumers' food choices are affected by the presence of calorie information on restaurant menus. However, unlike prior research on this topic, we focus on the effect of calorie information on food choices made from a menu that contains both full size portions and half size portions of entrées. This different focus is important because many restaurants increasingly provide more than one portion size option per entrée. Additionally, we examine whether the impact of calorie information differs depending on whether full portions are cheaper per unit than half portions (non-linear pricing) or whether they have a similar per unit price (linear pricing). We find that when linear pricing is used, calorie information leads people to order fewer calories. This decrease occurs as people switch from unhealthy full sized portions to healthy full sized portions, not to unhealthy half sized portions. In contrast, when non-linear pricing is used, calorie information has no impact on calories selected. Considering the impact of calorie information on consumers' choices from menus with more than one entrée portion size option is increasingly important given restaurant and legislative trends, and the present research demonstrates that calorie information and pricing scheme may interact to affect choices from such menus. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multiband selection with linear array detectors
NASA Technical Reports Server (NTRS)
Richard, H. L.; Barnes, W. L.
1985-01-01
Several techniques that can be used in an earth-imaging system to separate the linear image formed after the collecting optics into the desired spectral band are examined. The advantages and disadvantages of the Multispectral Linear Array (MLA) multiple optics, the MLA adjacent arrays, the imaging spectrometer, and the MLA beam splitter are discussed. The beam-splitter design approach utilizes, in addition to relatively broad spectral region separation, a movable Multiband Selection Device (MSD), placed between the exit ports of the beam splitter and a linear array detector, permitting many bands to be selected. The successful development and test of the MSD is described. The device demonstrated the capacity to provide a wide field of view, visible-to-near IR/short-wave IR and thermal IR capability, and a multiplicity of spectral bands and polarization measuring means, as well as a reasonable size and weight at minimal cost and risk compared to a spectrometer design approach.
Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai
2015-06-30
Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, were normal peripheral blood slides under light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses white blood cell's morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combined thresholding, morphological operation and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets. Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to the ones obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with dice similarity of 98.9 and 91.6% for average segmented nucleus and cell regions, respectively. Furthermore, the overall correction rate in the classification phase is about 98 and 94% for linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The results of the calibrated segmentation process on both datasets are fast, robust, efficient and coherent. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results in the linear classifier.
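A hedged sketch of the final classification step described above, comparing a linear classifier (here linear discriminant analysis) with Gaussian naive Bayes on synthetic stand-in features; the real pipeline uses morphology-derived nucleus/cytoplasm features chosen by sequential forward selection.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in for the extracted white-blood-cell features (5 classes, e.g.
# basophil, eosinophil, neutrophil, lymphocyte, monocyte); purely synthetic.
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

for name, clf in [("linear (LDA)", LinearDiscriminantAnalysis()),
                  ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.3f}")
```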
Theory of the intermediate stage of crystal growth with applications to insulin crystallization
NASA Astrophysics Data System (ADS)
Barlow, D. A.
2017-07-01
A theory for the intermediate stage of crystal growth, based on two defining equations, one for population continuity and another for mass balance, is used to study the kinetics of the supersaturation decay, the homogeneous nucleation rate, the linear growth rate and the final distribution of crystal sizes for the crystallization of bovine and porcine insulin from solution. The cited experimental reports suggest that the crystal linear growth rate is directly proportional to the square of the insulin concentration in solution for bovine insulin and to the cube of the concentration for porcine insulin. In a previous work, it was shown that the above-mentioned system could be solved for the case where the growth rate is directly proportional to the normalized supersaturation. Here a more general solution is presented, valid for cases where the growth rate is directly proportional to the normalized supersaturation raised to the power of any positive integer. The resulting expressions for the time-dependent normalized supersaturation and crystal size distribution are compared with experimental reports for insulin crystallization. An approximation for the maximum crystal size at the end of the intermediate stage is derived. The results suggest that the largest crystal size in the distribution at the end of the intermediate stage is maximized when nucleation is restricted to be only homogeneous. Further, the largest size in the final distribution depends only weakly upon the initial supersaturation.
NASA Astrophysics Data System (ADS)
Dar, Aasif Bashir; Jha, Rakesh Kumar
2017-03-01
Various dispersion compensation units are presented and evaluated in this paper. These dispersion compensation units include dispersion compensation fiber (DCF), DCF merged with fiber Bragg grating (FBG) (joint technique), and linear, square root, and cube root chirped tanh-apodized FBG. For the performance evaluation, a 10 Gb/s NRZ transmission system over 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66, 79.96, and 62.42% for linear, square root, and cube root, respectively. The DCF and the joint technique both provide remarkable PWRPs of 94.45 and 96.96%, respectively. The performance of the optimized linear chirped tanh-apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems the maximum transmission distance is calculated such that the quality factor is ≥ 6 at the receiver, and the results show that the performance of the FBG is comparable to that of the DCF, with the advantages of very low cost, small size and reduced nonlinear effects.
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost scales linearly with the system size. However, the OF-DFT accuracy strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), the KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), the KGAP performs much better than SM, and results are close to the state-of-the-art functionals with sophisticated density-dependent kernels.
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance-constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size of and optimize a storage facility with a bypass. Chance constraints are introduced to explicitly treat reliability in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed for optimizing the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in the ability to predict reliability and to account for serial correlations are noted in the model, which is concluded to be useful for narrowing WECS design options.
The Elementary Operations of Human Vision Are Not Reducible to Template-Matching
Neri, Peter
2015-01-01
It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be asked by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. PMID:26556758
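A minimal sketch of the linear-nonlinear (LN) cascade referred to above: a stimulus is projected onto a linear template and the result is passed through a static sigmoidal nonlinearity. The template, gain and threshold are illustrative assumptions, not the fitted models of the study.

```python
import numpy as np

def ln_response(stimulus, template, gain=4.0, threshold=0.5):
    """Linear-nonlinear (LN) cascade: project the stimulus onto a linear
    template (template matching), then apply a static sigmoidal nonlinearity.
    A sketch of the generic observer model, not the paper's fitted model."""
    drive = stimulus.ravel() @ template.ravel()               # linear stage
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))  # static nonlinearity

rng = np.random.default_rng(0)
template = rng.standard_normal((8, 8))
template /= np.linalg.norm(template)
signal = 0.8 * template                    # target embedded in the template
noise = 0.5 * rng.standard_normal((8, 8))
print("signal+noise:", ln_response(signal + noise, template))
print("noise only  :", ln_response(noise, template))
```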
Flexible cue combination in the guidance of attention in visual search
Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.
2014-01-01
Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly and nonlinearly separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly or nonlinearly separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly or nonlinearly separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and user-driven fashion. PMID:25463553
Blood cell counting and classification by nonflowing laser light scattering method
NASA Astrophysics Data System (ADS)
Yang, Ye; Zhang, Zhenxi; Yang, Xinhui; Jiang, Dazong; Yeo, Joon Hock
1999-11-01
A new non-flowing laser light scattering method for counting and classifying blood cells is presented. A linear charge-coupled device with 1024 elements is used to detect the scattered light intensity distribution of the blood cells. A pinhole plate is combined with the CCD to complete the focusing of the measurement system. An isotropic sphere is used to simulate a blood cell. Mie theory is used to describe the scattering of blood cells. In order to invert the size distribution of blood cells from their scattered light intensity distribution, the Powell method combined with a precision punishment method is used as a dependent-model method for measuring red blood cells and blood platelets. A non-negative constrained least-squares method combined with the Powell method and the precision punishment method is used as an independent model for measuring white blood cells. The size distributions of white blood cells and red blood cells, and the mean diameter of red blood cells, are measured by this method. White blood cells can be divided into three classes, lymphocytes, middle-sized cells and neutrocytes, according to their sizes. The number of blood cells in unit volume can also be measured from the linear dependence of blood cell concentration on scattered light intensity.
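A small sketch of the non-negative least-squares inversion step: given a kernel matrix mapping size bins to detector intensities (computed from Mie theory in the actual instrument), the size distribution is recovered with scipy.optimize.nnls. The kernel and distribution below are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical scattering kernel K[i, j]: intensity at detector element i
# produced by a unit number of cells in size bin j (Mie theory in practice).
rng = np.random.default_rng(0)
n_det, n_bins = 200, 30
K = np.abs(rng.standard_normal((n_det, n_bins)))

# Synthetic "true" size distribution (a narrow red-cell-like peak).
sizes = np.linspace(4.0, 12.0, n_bins)            # diameters in micrometres
true_dist = np.exp(-0.5 * ((sizes - 7.5) / 0.6) ** 2)

# Measured intensity profile with a little detector noise.
I = K @ true_dist + 0.01 * rng.standard_normal(n_det)

# Non-negative least-squares inversion, as in the independent-model approach.
recovered, residual = nnls(K, I)
print("peak of recovered distribution at", sizes[np.argmax(recovered)], "µm")
```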
Study on the biomass and size spectra of bio-particles in vermifilter biofilms.
Di, Wanyin; Xing, Meiyan
2018-09-15
In biological processes of sludge treatment, the sludge yield is closely related to the energy dissipation of the entire microbial system. The vermifilter (VF), a novel biofilter, works efficiently due to the introduction of earthworms, which modifies the energy flow pathway through variations in microbial size structure. For deeper insight into the sludge reduction in the VF, the biomass size spectrum (BSS) was employed to map the energy dissipation in the VF. The results indicated that bio-particles in the size class of [31, 63] μm were reduced most in the excess sludge after the VF treatment. In biofilms, bio-particles in the size class of [31, 63] μm varied most with the filter depth and earthworm density. Eight biomass and size spectra (BSS) were established for all beds of the VF and BF (the control of the VF, without earthworms). The normalized BSS were all linear in both the VF and BF, and their linear regression parameters, the slopes (k) and intercepts (b), varied with the filter depth and the earthworm density. The k and b of the VF were both significantly different from those of the BF. According to the k, the productivity level of the largest bio-particles was higher in the VF than in the BF. According to the b, bio-particles at the bottom of the size structure could be taken up faster in the VF than in the BF. Finally, several improvement approaches were proposed and preliminarily tested to enhance the sludge treatment capacity of the VF. Copyright © 2018 Elsevier B.V. All rights reserved.
A design study for an advanced ocean color scanner system. [spaceborne equipment
NASA Technical Reports Server (NTRS)
Kim, H. H.; Fraser, R. S.; Thompson, L. L.; Bahethi, O.
1980-01-01
Along with a colorimetric data analysis scheme, the instrumental parameters which need to be optimized in future spaceborne ocean color scanner systems are outlined. With regard to assessing atmospheric effects from ocean colorimetry, attention is given to computing size parameters of the aerosols in the atmosphere, total optical depth measurement, and the aerosol optical thickness. It is suggested that sensors based on the use of linear array technology will meet hardware objectives.
Scalable PGAS Metadata Management on Extreme Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP
Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C.; Hine, N. D. M.
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
NASA Astrophysics Data System (ADS)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1
Licensing provisions: GNU General Public License
Programming language: Fortran 90
External routines/libraries: BLAS, LAPACK, MPI (optional)
Nature of problem: Direct implementation of the GW method scales as N⁴ with the system size, which quickly becomes prohibitively time consuming even on modern computers.
Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N³.
Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.
Size effects of single-walled carbon nanotubes on in vivo and in vitro pulmonary toxicity
Fujita, Katsuhide; Fukuda, Makiko; Endoh, Shigehisa; Maru, Junko; Kato, Haruhisa; Nakamura, Ayako; Shinohara, Naohide; Uchino, Kanako; Honda, Kazumasa
2015-01-01
To elucidate the effect of size on the pulmonary toxicity of single-wall carbon nanotubes (SWCNTs), we prepared two types of dispersed SWCNTs, namely relatively thin bundles with short linear shapes (CNT-1) and thick bundles with long linear shapes (CNT-2), and conducted rat intratracheal instillation tests and in vitro cell-based assays using NR8383 rat alveolar macrophages. Total protein levels, MIP-1α expression, cell counts in BALF, and histopathological examinations revealed that CNT-1 caused pulmonary inflammation and slower recovery and that CNT-2 elicited acute lung inflammation shortly after their instillation. Comprehensive gene expression analysis confirmed that CNT-1-induced genes were strongly associated with inflammatory responses, cell proliferation, and immune system processes at 7 or 30 d post-instillation. Numerous genes were significantly upregulated or downregulated by CNT-2 at 1 d post-instillation. In vitro assays demonstrated that CNT-1 and CNT-2 SWCNTs were phagocytized by NR8383 cells. CNT-2 treatment induced cell growth inhibition, reactive oxygen species production, MIP-1α expression, and several genes involved in response to stimulus, whereas CNT-1 treatment did not exert a significant impact in these regards. These results suggest that SWCNTs formed as relatively thin bundles with short linear shapes elicited delayed pulmonary inflammation with slower recovery. In contrast, SWCNTs with a relatively thick bundle and long linear shapes sensitively induced cellular responses in alveolar macrophages and elicited acute lung inflammation shortly after inhalation. We conclude that the pulmonary toxicity of SWCNTs is closely associated with the size of the bundles. These physical parameters are useful for risk assessment and management of SWCNTs. PMID:25865113
NASA Astrophysics Data System (ADS)
Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu
2017-03-01
To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling. In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
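A hedged sketch of the overall idea (an algebraic-multigrid V-cycle used as a preconditioner for conjugate gradients) on a 7-point 3D Laplacian; PyAMG's smoothed-aggregation solver stands in for the pairwise-aggregation AGMG of the paper, and the grid size and tolerances are illustrative.

```python
import numpy as np
import pyamg                                   # assumes the PyAMG package is available
from scipy.sparse.linalg import cg

# 7-point finite-difference Laplacian on a 3D grid, a stand-in for the
# discretized secondary-potential equation (mixed boundary conditions omitted).
n = 40
A = pyamg.gallery.poisson((n, n, n), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

# Aggregation-based AMG used as a V-cycle preconditioner for conjugate gradients.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')

iters = 0
def count(_):
    global iters
    iters += 1

x, info = cg(A, b, M=M, callback=count)
print("converged" if info == 0 else "failed", "in", iters, "CG iterations")
```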
Using recurrent neural networks for adaptive communication channel equalization.
Kechriotis, G; Zervas, E; Manolakos, E S
1994-01-01
Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise-cancellation in a wide class of applications. An important problem in data communications is that of channel equalization, i.e., the removal of interferences introduced by linear or nonlinear message corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance makes it suitable for high-speed channel equalization. We propose RNN based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers have comparable performance with traditional linear filter based equalizers when the channel interferences are relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in linear and nonlinear channel equalization cases.
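A minimal sketch of an RNN-based equalizer in the trained-adaptation setting: BPSK symbols pass through a toy nonlinear ISI channel, and a small RNN with a linear readout is trained to recover them. The channel model, network size and training schedule are illustrative assumptions, not those of the paper.

```python
import torch
import torch.nn as nn

# Toy nonlinear channel: linear ISI followed by a memoryless distortion plus noise.
def channel(bits, snr_db=15.0):
    s = 2.0 * bits - 1.0                               # BPSK symbols
    x = 0.7 * s + 0.4 * torch.roll(s, 1) - 0.2 * torch.roll(s, 2)
    x = x + 0.15 * x ** 3                              # mild nonlinearity
    return x + 10 ** (-snr_db / 20) * torch.randn_like(x)

class RNNEqualizer(nn.Module):
    """Small recurrent equalizer: a single-layer RNN followed by a linear
    readout that estimates the transmitted bit at each time step."""
    def __init__(self, hidden=8):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                              # x: (batch, time, 1)
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)                 # logits, (batch, time)

torch.manual_seed(0)
bits = torch.randint(0, 2, (1, 4000)).float()
rx = channel(bits.squeeze(0)).view(1, -1, 1)

model = RNNEqualizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(200):                               # trained-adaptation phase
    opt.zero_grad()
    loss = loss_fn(model(rx), bits)
    loss.backward()
    opt.step()

with torch.no_grad():                                  # BER on the training sequence
    ber = ((model(rx) > 0).float() != bits).float().mean()
print("training BER:", ber.item())
```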
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level. The synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. Our proposed method shows better performance based on a quantitative evaluation using a synthesized image compared with a conventional method based on a Hessian matrix, often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
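A simplified single-scale sketch of separating nodular from linear patterns with gray-scale morphology: an opening with a compact disk keeps blob-like structures, while the maximum of openings with thin line segments keeps elongated ones. The paper's filter bank performs this kind of decomposition across multiple resolutions with perfect reconstruction; the structuring-element sizes here are illustrative.

```python
import numpy as np
from scipy import ndimage

def nodular_and_linear_components(img, nodule_radius=2, line_length=15):
    """Gray-scale opening with a disk keeps nodular (blob-like) structures;
    the maximum of openings with thin line segments at several orientations
    keeps linear (elongated) structures."""
    yy, xx = np.ogrid[-nodule_radius:nodule_radius + 1,
                      -nodule_radius:nodule_radius + 1]
    disk = (xx ** 2 + yy ** 2) <= nodule_radius ** 2
    nodular = ndimage.grey_opening(img, footprint=disk)

    # horizontal and vertical line footprints (more orientations in practice)
    line_h = np.ones((1, line_length), dtype=bool)
    line_v = np.ones((line_length, 1), dtype=bool)
    linear = np.maximum(ndimage.grey_opening(img, footprint=line_h),
                        ndimage.grey_opening(img, footprint=line_v))
    return nodular, linear

# toy image: a bright blob plus a bright horizontal streak on a dark background
img = np.zeros((64, 64))
img[20:26, 20:26] = 1.0            # nodular structure
img[45, 10:50] = 1.0               # linear structure
nod, lin = nodular_and_linear_components(img)
print(nod[22, 22], lin[22, 22], nod[45, 30], lin[45, 30])   # 1.0 0.0 0.0 1.0
```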
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package designed for dynamic simulation of large power systems. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with an optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
Dynamics of one-dimensional self-gravitating systems using Hermite-Legendre polynomials
NASA Astrophysics Data System (ADS)
Barnes, Eric I.; Ragan, Robert J.
2014-01-01
The current paradigm for understanding galaxy formation in the Universe depends on the existence of self-gravitating collisionless dark matter. Modelling such dark matter systems has been a major focus of astrophysicists, with much of that effort directed at computational techniques. Not surprisingly, a comprehensive understanding of the evolution of these self-gravitating systems still eludes us, since it involves the collective non-linear dynamics of many particle systems interacting via long-range forces described by the Vlasov equation. As a step towards developing a clearer picture of collisionless self-gravitating relaxation, we analyse the linearized dynamics of isolated one-dimensional systems near thermal equilibrium by expanding their phase-space distribution functions f(x, v) in terms of Hermite functions in the velocity variable, and Legendre functions involving the position variable. This approach produces a picture of phase-space evolution in terms of expansion coefficients, rather than spatial and velocity variables. We obtain equations of motion for the expansion coefficients for both test-particle distributions and self-gravitating linear perturbations of thermal equilibrium. N-body simulations of perturbed equilibria are performed and found to be in excellent agreement with the expansion coefficient approach over a time duration that depends on the size of the expansion series used.
Thrust vectoring for lateral-directional stability
NASA Technical Reports Server (NTRS)
Peron, Lee R.; Carpenter, Thomas
1992-01-01
The advantages and disadvantages of using thrust vectoring for lateral-directional control, and the effects of reducing the tail size of a single-engine aircraft, were investigated. The aerodynamic characteristics of the F-16 aircraft were generated using the Aerodynamic Preliminary Analysis System II panel code. The resulting lateral-directional linear perturbation analysis of a modified F-16 aircraft with various tail sizes and yaw vectoring was performed at several speeds and altitudes to determine the stability and control trends relative to a baseline aircraft. A study of the paddle-type turning vane thrust vectoring control system, as used on the National Aeronautics and Space Administration F/A-18 High Alpha Research Vehicle, is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chuyu
2012-12-31
Beam diagnostics is an essential constituent of any accelerator, so much so that it has been called the "organs of sense" or "eyes of the accelerator." Beam diagnostics is a rich field, and a great variety of physical effects and principles are exploited in it. Some devices are based on the electromagnetic influence of moving charges, such as Faraday cups, beam transformers, and pick-ups; some rely on the Coulomb interaction of charged particles with matter, such as scintillators, viewing screens, and ionization chambers; nuclear or elementary-particle interactions occur in other devices, like beam loss monitors, polarimeters, and luminosity monitors; some measure photons emitted by moving charges, such as transition radiation and synchrotron radiation monitors, including diffraction radiation, which is the topic of the first part of this thesis; and some make use of the interaction of particles with photons, such as laser wires and Compton polarimeters, the second part of this thesis. Diagnostics let us perceive what properties a beam has and how it behaves in a machine, provide guidance for commissioning and controlling the machine, and supply parameters vital to physics experiments. In the next two decades, the research highlights will be colliders (TESLA, CLIC, JLC) and fourth-generation light sources (TESLA FEL, LCLS, Spring-8 FEL) based on linear accelerators. These machines require a new generation of accelerator with smaller beams, better stability and greater efficiency. Compared with existing linear accelerators, the performance of next-generation linear accelerators will improve in every respect: roughly 10 times smaller horizontal beam size, more than 10 times smaller vertical beam size and several times higher peak power. Furthermore, some special locations in the accelerator have even more stringent requirements, such as the interaction point of colliders and the wigglers of free electron lasers. The higher performance of these accelerators increases the difficulty of diagnostics. In most cases, intercepting measurements are no longer acceptable, and nonintercepting methods like synchrotron radiation monitors cannot be applied to linear accelerators. The development of accelerator technology calls for corresponding innovations in diagnostics to expand the performance of diagnostic tools to meet the requirements of the next generation of accelerators. Diffraction radiation and inverse Compton scattering are two of the most promising techniques; their nonintercepting nature avoids perturbing the beam and damaging the instrumentation. This thesis is divided into two parts: beam size measurement by optical diffraction radiation, and the laser system for a Compton polarimeter. Diffraction radiation, produced by the interaction between the electric field of charged particles and the target, is related to transition radiation. Even though the theory of diffraction radiation has been discussed since the 1960s, there have been only a few experimental studies in recent years.
The successful beam size measurement by optical diffraction radiation at the CEBAF machine is a milestone: first, we have demonstrated diffraction radiation as an effective nonintercepting diagnostic; second, the simple linear relationship between the diffraction radiation image size and the actual beam size improves the reliability of ODR measurements; and finally, we measured the polarized components of diffraction radiation for the first time, and I analyzed the contribution from edge radiation to diffraction radiation.
The application of an atomistic J-integral to a ductile crack.
Zimmerman, Jonathan A; Jones, Reese E
2013-04-17
In this work we apply a Lagrangian kernel-based estimator of continuum fields to atomic data to estimate the J-integral for the emission of dislocations from a crack tip. Face-centered cubic (fcc) gold and body-centered cubic (bcc) iron modeled with embedded atom method (EAM) potentials are used as example systems. The results for a single crack under K-loading compare well with an analytical solution from anisotropic linear elastic fracture mechanics. We also find that, after dislocations are emitted from the crack tip, there is a loop-size-dependent contribution to the J-integral. For a system with a finite-width crack loaded in simple tension, finite-size effects for the systems that were feasible to compute prevented precise agreement with theory. However, our results indicate a trend towards convergence.
Beam characterisation of the KIRAMS electron microbeam system.
Sun, G M; Kim, E H; Song, K B; Jang, M
2006-01-01
An electron microbeam system has been installed at the Korea Institute of Radiological and Medical Sciences (KIRAMS) for use in radiation biology studies. The electron beam is produced from a commercial electron gun, and the beam size is defined by a 5 μm diameter pinhole. Beam energy can be varied in the range of 1-100 keV, covering a range of linear energy transfer from 0.4 to 12.1 keV μm^-1. The micrometer-sized electron beam selectively irradiates cells cultured in a Mylar-bottomed dish. The positioning of target cells one by one onto the beam exit is automated, as is beam shooting. The electron beam entering the target cells has been calibrated using a Passivated Implanted Planar Silicon (PIPS) detector. This paper describes the KIRAMS microbeam cell irradiation system and its beam characteristics.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points lying close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix; thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
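A minimal sketch of the tapering-plus-iterative-solver idea in Python with NumPy/SciPy. SYMMLQ itself is not available in SciPy, so MINRES, a closely related Lanczos-based solver for symmetric (possibly indefinite) systems, stands in for it; the spherical covariance, the taper choice, and the problem sizes are illustrative assumptions and are not the models used in the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import minres

def spherical_cov(h, sill=1.0, a=10.0):
    """Spherical covariance model: compactly supported, zero beyond the range a."""
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

def ordinary_kriging_tapered(pts, vals, query, taper_range=10.0):
    n = len(pts)
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = spherical_cov(h) * spherical_cov(h, a=taper_range)  # tapering keeps the matrix sparse
    # Ordinary-kriging system: symmetric but indefinite because of the Lagrange row/column.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = spherical_cov(np.linalg.norm(pts - query, axis=1))  # optionally tapered as well
    b[n] = 1.0
    w, info = minres(sparse.csr_matrix(A), b)  # Lanczos solver for symmetric indefinite systems
    return vals @ w[:n]

# usage sketch on synthetic data
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(200, 2))
vals = np.sin(pts[:, 0] / 10.0) + 0.1 * rng.normal(size=200)
print(ordinary_kriging_tapered(pts, vals, query=np.array([50.0, 50.0])))
```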
Characteristics of mobile MOSFET dosimetry system for megavoltage photon beams
Kumar, A. Sathish; Sharma, S. D.; Ravindran, B. Paul
2014-01-01
The characteristics of a mobile metal oxide semiconductor field effect transistor (mobile MOSFET) detector for standard bias were investigated for megavoltage photon beams. This study was performed with a brass alloy build-up cap for three energies, namely Co-60, 6 MV and 15 MV photon beams. The MOSFETs were calibrated and the performance characteristics were analyzed with respect to dose rate dependence, energy dependence, field size dependence, linearity, build-up factor, and angular dependence for all three energies. A linear dose-response curve was noted for Co-60, 6 MV, and 15 MV photons. The calibration factors were found to be 1.03, 1, and 0.79 cGy/mV for Co-60, 6 MV, and 15 MV photon energies, respectively. The calibration graph was obtained for doses up to 600 cGy, and the dose-response curve was found to be linear. The MOSFETs were found to be energy independent both for measurements performed at depth and on the surface with build-up. Field size dependence was also analyzed for variable field sizes, and the response was found to be field size independent. Angular dependence was analyzed by keeping the MOSFET dosimeter in parallel and perpendicular orientation to the angle of incidence of the radiation, with and without build-up, on the surface of the phantom. The maximum variation for the three energies was found to be within ± 2% for the gantry angles 90° and 270°, while the deviations without the build-up for the same gantry angles were found to be 6%, 25%, and 60%, respectively. The MOSFET response was found to be independent of dose rate for all three energies. The dosimetric characteristics of the MOSFET detector make it a suitable in vivo dosimeter for megavoltage photon beams. PMID:25190992
Spatio-temporal correlations in the Manna model in one, three and five dimensions
NASA Astrophysics Data System (ADS)
Willis, Gary; Pruessner, Gunnar
2018-02-01
Although the paradigm of criticality is centered around spatial correlations and their anomalous scaling, not many studies of self-organized criticality (SOC) focus on spatial correlations. Often, integrated observables, such as avalanche size and duration, are used instead, not least to avoid complications due to the unavoidable lack of translational invariance. The present work is a survey of spatio-temporal correlation functions in the Manna model of SOC, measured numerically in detail in d = 1, 3 and 5 dimensions and compared to theoretical results, in particular relating them to “integrated” observables such as avalanche size and duration scaling, which measure them indirectly. Contrary to the notion, held by some, that SOC models organize into a critical state by re-arranging their spatial structure avalanche by avalanche, which might be expected to result in large, nontrivial, system-spanning spatial correlations in the quiescent state (between avalanches), correlations of inactive particles in the quiescent state have a small amplitude that does not and cannot increase with the system size, although they display (noisy) power law scaling over a range linear in the system size. Self-organization, however, does take place, as the (one-point) density of inactive particles organizes into a particular profile that is asymptotically independent of the driving location, as also demonstrated analytically in one dimension. Activity and its correlations, on the other hand, display nontrivial long-ranged spatio-temporal scaling with exponents that can be related to established results, in particular avalanche size and duration exponents. The correlation length and amplitude are set by the system size (confirmed analytically for some observables), as expected in systems displaying finite size scaling. In one dimension, we find some surprising inconsistencies in the dynamical exponent. A (spatially extended) mean field theory (MFT) is recovered, with some corrections, in five dimensions.
Depth Perception and Defensive System Activation in a 3-D Environment
Combe, Emmanuelle; Fujii, Naotaka
2011-01-01
To survive, animals must be able to react appropriately (in temporal and behavioral terms) when facing a threat. One of the essential parameters considered by the defensive system is the distance of the threat, the “defensive distance.” In this study, we investigate the visual depth cues that could be considered as an alarm cue for the activation of the defensive system. For this purpose, we performed an active-escape pain task in a virtual three-dimensional environment. In two experiments, we manipulated the nature and consistency of different depth cues: vergence, linear perspective, and angular size. By measuring skin conductance responses, we characterized the situations that activated the defensive system. We show that the angular size of the predator was sufficient information to trigger responses from the defensive system, but we also demonstrate that vergence, which can delay the emotional response in inconsistent situations, is also a highly reliable cue for the activation of the defensive system. PMID:21941515
Mudalige, Thilak K; Qu, Haiou; Linder, Sean W
2015-11-13
Engineered nanoparticles are available in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system depend on particle size, so the determination of size and size distribution is essential for full characterization. Number-based average size and size distribution are major parameters for full characterization of a nanoparticle product. In the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention time was observed and used for the characterization of unknown samples. The particle size results from unknown samples were compared with results from traditional size analysis by transmission electron microscopy and found to deviate by less than 5% over the size range from 7 to 30 nm. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.
2017-03-01
We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the algorithmic-level resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical.
Laser SRS tracker for reverse prototyping tasks
NASA Astrophysics Data System (ADS)
Kolmakov, Egor; Redka, Dmitriy; Grishkanich, Aleksandr; Tsvetkov, Konstantin
2017-10-01
According to the current great interest concerning Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease of use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. The project is planned to conduct experimental studies aimed at identifying the impact of applying the basic laws of chip and microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout, whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Differently from existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities, in order to share the overall computational load.
GPU computing with Kaczmarz’s and other iterative algorithms for linear systems
Elble, Joseph M.; Sahinidis, Nikolaos V.; Vouzis, Panagiotis
2009-01-01
The graphics processing unit (GPU) is used to solve large linear systems derived from partial differential equations. The differential equations studied are strongly convection-dominated, of various sizes, and common to many fields, including computational fluid dynamics, heat transfer, and structural mechanics. The paper presents comparisons between GPU and CPU implementations of several well-known iterative methods, including Kaczmarz's, Cimmino's, component averaging, conjugate gradient normal residual (CGNR), symmetric successive overrelaxation-preconditioned conjugate gradient, and conjugate-gradient-accelerated component-averaged row projections (CARP-CG). Computations are performed with dense as well as general banded systems. The results demonstrate that our GPU implementation outperforms CPU implementations of these algorithms, as well as previously studied parallel implementations on Linux clusters and shared memory systems. While the CGNR method had begun to fall out of favor for solving such problems, for the problems studied in this paper, the CGNR method implemented on the GPU performed better than the other methods, including a cluster implementation of the CARP-CG method. PMID:20526446
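For reference, a minimal serial (CPU) sketch of the classical cyclic Kaczmarz iteration in Python/NumPy; the GPU and block-parallel variants compared in the paper (CARP-CG, CGNR, component averaging) build on this same row-projection step. Problem sizes and data are illustrative.

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, x0=None):
    """Classical (cyclic) Kaczmarz iteration: successively project the current
    iterate onto the hyperplane defined by each row a_i^T x = b_i."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.einsum('ij,ij->i', A, A)   # ||a_i||^2, precomputed once
    for _ in range(sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# usage sketch on a small consistent system
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 40))
x_true = rng.normal(size=40)
b = A @ x_true
print(np.linalg.norm(kaczmarz(A, b, sweeps=200) - x_true))
```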
Interactive graphical system for small-angle scattering analysis of polydisperse systems
NASA Astrophysics Data System (ADS)
Konarev, P. V.; Volkov, V. V.; Svergun, D. I.
2016-09-01
A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time, concentration or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by the recent examples of its application to study concentration-dependent oligomeric states of proteins and time kinetics of polymer micelles for anticancer drug delivery.
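A minimal sketch of the volume-fraction idea for a mixture, assuming the scattering curves of the pure components are known: the mixture intensity is modelled as a non-negative linear combination of component curves and the weights (proportional to volume fractions up to contrast factors) are recovered with non-negative least squares. The Guinier-type component curves, the q-range and the noise level are synthetic placeholders; POLYSAS itself uses its own linear/non-linear fitting and singular value decomposition machinery.

```python
import numpy as np
from scipy.optimize import nnls

q = np.linspace(0.01, 0.5, 200)                       # scattering vector, illustrative units

# Synthetic component curves: Guinier-like profiles for two particle sizes.
def guinier(q, rg, i0=1.0):
    return i0 * np.exp(-(q * rg) ** 2 / 3.0)

components = np.column_stack([guinier(q, rg) for rg in (15.0, 40.0)])
true_fractions = np.array([0.7, 0.3])
mixture = components @ true_fractions
mixture += 0.01 * np.random.default_rng(0).normal(size=q.size)   # measurement noise

fractions, resid = nnls(components, mixture)          # non-negative linear fit
print("recovered weights:", fractions, "true:", true_fractions)
```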
Leverage Between the Buffering Effect and the Bystander Effect in Social Networking.
Chiu, Yu-Ping; Chang, Shu-Chen
2015-08-01
This study examined encouraged and inhibited social feedback behaviors based on the theories of the buffering effect and the bystander effect. A system program was used to collect personal data and social feedback from a Facebook data set to test the research model. The results revealed that the buffering effect induced a positive relationship between social network size and feedback gained from friends when people's social network size was under a certain cognitive constraint. For people with a social network size that exceeds this cognitive constraint, the bystander effect may occur, in which having more friends may inhibit social feedback. In this study, two social psychological theories were applied to explain social feedback behavior on Facebook, and it was determined that social network size and social feedback exhibited no consistent linear relationship.
NASA Astrophysics Data System (ADS)
Travelet, Christophe; Stemmelen, Mylène; Lapinte, Vincent; Dubreuil, Frédéric; Robin, Jean-Jacques; Borsali, Redouane
2013-06-01
The self-assembly in solution of original structures of amphiphilic partially natural copolymers based on polyoxazoline [more precisely poly(2-methyl-2-oxazoline) (POx)] and grape seed vegetable oil derivatives (linear, T-, and trident-structure) is investigated. The results show that such systems, as observed by dynamic light scattering (DLS), spontaneously self-organize into monomodal, narrow-size, and stable nanoparticles in aqueous medium. The obtained hydrodynamic diameters (D_h) range from 8.6 to 32.5 nm. Specifically, the size increases strongly with increasing natural block (i.e., lipophilic species) length due to higher hydrophobic interactions (from 10.1 nm for C19 to 19.2 nm for C57). Furthermore, increasing the polyoxazoline (i.e., hydrophilic block) length leads to a moderate linear increase of the D_h values. Therefore, the first-order size effect comes from the natural lipophilic block, whereas the characteristic size can be tuned more finely (i.e., to second order) by choosing appropriately the polyoxazoline length. The DLS results in terms of characteristic size are corroborated using nanoparticle tracking analysis (NTA), and also by atomic force microscopy (AFM) and transmission electron microscopy (TEM) imaging, where well-defined spherical and individual nanoparticles exhibit a very good mechanical resistance upon drying. Moreover, changing the lipophilic block architecture from linear to T-shape, while keeping the same molar mass, generates a branching and thus a shrinking by a factor of 2 of the nanoparticle volume, as observed by DLS. In this paper, it is clearly shown that the self-assemblies of amphiphilic block copolymers obtained from grape seed vegetable oil derivatives (sustainable renewable resources) as well as their tunability are of great interest for biomass valorization at the nanoscale level [continuation of the article by Stemmelen et al. (Polym Chem 4:1445-1458, 2013)].
Ellis, Shane R; Soltwisch, Jens; Heeren, Ron M A
2014-05-01
In this study, we describe the implementation of a position- and time-sensitive detection system (Timepix detector) to directly visualize the spatial distributions of the matrix-assisted laser desorption ionization (MALDI) ion cloud in a linear time-of-flight (ToF) instrument as it is projected onto the detector surface. These time-resolved images allow direct visualization of m/z-dependent ion focusing effects that occur within the ion source of the instrument. The influence of key parameters, namely the extraction voltage, the pulsed-ion extraction (PIE) delay, and even the matrix-dependent initial ion velocity, was investigated; all were found to alter the focusing properties of the ion-optical system. Under certain conditions where the spatial focal plane coincides with the detector plane, so-called x-y space focusing could be observed (i.e., the focusing of the ion cloud to a small, well-defined spot on the detector). Such conditions allow the stigmatic ion imaging of intact proteins for the first time on a commercial linear ToF-MS system. In combination with the ion-optical magnification of the system (~100×), a spatial resolving power of 11-16 μm with a pixel size of 550 nm was recorded within a laser spot diameter of ~125 μm. This study demonstrates both the diagnostic and analytical advantages offered by the Timepix detector in ToF-MS.
A CPV System with Static Linear Fresnel Lenses in a Greenhouse
NASA Astrophysics Data System (ADS)
Sonneveld, Piet; Zahn, Helmut; Swinkels, Gert-Jan
2010-10-01
A new CPV system with a static linear Fresnel lens, a silicon PV module suitable for concentrated radiation, and an innovative tracking system is integrated into a greenhouse covering. The basic idea of this horticultural application is to develop a greenhouse for pot plants (typical shade plants) that do not tolerate high direct radiation. Removing all direct radiation blocks up to 77% of the solar energy, which reduces the necessary cooling capacity. The solar energy focused on the thermal photovoltaic (PV/T) module generates electric and thermal energy. The PV/T module is tracked in the focal line and requires cooling due to the high heat load of the concentrated radiation (a concentration factor of 50). All parts are integrated in a greenhouse with a size of about 36 m2. The electrical and thermal yield is determined for Dutch climate conditions. Some measurements were performed with a PMMA linear Fresnel lens between double glass. Further improvement of the performance of the CPV system is possible by using a PDMS lens directly laminated on glass and using AR-coated glass. This lens was developed with ZEMAX, and the results of the ray-tracing simulations are presented with the lens structure oriented in an upward and a downward position. The best performance of the static linear Fresnel lens is achieved with upward orientation of the lens structures. In practice this is only possible with the Fresnel lens placed between a double glass structure, which keeps the lens clean and free of water.
Photoacoustic simulation study of chirp excitation response from different size absorbers
NASA Astrophysics Data System (ADS)
Jnawali, K.; Chinni, B.; Dogra, V.; Rao, N.
2017-03-01
Photoacoustic (PA) imaging is a hybrid imaging modality that integrates the strengths of optical and ultrasound imaging. Nanosecond (ns) pulsed lasers used in current PA imaging systems are expensive, bulky and often waste energy. We propose and evaluate, through simulations, the use of a continuous wave (CW) laser whose amplitude is linear-frequency-modulated (chirped) for PA imaging. The chirp signal offers potential for improved signal-to-side-lobe ratio (SSR) and full control over the PA signal frequencies excited in the sample. The PA signal spectrum is a function of absorber size and the time frequencies present in the chirp. A mismatch between the input chirp spectrum and the output PA signal spectrum can affect the compressed pulse recovered from cross-correlating the two; we have quantitatively characterized this effect. The k-Wave MATLAB toolbox was used to simulate PA signals in three dimensions for absorbers ranging in size from 0.1 mm to 0.6 mm, in response to laser excitation amplitude that is linearly swept from 0.5 MHz to 4 MHz. This sweep frequency range was chosen based on spectral analysis of a PA signal generated from ex vivo human prostate tissue samples. For comparison, the energy wasted by a ns laser pulse was also estimated. For the chirp methodology, the compressed pulse peak amplitude, pulse width and side-lobe structure parameters were extracted for different size absorbers. While the SSR increased 6-fold with absorber size, the pulse width decreased by 25%.
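A minimal sketch of the pulse-compression step, assuming a toy received signal that is simply a delayed, noisy copy of the transmitted chirp; the real PA response is additionally band-limited by the absorber size, which is exactly the spectral mismatch the study quantifies, and the sampling rate, sweep and delay below are illustrative.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 40e6                        # sampling rate, Hz (illustrative)
T = 1e-3                         # chirp duration, s
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=0.5e6, t1=T, f1=4e6, method='linear')   # 0.5-4 MHz linear sweep

# Toy received signal: a delayed copy of the excitation plus noise. A real PA
# signal would also be filtered (band-limited) by the finite absorber size.
delay = int(5e-6 * fs)
rx = np.zeros_like(tx)
rx[delay:] = tx[:-delay]
rx += 0.05 * np.random.default_rng(0).normal(size=rx.size)

# Pulse compression: cross-correlate the received signal with the transmitted chirp.
compressed = correlate(rx, tx, mode='full')
lag = np.argmax(np.abs(compressed)) - (len(tx) - 1)
print("recovered delay (samples):", lag, "expected:", delay)
```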
Auxiliary basis expansions for large-scale electronic structure calculations
Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin
2005-01-01
One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767
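For context, the standard Coulomb-metric auxiliary-basis (density-fitting) approximation being discussed has the schematic form below, with the fit coefficients obtained from the linear system V c = b, b_P = (P|μν); the exact notation and normalization are assumptions for illustration. Attenuating the metric, e.g. replacing 1/r12 by erfc(ω r12)/r12 in the two- and three-center integrals, makes V short-ranged and the fit coefficients sparse, which is the behaviour the abstract describes.

```latex
(\mu\nu|\lambda\sigma) \;\approx\; \sum_{P,Q} (\mu\nu|P)\,\bigl[\mathbf{V}^{-1}\bigr]_{PQ}\,(Q|\lambda\sigma),
\qquad V_{PQ} = (P|Q)
```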
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image, and the color image is obtained by propagating color information from the scribbles to surrounding regions while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower-resolution subsampled image is first colorized and this low-resolution color image is upsampled to initialize the colorization process at the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
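A minimal sketch of the matrix-free idea behind the first innovation, assuming a simple 4-neighbour smoothness operator plus scribble constraints in place of the paper's colour-weighted system: the matrix is never formed, only its action on a vector, and an iterative solver (conjugate gradient here) does the rest. Grid size, weights and the single scribble are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

H, W = 64, 64
lam = 100.0                                    # weight enforcing the scribble constraints
mask = np.zeros((H, W)); mask[10, 10] = 1.0    # one illustrative scribble pixel
target = np.zeros((H, W)); target[10, 10] = 0.8

def apply_A(x_flat):
    """Action of (4I - Adj + lam*M) on a vector, where Adj is the 4-neighbour
    adjacency and M selects scribble pixels; the matrix is never built explicitly."""
    x = x_flat.reshape(H, W)
    out = 4.0 * x
    out[:-1, :] -= x[1:, :]; out[1:, :] -= x[:-1, :]
    out[:, :-1] -= x[:, 1:]; out[:, 1:] -= x[:, :-1]
    return (out + lam * mask * x).ravel()

A = LinearOperator((H * W, H * W), matvec=apply_A)   # symmetric positive definite
b = (lam * mask * target).ravel()
x, info = cg(A, b, maxiter=2000)
print("converged:", info == 0, "value at the scribble:", x.reshape(H, W)[10, 10])
```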
NASA Astrophysics Data System (ADS)
Tisdell, Christopher C.
2017-11-01
For over 50 years, the learning and teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order, linear problems with constant coefficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establishing a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of the pedagogy in the literature, and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.
Diab-Elschahawi, Magda; Berger, Jutta; Blacky, Alexander; Kimberger, Oliver; Oguz, Ruken; Kuelpmann, Ruediger; Kramer, Axel; Assadian, Ojan
2011-09-01
This study investigated the influence of the size of unidirectional ceiling distribution systems on counts of viable microorganisms recovered at defined sites in operating rooms (ORs) and on instrument tables during orthopedic surgery. We compared bacterial sedimentation during 80 orthopedic surgeries. A total of 19 surgeries were performed in ORs with a large (518 cm × 380 cm) unidirectional ceiling distribution (colloquially known as laminar air flow [LAF]) ventilation system, 21 procedures in ORs with a small (380 cm × 120 cm) LAF system, and 40 procedures in ORs with no LAF system. Bacterial sedimentation was evaluated using both settle plates and nitrocellulose membranes. Multivariate linear regression analysis revealed that the colony-forming unit count on nitrocellulose membranes positioned on the instrument table was significantly associated only with the size of the unidirectional LAF distribution system (P < .001), not with the duration of the surgical intervention (P = .753) or with the number of persons present during the surgical intervention (P = .291). Our findings indicate that simply having an LAF ventilation system in place will not provide bacteria-free conditions at the surgical site and on the instrument table. In view of the limited number of procedures studied, our findings require confirmation and further investigation of the ideal, but affordable, size of LAF ventilation systems. Copyright © 2011 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rezaei, G.; Vaseghi, B.; Doostimotlagh, N. A.
2012-03-01
Simultaneous effects of an on-center hydrogenic impurity and band edge non-parabolicity on the intersubband optical absorption coefficients and refractive index changes of a typical GaAs/Al_xGa_{1-x}As spherical quantum dot are theoretically investigated using the Luttinger-Kohn effective mass equation. The electronic structure and optical properties of the system are studied by means of the matrix diagonalization technique and the compact density matrix approach, respectively. Finally, the effects of an impurity, band edge non-parabolicity, incident light intensity and the dot size on the linear, the third-order nonlinear and the total optical absorption coefficients and refractive index changes are investigated. Our results indicate that the magnitudes of these optical quantities increase and their peaks shift to higher energies as the influences of the impurity and the band edge non-parabolicity are taken into account. Moreover, incident light intensity and the dot size have considerable effects on the optical absorption coefficients and refractive index changes.
The Use of Meteorological Data to Improve Contrail Detection in Thermal Imagery over Ireland.
NASA Technical Reports Server (NTRS)
Whelan, Gillian M.; Cawkwell, Fiona; Mannstein, Hermann; Minnis, Patrick
2009-01-01
Aircraft-induced contrails have been found to have a net warming influence on the climate system, with strong regional dependence. Persistent linear contrails are detectable in 1 km thermal imagery and, using an automated Contrail Detection Algorithm (CDA), can be identified on the basis of their different properties at the 11 and 12 μm wavelengths. The algorithm's ability to distinguish contrails from other linear features depends on the sensitivity of its tuning parameters. In order to keep the number of false identifications low, the algorithm imposes strict limits on contrail size, linearity and intensity. This paper investigates whether including additional information (i.e. meteorological data) within the CDA may allow these criteria to be less rigorous, thus increasing the contrail-detection rate without increasing the false alarm rate.
NASA Astrophysics Data System (ADS)
Mocherla, Pavana S. V.; Sahana, M. B.; Gopalan, R.; Ramachandra Rao, M. S.; Nanda, B. R. K.; Sudakar, C.
2017-10-01
Magnetization of antiferromagnetic nanoparticles is known to generally scale inversely with their diameter (d) according to Néel’s model. Here we report a deviation from this conventional linear 1/d dependence, altered significantly by the microstrain, in Ca- and Ti-substituted BiFeO3 nanoparticles. Magnetic properties of microstrain-controlled Bi_{1-x}Ca_xFe_{1-y}Ti_yO_{3-δ} (y = 0 and x = y) nanoparticles are analyzed as a function of their size, ranging from 18 nm to 200 nm. A complex interdependence of doping concentration (x or y), annealing temperature (T), microstrain (ε) and particle size (d) is established. X-ray diffraction studies reveal a linear variation of microstrain with inverse particle size, 1/d nm^-1 (i.e. ε·d = 16.5 nm·%). A rapid increase in the saturation magnetization below a critical size d_c ≈ 35 nm, exhibiting a (1/d)^α (α ≈ 2.6) dependence, is attributed to the influence of microstrain. We propose an empirical formula M ∝ (1/d)ε^β (β ≈ 1.6) to highlight the contributions from both the size and the microstrain towards the total magnetization in the doped systems. The magnetization observed in nanoparticles is thus a result of the competing magnetic contribution from the terminated spin cycloid on the surface and the counteracting microstrain present at a given size.
Critical Nucleation Length for Accelerating Frictional Slip
NASA Astrophysics Data System (ADS)
Aldam, Michael; Weikamp, Marc; Spatschek, Robert; Brener, Efim A.; Bouchbinder, Eran
2017-11-01
The spontaneous nucleation of accelerating slip along slowly driven frictional interfaces is central to a broad range of geophysical, physical, and engineering systems, with particularly far-reaching implications for earthquake physics. A common approach to this problem associates nucleation with an instability of an expanding creep patch upon surpassing a critical length Lc. The critical nucleation length Lc is conventionally obtained from a spring-block linear stability analysis extended to interfaces separating elastically deformable bodies using model-dependent fracture mechanics estimates. We propose an alternative approach in which the critical nucleation length is obtained from a related linear stability analysis of homogeneous sliding along interfaces separating elastically deformable bodies. For elastically identical half-spaces and rate-and-state friction, the two approaches are shown to yield Lc that features the same scaling structure, but with substantially different numerical prefactors, resulting in a significantly larger Lc in our approach. The proposed approach is also shown to be naturally applicable to finite-size systems and bimaterial interfaces, for which various analytic results are derived. To quantitatively test the proposed approach, we performed inertial Finite-Element-Method calculations for a finite-size two-dimensional elastically deformable body in rate-and-state frictional contact with a rigid body under sideway loading. We show that the theoretically predicted Lc and its finite-size dependence are in reasonably good quantitative agreement with the full numerical solutions, lending support to the proposed approach. These results offer a theoretical framework for predicting rapid slip nucleation along frictional interfaces.
NASA Technical Reports Server (NTRS)
1975-01-01
Results are discussed of a study to define a radar and antenna system that best suits the space shuttle rendezvous requirements. Topics considered include antenna characteristics and antenna size tradeoffs, fundamental sources of measurement error inherent in the target itself, backscattering cross-section models of the target, and three basic candidate radar types. Antennas up to 1.5 meters in diameter are within the specified installation constraints; however, a 1 meter diameter paraboloid and a folding, four-slot backfeed on a two-gimbal mount implemented for a spiral acquisition scan are recommended. The candidate radar types discussed are: (1) noncoherent pulse radar, (2) coherent pulse radar, and (3) pulse Doppler radar with linear FM ranging. The radar type recommended is a pulse Doppler with linear FM ranging. Block diagrams of each radar system are shown.
NASA Astrophysics Data System (ADS)
Pando, V.; García-Laguna, J.; San-José, L. A.
2012-11-01
In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a model that maximises the profit per unit time, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check whether a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples, and the sensitivity of the optimal solution with respect to changes in some parameter values is assessed.
A model for size- and rotation-invariant pattern processing in the visual system.
Reitboeck, H J; Altmann, J
1984-01-01
The mapping of retinal space onto the striate cortex of some mammals can be approximated by a log-polar function. It has been proposed that this mapping is of functional importance for scale- and rotation-invariant pattern recognition in the visual system. An exact log-polar transform converts centered scaling and rotation into translations. A subsequent translation-invariant transform, such as the absolute value of the Fourier transform, thus generates overall size- and rotation-invariance. In our model, the translation-invariance is realized via the R-transform. This transform can be executed by simple neural networks, and it does not require the complex computations of the Fourier transform used in Mellin-transform size-invariance models. The logarithmic space distortion and differentiation in the first processing stage of the model is realized via "Mexican hat" filters whose diameter increases linearly with eccentricity, similar to the characteristics of the receptive fields of retinal ganglion cells. Except for some special cases, the model can explain object recognition independent of size, orientation and position. Some general problems of Mellin-type size-invariance models, which also apply to our model, are discussed.
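The log-polar step referred to above can be written compactly as below; under this (assumed centred) mapping a uniform scaling by s becomes a shift of ξ by log s and a rotation by φ becomes a shift of η by φ, which is why a subsequent translation-invariant transform (the R-transform in this model, the Fourier modulus in Mellin-type models) yields overall size and rotation invariance.

```latex
(x, y) \;\longmapsto\; (\xi, \eta) = \Bigl(\log\sqrt{x^2 + y^2},\; \operatorname{atan2}(y, x)\Bigr)
```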
Split Stirling linear cryogenic cooler for a new generation of high temperature infrared imagers
NASA Astrophysics Data System (ADS)
Veprik, A.; Zechtzer, S.; Pundak, N.
2010-04-01
Split linear cryocoolers find use in a variety of infrared equipment installed on airborne, heliborne, marine and vehicular platforms, along with hand-held and ground-fixed applications. An upcoming generation of portable, high-definition night vision imagers will rely on high-temperature infrared detectors, operating at elevated temperatures ranging from 95 K to 200 K, while showing performance indices comparable with those of their traditional 77 K competitors. Recent technological advances in the industrial development of such high-temperature detectors have prompted attempts to develop compact split Stirling linear cryogenic coolers. Their known advantages, as compared to rotary integral coolers, are superior flexibility in system packaging, constant and relatively high driving frequency, lower wideband vibration export, unsurpassed reliability and aural stealth. Unfortunately, off-the-shelf linear cryogenic coolers still cannot compete with their rotary integral rivals in terms of size, weight and power consumption. Ricor has developed the smallest linear split Stirling cryogenic cooler in this range, delivering 1 W at 95 K, for demanding infrared applications where power consumption, compactness, vibration, aural noise and ownership costs are of concern.
The Feasibility of Linear Motors and High-Energy Thrusters for Massive Aerospace Vehicles
NASA Astrophysics Data System (ADS)
Stull, M. A.
A combination of two propulsion technologies, superconducting linear motors using ambient magnetic fields and high-energy particle beam thrusters, may make it possible to develop massive aerospace vehicles the size of aircraft carriers. If certain critical thresholds can be attained, linear motors can enable massive vehicles to fly within the atmosphere and can propel them to orbit. Thrusters can do neither, because power requirements are prohibitive. However, unless superconductors having extremely high critical current densities can be developed, the interplanetary magnetic field is too weak for linear motors to provide sufficient acceleration to reach even nearby planets. On the other hand, high-energy thrusters can provide adequate acceleration using a minimal amount of reaction mass, at achievable levels of power generation. If the requirements for linear motor propulsion can be met, combining the two modes of propulsion could enable huge nuclear powered spacecraft to reach at least the inner planets of the solar system, the asteroid belt, and possibly Jupiter, in reasonably short times under continuous acceleration, opening them to exploration, resource development and colonization.
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai; Liu, Xianglin
2017-12-01
LSMS is a first principles, Density Functional theory based, electronic structure code targeted mainly at materials applications. LSMS calculates the local spin density approximation to the diagonal part of the electron Green's function. The electron/spin density and energy are easily determined once the Green's function is known. Linear scaling with system size is achieved in the LSMS by using several unique properties of the real space multiple scattering approach to the Green's function.
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao
2009-01-01
In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS) which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, were investigated. The results indicate that the accuracy is 0.5% FS, therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also has high sensitivity and accuracy.
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.
Several Families of Sequences with Low Correlation and Large Linear Span
NASA Astrophysics Data System (ADS)
Zeng, Fanxin; Zhang, Zhenyu
In DS-CDMA systems and DS-UWB radios, low correlation of spreading sequences can greatly help to minimize multiple access interference (MAI), and large linear span of spreading sequences can reduce their predictability. In this letter, new sequence sets with low correlation and large linear span are proposed. Based on the construction Tr^m_1[Tr^n_m(α^{bt} + γ_i α^{dt})]^r for generating p-ary sequences of period p^n - 1, where n = 2m, d = up^m ± v, b = u ± v, γ_i ∈ GF(p^n), and p is an arbitrary prime number, several methods to choose the parameter d are provided. The obtained sequences, with family size p^n, have four-valued, five-valued, six-valued or seven-valued correlation, and the maximum nontrivial correlation value is (u+v-1)p^m - 1. Computer simulation shows that the linear span of the new sequences is larger than that of the sequences with Niho-type and Welch-type decimations, and similar to that of [10].
Comparisons of volcanic eruptions from linear and central vents on Earth, Venus, and Mars (Invited)
NASA Astrophysics Data System (ADS)
Glaze, L. S.; Baloga, S. M.
2010-12-01
Vent geometry (linear versus central) plays a significant role in the ability of an explosive eruption to sustain a buoyant, convective plume. This has important implications for the injection and dispersal of particulates into planetary atmospheres and the ability to interpret the geologic record of planetary volcanism. The approach to modeling linear volcanic vents builds on the original work by Stothers [1], and takes advantage of substantial improvements that have been made in volcanic plume modeling over the last 20 years [e.g., 2,3]. A complete system of equations describing buoyant plume rise requires at least a half dozen differential equations and another half dozen equations for the parameters and constraints within the plume and ambient atmosphere. For the cylindrically axisymmetric system of differential equations given in [2], the control volume is defined as V = πr² dz. The area through which ambient atmosphere is entrained is Ae = 2πr dz, where r is the plume radius and z is vertical distance. The analogous linear vent system has a corresponding control volume, V = 2bL dz, and entrainment area, Ae ≈ 2L dz, where L is the length of the linear plume, 2b is the width of the linear plume, and it is assumed that L >> b. For typical terrestrial boundary conditions (temperature, velocity, gas mass fraction), buoyant plumes from circular vents can be maintained with substantial maximum heights over a wide range of vent sizes. However, linear vent plumes are much more sensitive to vent size, and can maintain a convective plume only over a much narrower range of half-widths. As L increases, linear plumes become more capable of establishing a convective regime over a broad range of b₀, similar to the circular vents. This is primarily because as L increases, the entrainment area of the linear plumes increases relative to the control volume. The ability of a plume to become buoyant is driven by whether or not sufficient air can be entrained (and warmed) to reduce the bulk plume density before upward momentum is exhausted. From mass conservation, linear plumes surpass circular vents in entrainment efficiency approximately when L₀ ≥ 3r₀. Consistent with other work [3,4], the range of conditions for maintaining a buoyant plume from a circular vent on Venus is very narrow, and the range of linear vent widths is more limited still. Unlike the terrestrial case, linear vents on Venus appear capable of driving a plume to somewhat higher maximum altitudes, with all other things remaining equal. Similar analyses were conducted for current atmospheric conditions on Mars. Results indicate a preference for the formation of pyroclastic flows on Mars from both circular and linear vents, as opposed to widely dispersed airfall deposits. Only the Earth, with its thick wet atmosphere, favors explosive eruptions that can maintain convective plumes reaching 10s of km in altitude over a broad range of eruptive conditions. References: [1] Stothers, R.B. (1989) J Atmos Sci, 46, 2662-2670. [2] Glaze, L.S., Baloga, S.M., and Wilson, L. (1997) JGR, 102, 6099-6108. [3] Glaze, L.S. (1999) JGR, 104, 18,899-18,906. [4] Thornhill, G.D. (1993) JGR, 98, 9107-9111.
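As a quick check of the entrainment argument, assume equal vent areas (πr₀² = 2b₀L₀; the numbers below are hypothetical). The entrainment area per unit control volume is then 2/r for a circular vent and 1/b for a linear vent, and the linear vent wins exactly when L₀ ≥ πr₀ ≈ 3r₀, consistent with the criterion quoted above.

```python
import numpy as np

# Geometric entrainment comparison from the abstract's control volumes:
# circular vent: V = pi r^2 dz, Ae = 2 pi r dz  ->  Ae/V = 2/r
# linear vent:   V = 2 b L dz,  Ae ~ 2 L dz     ->  Ae/V = 1/b
r0, L0 = 50.0, 200.0                      # hypothetical vent radius / fissure length (m)
b0 = np.pi * r0**2 / (2.0 * L0)           # half-width giving the same vent area

circular = 2.0 / r0
linear = 1.0 / b0                         # = 2 L0 / (pi r0^2)
print(f"Ae/V circular = {circular:.4f} 1/m, linear = {linear:.4f} 1/m")
print("linear vent entrains more per unit volume:", linear >= circular)  # True when L0 >= pi*r0
```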
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the models considered because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
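For reference, the "simple linear regression" trend that the study found most reliable amounts to an ordinary least-squares fit of the aggregated series against time. The sketch below uses synthetic, hypothetical yearly averages rather than the Upper Mississippi River data.

```python
import numpy as np

# Hypothetical aggregated monitoring data: yearly site-averaged abundance.
years = np.arange(2000, 2015)
abundance = 12.0 + 0.35 * (years - 2000) + np.random.default_rng(1).normal(0, 1.5, years.size)

# Ordinary least-squares linear trend (the "simple linear regression" of the abstract).
slope, intercept = np.polyfit(years, abundance, deg=1)
print(f"estimated trend: {slope:.3f} units per year")
```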
Construction of trypanosome artificial mini-chromosomes.
Lee, M G; E, Y; Axelrod, N
1995-01-01
We report the preparation of two linear constructs which, when transformed into the procyclic form of Trypanosoma brucei, become stably inherited artificial mini-chromosomes. Both constructs, one of 10 kb and the other of 13 kb, contain a T.brucei PARP promoter driving a chloramphenicol acetyltransferase (CAT) gene. In the 10 kb construct the CAT gene is followed by one hygromycin phosphotransferase (Hph) gene, and in the 13 kb construct the CAT gene is followed by three tandemly linked Hph genes. At each end of these linear molecules are telomere repeats and subtelomeric sequences. Electroporation of these linear DNA constructs into the procyclic form of T.brucei generated hygromycin-B resistant cell lines. In these cell lines, the input DNA remained linear and bounded by the telomere ends, but it increased in size. In the cell lines generated by the 10 kb construct, the input DNA increased in size to 20-50 kb. In the cell lines generated by the 13 kb constructs, two sizes of linear DNAs containing the input plasmid were detected: one of 40-50 kb and the other of 150 kb. The increase in size was not the result of in vivo tandem repetitions of the input plasmid, but represented the addition of new sequences. These Hph-containing linear DNA molecules were maintained stably in cell lines for at least 20 generations in the absence of drug selection and were subsequently referred to as trypanosome artificial mini-chromosomes, or TACs. PMID:8532534
Weak stability of the plasma-vacuum interface problem
NASA Astrophysics Data System (ADS)
Catania, Davide; D'Abbicco, Marcello; Secchi, Paolo
2016-09-01
We consider the free boundary problem for the two-dimensional plasma-vacuum interface in ideal compressible magnetohydrodynamics (MHD). In the plasma region, the flow is governed by the usual compressible MHD equations, while in the vacuum region we consider the Maxwell system for the electric and the magnetic fields. At the free interface, driven by the plasma velocity, the total pressure is continuous and the magnetic field on both sides is tangent to the boundary. We study the linear stability of rectilinear plasma-vacuum interfaces by computing the Kreiss-Lopatinskiĭ determinant of an associated linearized boundary value problem. Apart from possible resonances, we obtain that the piecewise constant plasma-vacuum interfaces are always weakly linearly stable, independently of the size of tangential velocity, magnetic and electric fields on both sides of the characteristic discontinuity. We also prove that solutions to the linearized problem obey an energy estimate with a loss of regularity with respect to the source terms, both in the interior domain and on the boundary, due to the failure of the uniform Kreiss-Lopatinskiĭ condition, as the Kreiss-Lopatinskiĭ determinant associated with this linearized boundary value problem has roots on the boundary of the frequency space. In the proof of the a priori estimates, a crucial part is played by the construction of symmetrizers for a reduced differential system, which has poles at which the Kreiss-Lopatinskiĭ condition may fail simultaneously.
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of the deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam within a working distance range for ultra-high-accuracy metrology. If an f-theta system is not well designed, aberrations of the f-theta system will introduce many systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
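The paper's least-squares optimization acts on the lens configuration parameters inside ZEMAX; as a much simpler stand-in, the sketch below fits the ideal linear angle-to-position law y = f·θ to hypothetical calibration data and reports the residual conversion error that aberrations would introduce. The focal length and cubic aberration term are assumptions.

```python
import numpy as np

# Hypothetical f-theta calibration data: beam deflection angle theta (rad)
# versus measured spot position y (mm) on the detector.
theta = np.linspace(-0.05, 0.05, 21)
f_nominal = 500.0                                  # mm, assumed focal length
y = f_nominal * theta + 2e3 * theta**3             # small cubic aberration term

# Least-squares fit of the ideal linear angle-to-position law y = f * theta;
# the residuals quantify the conversion error introduced by aberrations.
f_fit, = np.linalg.lstsq(theta[:, None], y, rcond=None)[0]
residuals = y - f_fit * theta
print(f"fitted f = {f_fit:.2f} mm, max conversion error = {np.abs(residuals).max()*1e3:.2f} um")
```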
Rosén, T; Einarsson, J; Nordmark, A; Aidun, C K; Lundell, F; Mehlig, B
2015-12-01
We numerically analyze the rotation of a neutrally buoyant spheroid in a shear flow at small shear Reynolds number. Using direct numerical stability analysis of the coupled nonlinear particle-flow problem, we compute the linear stability of the log-rolling orbit at small shear Reynolds number Re_a. As Re_a → 0 and as the box size of the system tends to infinity, we find good agreement between the numerical results and earlier analytical predictions valid to linear order in Re_a for the case of an unbounded shear. The numerical stability analysis indicates that there are substantial finite-size corrections to the analytical results obtained for the unbounded system. We also compare the analytical results to results of lattice Boltzmann simulations to analyze the stability of the tumbling orbit at shear Reynolds numbers of order unity. Theory for an unbounded system at infinitesimal shear Reynolds number predicts a bifurcation of the tumbling orbit at aspect ratio λ_c ≈ 0.137, below which tumbling is stable (as well as log rolling). The simulation results show a bifurcation line in the λ–Re_a plane that reaches λ ≈ 0.1275 at the smallest shear Reynolds number (Re_a = 1) at which we could simulate with the lattice Boltzmann code, in qualitative agreement with the analytical results.
On the development of HSCT tail sizing criteria using linear matrix inequalities
NASA Technical Reports Server (NTRS)
Kaminer, Isaac
1995-01-01
This report presents the results of a study to extend existing high speed civil transport (HSCT) tail sizing criteria using linear matrix inequalities (LMI). In particular, the effects of feedback specifications, such as MIL STD 1797 Level 1 and 2 flying qualities requirements, and actuator amplitude and rate constraints on the maximum allowable cg travel for a given set of tail sizes are considered. Results comparing previously developed industry criteria and the LMI methodology on an HSCT concept airplane are presented.
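The report's tail-sizing LMIs are not given in the abstract; purely as an illustration of the kind of feasibility problem an LMI formulation reduces to, the sketch below searches for a quadratic Lyapunov certificate P for a hypothetical closed-loop matrix A using cvxpy. All matrices and tolerances are assumptions, not the HSCT criteria.

```python
import cvxpy as cp
import numpy as np

# Generic LMI feasibility problem of the kind used in LMI-based analyses:
# find P = P^T > 0 such that A^T P + P A < 0 (quadratic stability of dx/dt = A x).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # hypothetical stable closed-loop matrix

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)
print(P.value)
```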
Transient hydrodynamic finite-size effects in simulations under periodic boundary conditions
NASA Astrophysics Data System (ADS)
Asta, Adelchi J.; Levesque, Maximilien; Vuilleumier, Rodolphe; Rotenberg, Benjamin
2017-06-01
We use lattice-Boltzmann and analytical calculations to investigate transient hydrodynamic finite-size effects induced by the use of periodic boundary conditions. These effects are inevitable in simulations at the molecular, mesoscopic, or continuum levels of description. We analyze the transient response to a local perturbation in the fluid and obtain the local velocity correlation function via linear response theory. This approach is validated by comparing the finite-size effects on the steady-state velocity with the known results for the diffusion coefficient. We next investigate the full time dependence of the local velocity autocorrelation function. We find at long times a crossover between the expected t^(-3/2) hydrodynamic tail and an oscillatory exponential decay, and study the scaling with the system size of the crossover time, exponential rate and amplitude, and oscillation frequency. We interpret these results from the analytic solution of the compressible Navier-Stokes equation for the slowest modes, which are set by the system size. The present work not only provides a comprehensive analysis of hydrodynamic finite-size effects in bulk fluids, which arise regardless of the level of description and simulation algorithm, but also establishes the lattice-Boltzmann method as a suitable tool to investigate such effects in general.
NASA Astrophysics Data System (ADS)
Vinod, Sithara; Philip, John
2017-12-01
Magnetic nanofluids or ferrofluids exhibit extraordinary field-dependent tunable thermal conductivity (k), which makes them potential candidates for microelectronic cooling applications. However, the associated viscosity enhancement under an external stimulus is undesirable for practical applications. Further, the exact mechanism of heat transport and the role of field-induced nanostructures in thermal transport are not clearly understood. In this paper, through systematic thermal, rheological and microscopic studies in 'model ferrofluids', we demonstrate for the first time the conditions to achieve a very high thermal conductivity to viscosity ratio. Highly stable ferrofluids with similar crystallite size, base fluid, capping agent and magnetic properties, but with slightly different size distributions, are synthesized and characterized by X-ray diffraction, small angle X-ray scattering, transmission electron microscopy, dynamic light scattering, vibrating sample magnetometry, Fourier transform infrared spectroscopy and thermo-gravimetry. The average hydrodynamic diameters of the particles were 11.7 and 10.1 nm and the polydispersity indices (σ) were 0.226 and 0.151, respectively. We observe that the system with smaller polydispersity (σ = 0.151) gives larger k enhancement (130% for 150 G) as compared to the one with σ = 0.226 (73% for 80 G). Further, our results show that dispersions without larger aggregates and with high density interfacial capping (with surfactant) can provide very high enhancement in thermal conductivity, with insignificant viscosity enhancement, due to minimal interfacial losses. We also provide experimental evidence for the effective heat conduction (parallel mode) through a large number of space-filling linear aggregates with high aspect ratio. Microscopic studies reveal that the larger particles act as nucleating sites and facilitate lateral aggregation (zippering) of linear chains that considerably reduces the number density of space-filling linear aggregates. Our findings are very useful for optimizing the heat transfer properties of magnetic fluids (and also of composite systems consisting of CNT, graphene etc.) for the development of next generation microelectronic cooling technologies, thermal energy harvesting and magnetic fluid based therapeutics.
ERIC Educational Resources Information Center
Tisdell, Christopher C.
2017-01-01
For over 50 years, the learning and teaching of "a priori" bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to "a priori" bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving…
Real-time dissolution measurement of sized and unsized calcium phosphate glass fibers.
Rinehart, J D; Taylor, T D; Tian, Y; Latour, R A
1999-01-01
The objective of this study was to develop an efficient "real time" measurement system able to directly measure, with microgram resolution, the dissolution rate of absorbable glass fibers, and utilize the system to evaluate the effectiveness of silane-based sizing as a means to delay the fiber dissolution process. The absorbable glass fiber used was calcium phosphate (CaP), with tetramethoxysilane selected as the sizing agent. E-glass fiber was used as a relatively nondegrading control. Both the unsized-CaP and sized-CaP degraded linearly at both the 37 degrees C and 60 degrees C test temperature levels used. No significant decrease in weight-loss rate was recorded when the CaP fiber tows were pretreated, using conventional application methods, with the tetramethoxysilane sizing for either temperature condition. The unsized-CaP and sized-CaP weight loss rates were each significantly higher at 60 than at 37 degrees C (both p < 0.02), as expected from dissolution kinetics. In terms of actual weight loss rate measured using our system for phosphate glass fiber, the unsized-CaP fiber we studied dissolved at a rate of 10.90 × 10⁻⁹ and 41.20 × 10⁻⁹ g/(min·cm²) at 37 degrees C and 60 degrees C, respectively. Considering performance validation of the developed system, the slope of the weight loss vs. time plot for the tested E-glass fiber was not significantly different compared to a slope equal to zero for both test temperatures. Copyright 1999 John Wiley & Sons, Inc.
Standard and inverse bond percolation of straight rigid rods on square lattices
NASA Astrophysics Data System (ADS)
Ramirez, L. S.; Centres, P. M.; Ramirez-Pastor, A. J.
2018-04-01
Numerical simulations and finite-size scaling analysis have been carried out to study standard and inverse bond percolation of straight rigid rods on square lattices. In the case of standard percolation, the lattice is initially empty. Then, linear bond k-mers (sets of k linear nearest-neighbor bonds) are randomly and sequentially deposited on the lattice. The jamming coverage p_{j,k} and percolation threshold p_{c,k} are determined for a wide range of k (1 ≤ k ≤ 120). Both p_{j,k} and p_{c,k} decrease with increasing k, with limit values p_{j,k→∞} = 0.7476(1) and p_{c,k→∞} = 0.0033(9) for large k-mer sizes. p_{j,k} is always greater than p_{c,k}, and consequently, the percolation phase transition occurs for all values of k. In the case of inverse percolation, the process starts with an initial configuration where all lattice bonds are occupied and, given that periodic boundary conditions are used, the opposite sides of the lattice are connected by nearest-neighbor occupied bonds. Then, the system is diluted by randomly removing linear bond k-mers from the lattice. The central idea here is based on finding the maximum concentration of occupied bonds (minimum concentration of empty bonds) for which connectivity disappears. This particular value of concentration is called the inverse percolation threshold p^i_{c,k}, and determines a geometrical phase transition in the system. On the other hand, the inverse jamming coverage p^i_{j,k} is the coverage of the limit state, in which no more objects can be removed from the lattice due to the absence of linear clusters of nearest-neighbor bonds of appropriate size. It is easy to see that p^i_{j,k} = 1 - p_{j,k}. The obtained results for p^i_{c,k} show that the inverse percolation threshold is a decreasing function of k in the range 1 ≤ k ≤ 18. For k > 18, all jammed configurations are percolating states, and consequently, there is no nonpercolating phase. In other words, the lattice remains connected even when the highest allowed concentration of removed bonds p^i_{j,k} is reached. In terms of network attacks, this striking behavior indicates that random attacks on single nodes (k = 1) are much more effective than correlated attacks on groups of close nodes (large k's). Finally, the accurate determination of critical exponents reveals that standard and inverse bond percolation models on square lattices belong to the same universality class as random percolation, regardless of the size k considered.
Marcar, Valentine L; Baselgia, Silvana; Lüthi-Eisenegger, Barbara; Jäncke, Lutz
2018-03-01
Retinal input processing in the human visual system involves a phasic and a tonic neural response. We investigated the role of the magno- and parvocellular systems by comparing the influence of the active neural population size and its discharge activity on the amplitude and latency of four VEP components. We recorded the scalp electric potential of 20 human volunteers viewing a series of dartboard images presented as a pattern-reversing and a pattern on-/offset stimulus. These patterns were designed to vary in a systematic manner both the size of the neural population coding the temporal and spatial luminance contrast properties and the discharge activity of the population involved. When the VEP amplitude reflected the size of the neural population coding the temporal luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the parvocellular system. When the VEP amplitude reflected the size of the neural population responding to the spatial luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the magnocellular system. The latencies of the VEP components examined exhibited the same behavior across our stimulus series. This investigation demonstrates the complex interplay of the magno- and parvocellular systems on the neural response as captured by the VEP. It also demonstrates a linear relationship between stimulus property, neural response, and the VEP, and reveals the importance of feedback projections in modulating the ongoing neural response. In doing so, it corroborates the conclusions of our previous study.
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H²-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.
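The LoRaSp hierarchical solver itself is not reproducible from the abstract; the sketch below only illustrates the baseline setting it addresses, i.e. an iterative Krylov solve of a sparse Poisson system accelerated by a preconditioner (here an incomplete LU), using SciPy. Grid size and drop tolerance are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic illustration (not the LoRaSp H^2 solver): solve a 2-D Poisson system
# with conjugate gradients, preconditioned by an incomplete LU factorization.
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()        # 5-point Laplacian, n*n unknowns
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-3)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)  # preconditioner M ~ A^-1

x, info = spla.cg(A, b, M=M)
print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ x - b))
```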
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, a Newton's method is applied to solve these systems. Each iteration of the Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
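The key idea reviewed here, forming the Jacobian action without forming the Jacobian, can be written in a few lines: J(u)v is approximated by a one-sided finite difference of the residual. The sketch below is generic, with a hypothetical residual F and a common (but not necessarily the paper's) step-size heuristic.

```python
import numpy as np

def jacobian_vector_fd(F, u, v, eps=None):
    """Approximate the Jacobian-vector product J(u) v for a nonlinear residual F
    with a first-order finite difference, as used in Jacobian-free Newton-Krylov
    methods: J v ~ (F(u + eps v) - F(u)) / eps."""
    if eps is None:
        # common heuristic: scale the step with the norms of u and v
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
    return (F(u + eps * v) - F(u)) / eps

# Hypothetical residual with a known Jacobian for comparison.
F = lambda u: np.array([u[0]**2 + u[1], np.sin(u[0]) + 3.0 * u[1]])
u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
J_exact = np.array([[2 * u[0], 1.0], [np.cos(u[0]), 3.0]])
print(jacobian_vector_fd(F, u, v), J_exact @ v)   # the two results should nearly match
```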
Ofei, K T; Holst, M; Rasmussen, H H; Mikkelsen, B E
2015-08-01
The trolley meal system allows hospital patients to select food items and portion sizes directly from the food trolley. The nutritional status of the patient may be compromised if portions selected do not meet recommended intakes for energy, protein and micronutrients. The aim of this study was to investigate: (1) the portion size served, consumed and plate waste generated, (2) the extent to which the size of meal portions served contributes to daily recommended intakes for energy and protein, (3) the predictive effect of the served portion sizes on plate waste in patients screened for nutritional risk by NRS-2002, and (4) the applicability of the dietary intake monitoring system (DIMS) as a technique to monitor plate waste. A prospective observational cohort study was conducted in two hospital wards over five weekdays. The DIMS was used to collect paired before- and after-meal consumption photos and measure the weight of plate content. The proportion of energy and protein consumed by both groups at each meal session could contribute up to 15% of the total daily recommended intake. A linear mixed model identified a positive relationship between meal portion size and plate waste (P = 0.002) and increased food waste in patients at nutritional risk during supper (P = 0.001). Meal portion size was associated with the level of plate waste produced. Being at nutritional risk further increased the extent of waste, regardless of the portion size served at supper. The use of DIMS as an innovative technique might be a promising way to monitor plate waste for optimizing meal portion size servings and minimizing food waste. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gregorio, Fernando; Cousseau, Juan; Werner, Stefan; Riihonen, Taneli; Wichman, Risto
2011-12-01
The design of predistortion techniques for broadband multiple-input multiple-output OFDM (MIMO-OFDM) systems raises several implementation challenges. First, the large bandwidth of the OFDM signal requires the introduction of memory effects in the PD model. In addition, it is usual to consider an imbalanced in-phase and quadrature (IQ) modulator to translate the predistorted baseband signal to RF. Furthermore, the coupling effects, which occur when the MIMO paths are implemented in the same reduced-size chipset, cannot be avoided in MIMO transceiver structures. This study proposes a MIMO-PD system that linearizes the power amplifier response and compensates nonlinear crosstalk and IQ imbalance effects for each branch of the multiantenna system. Efficient recursive algorithms are presented to estimate the complete MIMO-PD coefficients. The algorithms avoid the high computational complexity of previous solutions based on least squares estimation. The performance of the proposed MIMO-PD structure is validated by simulations using a two-transmitter-antenna MIMO system. Error vector magnitude and adjacent channel power ratio are evaluated, showing significant improvement compared with conventional MIMO-PD systems.
Speed scanning system based on solid-state microchip laser for architectural planning
NASA Astrophysics Data System (ADS)
Redka, Dmitriy; Grishkanich, Alexsandr S.; Kolmakov, Egor; Tsvetkov, Konstantin
2017-10-01
According to the current great interest concerning Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown a substantial improvement. Ease-of-use, logistic and economic issues, as well as metrological performance, are assuming a more and more important role among system requirements. The project is planned to conduct experimental studies aimed at identifying the impact of the application of the basic laws of microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout, whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Differently from existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities, in order to share the overall computational load.
Coordinate measuring system based on microchip lasers for reverse prototyping
NASA Astrophysics Data System (ADS)
Iakovlev, Alexey; Grishkanich, Alexsandr S.; Redka, Dmitriy; Tsvetkov, Konstantin
2017-02-01
According to the current great interest concerning Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown a substantial improvement. Ease-of-use, logistic and economic issues, as well as metrological performance, are assuming a more and more important role among system requirements. The project is planned to conduct experimental studies aimed at identifying the impact of the application of the basic laws of chip and microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout, whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Differently from existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities, in order to share the overall computational load.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
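A minimal sketch of ordinary kriging with a tapered covariance follows: the kriging weights come from a linear system augmented with the unbiasedness constraint, and multiplying the covariance by a compactly supported taper zeroes out long-range entries. The exponential covariance, spherical-type taper, and all parameter values are assumptions for illustration, not the paper's choices.

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_len=10.0):
    # simple exponential covariance model (hypothetical choice)
    return sill * np.exp(-h / corr_len)

def spherical_taper(h, taper_range=15.0):
    # compactly supported taper: zero beyond taper_range, which sparsifies the system
    t = np.clip(h / taper_range, 0.0, 1.0)
    return (1.0 - 1.5 * t + 0.5 * t**3) * (h < taper_range)

def ordinary_kriging(pts, vals, query, cov=exp_cov, taper=spherical_taper):
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = cov(h) * taper(h)                       # tapered covariance matrix
    n = len(pts)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = C
    K[:n, n] = K[n, :n] = 1.0                   # unbiasedness (Lagrange) constraint
    h0 = np.linalg.norm(pts - query, axis=-1)
    rhs = np.append(cov(h0) * taper(h0), 1.0)
    w = np.linalg.solve(K, rhs)[:n]             # kriging weights
    return w @ vals

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
vals = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.normal(size=200)
print(ordinary_kriging(pts, vals, np.array([50.0, 50.0])))
```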
Calliste, Jabari; Wu, Gongting; Laganis, Philip E; Spronk, Derrek; Jafari, Houman; Olson, Kyle; Gao, Bo; Lee, Yueh Z; Zhou, Otto; Lu, Jianping
2017-09-01
The aim of this study was to characterize a new generation stationary digital breast tomosynthesis system with higher tube flux and increased angular span over a first generation system. The linear CNT x-ray source was designed, built, and evaluated to determine its performance parameters. The second generation system was then constructed using the CNT x-ray source and a Hologic gantry. Upon construction, test objects and phantoms were used to characterize system resolution as measured by the modulation transfer function (MTF) and the artifact spread function (ASF). The results indicated that the linear CNT x-ray source was capable of stable operation at a tube potential of 49 kVp, and measured focal spot sizes showed source-to-source consistency with a nominal focal spot size of 1.1 mm. After construction, the second generation (Gen 2) system exhibited entrance surface air kerma rates two times greater than those of the previous s-DBT system. System in-plane resolution as measured by the MTF is 7.7 cycles/mm, compared to 6.7 cycles/mm for the Gen 1 system. As expected, an increase in the z-axis depth resolution was observed, with a decrease in the ASF from 4.30 mm to 2.35 mm moving from the Gen 1 system to the Gen 2 system as a result of the increased angular span. The results indicate that the Gen 2 stationary digital breast tomosynthesis system, which has a larger angular span, increased entrance surface air kerma, and faster image acquisition time than the Gen 1 s-DBT system, produces higher-resolution images. With the detector operating at full resolution, the Gen 2 s-DBT system can achieve an in-plane resolution of 7.7 cycles per mm, which is better than current commercial DBT systems and may potentially result in better patient diagnosis. © 2017 American Association of Physicists in Medicine.
A survey of packages for large linear systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Milne, Brent
2000-02-11
This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and the evaluation process may serve as an example of how to evaluate these packages. The information contained here includes feature comparisons, usability evaluations and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving. Thus their user interface may change. In general, those packages written in Fortran 77 are more cumbersome to use because the user may need to directly deal with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms which make it easier to implement a clean and intuitive user interface. In addition to reviewing these portable parallel iterative solver packages, we also provide a more cursory assessment of a range of related packages, from specialized parallel preconditioners to direct methods for sparse linear systems.
Non-Linear Dynamics of Saturn's Rings
NASA Astrophysics Data System (ADS)
Esposito, L. W.
2016-12-01
Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. Stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response that pushes the system across thresholds leading to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding; this damps the relative velocity. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo morphology and spectroscopy: cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect: this gives the halo morphology; this requires energetic collisions (v ≈ 10 m/sec, with throw distances about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing Eqn: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping explains both small and large particles at resonances. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating it as an asymmetric random walk with reflecting boundaries determines the power law index, using results of numerical simulations in the tidal environment. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn's rings a chaotic non-linear driven system?
Non-Linear Dynamics of Saturn’s Rings
NASA Astrophysics Data System (ADS)
Esposito, Larry W.
2015-11-01
Non-linear processes can explain why Saturn’s rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. We find that stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response, pushing the system across thresholds that lead to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding; this damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like ‘straw’ that can explain the halo structure and spectroscopy: this requires energetic collisions (v ≈ 10 m/sec, with throw distances about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing Eqn: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn’s rings creates both high velocity dispersion and large aggregates at these distances, explaining both small and large particles observed there. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn’s rings a chaotic non-linear driven system?
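The abstract states that the predator-prey equations collapse, via z = M^(2/3), to a single second-order equation with harmonic resonance forcing. The sketch below integrates a generic damped Duffing oscillator with harmonic forcing to illustrate that class of equation; every coefficient is a hypothetical placeholder, not the authors' ring model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic damped Duffing oscillator with harmonic forcing:
# z'' + delta z' + alpha z + beta z^3 = gamma cos(omega t).
def duffing(t, y, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    z, zdot = y
    return [zdot, -delta * zdot - alpha * z - beta * z**3 + gamma * np.cos(omega * t)]

sol = solve_ivp(duffing, (0.0, 200.0), [0.5, 0.0], max_step=0.05)
z = sol.y[0]
# late-time oscillation range; plotting z(t) would show the forced limit cycle
print(f"late-time range of z: [{z[len(z)//2:].min():.2f}, {z[len(z)//2:].max():.2f}]")
```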
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paudel, M R; Beachey, D J; Sarfehnia, A
Purpose: A new commercial GPU-based Monte Carlo dose calculation algorithm (GPUMCD) developed by the vendor Elekta™ to be used in the Monaco Treatment Planning System (TPS) is capable of modeling dose for both a standard linear accelerator and for an Elekta MRI-Linear accelerator (modeling magnetic field effects). We are evaluating this algorithm in two parts: commissioning the algorithm for an Elekta Agility linear accelerator (the focus of this work) and evaluating the algorithm’s ability to model magnetic field effects for an MRI-linear accelerator. Methods: A beam model was developed in the Monaco TPS (v.5.09.06) using the commissioned beam data for a 6MV Agility linac. A heterogeneous phantom representing tumor-in-lung, lung, bone-in-tissue, and prosthetic was designed/built. Dose calculations in Monaco were done using the current clinical algorithm (XVMC) and the new GPUMCD algorithm (1 mm³ voxel size, 0.5% statistical uncertainty) and in the Pinnacle TPS using the collapsed cone convolution (CCC) algorithm. These were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm², 5×5 cm², and 10×10 cm² field sizes. Results: The calculated central axis percentage depth doses (PDDs) in homogeneous solid water were within 2% compared to measurements for XVMC and GPUMCD. For tumor-in-lung and lung phantoms, doses calculated by all of the algorithms were within the experimental uncertainty of the measurements (±2% in the homogeneous phantom and ±3% for the tumor-in-lung or lung phantoms), except for the 2×2 cm² field size where only the CCC algorithm differs from film by 5% in the lung region. The analysis for the bone-in-tissue and prosthetic phantoms is ongoing. Conclusion: The new GPUMCD algorithm calculated dose comparable to both the XVMC algorithm and to measurements in both a homogeneous solid water medium and the heterogeneous phantom representing lung or tumor-in-lung for 2×2 cm² to 10×10 cm² field sizes. Funding support was obtained from Elekta.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare the relational and non-relational (NoSQL) database system approaches for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which have been performed on them. Similar appropriate results available in the literature have also been considered. Both relational and non-relational (NoSQL) database systems show query execution times that grow almost linearly with database size. However, they show very different linear slopes, the former being much steeper than the two latter. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-oriented tasks better suited to NoSQL database systems. However, the appropriate database solution much depends on each particular situation and specific problem.
What Is a Complex Innovation System?
Katz, J. Sylvan
2016-01-01
Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have properties of a complex adaptive system. In particular scale-invariant emergent properties indicative of their complex nature that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of sizes and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time and between field impact and sizes at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex with scale-invariant properties too. PMID:27258040
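A scale-invariant property is identified by fitting the exponent of f(x) = kx^α. As a rough illustration (binned log-log least squares; maximum-likelihood estimators are generally preferred for heavy-tailed data), the sketch below recovers the exponent of a synthetic power-law sample.

```python
import numpy as np

# Illustrative scale-invariance check: estimate alpha in f(x) = k * x**alpha
# by a least-squares fit in log-log space on a synthetic sample.
rng = np.random.default_rng(42)
alpha_true = 2.5
x = rng.pareto(alpha_true - 1, size=5000) + 1.0      # Pareto tail with pdf ~ x**(-alpha_true)

counts, edges = np.histogram(x, bins=np.logspace(0, np.log10(x.max()), 30), density=True)
centers = np.sqrt(edges[1:] * edges[:-1])            # geometric bin centers
mask = counts > 0
slope, intercept = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print(f"estimated exponent alpha ~ {-slope:.2f} (true {alpha_true})")
```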
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
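The paper generates a specialized dynamic program per formula; the sketch below instead shows the underlying idea directly as an interpreter: satisfaction of each subformula is computed by recursion over trace positions with memoization, using common finite-trace semantics (strong "next"). The formula encoding and the example trace are illustrative assumptions.

```python
# Formulas are nested tuples: ('ap', name), ('not', f), ('and', f, g),
# ('next', f), ('until', f, g); 'eventually'/'always' are derived below.

def evaluate(formula, trace):
    n = len(trace)
    memo = {}

    def sat(i, f):
        key = (i, f)
        if key in memo:
            return memo[key]
        op = f[0]
        if op == 'ap':
            val = f[1] in trace[i]
        elif op == 'not':
            val = not sat(i, f[1])
        elif op == 'and':
            val = sat(i, f[1]) and sat(i, f[2])
        elif op == 'next':                       # strong "next" on finite traces
            val = i + 1 < n and sat(i + 1, f[1])
        elif op == 'until':
            val = sat(i, f[2]) or (sat(i, f[1]) and i + 1 < n and sat(i + 1, f))
        else:
            raise ValueError(op)
        memo[key] = val
        return val

    return sat(0, formula)

def eventually(f): return ('until', ('not', ('ap', '__false__')), f)   # "true U f"
def always(f):     return ('not', eventually(('not', f)))

# Trace: each event is the set of atomic propositions that hold at that step.
trace = [{'start'}, {'busy'}, {'busy'}, {'done'}]
print(evaluate(always(('not', ('ap', 'error'))), trace))      # True: no error occurs
print(evaluate(eventually(('ap', 'done')), trace))            # True: 'done' eventually holds
```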
NASA Astrophysics Data System (ADS)
Haskovic, Emir Y.; Walsh, Sterling; Cloud, Glenn; Winkelman, Rick; Jia, Yingqing; Vishnyakov, Sergey; Jin, Feng
2013-05-01
Low-cost, low-power, low-bandwidth unattended ground sensors (UGS) can be used to fill the growing need for surveillance in remote environments. In particular, linear and 2D thermal sensor systems can run for up to months at a time and their deployment can be scaled to suit the size of the mission. Thermal silhouette profilers like Brimrose's SPOT system reduce power and bandwidth requirements by performing elementary classification and only transmitting binary data using optimized compression methods. These systems satisfy the demands of an increasing number of surveillance operations where reduced bandwidth and power consumption are mission critical.
A compact roller-gear pitch-yaw joint module: Design and control issues
NASA Technical Reports Server (NTRS)
Dohring, Mark E.; Anderson, William J.; Newman, Wyatt S.; Rohn, Douglas A.
1993-01-01
Robotic systems have been proposed as a means of accomplishing assembly and maintenance tasks in space. The desirable characteristics of these systems include compact size, low mass, high load capacity, and programmable compliance to improve assembly performance. In addition, the mechanical system must transmit power in such a way as to allow high performance control of the system. Efficiency, linearity, low backlash, low torque ripple, and low friction are all desirable characteristics. This work presents a pitch-yaw joint module designed and built to address these issues. Its effectiveness as a two degree-of-freedom manipulator using natural admittance control, a method of force control, is demonstrated.
Micro-flock patterns and macro-clusters in chiral active Brownian disks
NASA Astrophysics Data System (ADS)
Levis, Demian; Liebchen, Benno
2018-02-01
Chiral active particles (or self-propelled circle swimmers) feature a rich collective behavior, comprising rotating macro-clusters and micro-flock patterns which consist of phase-synchronized rotating clusters with a characteristic self-limited size. These patterns emerge from the competition of alignment interactions and rotations, suggesting that they might occur generically in many chiral active matter systems. However, although excluded volume interactions occur naturally among typical circle swimmers, it is not yet clear if macro-clusters and micro-flock patterns survive their presence. The present work shows that both types of pattern do survive but feature strongly enhanced fluctuations regarding the size and shape of the individual clusters. Despite these fluctuations, we find that the average micro-flock size still follows the same characteristic scaling law as in the absence of excluded volume interactions, i.e. micro-flock sizes scale linearly with the single-swimmer radius.
Computations of Drop Collision and Coalescence
NASA Technical Reports Server (NTRS)
Tryggvason, Gretar; Juric, Damir; Nas, Selman; Mortazavi, Saeed
1996-01-01
Computations of drop collisions, coalescence, and other problems involving drops are presented. The computations are made possible by a finite difference/front tracking technique that allows direct solutions of the Navier-Stokes equations for a multi-fluid system with complex, unsteady internal boundaries. This method has been used to examine the various collision modes for binary collisions of drops of equal size, mixing of two drops of unequal size, behavior of a suspension of drops in linear and parabolic shear flows, and the thermal migration of several drops. The key results from these simulations are reviewed. Extensions of the method to phase change problems and preliminary results for boiling are also shown.
Alabastri, Alessandro; Tuccio, Salvatore; Giugni, Andrea; Toma, Andrea; Liberale, Carlo; Das, Gobind; De Angelis, Francesco; Di Fabrizio, Enzo; Zaccaria, Remo Proietti
2013-01-01
In this paper, we review the principal theoretical models through which the dielectric function of metals can be described. Starting from the Drude assumptions for intraband transitions, we show how this model can be improved by including interband absorption and temperature effect in the damping coefficients. Electronic scattering processes are described and included in the dielectric function, showing their role in determining plasmon lifetime at resonance. Relationships among permittivity, electric conductivity and refractive index are examined. Finally, a temperature dependent permittivity model is presented and is employed to predict temperature and non-linear field intensity dependence on commonly used plasmonic geometries, such as nanospheres. PMID:28788366
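For concreteness, the basic Drude form discussed here is eps(ω) = ε∞ - ωp²/(ω² + iγω). The sketch below evaluates it at a few optical wavelengths using rough, commonly quoted gold-like parameters; these numbers are assumptions rather than values from the paper, and interband and temperature corrections are omitted.

```python
import numpy as np

# Minimal Drude-model permittivity: eps(w) = eps_inf - wp**2 / (w**2 + 1j*gamma*w)
eps_inf = 9.0            # background permittivity from bound electrons (assumed)
wp = 1.37e16             # plasma frequency, rad/s (rough gold-like value)
gamma = 1.0e14           # damping (collision) rate, rad/s (rough value)

def drude_eps(omega):
    return eps_inf - wp**2 / (omega**2 + 1j * gamma * omega)

wavelengths_nm = np.array([500.0, 633.0, 800.0])
omega = 2 * np.pi * 2.998e8 / (wavelengths_nm * 1e-9)
for lam, eps in zip(wavelengths_nm, drude_eps(omega)):
    n_complex = np.sqrt(eps)          # complex refractive index from permittivity
    print(f"{lam:6.0f} nm  eps = {eps.real:.2f}{eps.imag:+.2f}j  "
          f"n = {n_complex.real:.2f}{n_complex.imag:+.2f}j")
```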
Numerical distance effect size is a poor metric of approximate number system acuity.
Chesney, Dana
2018-04-12
Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes, known as approximate number system (ANS) acuity, are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric, the size of the numerical distance effect (NDE size), is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.
The Trail Less Traveled: Individual Decision-Making and Its Effect on Group Behavior
Lanan, Michele C.; Dornhaus, Anna; Jones, Emily I.; Waser, Andrew; Bronstein, Judith L.
2012-01-01
Social insect colonies are complex systems in which the interactions of many individuals lead to colony-level collective behaviors such as foraging. However, the emergent properties of collective behaviors may not necessarily be adaptive. Here, we examine symmetry breaking, an emergent pattern exhibited by some social insects that can lead colonies to focus their foraging effort on only one of several available food patches. Symmetry breaking has been reported to occur in several ant species. However, it is not clear whether it arises as an unavoidable epiphenomenon of pheromone recruitment, or whether it is an adaptive behavior that can be controlled through modification of the individual behavior of workers. In this paper, we used a simulation model to test how symmetry breaking is affected by the degree of non-linearity of recruitment, the specific mechanism used by individuals to choose between patches, patch size, and forager number. The model shows that foraging intensity on different trails becomes increasingly asymmetric as the recruitment response of individuals varies from linear to highly non-linear, supporting the predictions of previous work. Surprisingly, we also found that the direction of the relationship between forager number (i.e., colony size) and asymmetry varied depending on the specific details of the decision rule used by individuals. Limiting the size of the resource produced a damping effect on asymmetry, but only at high forager numbers. Variation in the rule used by individual ants to choose trails is a likely mechanism that could cause variation among the foraging behaviors of species, and is a behavior upon which selection could act. PMID:23112880
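The degree of non-linearity of recruitment can be made concrete with a Deneubourg-type choice rule, in which the probability of taking a trail grows as (k + pheromone)^n. The sketch below is a toy two-trail simulation (not the paper's model; all parameters are assumptions) showing that symmetry breaking strengthens as the exponent n increases.

```python
import numpy as np

def simulate(n_exponent, foragers=1000, k=5.0, deposit=1.0, seed=0):
    """Sequential foragers choose trail A or B with probability proportional to
    (k + pheromone)**n and reinforce the chosen trail; returns final trail shares."""
    rng = np.random.default_rng(seed)
    pher = np.array([0.0, 0.0])                  # pheromone on trail A and trail B
    for _ in range(foragers):
        w = (k + pher) ** n_exponent
        trail = rng.choice(2, p=w / w.sum())
        pher[trail] += deposit                   # each forager reinforces its trail
    return pher / pher.sum()

for n in (1, 2, 4):
    print(f"n = {n}: trail shares = {np.round(simulate(n), 2)}")
    # n = 1 gives roughly equal shares; larger n concentrates foraging on one trail
```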
NASA Astrophysics Data System (ADS)
Tombak, Ali
The recent advancement in wireless communications demands an ever increasing improvement in the system performance and functionality with a reduced size and cost. This thesis demonstrates novel RF and microwave components based on ferroelectric and solid-state based tunable capacitor (varactor) technologies for the design of low-cost, small-size and multi-functional wireless communication systems. These include tunable lumped element VHF filters based on ferroelectric varactors, a beam-steering technique which, unlike conventional systems, does not require separate power divider and phase shifters, and a predistortion linearization technique that uses a varactor based tunable R-L-C resonator. Among various ferroelectric materials, Barium Strontium Titanate (BST) is actively being studied for the fabrication of high performance varactors at RF and microwave frequencies. BST based tunable capacitors are presented with typical tunabilities of 4.2:1 with the application of 5 to 10 V DC bias voltages and typical loss tangents in the range of 0.003--0.009 at VHF frequencies. Tunable lumped element lowpass and bandpass VHF filters based on BST varactors are also demonstrated with tunabilities of 40% and 57%, respectively. A new beam-steering technique is developed based on the extended resonance power dividing technique. Phased arrays based on this technique do not require separate power divider and phase shifters. Instead, the power division and phase shifting circuits are combined into a single circuit, which utilizes tunable capacitors. This results in a substantial reduction in the circuit complexity and cost. Phased arrays based on this technique can be employed in mobile multimedia services and automotive collision avoidance radars. A 2-GHz 4-antenna and a 10-GHz 8-antenna extended resonance phased arrays are demonstrated with scan ranges of 20 degrees and 18 degrees, respectively. A new predistortion linearization technique for the linearization of RF/microwave power amplifiers is also presented. This technique utilizes a varactor based tunable R-L-C resonator in shunt configuration. Due to the small number of circuit elements required, linearizers based on this technique offer low-cost and simple circuitry, hence can be utilized in handheld and cellular applications. A 1.8 GHz power amplifier with 9 dB gain is linearized using this technique. The linearizer improves the output 1-dB compression point of the power amplifier from 21 to 22.8 dBm. Adjacent channel power ratio (ACPR) is improved approximately 11 dB at an output RF power level of 17.5 dBm. The thesis is concluded by summarizing the main achievements and discussing the future work directions.
Using nonlinear quantile regression to estimate the self-thinning boundary curve
Quang V. Cao; Thomas J. Dean
2015-01-01
The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
Thermodynamics of quasideterministic digital computers
NASA Astrophysics Data System (ADS)
Chu, Dominique
2018-02-01
A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1, and additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls as a power of the system size, whereas the cost only increases linearly with the system size.
Electro-optic high voltage sensor
Davidson, James R.; Seifert, Gary D.
2003-09-16
A small sized electro-optic voltage sensor capable of accurate measurement of high voltages without contact with a conductor or voltage source is provided. When placed in the presence of an electric field, the sensor receives an input beam of electromagnetic radiation. A polarization beam displacer separates the input beam into two beams with orthogonal linear polarizations and causes one linearly polarized beam to impinge a crystal at a desired angle independent of temperature. The Pockels effect elliptically polarizes the beam as it travels through the crystal. A reflector redirects the beam back through the crystal and the beam displacer. On the return path, the polarization beam displacer separates the elliptically polarized beam into two output beams of orthogonal linear polarization. The system may include a detector for converting the output beams into electrical signals and a signal processor for determining the voltage based on an analysis of the output beams.
Hu, Tengjiang; Zhao, Yulong; Li, Xiuyuan; Zhao, You; Bai, Yingwei
2016-03-01
The design, fabrication, and testing of a novel electro-thermal linear motor for micro manipulators is presented in this paper. The V-shape electro-thermal actuator arrays, micro lever, micro spring, and slider are introduced. In moving operation, the linear motor can travel a displacement of nearly 1 mm in steps of 100 μm while keeping the applied voltage as low as 17 V. In holding operation, the motor can stay in one particular position without consuming energy, and no creep deformation is found. An actuation force of 12.7 mN indicates the high force-generation capability of the device. Lifetime experiments show that the device can endure over two million cycles of operation. A silicon-on-insulator wafer is used to fabricate the high-aspect-ratio structure, and the chip size is 8.5 mm × 8.5 mm × 0.5 mm.
Kim, Da Hye; Kim, Hyun You; Ryu, Ji Hoon; Lee, Hyuck Mo
2009-07-07
This report on the solid-to-liquid transition region of an Ag-Pd bimetallic nanocluster is based on a constant energy microcanonical ensemble molecular dynamics simulation combined with a collision method. By varying the size and composition of an Ag-Pd bimetallic cluster, we obtained a complete solid-solution type of binary phase diagram of the Ag-Pd system. Irrespective of the size and composition of the cluster, the melting temperature of Ag-Pd bimetallic clusters is lower than that of the bulk state and rises as the cluster size and the Pd composition increase. Additionally, the slope of the phase boundaries (even though not exactly linear) is lowered when the cluster size is reduced on account of the complex relations of the surface tension, the bulk melting temperature, and the heat of fusion. The melting of the cluster initially starts at the surface layer. The initiation and propagation of a five-fold icosahedron symmetry is related to the sequential melting of the cluster.
Exponential Sensitivity and its Cost in Quantum Physics
Gilyén, András; Kiss, Tamás; Jex, Igor
2016-01-01
State selective protocols, like entanglement purification, lead to an essentially non-linear quantum evolution, unusual in naturally occurring quantum processes. Sensitivity to initial states in quantum systems, stemming from such non-linear dynamics, is a promising perspective for applications. Here we demonstrate that chaotic behaviour is a rather generic feature in state selective protocols: exponential sensitivity can exist for all initial states in an experimentally realisable optical scheme. Moreover, any complex rational polynomial map, including the example of the Mandelbrot set, can be directly realised. In state selective protocols, one needs an ensemble of initial states, the size of which decreases with each iteration. We prove that exponential sensitivity to initial states in any quantum system has to be related to downsizing the initial ensemble also exponentially. Our results show that magnifying initial differences of quantum states (a Schrödinger microscope) is possible; however, there is a strict bound on the number of copies needed. PMID:26861076
Hepatic lymphatics: anatomy and related diseases.
Pupulim, Lawrence F; Vilgrain, Valérie; Ronot, Maxime; Becker, Christoph D; Breguet, Romain; Terraz, Sylvain
2015-08-01
The liver normally produces a large amount of lymph; it is estimated that between 25% and 50% of the lymph received by the thoracic duct comes from the liver. In normal conditions, hepatic lymphatics are not depicted on cross-sectional imaging. They are divided into lymphatics of the deep system (following the hepatic veins and the portal tract) and those of the superficial system (along the convex and inferior surfaces). A variety of diseases may affect the hepatic lymphatics, and in general they manifest as lymphedema, lymphatic masses, or cystic lesions. Abnormally distended lymphatics are seen especially in the periportal spaces, as linear hypoattenuations on CT or strong linear hyperintensities on heavily T2-weighted MR imaging. Lymphatic tumor spread, as in lymphoma and lymphangitic carcinomatosis, manifests as periportal masses and regional lymph node enlargement. Lymphatic disruption after trauma or surgery is depicted as perihepatic fluid collections of lymph (lymphoceles). Lymphatic malformations such as lymphangioma are seen on imaging as cystic spaces of variable size.
SU-F-E-06: Dosimetric Characterization of Small Photons Beams of a Novel Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almonte, A; Polanco, G; Sanchez, E
2016-06-15
Purpose: The aim of the present contribution was to measure the main dosimetric quantities of small fields produced by the UNIQUE linear accelerator and to evaluate their agreement with the corresponding dosimetric data of a conventional 21EX linear accelerator (Varian) in operation at the same center. The second step was to evaluate the comparative performance of the EDGE diode detector and the PinPoint micro-ionization chamber for dosimetry of small fields. Methods: UNIQUE is configured with an MLC (120 leaves with 0.5 cm leaf width) and a single low photon energy of 6 MV. Beam data were measured with a scanning EDGE diode detector (volume of 0.019 mm{sup 3}) and a PinPoint micro-ionization chamber (PTW); for larger fields (≥ 4×4 cm{sup 2}) a PTW Semiflex chamber (0.125 cm{sup 3}) was used. The scanning system was the 3D cylindrical tank manufactured by Sun Nuclear, Inc. PDDs and profiles were measured at 100 cm SSD and 1.5 cm depth; the relative output factors were measured at 10 cm depth. Results: PDD and profile data showed less than 1% variation between the two linear accelerators for field sizes between 2×2 cm{sup 2} and 5×5 cm{sup 2}. Output factor differences were less than 1% for field sizes between 3×3 cm{sup 2} and 10×10 cm{sup 2}, and less than 1.5% for fields of 1.5×1.5 cm{sup 2} and 2×2 cm{sup 2}, respectively. The dmax value measured from the PDD with the EDGE diode detector was 8.347 mm for the 0.5×0.5 cm{sup 2} field of UNIQUE. The performance of the EDGE diode detector was comparable for all measurements in small fields. Conclusion: The UNIQUE linear accelerator shows dosimetric characteristics similar to those of the conventional 21EX Varian linear accelerator for small, medium and large field sizes. The EDGE detector showed good performance in measuring dosimetric quantities in small fields typically used in IMRT and radiosurgery treatments.
Characterization of operating parameters of an in vivo micro CT system
NASA Astrophysics Data System (ADS)
Ghani, Muhammad U.; Ren, Liqiang; Yang, Kai; Chen, Wei R.; Wu, Xizeng; Liu, Hong
2016-03-01
The objective of this study was to characterize the operating parameters of an in-vivo micro CT system. In-plane spatial resolution, noise, geometric accuracy, CT number uniformity and linearity, and phase effects were evaluated using various phantoms. The system employs a flat panel detector with a 127 μm pixel pitch, and a micro focus x-ray tube with a focal spot size ranging from 5-30 μm. The system accommodates three magnification sets of 1.72, 2.54 and 5.10. The in-plane cutoff frequencies (10% MTF) ranged from 2.31 lp/mm (60 mm FOV, M=1.72, 2×2 binning) to 13 lp/mm (10 mm FOV, M=5.10, 1×1 binning). The results were qualitatively validated by a resolution bar pattern phantom and the smallest visible lines were in 30-40 μm range. Noise power spectrum (NPS) curves revealed that the noise peaks exponentially increased as the geometric magnification (M) increased. True in-plane pixel spacing and slice thickness were within 2% of the system's specifications. The CT numbers in cone beam modality are greatly affected by scattering and thus they do not remain the same in the three magnifications. A high linear relationship (R2 > 0.999) was found between the measured CT numbers and Hydroxyapatite (HA) loadings of the rods of a water filled mouse phantom. Projection images of a laser cut acrylic edge acquired at a small focal spot size of 5 μm with 1.5 fps revealed that noticeable phase effects occur at M=5.10 in the form of overshooting at the boundary of air and acrylic. In order to make the CT numbers consistent across all the scan settings, scatter correction methods may be a valuable improvement for this system.
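As a rough consistency check (our own back-of-the-envelope arithmetic, not part of the paper), the effective object-plane sampling and the corresponding Nyquist limit can be computed from the 127 μm detector pitch and the geometric magnification; the reported 10% MTF cutoffs fall below these Nyquist limits, as expected. The binning assumed for M = 2.54 is an assumption.

```python
# Effective pixel size at the object plane and Nyquist frequency for each
# magnification/binning combination (values taken or assumed from the abstract).
pitch_um = 127.0
for M, binning in [(1.72, 2), (2.54, 1), (5.10, 1)]:
    eff_um = pitch_um * binning / M          # object-plane sampling interval
    nyquist_lp_mm = 1.0 / (2.0 * eff_um / 1000.0)
    print(f"M={M:4.2f}, {binning}x{binning} binning: "
          f"{eff_um:5.1f} um/pixel, Nyquist ~ {nyquist_lp_mm:4.1f} lp/mm")
```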
Battaglia, P; Malara, D; Ammendolia, G; Romeo, T; Andaloro, F
2015-09-01
Length-mass relationships and linear regressions are given for otolith size (length and height) and standard length (LS) of certain mesopelagic fishes (Myctophidae, Paralepididae, Phosichthyidae and Stomiidae) living in the central Mediterranean Sea. The length-mass relationship showed isometric growth in six species, whereas linear regressions of LS and otolith size fit the data well for all species. These equations represent a useful tool for dietary studies on Mediterranean marine predators. © 2015 The Fisheries Society of the British Isles.
Shear Melting of a Colloidal Glass
NASA Astrophysics Data System (ADS)
Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.
2010-01-01
We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ˜0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ˜3 particles.
Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior
NASA Astrophysics Data System (ADS)
Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui
2003-08-01
In a previous work we have introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through a L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
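To make the string-rewriting idea concrete, here is a minimal stochastic L-System sketch (the alphabet, axiom and production rules are invented for illustration and are not the fitted model of the paper): each symbol is a packet arrival rate, and at every iteration each rate is rewritten into a short string of rates chosen at random from its production rules, producing a multiscale arrival-rate sequence.

```python
import random

# Toy stochastic L-System: symbols are arrival rates (packets/s); each rule
# maps a rate to a list of candidate expansions chosen with equal probability.
rules = {
    100: [[50, 150], [100, 100]],
    50:  [[25, 75], [50, 50]],
    150: [[100, 200], [150, 150]],
    25:  [[25, 25]], 75: [[75, 75]], 200: [[200, 200]],
}

def expand(axiom, iterations, seed=0):
    random.seed(seed)
    string = list(axiom)
    for _ in range(iterations):
        string = [s for sym in string for s in random.choice(rules[sym])]
    return string

rates = expand([100], 6)      # 2**6 = 64 arrival rates at the finest scale
print(len(rates), rates[:8])
```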
Korte, Andrew R.; Yandeau-Nelson, Marna D.; Nikolau, Basil J.; ...
2015-01-25
A significant limiting factor in achieving high spatial resolution for matrix-assisted laser desorption ionization-mass spectrometry (MALDI-MS) imaging is the size of the laser spot at the sample surface. We present modifications to the beam-delivery optics of a commercial MALDI-linear ion trap-Orbitrap instrument, incorporating an external Nd:YAG laser, beam-shaping optics, and an aspheric focusing lens, to reduce the minimum laser spot size from ~50 μm for the commercial configuration down to ~9 μm for the modified configuration. This improved system was applied for MALDI-MS imaging of cross sections of juvenile maize leaves at 5-μm spatial resolution using an oversampling method. A variety of different metabolites, including amino acids, glycerolipids, and defense-related compounds, were imaged at a spatial resolution well below the size of a single cell. Such images provide unprecedented insights into the metabolism associated with the different tissue types of the maize leaf, which is known to asymmetrically distribute the reactions of C4 photosynthesis among the mesophyll and bundle sheath cell types. The metabolite ion images correlate with the optical images that reveal the structures of the different tissues, and previously known and newly revealed asymmetric metabolic features are observed.
NASA Astrophysics Data System (ADS)
Ajitanand, N. N.; Phenix Collaboration
2014-11-01
Two-pion interferometry measurements in d+Au and Au+Au collisions at √(s_NN) = 200 GeV are used to extract and compare the Gaussian source radii R_out, R_side and R_long, which characterize the space-time extent of the emission sources. The comparisons, which are performed as a function of collision centrality and the mean transverse momentum for pion pairs, indicate strikingly similar patterns for the d+Au and Au+Au systems. They also indicate a linear dependence of R_side on the initial transverse geometric size R̄, as well as a smaller freeze-out size for the d+Au system. These patterns point to the important role of final-state rescattering effects in the reaction dynamics of d+Au collisions.
Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao
2009-01-01
In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS), which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate that the accuracy is 0.5% FS; this design therefore meets the requirements for a TPMS, and not only offers a smaller size and simplicity of preparation, but also high sensitivity and accuracy. PMID:22573960
Percolation Thresholds in Angular Grain media: Drude Directed Infiltration
NASA Astrophysics Data System (ADS)
Priour, Donald
Pores in many realistic systems are not well delineated channels, but are void spaces among grains, impermeable to charge or fluid flow, which make up the medium. Sparse grain concentrations lead to permeable systems, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedrons, and cubes. To determine if randomly generated finite system samples are permeable, we deploy virtual tracer particles which are scattered (e.g. specularly) by collisions with impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as n_coll^(1/2), where n_coll is the number of collisions with grains, if the tracers followed linear trajectories) by considering the tracer particles to be charged in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in n_coll. By averaging over many disorder realizations for a variety of system sizes, we calculate the percolation threshold and critical exponent which characterize the phase transition.
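The Drude-style speed-up can be illustrated with a stripped-down random-walk sketch (no actual grain geometry; the scattering is replaced by full randomization of direction after each collision and the field simply adds a fixed drift per free flight, all parameter values being assumptions): the field-aligned displacement grows linearly in the number of collisions, whereas an unbiased walker only spreads as its square root.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coll, n_walkers, drift = 10_000, 200, 0.05   # drift gained per free flight

# Unit-length free flights in random directions, plus a field-aligned drift.
angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_coll))
steps_x = np.cos(angles) + drift               # field along +x
x = np.cumsum(steps_x, axis=1)

for n in (100, 1_000, 10_000):
    print(f"n_coll={n:6d}  <x> = {x[:, n - 1].mean():9.1f}"
          f"  (drift*n = {drift * n:7.1f})")
```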
Chen, Huipeng; Li, Mengyuan; Zhang, Yi; Xie, Huikai; Chen, Chang; Peng, Zhangming; Su, Shaohui
2018-02-08
Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm⁻¹ under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
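For readers who want a feel for the numbers, the sketch below runs a deliberately simplified Monte Carlo power calculation (a paired two-period approximation with the 80-125% limits and a 90% confidence interval; the within-subject CV, true ratio and design are assumptions, and this is not the random-effects replicate-design method underlying the EQUIGEN tables): it estimates the probability of concluding average bioequivalence for a given number of subjects.

```python
import numpy as np
from scipy import stats

def be_power(n_subjects, cv_within=0.25, true_ratio=1.0, n_sim=5000, seed=42):
    """Simulated power of a TOST-style average-bioequivalence test for a
    simplified paired (two-period) crossover on the log scale."""
    rng = np.random.default_rng(seed)
    sigma_w = np.sqrt(np.log(1.0 + cv_within**2))     # within-subject SD (log scale)
    lo, hi = np.log(0.8), np.log(1.25)
    t_crit = stats.t.ppf(0.95, df=n_subjects - 1)     # 90% two-sided CI
    success = 0
    for _ in range(n_sim):
        # Per-subject log(test) - log(reference); subject effects cancel.
        d = rng.normal(np.log(true_ratio), np.sqrt(2.0) * sigma_w, size=n_subjects)
        half_width = t_crit * d.std(ddof=1) / np.sqrt(n_subjects)
        if lo < d.mean() - half_width and d.mean() + half_width < hi:
            success += 1
    return success / n_sim

for n in (16, 24, 32, 48):
    print(n, round(be_power(n), 3))
```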
Microminiature linear split Stirling cryogenic cooler for portable infrared imagers
NASA Astrophysics Data System (ADS)
Veprik, A.; Vilenchik, H.; Riabzev, S.; Pundak, N.
2007-04-01
Novel tactics employed in carrying out military and antiterrorist operations call for the development of a new generation of warfare equipment, among which sophisticated portable infrared (IR) imagers for surveillance, reconnaissance, targeting and navigation play an important role. The superior performance of such imagers relies on novel optronic technologies and on maintaining the infrared focal plane arrays at cryogenic temperatures using closed-cycle refrigerators. Traditionally, rotary-driven Stirling cryogenic engines are used for this purpose. As compared to their military off-the-shelf linear rivals, they are lighter, more compact and normally consume less electrical power. The latest technological advances in the industrial development of high-temperature (100 K) infrared detectors have initiated R&D activity towards developing microminiature cryogenic coolers, both of rotary and linear types. For these applications, split, linearly driven cryogenic coolers appear to be more suitable. Their known advantages include flexibility in the system design, inherently longer lifetime, low vibration export and superior aural stealth. Moreover, recent progress in designing highly efficient "moving magnet" resonant linear drives and driving electronics enables further essential reduction of the cooler size, weight and power consumption. The authors report on the development and project status of a novel Ricor model K527 microminiature split Stirling linear cryogenic cooler designed especially for portable infrared imagers.
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
Petrovic, Borislava; Grzadziel, Aleksandra; Rutonjski, Laza; Slosarek, Krzysztof
2010-09-01
Enhanced dynamic wedges (EDW) are known to increase radiation therapy treatment efficiency drastically. The aim of this paper is to compare linear array measurements of EDW with the calculations of the treatment planning system (TPS) and with the electronic portal imaging device (EPID) for 15 MV photon energy. A range of different field sizes and wedge angles (for the 15 MV photon beam) was measured with the linear chamber array CA24 in a Blue water phantom. The measurement conditions were applied to the calculations of the commercial treatment planning system XIO CMS v.4.2.0 using a convolution algorithm. EPID measurements were done at an EPID-focus distance of 100 cm, with beam parameters the same as for the CA24 measurements. Both depth doses and profiles were measured. EDW linear array measurements of profiles differ from the XIO CMS TPS calculations by around 0.5%. Profiles in the non-wedged direction and open-field profiles practically do not differ. Percentage depth doses (PDDs) for all EDW measurements show a difference of not more than 0.2%, while the open-field PDD is almost the same as the EDW PDD. Wedge factors for the 60 deg wedge angle were also examined, and the difference is up to 4%. EPID differs from the linear array by up to 5%. The implementation of EDW in radiation therapy treatments provides clinicians with an effective tool for conformal radiotherapy treatment planning. If modelling of the EDW beam in the TPS is done correctly, very good agreement between measurements and calculation is obtained, but the EPID cannot be used for reference measurements.
Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel
NASA Technical Reports Server (NTRS)
Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.
2013-01-01
Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite-epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.
Browndye: A software package for Brownian dynamics
NASA Astrophysics Data System (ADS)
Huber, Gary A.; McCammon, J. Andrew
2010-11-01
A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems.
Program summary
Program title: Browndye
Catalogue identifier: AEGT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: MIT license, included in distribution
No. of lines in distributed program, including test data, etc.: 143 618
No. of bytes in distributed program, including test data, etc.: 1 067 861
Distribution format: tar.gz
Programming language: C++, OCaml (http://caml.inria.fr/)
Computer: PC, Workstation, Cluster
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Runs on multiple processors with shared memory using pthreads
RAM: Depends linearly on size of physical system
Classification: 3
External routines: uses the output of APBS [1] (http://www.poissonboltzmann.org/apbs/) as input. APBS must be obtained and installed separately. Expat 2.0.1, CLAPACK, ocaml-expat, Mersenne Twister. These are included in the Browndye distribution.
Nature of problem: Exploration and determination of rate constants of bimolecular interactions involving large biological molecules.
Solution method: Brownian dynamics with electrostatic, excluded volume, van der Waals, and desolvation forces.
Running time: Depends linearly on size of physical system and quadratically on precision of results. The included example executes in a few minutes.
Plane Systems for Irradiation of a Patient from Any Directions
NASA Astrophysics Data System (ADS)
Kats, M. M.; Onossovsky, K. K.
1997-05-01
A system for transportation of a beam used for proton therapy is suggested. In this system a prone patient is placed perpendicularly to the beam axis. The beam is bent and focused in the vertical plane in such a way that patient irradiation from any direction is possible. Three versions of such a system are discussed. All of them make it possible to transport protons with energy up to 250 MeV and R*R' up to 10^-5 m*rad to targets with linear size in the interval between 10 and 300 mm. As compared to systems described earlier (GANTRY, Corkscrew, etc.), the systems described in this paper have a smaller weight of movable equipment, occupy less space and consume less power. Coauthor deceased.
Response of jammed packings to thermal fluctuations
NASA Astrophysics Data System (ADS)
Wu, Qikai; Bertrand, Thibault; Shattuck, Mark D.; O'Hern, Corey S.
2017-12-01
We focus on the response of mechanically stable (MS) packings of frictionless, bidisperse disks to thermal fluctuations, with the aim of quantifying how nonlinearities affect system properties at finite temperature. In contrast, numerous prior studies characterized the structural and mechanical properties of MS packings of frictionless spherical particles at zero temperature. Packings of disks with purely repulsive contact interactions possess two main types of nonlinearities, one from the form of the interaction potential (e.g., either linear or Hertzian spring interactions) and one from the breaking (or forming) of interparticle contacts. To identify the temperature regime at which the contact-breaking nonlinearities begin to contribute, we first calculated the minimum temperature T_cb required to break a single contact in the MS packing for both single- and multiple-eigenmode perturbations of the T = 0 MS packing. We find that the temperature required to break a single contact for equal velocity-amplitude perturbations involving all eigenmodes approaches the minimum value obtained for a perturbation in the direction connecting disk pairs with the smallest overlap. We then studied deviations in the constant-volume specific heat C̄_V and deviations of the average disk positions Δr from their T = 0 values in the temperature regime near T_cb.
Short Round Sub-Linear Zero-Knowledge Argument for Linear Algebraic Relations
NASA Astrophysics Data System (ADS)
Seo, Jae Hong
Zero-knowledge arguments allow one party to prove that a statement is true without leaking any other information than the truth of the statement. In many applications such as verifiable shuffle (as a practical application) and circuit satisfiability (as a theoretical application), zero-knowledge arguments for mathematical statements related to linear algebra are essentially used. Groth proposed (at CRYPTO 2009) an elegant methodology for zero-knowledge arguments for linear algebraic relations over finite fields. He obtained zero-knowledge arguments of sub-linear size for linear algebra using reductions from linear algebraic relations to equations of the form z = x * y, where x, y ∈ F_p^n are committed vectors, z ∈ F_p is a committed element, and * : F_p^n × F_p^n → F_p is a bilinear map. These reductions impose additional rounds on zero-knowledge arguments of sub-linear size. The round complexity of interactive zero-knowledge arguments is an important measure along with communication and computational complexities. We focus on minimizing the round complexity of sub-linear zero-knowledge arguments for linear algebra. To reduce round complexity, we propose a general transformation from a t-round zero-knowledge argument, satisfying mild conditions, to a (t - 2)-round zero-knowledge argument; this transformation is of independent interest.
Non-linear optical measurements using a scanned, Bessel beam
NASA Astrophysics Data System (ADS)
Collier, Bradley B.; Awasthi, Samir; Lieu, Deborah K.; Chan, James W.
2015-03-01
Oftentimes cells are removed from the body for disease diagnosis or cellular research. This typically requires fluorescent labeling followed by sorting with a flow cytometer; however, possible disruption of cellular function or even cell death can occur due to the presence of the label. This may be acceptable for ex vivo applications, but as cells are more frequently moving from the lab to the body, label-free methods of cell sorting are needed to eliminate these issues. This is especially true of the growing field of stem cell research, where specialized cells are needed for treatments. Because differentiation processes are not completely efficient, cells must be sorted to eliminate any unwanted cells (i.e. un-differentiated or differentiated into an unwanted cell type). In order to perform label-free measurements, non-linear optics (NLO) have been increasingly utilized for single-cell analysis because of their ability to not disrupt cellular function. An optical system was developed for the measurement of NLO in a microfluidic channel, similar to a flow cytometer. In order to improve the excitation efficiency of NLO, a scanned Bessel beam was utilized to create a light sheet across the channel. The system was tested by monitoring two-photon fluorescence from polystyrene microbeads of different sizes. Fluorescence intensity obtained from light-sheet measurements was significantly greater than that measured using a static Gaussian beam. In addition, the increase in intensity from larger beads was more evident for the light-sheet system.
Garment Counting in a Textile Warehouse by Means of a Laser Imaging System
Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban
2013-01-01
Textile logistic warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost and small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks, generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
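A minimal version of the counting step might look like the sketch below (purely illustrative; the sensor layout, thresholds and signal shape are assumptions, not the deployed system): the laser/phototransistor array yields a 1-D occlusion signal along the rail, hanger hooks appear as short above-threshold runs, and counting those runs gives the number of garments.

```python
import numpy as np

def count_hangers(signal, threshold=0.5):
    """Count contiguous runs where the occlusion signal exceeds the threshold.
    Each run is assumed to correspond to one hanger hook blocking the laser."""
    above = signal > threshold
    # A run starts wherever `above` switches from False to True.
    starts = np.flatnonzero(above & ~np.r_[False, above[:-1]])
    return len(starts)

# Synthetic occlusion profile: three hooks plus sensor noise.
rng = np.random.default_rng(0)
x = rng.normal(0.05, 0.03, 300)
for pos in (40, 140, 220):
    x[pos:pos + 8] += 0.9
print(count_hangers(x))   # -> 3
```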
Pink-Beam, Highly-Accurate Compact Water Cooled Slits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard
2007-01-19
Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented.
Stability of Local Quantum Dissipative Systems
NASA Astrophysics Data System (ADS)
Cubitt, Toby S.; Lucia, Angelo; Michalakis, Spyridon; Perez-Garcia, David
2015-08-01
Open quantum systems weakly coupled to the environment are modeled by completely positive, trace preserving semigroups of linear maps. The generators of such evolutions are called Lindbladians. In the setting of quantum many-body systems on a lattice it is natural to consider Lindbladians that decompose into a sum of local interactions with decreasing strength with respect to the size of their support. For both practical and theoretical reasons, it is crucial to estimate the impact that perturbations in the generating Lindbladian, arising as noise or errors, can have on the evolution. These local perturbations are potentially unbounded, but constrained to respect the underlying lattice structure. We show that even for polynomially decaying errors in the Lindbladian, local observables and correlation functions are stable if the unperturbed Lindbladian has a unique fixed point and a mixing time that scales logarithmically with the system size. The proof relies on Lieb-Robinson bounds, which describe a finite group velocity for propagation of information in local systems. As a main example, we prove that classical Glauber dynamics is stable under local perturbations, including perturbations in the transition rates, which may not preserve detailed balance.
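For reference, the local Lindbladians discussed here generate evolutions of the standard GKLS form (a textbook expression, not specific to this paper), with the generator written as a sum of geometrically local terms:

```latex
\mathcal{L}(\rho) \;=\; -\,i\,[H,\rho]
  \;+\; \sum_{k}\Bigl( L_k \rho L_k^{\dagger}
  \;-\; \tfrac{1}{2}\,\{ L_k^{\dagger} L_k ,\, \rho \}\Bigr),
\qquad
\mathcal{L} \;=\; \sum_{u \subset \Lambda} \mathcal{L}_u ,
```

where each local term \mathcal{L}_u is supported on a finite region u of the lattice Λ, with strength decaying in the size of its support, as assumed in the abstract.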
A rapid high-resolution method for resolving DNA topoisomers.
Mitchenall, Lesley A; Hipkin, Rachel E; Piperakis, Michael M; Burton, Nicolas P; Maxwell, Anthony
2018-01-16
Agarose gel electrophoresis has been the mainstay technique for the analysis of DNA samples of moderate size. In addition to separating linear DNA molecules, it can also resolve different topological forms of plasmid DNAs, an application useful for the analysis of the reactions of DNA topoisomerases. However, gel electrophoresis is an intrinsically low-throughput technique and suffers from other potential disadvantages. We describe the application of the QIAxcel Advanced System, a high-throughput capillary electrophoresis system, to separate DNA topoisomers, and compare this technique with gel electrophoresis. We prepared a range of topoisomers of plasmids pBR322 and pUC19, and a 339 bp DNA minicircle, and compared their separation by gel electrophoresis and the QIAxcel System. We found superior resolution with the QIAxcel System, and that quantitative analysis of topoisomer distributions was straightforward. We show that the QIAxcel system has advantages in terms of speed, resolution and cost, and can be applied to DNA circles of various sizes. It can readily be adapted for use in compound screening against topoisomerase targets.
Analysis of transport in gyrokinetic tokamaks
NASA Astrophysics Data System (ADS)
Mynick, H. E.; Parker, S. E.
1995-06-01
Progress toward a detailed understanding of the transport in full-volume gyrokinetic simulations of tokamaks is described. The transition between the two asymptotic regimes (large and small) of scaling of the heat flux with system size a/ρ_g reported earlier is explained, along with the approximate size at which the transition occurs. The larger systems have transport close to that predicted by the simple standard estimates for transport by drift-wave turbulence (viz., Bohm or gyro-Bohm) in scaling with a/ρ_g, temperature, magnetic field, ion mass, safety factor, and minor radius, but lying much closer to Bohm, which seems to be the result better supported theoretically. The characteristic downshift in the
Analysis of seismic stability of large-sized tank VST-20000 with software package ANSYS
NASA Astrophysics Data System (ADS)
Tarasenko, A. A.; Chepur, P. V.; Gruchenkova, A. A.
2018-05-01
The work is devoted to the study of the seismic stability of the vertical steel tank VST-20000, with due consideration of the response of the "foundation-tank-liquid" system, conducted on the basis of the finite element method, modal analysis and linear spectral theory. The calculations are performed for a tank model with a high degree of detailing of the metallic structures: shells, a fixed roof, a bottom, and a reinforcing ring.
DNA fragment sizing and sorting by laser-induced fluorescence
Hammond, Mark L.; Jett, James H.; Keller, Richard A.; Marrone, Babetta L.; Martin, John C.
1996-01-01
A method is provided for sizing DNA fragments using high speed detection systems, such as flow cytometry to determine unique characteristics of DNA pieces from a sample. In one characterization the DNA piece is fragmented at preselected sites to produce a plurality of DNA fragments. The DNA piece or the resulting DNA fragments are treated with a dye effective to stain stoichiometrically the DNA piece or the DNA fragments. The fluorescence from the dye in the stained fragments is then examined to generate an output functionally related to the number of nucleotides in each one of the DNA fragments. In one embodiment, the intensity of the fluorescence emissions from each fragment is linearly related to the fragment length. The distribution of DNA fragment sizes forms a characterization of the DNA piece for use in forensic and research applications.
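Because stoichiometric staining makes burst intensity linear in fragment length, converting measured intensities to sizes reduces to a one-parameter calibration against fragments of known length, as sketched below (the fragment lengths and intensities are made-up numbers for illustration only):

```python
import numpy as np

# Known calibration fragments (lengths in kilobases) and their mean burst
# intensities (arbitrary units); intensity is assumed proportional to length.
lengths_kb = np.array([2.0, 5.0, 10.0, 20.0, 48.5])
intensity = np.array([105.0, 260.0, 515.0, 1040.0, 2500.0])

# Least-squares slope through the origin: intensity = slope * length.
slope = np.dot(intensity, lengths_kb) / np.dot(lengths_kb, lengths_kb)

unknown = np.array([380.0, 770.0])            # bursts from an unknown sample
print("estimated sizes (kb):", np.round(unknown / slope, 1))
```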
For the depolarization of linearly polarized light by smoke particles
NASA Astrophysics Data System (ADS)
Sun, Wenbo; Liu, Zhaoyan; Videen, Gorden; Fu, Qiang; Muinonen, Karri; Winker, David M.; Lukashin, Constantine; Jin, Zhonghai; Lin, Bing; Huang, Jianping
2013-06-01
The CALIPSO satellite mission consistently measures a volume (molecular plus particulate) light depolarization ratio of ∼2% for smoke, compared to ∼1% for marine aerosols and ∼15% for dust. The observed ∼2% smoke depolarization ratio comes primarily from the nonspherical habits of particles in the smoke at certain particle sizes. In this study, the depolarization of linearly polarized light by small sphere aggregates and irregular Gaussian-shaped particles is studied, to reveal the physics relating the depolarization of linearly polarized light to smoke aerosol shape and size. It is found that the depolarization ratio curves of Gaussian-deformed spheres are very similar to those of sphere aggregates in terms of scattering-angle dependence and particle size parameter when the particle size parameter is smaller than 1.0π. This demonstrates that small randomly oriented nonspherical particles share some common depolarization properties as functions of scattering angle and size parameter. This may be very useful information for the characterization and active remote sensing of smoke particles using polarized light. We also show that the depolarization ratio from the CALIPSO measurements could be used to derive smoke aerosol particle size. From the calculated light depolarization ratio of Gaussian-shaped smoke particles and the CALIPSO-measured light depolarization ratio of ∼2% for smoke, the mean particle size of South African smoke is estimated to be about half of the 532 nm wavelength of the CALIPSO lidar.
Electric-field-induced association of colloidal particles
NASA Astrophysics Data System (ADS)
Fraden, Seth; Hurd, Alan J.; Meyer, Robert B.
1989-11-01
Dilute suspensions of micron diameter dielectric spheres confined to two dimensions are induced to aggregate linearly by application of an electric field. The growth of the average cluster size agrees well with the Smoluchowski equation, but the evolution of the measured cluster size distribution exhibits significant departures from theory at large times due to the formation of long linear clusters which effectively partition space into isolated one-dimensional strips.
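For orientation, the comparison here is with the classic Smoluchowski coagulation result (a standard textbook relation, quoted under the simplest constant-kernel assumption rather than the field-dependent kernel relevant to chained dipoles): for a constant aggregation kernel K and initial monomer density n_0, the mean cluster size grows linearly in time,

```latex
\bar{s}(t) \;=\; 1 + \tfrac{1}{2}\, K\, n_0\, t .
```

The abstract reports that the average cluster size follows this type of Smoluchowski growth, while the full size distribution departs from the theory at long times once long linear chains partition the plane into isolated strips.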
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N{sup 3} scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
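The linearization step referred to above amounts to expanding the frequency-dependent self-energy about ω = 0 (written here in generic notation as a reminder of the standard construction; details of the authors' implementation may differ):

```latex
\Sigma(\omega) \;\approx\; \Sigma(0) \;+\; \omega
  \left.\frac{\partial \Sigma}{\partial \omega}\right|_{\omega=0},
\qquad
Z \;=\; \Bigl[\,1 - \left.\tfrac{\partial \Sigma}{\partial \omega}\right|_{\omega=0}\Bigr]^{-1},
```

so that the static part Σ(0) and the renormalization factor Z are the only frequency information needed to build the quasiparticle Hamiltonian, and both can be extracted from Matsubara-axis data.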
Healthcare service quality perception in Japan.
Eleuch, Amira ep Koubaa
2011-01-01
This study aims to assess Japanese patients' healthcare service quality perceptions and to shed light on the most meaningful service features. It follows up a study published in IJHCQA Vol. 21 No. 7. Through a non-linear approach, the study relied on the scatter model to detect the importance of healthcare service features in forming the overall quality judgment. Japanese patients perceive healthcare services through a linear compensatory process: features related to technical quality and staff behavior compensate for each other to determine service quality. A limitation of the study is the limited sample size. Non-linear approaches could help researchers to better understand patients' healthcare service quality perceptions. The study highlights a need to adopt an evolution that enhances technical quality and medical practices in Japanese healthcare settings. The study relies on a non-linear approach to assess patients' overall quality perceptions in order to enrich knowledge. Furthermore, the research is conducted in Japan, where healthcare marketing studies are scarce owing to cultural and language barriers. Japanese culture and healthcare system characteristics are used to explain and interpret the results.
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
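The reduction pipeline described above — cluster geographically similar rows, take an SVD of each cluster, project the data onto the cluster's singular vectors, and discard projected data below an SNR threshold — can be sketched in a few lines (a toy dense-matrix illustration with invented sizes and noise level; the real implementation works on a tailored sparse row format and on ray-path clusters):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 300))          # toy sensitivity matrix (rows = data)
d = A @ rng.normal(size=300) + rng.normal(scale=0.5, size=2000)   # noisy data
sigma = 0.5                               # assumed data noise level
n_clusters, snr_min = 20, 1.0

A_red, d_red = [], []
labels = rng.integers(0, n_clusters, size=A.shape[0])   # stand-in for ray-path clustering
for c in range(n_clusters):
    idx = np.flatnonzero(labels == c)
    U, s, Vt = np.linalg.svd(A[idx], full_matrices=False)
    proj_d = U.T @ d[idx]                 # data projected onto cluster singular vectors
    keep = np.abs(proj_d) / sigma > snr_min   # reject low-SNR projected data
    A_red.append(np.diag(s[keep]) @ Vt[keep])
    d_red.append(proj_d[keep])

A_red, d_red = np.vstack(A_red), np.concatenate(d_red)
print(A.shape, "->", A_red.shape)
```

Because U is orthonormal within each cluster, the projected data keep the original noise level, so the SNR test directly identifies which combinations of data carry signal and which can be dropped before inversion.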
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent increase in volume processed.
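As an illustration of the kind of product-mix decision the study describes, here is a toy linear program solved with scipy; every coefficient (prices, log consumption, capacities) is hypothetical and chosen only to show the formulation.

```python
# Toy linear program in the spirit of the sawmill study; all coefficients are hypothetical.
from scipy.optimize import linprog

# Decision variables: x = [lumber_grade_A_mbf, lumber_grade_B_mbf]
profit = [-120.0, -80.0]          # negate: linprog minimizes; profit in $/MBF

# Constraints: log supply (stumpage), saw capacity, kiln capacity
A_ub = [[1.2, 1.0],               # logs consumed per MBF of each grade
        [0.8, 0.5],               # saw hours per MBF
        [1.0, 1.0]]               # kiln space per MBF
b_ub = [5000.0, 3200.0, 4500.0]   # available logs, saw hours, kiln space

res = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal product mix (MBF):", res.x, "revenue ($):", -res.fun)
```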
DOE Office of Scientific and Technical Information (OSTI.GOV)
He Guangjun; Duan Wenshan; Tian Duoxiang
2008-04-15
For unmagnetized dusty plasma with many different dust grain species containing both hot isothermal electrons and ions, both the linear dispersion relation and the Kadomtsev-Petviashvili equation for small, but finite amplitude dust acoustic waves are obtained. The linear dispersion relation is investigated numerically. Furthermore, the variations of amplitude, width, and propagation velocity of the nonlinear solitary wave with an arbitrary dust size distribution function are studied as well. Moreover, both the power law distribution and the Gaussian distribution are approximately simulated by using appropriate arbitrary dust size distribution functions.
The influence of mass configurations on velocity amplified vibrational energy harvesters
NASA Astrophysics Data System (ADS)
O'Donoghue, D.; Frizzell, R.; Kelly, G.; Nolan, K.; Punch, J.
2016-05-01
Vibrational energy harvesters scavenge ambient vibrational energy, offering an alternative to batteries for the autonomous operation of low power electronics. Velocity amplified electromagnetic generators (VAEGs) utilize the velocity amplification effect to increase power output and operational bandwidth, compared to linear resonators. A detailed experimental analysis of the influence of mass ratio and number of degrees-of-freedom (dofs) on the dynamic behaviour and power output of a macro-scale VAEG is presented. Various mass configurations are tested under drop-test and sinusoidal forced excitation, and the system performances are compared. For the drop-test, increasing mass ratio and number of dofs increases velocity amplification. Under forced excitation, the impacts between the masses are more complex, inducing greater energy losses. This results in the 2-dof systems achieving the highest velocities and, hence, highest output voltages. With fixed transducer size, higher mass ratios achieve higher voltage output due to the superior velocity amplification. Changing the magnet size to a fixed percentage of the final mass showed the increase in velocity of the systems with higher mass ratios is not significant enough to overcome the reduction in transducer size. Consequently, the 3:1 mass ratio systems achieved the highest output voltage. These findings are significant for the design of future reduced-scale VAEGs.
Tsai, Shirley C; Tsai, Chen S
2013-08-01
A linear theory on temporal instability of megahertz Faraday waves for monodisperse microdroplet ejection, based on mass conservation and linearized Navier-Stokes equations, is presented using the recently observed micrometer-sized droplet ejection from a millimeter-sized spherical water ball as a specific example. The theory is verified in experiments utilizing silicon-based multiple-Fourier-horn ultrasonic nozzles at megahertz frequency to facilitate temporal instability of the Faraday waves. Specifically, the linear theory correctly predicted the Faraday wave frequency, the onset threshold of Faraday instability, the effect of viscosity, and the dynamics of droplet ejection, and it also established the first theoretical formula for the size of the ejected droplets, namely, that the droplet diameter equals four-tenths of the Faraday wavelength involved. The high rate of increase in Faraday wave amplitude at megahertz drive frequency beyond the onset threshold, together with the enhanced excitation displacement on the nozzle end face provided by the megahertz multiple Fourier horns in resonance, led to high-rate ejection of micrometer-sized monodisperse droplets (>10^7 droplets/s) at low electrical drive power (<1 W) with short initiation time (<0.05 s). This is in stark contrast to the Rayleigh-Plateau instability of a liquid jet, which ejects one droplet at a time. The measured droplet diameters, ranging from 2.2 to 4.6 μm at 2 to 1 MHz drive frequency, fall within the optimum particle size range for pulmonary drug delivery.
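As a back-of-the-envelope check of the four-tenths rule, the sketch below estimates droplet diameter from the drive frequency, taking the Faraday wavelength from the capillary-wave dispersion relation at half the drive frequency and assuming room-temperature water properties; these assumptions and the resulting numbers are ours, not values quoted from the paper.

```python
# Hedged estimate of ejected droplet size from the 0.4 * Faraday-wavelength rule.
# Capillary-wave dispersion (omega^2 = sigma k^3 / rho) and water properties are
# assumptions here, not values quoted from the paper.
import numpy as np

sigma, rho = 0.072, 1000.0            # surface tension (N/m) and density (kg/m^3) of water

def droplet_diameter(drive_freq_hz):
    f_faraday = drive_freq_hz / 2.0                              # Faraday waves oscillate at half the drive
    wavelength = (2.0 * np.pi * sigma / (rho * f_faraday**2)) ** (1.0 / 3.0)
    return 0.4 * wavelength                                      # paper's result: d ~ 0.4 * lambda_F

for f in (1e6, 2e6):
    print(f"{f/1e6:.0f} MHz drive -> d ~ {droplet_diameter(f)*1e6:.1f} um")
```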
High-accuracy microassembly by intelligent vision systems and smart sensor integration
NASA Astrophysics Data System (ADS)
Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael
2003-10-01
Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. Assembly processes in particular are crucial operations during the production of microsystems. For large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide field of small and medium-sized batch production for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design. Actuators such as grippers, dispensers, or other process tools can easily be attached thanks to a special tool-changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators such as high-accuracy robots or linear motors. A fiber-optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispense needle and the substrate. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes such as soldering.
Soler, Carles; Contell, Jesús; Bori, Lorena; Sancho, María; García-Molina, Almudena; Valverde, Anthony; Segarvall, Jan
2017-01-01
This work provides information on the blue fox ejaculated sperm quality needed for seminal dose calculations. Twenty semen samples, obtained by masturbation, were analyzed for kinematic and morphometric parameters by using the CASA-Mot and CASA-Morph systems and principal component (PC) analysis. For motility, eight kinematic parameters were evaluated, which were reduced to PC1, related to linear variables, and PC2, related to oscillatory movement. The whole population was divided into three independent subpopulations: SP1, fast cells with linear movement; SP2, slow cells with nonoscillatory motility; and SP3, medium-speed cells with oscillatory movement. In almost all cases, the subpopulation distribution by animal was significantly different. Head morphology analysis generated four size and four shape parameters, which were reduced to PC1, related to size, and PC2, related to shape of the cells. Three morphometric subpopulations existed: SP1, large oval cells; SP2, medium-size elongated cells; and SP3, small and short cells. The subpopulation distribution differed between animals. Combining the kinematic and morphometric datasets produced PC1, related to morphometric parameters, and PC2, related to kinematics, which generated four sperm subpopulations: SP1, high oscillatory motility with large and short heads; SP2, medium velocity with small and short heads; SP3, slow motion with small and elongated cells; and SP4, high linear speed with large elongated cells. Subpopulation distribution was different in all animals. The establishment of sperm subpopulations from kinematic, morphometric, and combined variables not only improves the well-defined fox semen characteristics and offers a good conceptual basis for fertility and sperm preservation techniques in this species, but also opens the door to using this approach in other species, including humans.
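A hypothetical sketch of the PCA-plus-clustering workflow used to define subpopulations (placeholder data, scikit-learn, three clusters as in the kinematic case):

```python
# Hypothetical sketch of the PCA + clustering workflow used to define sperm subpopulations.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# kinematics: rows = cells, columns = the eight CASA-Mot parameters (placeholder data)
kinematics = np.random.rand(500, 8)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(kinematics))
labels = KMeans(n_clusters=3, n_init=10).fit_predict(scores)   # SP1..SP3
for sp in range(3):
    print(f"SP{sp+1}: n = {np.sum(labels == sp)}, mean PC1 = {scores[labels == sp, 0].mean():.2f}")
```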
NASA Astrophysics Data System (ADS)
Yuwono, Rio Akbar; Izdiharruddin, Mokhammad Fahmi; Wahyuono, Ruri Agung
2016-11-01
Microfluidic paper-based analytical devices decorated with ZnO nanospherical (nanoSP) aggregates (ZnO-μPADs) for glucose detection have been fabricated. ZnO nanoSPs were prepared by wet chemical synthesis and integrated on the μPAD; the optimized ZnO-μPAD geometry has a channel width and length of 0.2 and 0.4 mm, respectively. Glucose detection was based on electrochemical and infrared transmission measurements. The glucose concentrations were adjusted to 5, 6.5, and 9 mmol, i.e., typical glucose levels for normal, pre-diabetic, and diabetic conditions, in a mixture of Ringer's lactate as a simulated biological fluid and red blood cells. The ZnO nanoSPs in this study possess an average aggregate size of 160 nm, formed by clustered 18 nm crystallites in an ordered porous matrix, as well as a surface area of 15 m²·g⁻¹. The separation process of the glucose sample on the ZnO-μPAD requires approximately 45 s. The glucose detection results show that both electrochemical-based and FTIR-based measurements exhibit a linear response (R² of 0.81 to 0.99) with relatively high sensitivity. A linearly decreasing impedance from 2.2 to 0.6 Ohm and a linearly increasing ΔIR transmission from 3 to 19% are obtained for glucose levels ranging from 5 to 9 mmol.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and at another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points; the terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root; in these regions the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both converge from starting points far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations; the current method, which does not require the solution of linear equations, instead requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
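For orientation, the sketch below implements only the fallback behavior described above: plain steepest descent on the squared residual with a backtracking (adaptive) step size. It is not the authors' accelerated eigenvector-based step.

```python
# Minimal sketch of steepest descent with an adaptive step size for solving F(x) = 0
# by minimizing f(x) = ||F(x)||^2. This is only the fallback behavior described in the
# abstract, not the authors' accelerated eigenvector-based step.
import numpy as np

def solve(F, J, x0, tol=1e-10, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        f = r @ r
        if f < tol:
            break
        grad = 2.0 * J(x).T @ r          # gradient of ||F||^2
        step = 1.0
        while step > 1e-12:              # backtracking: adapt the step size
            x_new = x - step * grad
            r_new = F(x_new)
            if r_new @ r_new < f:
                x = x_new
                break
            step *= 0.5
    return x

# Example: intersection of a circle and a line
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(solve(F, J, [0.8, 0.3]))   # approaches the root near (0.707, 0.707)
```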
Dosimetric characteristics of fabricated silica fibre for postal radiotherapy dose audits
NASA Astrophysics Data System (ADS)
Fadzil, M. S. Ahmad; Ramli, N. N. H.; Jusoh, M. A.; Kadni, T.; Bradley, D. A.; Ung, N. M.; Suhairul, H.; Mohd Noor, N.
2014-11-01
The present investigation aims to establish the dosimetric characteristics of a novel fabricated flat-fibre TLD system for postal radiotherapy dose audits. Various thermoluminescence (TL) properties have been investigated for five sizes of 6 mol% Ge-doped optical fibres. Key dosimetric characteristics including reproducibility, linearity, fading, and energy dependence have been established. Irradiations were carried out using a linear accelerator (linac) and a Cobalt-60 machine. For doses from 0.5 Gy up to 10 Gy, Ge-doped flat fibres exhibit linearity between TL yield and dose, reproducible to better than 8% standard deviation (SD) following repeat measurements (n = 3). For photon qualities from 1.25 MeV (Cobalt-60) to 10 MV (linac), an energy-dependent response is noted, with a coefficient of variation (CV) of less than 40% over the range of energies investigated. For 6.0 mm long flat fibres, 100 μm thick × 350 μm wide, the TL fading loss following 30 days of storage at room temperature was <8%. The Ge-doped flat-fibre system represents a viable basis for use in postal radiotherapy dose audits, corrections being made for the various factors influencing the TL yield.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time-domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations of non-zero wavenumber and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be attainable with the finite-difference time-domain (FDTD) approach for the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across the large frequency bandwidth: the FDTD time step must be fine enough to resolve the highest frequency, while the total number of time steps must cover the lowest frequency, leading to a linear system that is computationally burdensome to solve. Our code addresses this situation through the use of a fictitious wave domain method and GPUs to reduce computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time-domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach, in part owing to the quasi-equivalent time-domain decomposition, which shrinks the size of the linear system dramatically.
Linear and Branched PEIs (Polyethylenimines) and Their Property Space.
Lungu, Claudiu N; Diudea, Mircea V; Putz, Mihai V; Grudziński, Ireneusz P
2016-04-13
A chemical property space defines the adaptability of a molecule to changing conditions and its interaction with other molecular systems determining a pharmacological response. Within a congeneric molecular series (compounds with the same derivatization algorithm and thus the same brute formula) the chemical properties vary in a monotonic manner, i.e., congeneric compounds share the same chemical property space. The chemical property space is a key component in molecular design, where some building blocks are functionalized, i.e., derivatized, and eventually self-assembled in more complex systems, such as enzyme-ligand systems, of which (physico-chemical) properties/bioactivity may be predicted by QSPR/QSAR (quantitative structure-property/activity relationship) studies. The system structure is determined by the binding type (temporal/permanent; electrostatic/covalent) and is reflected in its local electronic (and/or magnetic) properties. Such nano-systems play the role of molecular devices, important in nano-medicine. In the present article, the behavior of polyethylenimine (PEI) macromolecules (linear LPEI and branched BPEI, respectively) with respect to the glucose oxidase enzyme GOx is described in terms of their (interacting) energy, geometry and topology, in an attempt to find the best shape and size of PEIs to be useful for a chosen (nanochemistry) purpose.
Performance testing and results of the first Etec CORE-2564
NASA Astrophysics Data System (ADS)
Franks, C. Edward; Shikata, Asao; Baker, Catherine A.
1993-03-01
In order to write 64-megabit DRAM reticles, to prepare to write 256-megabit DRAM reticles, and in general to meet current and next-generation mask and reticle quality requirements, Hoya Micro Mask (HMM) installed in 1991 the first CORE-2564 Laser Reticle Writer from Etec Systems, Inc. The system was delivered as a CORE-2500XP and was subsequently upgraded to a 2564. The CORE (Custom Optical Reticle Engraver) system produces photomasks with an exposure strategy similar to that employed by an electron beam system, but it uses a laser beam to deliver the photoresist exposure energy. Since then the 2564 has been tested by Etec's standard Acceptance Test Procedure and by several supplementary HMM techniques to ensure performance to all the advertised Etec specifications and to certain additional HMM requirements that were more demanding and/or more thorough than the advertised specifications. The primary purpose of the HMM tests was to more closely duplicate mask usage. The performance aspects covered by the tests include registration accuracy and repeatability; linewidth accuracy, uniformity, and linearity; stripe butting; stripe and scan linearity; edge quality; system cleanliness; minimum geometry resolution; minimum address size; and plate loading accuracy and repeatability.
NASA Astrophysics Data System (ADS)
Nägele, G.; Heinen, M.; Banchio, A. J.; Contreras-Aburto, C.
2013-11-01
Dynamic processes in dispersions of charged spherical particles are of importance both in fundamental science, and in technical and bio-medical applications. There exists a large variety of charged-particles systems, ranging from nanometer-sized electrolyte ions to micron-sized charge-stabilized colloids. We review recent advances in theoretical methods for the calculation of linear transport coefficients in concentrated particulate systems, with the focus on hydrodynamic interactions and electrokinetic effects. Considered transport properties are the dispersion viscosity, self- and collective diffusion coefficients, sedimentation coefficients, and electrophoretic mobilities and conductivities of ionic particle species in an external electric field. Advances by our group are also discussed, including a novel mode-coupling-theory method for conduction-diffusion and viscoelastic properties of strong electrolyte solutions. Furthermore, results are presented for dispersions of solvent-permeable particles, and particles with non-zero hydrodynamic surface slip. The concentration-dependent swelling of ionic microgels is discussed, as well as a far-reaching dynamic scaling behavior relating colloidal long- to short-time dynamics.
A noniterative improvement of Guyan reduction
NASA Technical Reports Server (NTRS)
Ganesan, N.
1993-01-01
In determining the natural modes and frequencies of a linear elastic structure, Guyan reduction is often used to reduce the size of the mass and stiffness matrices and the solution of the reduced system is obtained first. The reduced system modes are then expanded to the size of the original system by using a static transformation linking the retained degrees of freedom to the omitted degrees of freedom. In the present paper, the transformation matrix of Guyan reduction is modified to include additional terms from a series accounting for the inertial effects. However, the inertial terms are dependent on the unknown frequencies. A practical approximation is employed to compute the inertial terms without any iteration. This new transformation is implemented in NASTRAN using a DMAP sequence alter. Numerical examples using a cantilever beam illustrate the necessary condition for allowing a large number of additional terms in the proposed series correction of Guyan reduction. A practical example of a large model of the Plasma Motor Generator module to be flown on a Delta launch vehicle is also presented.
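For reference, here is a minimal sketch of the classical (static) Guyan reduction that the paper's series correction improves upon: it condenses the stiffness and mass matrices onto the retained degrees of freedom and omits the inertial correction terms.

```python
# Classical Guyan (static) reduction -- the baseline that the paper's series correction improves.
import numpy as np
from scipy.linalg import eigh

def guyan_reduce(K, M, retained):
    """Condense stiffness K and mass M onto the retained DOFs."""
    n = K.shape[0]
    omitted = np.setdiff1d(np.arange(n), retained)
    Kro = K[np.ix_(retained, omitted)]
    Koo = K[np.ix_(omitted, omitted)]
    # Static transformation: u_o = -Koo^{-1} K_or u_r
    T = np.vstack([np.eye(len(retained)), -np.linalg.solve(Koo, Kro.T)])
    order = np.concatenate([retained, omitted])
    Kp, Mp = K[np.ix_(order, order)], M[np.ix_(order, order)]
    return T.T @ Kp @ T, T.T @ Mp @ T

# Small demo: reduced eigenvalues approximate the lowest modes of the full model
K = np.diag([2.0, 2.0, 2.0]) - np.diag([1.0, 1.0], 1) - np.diag([1.0, 1.0], -1)
M = np.eye(3)
Kr, Mr = guyan_reduce(K, M, retained=np.array([0, 2]))
print(np.sqrt(eigh(Kr, Mr, eigvals_only=True)))   # reduced-model frequencies
print(np.sqrt(eigh(K, M, eigvals_only=True)))     # full-model frequencies for comparison
```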
NASA Astrophysics Data System (ADS)
Jbara, Ahmed S.; Othaman, Zulkafli; Saeed, M. A.
2016-05-01
Based on the Schrödinger equation for envelope function in the effective mass approximation, linear and nonlinear optical absorption coefficients in a multi-subband lens quantum dot are investigated. The effects of quantum dot size on the interband and intraband transitions energy are also analyzed. The finite element method is used to calculate the eigenvalues and eigenfunctions. Strain and In-mole-fraction effects are also studied, and the results reveal that with the decrease of the In-mole fraction, the amplitudes of linear and nonlinear absorption coefficients increase. The present computed results show that the absorption coefficients of transitions between the first excited states are stronger than those of the ground states. In addition, it has been found that the quantum dot size affects the amplitudes and peak positions of linear and nonlinear absorption coefficients while the incident optical intensity strongly affects the nonlinear absorption coefficients. Project supported by the Ministry of Higher Education and Scientific Research in Iraq, Ibnu Sina Institute and Physics Department of Universiti Teknologi Malaysia (UTM RUG Vote No. 06-H14).
Azéma, Emilien; Linero, Sandra; Estrada, Nicolas; Lizcano, Arcesio
2017-08-01
By means of extensive contact dynamics simulations, we analyzed the effect of particle size distribution (PSD) on the strength and microstructure of sheared granular materials composed of frictional disks. The PSDs are built by means of a normalized β function, which allows the systematic investigation of the effects of both, the size span (from almost monodisperse to highly polydisperse) and the shape of the PSD (from linear to pronouncedly curved). We show that the shear strength is independent of the size span, which substantiates previous results obtained for uniform distributions by packing fraction. Notably, the shear strength is also independent of the shape of the PSD, as shown previously for systems composed of frictionless disks. In contrast, the packing fraction increases with the size span, but decreases with more pronounced PSD curvature. At the microscale, we analyzed the connectivity and anisotropies of the contacts and forces networks. We show that the invariance of the shear strength with the PSD is due to a compensation mechanism which involves both geometrical sources of anisotropy. In particular, contact orientation anisotropy decreases with the size span and increases with PSD curvature, while the branch length anisotropy behaves inversely.
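A small illustration of building such particle-size distributions from a normalized beta function over a size interval; the mapping of the beta parameters to "span" and "curvature" here is ours, not necessarily the paper's exact parametrization.

```python
# Hedged illustration: particle-size distributions built from a normalized beta function.
import numpy as np
from scipy.stats import beta

def sample_sizes(n, d_mean=1.0, span=0.8, a=2.0, b=2.0, rng=None):
    """Draw n particle diameters in [d_min, d_max] with a beta-shaped PSD.
    span = (d_max - d_min) / (d_max + d_min) controls the polydispersity;
    (a, b) control the curvature of the distribution (a = b = 1 is uniform)."""
    rng = np.random.default_rng(rng)
    d_min = d_mean * (1.0 - span)
    d_max = d_mean * (1.0 + span)
    return d_min + (d_max - d_min) * beta(a, b).rvs(size=n, random_state=rng)

sizes = sample_sizes(10_000, span=0.6, a=1.0, b=3.0)   # wide span, pronounced curvature
print(sizes.min(), sizes.max(), sizes.mean())
```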
Crossover in growth laws for phase-separating binary fluids: molecular dynamics simulations.
Ahmad, Shaista; Das, Subir K; Puri, Sanjay
2012-03-01
Pattern and dynamics during phase separation in a symmetrical binary (A+B) Lennard-Jones fluid are studied via molecular dynamics simulations after quenching homogeneously mixed critical (50:50) systems to temperatures below the critical one. The morphology of the domains, rich in A or B particles, is observed to be bicontinuous. The early-time growth of the average domain size is found to be consistent with the Lifshitz-Slyozov law for diffusive domain coarsening. After a characteristic time, dependent on the temperature, we find a clear crossover to an extended viscous hydrodynamic regime where the domains grow linearly with time. Pattern formation in the present system is compared with that in solid binary mixtures, as a function of temperature. Important results for the finite-size and temperature effects on the small-wave-vector behavior of the scattering function are also presented.
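For reference, the two growth regimes mentioned above correspond to the standard coarsening laws (a textbook statement, not formulas reproduced from the paper):

```latex
% Diffusive (Lifshitz-Slyozov) coarsening followed by the viscous hydrodynamic regime,
% where \ell(t) is the average domain size:
\ell(t) \sim t^{1/3} \quad (\text{diffusive, early times}), \qquad
\ell(t) \sim t \quad (\text{viscous hydrodynamic, after the crossover}).
```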
Slit scan radiographic system for intermediate size rocket motors
NASA Astrophysics Data System (ADS)
Bernardi, Richard T.; Waters, David D.
1992-12-01
The development of slit-scan radiography capability for the NASA Advanced Computed Tomography Inspection System (ACTIS) computed tomography (CT) scanner at MSFC is discussed. This allows for tangential case interface (bondline) inspection at 2 MeV of intermediate-size rocket motors like the Hawk. Motorized mounting fixture hardware was designed, fabricated, installed, and tested on ACTIS. The ACTIS linear array of x-ray detectors was aligned parallel to the tangent line of a horizontal Hawk motor case. A 5 mm thick x-ray fan beam was used. Slit-scan images were produced with continuous rotation of a horizontal Hawk motor. Image features along Hawk motor case interfaces were indicated. A motorized exit cone fixture for ACTIS slit-scan inspection was also provided. The results of this SBIR have shown that slit scanning is an alternative imaging technique for case interface inspection. More data is required to qualify the technique for bondline inspection.
Laminar flow burner system with infrared heated spray chamber and condenser.
Hell, A; Ulrich, W F; Shifrin, N; Ramírez-Muñoz, J
1968-07-01
A laminar flow burner is described that provides several advantages in atomic absorption flame photometry. Included in its design is a heated spray chamber followed by a condensing system. This combination improves the concentration level of the analyte in the flame and keeps the solvent concentration low. Therefore, sensitivities are significantly improved for most elements relative to cold-chamber burners. The burner also contains several safety features. These design features are discussed in detail, and performance data are given on (a) signal size, (b) signal-to-noise ratio, (c) linearity, (d) working range, (e) precision, and (f) accuracy.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2003-01-01
A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
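A minimal sketch of the underlying idea, advancing a linear time-variant system with the state transition matrix while freezing the coefficients over each step; this is only a zeroth-order illustration, not the variable-order scheme of the paper.

```python
# Hedged sketch of advancing x' = A(t) x + f(t) with the state transition matrix,
# freezing A and f over each step (a zeroth-order version of the idea; the paper's
# method is higher/variable order).
import numpy as np
from scipy.linalg import expm

def stm_integrate(A, f, x0, t0, t1, n_steps):
    ts = np.linspace(t0, t1, n_steps + 1)
    h = ts[1] - ts[0]
    x = np.asarray(x0, dtype=float)
    for t in ts[:-1]:
        Ak, fk = A(t + 0.5 * h), f(t + 0.5 * h)    # freeze coefficients at the midpoint
        Phi = expm(Ak * h)                          # state transition matrix over the step
        # exact step for the frozen system: x_{k+1} = Phi x_k + A^{-1} (Phi - I) f
        x = Phi @ x + np.linalg.solve(Ak, (Phi - np.eye(len(x))) @ fk)
    return x

# Damped oscillator with slowly varying stiffness and a harmonic forcing
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.1 * t), -0.05]])
f = lambda t: np.array([0.0, np.sin(t)])
print(stm_integrate(A, f, x0=[1.0, 0.0], t0=0.0, t1=10.0, n_steps=200))
```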
Theory of optical transitions in conjugated polymers. II. Real systems
NASA Astrophysics Data System (ADS)
Marcus, Max; Tozer, Oliver Robert; Barford, William
2014-10-01
The theory of optical transitions developed in Barford and Marcus ["Theory of optical transitions in conjugated polymers. I. Ideal systems," J. Chem. Phys. 141, 164101 (2014)] for linear, ordered polymer chains is extended in this paper to model conformationally disordered systems. Our key result is that in the Born-Oppenheimer regime the emission intensities are proportional to S(1)/⟨IPR⟩, where S(1) is the Huang-Rhys parameter for a monomer. ⟨IPR⟩ is the average inverse participation ratio for the emitting species, i.e., local exciton ground states (LEGSs). Since the spatial coherence of LEGSs determines the spatial extent of chromophores, the significance of this result is that it directly relates experimental observables to chromophore sizes (where ⟨IPR⟩ is half the mean chromophore size in monomer units). This result is independent of the chromophore shape, because of the Born-Oppenheimer factorization of the many body wavefunction. We verify this prediction by density matrix renormalization group (DMRG) calculations of the Frenkel-Holstein model in the adiabatic limit for both linear, disordered chains and for coiled, ordered chains. We also model optical spectra for poly(p-phenylene) and poly(p-phenylene-vinylene) oligomers and polymers. For oligomers, we solve the fully quantized Frenkel-Holstein model via the DMRG method. For polymers, we use the much simpler method of solving the one-particle Frenkel model and employ the Born-Oppenheimer expressions relating the effective Franck-Condon factor of a chromophore to its inverse participation ratio. We show that increased disorder decreases chromophore sizes and increases the inhomogeneous broadening, but has a non-monotonic effect on transition energies. We also show that as planarizing the polymer chain increases the exciton band width, it causes the chromophore sizes to increase, the transition energies to decrease, and the broadening to decrease. Finally, we show that the absorption spectra are more broadened than the emission spectra and that the broadening of the absorption spectra increases as the chains become more coiled. This is primarily because absorption occurs to both LEGSs and quasi-extended exciton states (QEESs), and QEES acquire increased intensity as chromophores bend, while emission only occurs from LEGSs.
The N-policy for an unreliable server with delaying repair and two phases of service
NASA Astrophysics Data System (ADS)
Choudhury, Gautam; Ke, Jau-Chuan; Tadj, Lotfi
2009-09-01
This paper deals with an M^X/G/1 queue with an additional second phase of optional service and an unreliable server, which involves a breakdown period and a delay period under N-policy. While the server is working on either phase of service, it may break down at any instant, and the service channel then fails for a short interval of time. The concept of delay time is also introduced. If no customer arrives during the breakdown period, the server remains idle in the system until the queue size builds up to a threshold value N. As soon as the queue size reaches at least N, the server immediately begins the first phase of regular service for all waiting customers, after which only some of them receive the second phase of optional service. We derive the queue size distribution at a random epoch and at a departure epoch, as well as various system performance measures. Finally, we derive a simple procedure to obtain the optimal stationary policy under a suitable linear cost structure.
Rajkomar, Alvin; Yim, Joanne Wing Lan; Grumbach, Kevin; Parekh, Ami
2016-10-14
Characterizing patient complexity using granular electronic health record (EHR) data regularly available to health systems is necessary to optimize primary care processes at scale. The objective was to characterize the utilization patterns of primary care patients and to create weighted panel sizes for providers based on the work required to care for patients with different patterns. We used EHR data over a 2-year period from patients empaneled to primary care clinicians in a single academic health system, including their in-person encounter history and virtual encounters such as telephonic visits, electronic messaging, and care coordination with specialists. Using a combination of decision rules and k-means clustering, we identified clusters of patients with similar health care system activity. Phenotypes combined with basic demographic information were used to predict future health care utilization using log-linear models. Phenotypes were also used to calculate weighted panel sizes. We identified 7 primary care utilization phenotypes, which were characterized by various combinations of primary care and specialty usage and were deemed clinically distinct by primary care physicians. These phenotypes, combined with age-sex and primary payer variables, predicted future primary care utilization with an R² of 0.394 and were used to create weighted panel sizes. Individual patients' health care utilization may be useful for classifying patients by primary care work effort and for predicting future primary care usage.
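A hypothetical end-to-end sketch of the pipeline described above, clustering utilization features into phenotypes and then feeding them into a log-linear (Poisson) model of future visits; the variable names and synthetic data are illustrative only.

```python
# Hypothetical sketch: cluster utilization features into phenotypes, then use them in a
# log-linear (Poisson) model of future primary-care visits. Variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pc_visits": rng.poisson(3, 1000),          # prior primary-care visits
    "specialty_visits": rng.poisson(2, 1000),
    "messages": rng.poisson(5, 1000),
    "age": rng.integers(18, 90, 1000),
    "future_visits": rng.poisson(3, 1000),      # outcome over the next period
})
df["phenotype"] = KMeans(n_clusters=7, n_init=10).fit_predict(
    df[["pc_visits", "specialty_visits", "messages"]])

X = sm.add_constant(pd.get_dummies(df["phenotype"], prefix="ph", drop_first=True)
                      .join(df["age"]).astype(float))
model = sm.GLM(df["future_visits"], X, family=sm.families.Poisson()).fit()
print(model.summary().tables[0])
```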
Linear Chord Diagrams with Long Chords
NASA Astrophysics Data System (ADS)
Sullivan, Everett
A linear chord diagram of size n is a partition of the first 2n integers into sets of size two. These diagrams appear in many different contexts in combinatorics and other areas of mathematics, particularly knot theory. We explore various constraints that produce diagrams which have no short chords. A number of patterns appear from the results of these constraints which we can prove using techniques ranging from explicit bijections to non-commutative algebra.
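A brute-force enumerator makes the objects concrete: it counts linear chord diagrams of size n whose every chord has length at least a given minimum, taking the length of a chord as the difference of its two endpoints (one common convention, not necessarily the one used in the thesis).

```python
# Brute-force count of linear chord diagrams on 2n points whose every chord
# has length >= min_len (chord length = difference of its two endpoints).
def count_diagrams(n, min_len=2):
    points = tuple(range(1, 2 * n + 1))

    def rec(remaining):
        if not remaining:
            return 1
        first, rest = remaining[0], remaining[1:]
        total = 0
        for j, p in enumerate(rest):
            if p - first >= min_len:                  # exclude short chords
                total += rec(rest[:j] + rest[j + 1:])
        return total

    return rec(points)

# Sanity check: with no length restriction the count is (2n-1)!! = 1, 3, 15, 105, ...
print([count_diagrams(n, min_len=1) for n in range(1, 5)])   # [1, 3, 15, 105]
print([count_diagrams(n, min_len=2) for n in range(1, 5)])   # diagrams with no chord of length 1
```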
van der Laan, J. D.; Sandia National Lab.; Scrymgeour, D. A.; ...
2015-03-13
We find that at infrared wavelengths there are broad ranges of particle sizes and refractive indices, representative of fog and rain, where circular polarization can persist to longer ranges than linear polarization. Using polarization-tracking Monte Carlo simulations for varying particle size, wavelength, and refractive index, we show that for specific scene parameters circular polarization outperforms linear polarization in maintaining the intended polarization state at large optical depths. This enhancement with circular polarization can be exploited to improve range and target detection in obscurant environments important in many critical sensing applications. Specifically, circular polarization persists better than linear polarization for radiation fog in the short-wave infrared, for advection fog in the short-wave infrared and the long-wave infrared, and for large particle sizes of Saharan dust around the 4 micron wavelength.
The Dynamics of Entangled DNA Networks using Single-Molecule Methods
NASA Astrophysics Data System (ADS)
Chapman, Cole David
Single molecule experiments were performed on DNA, a model polymer, and entangled DNA networks to explore diffusion within complex polymeric fluids and their linear and non-linear viscoelasticity. DNA molecules of varying length and topology were prepared using biological methods. An ensemble of individual molecules were then fluorescently labeled and tracked in blends of entangled linear and circular DNA to examine the dependence of diffusion on polymer length, topology, and blend ratio. Diffusion was revealed to possess a non-monotonic dependence on the blend ratio, which we believe to be due to a second-order effect where the threading of circular polymers by their linear counterparts greatly slows the mobility of the system. Similar methods were used to examine the diffusive and conformational behavior of DNA within highly crowded environments, comparable to that experienced within the cell. A previously unseen gamma distributed elongation of the DNA in the presence of crowders, proposed to be due to entropic effects and crowder mobility, was observed. Additionally, linear viscoelastic properties of entangled DNA networks were explored using active microrheology. Plateau moduli values verified for the first time the predicted independence from polymer length. However, a clear bead-size dependence was observed for bead radii less than ~3x the tube radius, a newly discovered limit, above which microrheology results are within the continuum limit and may access the bulk properties of the fluid. Furthermore, the viscoelastic properties of entangled DNA in the non-linear regime, where the driven beads actively deform the network, were also examined. By rapidly driving a bead through the network utilizing optical tweezers, then removing the trap and tracking the bead's subsequent motion we are able to model the system as an over-damped harmonic oscillator and find the elasticity to be dominated by stress-dependent entanglements.
Prananingrum, Widyasri; Tomotake, Yoritoki; Naito, Yoshihito; Bae, Jiyoung; Sekine, Kazumitsu; Hamada, Kenichi; Ichikawa, Tetsuo
2016-08-01
The prosthetic application of titanium has been challenging because titanium does not possess suitable properties for the conventional casting method using the lost-wax technique. We have developed a production method for porous titanium for biomedical applications using a moldless process. This study aimed to evaluate the physical and mechanical properties of porous titanium produced with various particle sizes, shapes, and mixing ratios of titanium powder to wax binder for use in prosthesis production. CP Ti powders with different particle sizes, shapes, and mixing ratios were divided into five groups. A 90:10 wt% mixture of titanium powder and wax binder was prepared manually at 70°C. After debinding at 380°C, the specimen was sintered in Ar at 1100°C without a mold for 1 h. The linear shrinkage ratio of sintered specimens ranged from 2.5% to 14.2% and increased with decreasing particle size. While the linear shrinkage ratios of Groups 3, 4, and 5 were approximately 2%, Group 1 showed the highest shrinkage of all. The bending strength ranged from 106 to 428 MPa and was influenced by porosity: Groups 1 and 2 presented low porosity and correspondingly higher strength. The shear bond strength ranged from 32 to 100 MPa and was also particle-size dependent. A decrease in porosity increased the linear shrinkage ratio and bending strength. The shrinkage and mechanical strength required for prostheses were dependent on the particle size and shape of the titanium powders. These findings suggest that this production method can be applied to prosthetic frameworks through appropriate material design.
Dependence of Raman Spectral Intensity on Crystal Size in Organic Nano Energetics.
Patel, Rajen B; Stepanov, Victor; Qiu, Hongwei
2016-08-01
Raman spectra of various nitramine energetic compounds were investigated as a function of crystal size in the nanoscale regime. In the case of 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20), there was a linear relationship between Raman spectral intensity and crystal size. Notably, the Raman modes between 120 cm(-1) and 220 cm(-1) were especially affected and, at the smallest crystal size, were completely eliminated. The Raman spectral intensity of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), like that of CL-20, depended linearly on crystal size. The Raman spectral intensity of 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), however, was not observably changed by crystal size. A non-nitramine explosive compound, 2,4,6-triamino-1,3,5-trinitrobenzene (TATB), was also investigated; its spectral intensity was also found to correlate linearly with crystal size, although substantially less so than that of HMX and CL-20. To explain the observed trends, it is hypothesized that disordered molecular arrangement originating at the crystal surface may be responsible. In particular, it appears that the thickness of the disordered surface layer depends on molecular characteristics, including size and conformational flexibility. As the mean crystal size decreases, the volume fraction of disordered molecules within a specimen increases, consequently weakening the Raman intensity. These results could be of practical benefit, allowing facile monitoring of crystal size during manufacturing, and could lead to deeper insights into the general structure of crystal surfaces.
Axial diffusivity of the corona radiata correlated with ventricular size in adult hydrocephalus.
Cauley, Keith A; Cataltepe, Oguz
2014-07-01
Hydrocephalus causes changes in the diffusion-tensor properties of periventricular white matter. Understanding the nature of these changes may aid in the diagnosis and treatment planning of this relatively common neurologic condition. Because ventricular size is a common measure of the severity of hydrocephalus, we hypothesized that a quantitative correlation could be made between the ventricular size and diffusion-tensor changes in the periventricular corona radiata. In this article, we investigated this relationship in adult patients with hydrocephalus and in healthy adult subjects. Diffusion-tensor imaging metrics of the corona radiata were correlated with ventricular size in 14 adult patients with acute hydrocephalus, 16 patients with long-standing hydrocephalus, and 48 consecutive healthy adult subjects. Regression analysis was performed to investigate the relationship between ventricular size and the diffusion-tensor metrics of the corona radiata. Subject age was analyzed as a covariable. There is a linear correlation between fractional anisotropy of the corona radiata and ventricular size in acute hydrocephalus (r = 0.784, p < 0.001), with positive correlation with axial diffusivity (r = 0.636, p = 0.014) and negative correlation with radial diffusivity (r = 0.668, p = 0.009). In healthy subjects, axial diffusion in the periventricular corona radiata is more strongly correlated with ventricular size than with patient age (r = 0.466, p < 0.001, compared with r = 0.058, p = 0.269). Axial diffusivity of the corona radiata is linearly correlated with ventricular size in healthy adults and in patients with hydrocephalus. Radial diffusivity of the corona radiata decreases linearly with ventricular size in acute hydrocephalus but is not significantly correlated with ventricular size in healthy subjects or in patients with long-standing hydrocephalus.
NASA Astrophysics Data System (ADS)
Jung, Moonjung; Kim, Dong-Hee
2017-12-01
We investigate the first-order transition in the spin-1 two-dimensional Blume-Capel model on square lattices by revisiting the transfer-matrix method. With strip widths increased up to 18 sites, we construct a detailed phase coexistence curve that shows excellent quantitative agreement with recent advanced Monte Carlo results. In the deep first-order region, we observe exponential system-size scaling of the spectral gap of the transfer matrix, from which a linearly increasing interfacial tension is deduced with decreasing temperature. We find that the first-order signature at low temperatures is strongly pronounced, with much-suppressed finite-size influence, in the examined thermodynamic properties of entropy, non-zero spin population, and specific heat. The jump at the transition becomes increasingly sharp deeper in the first-order region, in contrast to Wang-Landau results, where finite-size smoothing becomes more severe at lower temperatures.
Design of measuring system for wire diameter based on sub-pixel edge detection algorithm
NASA Astrophysics Data System (ADS)
Chen, Yudong; Zhou, Wang
2016-09-01
The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but its accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy, but increases the cost and manufacturing difficulty. In this paper, after a comparative analysis of several sub-pixel edge detection algorithms, a polynomial fitting method is applied for data processing in the wire-diameter measuring system to improve the measuring accuracy and noise immunity. In the system structure, the light projection method with an orthogonal structure is used for the optical detection part, which effectively reduces the error caused by line jitter during measurement. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module, which not only drives the dual-channel linear CCD but also completes the sampling, processing, and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without any additional chip. The experimental results show that the sub-pixel edge detection algorithm based on polynomial fitting compensates for the limited pixel size and significantly improves the precision of the wire-diameter measuring system without increasing the hardware complexity of the entire system.
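A minimal sketch of sub-pixel edge localization by polynomial fitting on a 1-D CCD profile: a parabola is fitted to the gradient magnitude around its integer-pixel peak, and the vertex gives the sub-pixel edge position. The synthetic back-lit wire profile and the split into left/right halves are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: sub-pixel edge location on a 1-D CCD scan line by fitting a
# parabola to the gradient magnitude around its integer-pixel peak.
import numpy as np

def subpixel_edge(profile):
    g = np.abs(np.gradient(profile.astype(float)))   # gradient magnitude along the line
    i = int(np.argmax(g))
    if i == 0 or i == len(g) - 1:
        return float(i)
    # Fit y = a x^2 + b x + c through the peak and its two neighbours;
    # the vertex of the parabola gives the sub-pixel offset.
    y0, y1, y2 = g[i - 1], g[i], g[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return i + offset

# Two opposite edges of a back-lit wire shadow give its diameter in pixels, which a
# calibration factor (mm per pixel) would convert to a physical diameter.
line = np.concatenate([np.full(40, 200.0), np.linspace(200, 20, 6), np.full(30, 20.0),
                       np.linspace(20, 200, 6), np.full(40, 200.0)])
left = subpixel_edge(line[:60])
right = 60 + subpixel_edge(line[60:])
print("wire width (pixels):", right - left)
```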
Layout optimization using the homogenization method
NASA Technical Reports Server (NTRS)
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.
HIGH EFFICIENCY STRUCTURAL FLOWTHROUGH ROTOR WITH ACTIVE FLAP CONTROL: VOLUME THREE: MARKET & TEAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuteck, Michael D.; Jackson, Kevin L.; Santos, Richard A.
The Zimitar one-piece rotor primary structure is integrated, so balanced thrust and gravity loads flow through the hub region without transferring out of its composite material. Large inner rotor geometry is used since there is no need to neck down to a blade root region and pitch bearing. Rotor control is provided by a highly redundant, five flap system on each blade, sized so that easily handled standard electric linear actuators are sufficient.
Receiver for solar energy collector having improved aperture aspect
McIntire, William R.
1984-01-01
A secondary concentrator for use in receiver systems for linear focusing primary concentrators is provided with reflector wings at each end. The wings increase the capture of light rays reflected from areas adjacent the rim of a primary concentrator, increasing the apparent aperture size of the absorber as viewed from the rim of the primary concentrator. The length, tilt, and curvature of the wing reflectors can be adjusted to provide an absorber having a desired aperture aspect.
Distributed Arrays and Signal Processing for the TechSat21 Space-Based Radar
2009-04-01
[Front-matter fragments: a figure illustrating the derivation of minimum aperture size and coherent integration time; a figure showing the global coordinate system and satellite-based geometry; acknowledgments to Dr. Robert Mailloux, Dr. Peter Franchi, and Dr. Scott Santarelli.] Summary: The TechSat21 space-based radar concept, suggested by AFRUVS... Linearization for small motions around a reference point in a global circular orbit leads to the Hill equations, derived in 1878, and alternatively named
The Focused Inverse Method for Linear Logic
2006-12-04
design and engineering. Furthermore, it is a denouncement of the versatility of the inverse method if one were simply to abandon it for a radically...is technically unavoidable, but the impetus of design for such provers should be to reduce the size of the database. Our answer is to combine the...or “infinitely often P”. Systems such as Lamport’s TLA are not designed with automation as their primary aim; rather, they are intended to engage
Passler, Peter P; Hofer, Thomas S
2017-02-15
Stochastic dynamics is a widely employed strategy to achieve local thermostatization in molecular dynamics simulation studies; however, it suffers from an inherent violation of momentum conservation. Although this shortcoming has little impact on structural and short-time dynamic properties, it can be shown that dynamics in the long-time limit, such as diffusion, depends strongly on the thermostat setting. Application of the methodically similar dissipative particle dynamics (DPD) provides a simple, effective strategy to retain the advantages of local, stochastic thermostatization while conserving the linear momentum of the system. In this work, the key parameters for employing DPD thermostats in the framework of periodic boundary conditions are investigated, in particular the dependence of the system properties on the size of the DPD region as well as the treatment of forces near the cutoff. Structural and dynamical data for light and heavy water as well as a Lennard-Jones fluid have been compared to simulations executed via stochastic dynamics as well as via the widely employed Nosé-Hoover chain and Berendsen thermostats. It is demonstrated that a small DPD region is sufficient to achieve local thermalization, while at the same time the artifacts in the self-diffusion characteristic of stochastic dynamics are eliminated.
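A minimal sketch of the pairwise DPD thermostat forces that makes the momentum conservation manifest: dissipative and random forces act along the line of centres of each pair and obey the fluctuation-dissipation relation sigma^2 = 2*gamma*kT. The parameter values and the linear weight function are common textbook choices, not the settings used in the paper.

```python
# Hedged sketch of pairwise DPD thermostat forces (dissipative + random), which conserve
# linear momentum because every force on particle i is balanced by -force on particle j.
import numpy as np

def dpd_forces(pos, vel, gamma=4.5, kT=1.0, rc=1.0, dt=0.01, rng=None):
    rng = np.random.default_rng(rng)
    sigma = np.sqrt(2.0 * gamma * kT)          # fluctuation-dissipation relation
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r >= rc or r == 0.0:
                continue
            e = rij / r
            w = 1.0 - r / rc                                            # common weight function
            vij = vel[i] - vel[j]
            fd = -gamma * w * w * (e @ vij) * e                         # dissipative force
            fr = sigma * w * rng.standard_normal() * e / np.sqrt(dt)    # random force
            forces[i] += fd + fr
            forces[j] -= fd + fr               # Newton's third law -> momentum conserved
    return forces

pos = np.random.rand(50, 3) * 5.0
vel = np.random.randn(50, 3)
F = dpd_forces(pos, vel)
print("net thermostat force (should be ~0):", F.sum(axis=0))
```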
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne R.
2009-01-01
This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, that is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril
Temperature-sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear ELP chains (H40L), which lack the foldon domain. The mixed system is soluble at room temperature and, above a transition temperature (Tt), forms swollen micelles with the hydrophobic linear chains hidden inside. The system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. Micelles formed from equal parts of H20F and H40L show a constant apparent hydrodynamic radius of 40-45 nm over the concentration window from 25:25 to 60:60 μM (1:1 ratio). At a fixed 50 μM concentration of H20F, varying the H40L concentration from 5 to 80 μM resulted in linear growth of the hydrodynamic radius from about 11 to about 62 nm, along with a 1000-fold increase in the VH signal. A possible simple model explaining the growth of the swollen micelles is considered. The VH signal may indicate an elongated particle geometry or could result from anisotropic properties of the micelle core. SLS was used to study the molecular weight and radius of gyration of the micelles to help identify the structure and morphology of the mixed micelles and the cause of the VH signal.
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Orthodontics: computer-aided diagnosis and treatment planning
NASA Astrophysics Data System (ADS)
Yi, Yaxing; Li, Zhongke; Wei, Suyuan; Deng, Fanglin; Yao, Sen
2000-10-01
The purpose of this article is to outline our newly developed computer-aided 3D dental cast analysis system based on laser scanning, and its preliminary clinical applications. The system is composed of a scanning device and a personal computer acting as scanning controller and post-processor. The scanning device consists of a laser beam emitter, two sets of linear CCD cameras, and a table rotatable with two degrees of freedom. The rotation is controlled precisely by the personal computer. The dental cast is scanned with a laser beam projected onto it, and triangulation is applied to determine the location of each point. Generation of the 3D graphics of a dental cast takes approximately 40 minutes, and about 170,000 sets of X, Y, Z coordinates are stored per cast. Besides the conventional linear and angular measurements of the dental cast, we are also able to determine the top surface area of each molar. The advantage of this system is that it facilitates the otherwise complicated and time-consuming mock surgery necessary for treatment planning in orthognathic surgery.
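For readers unfamiliar with the triangulation step mentioned above, the short Python sketch below shows the basic laser-camera geometry; the single-camera simplification, the baseline and the angles are illustrative assumptions, not the scanner's actual parameters:

    import math

    # Toy laser-triangulation sketch: with baseline b between laser and camera and
    # angles alpha (laser) and beta (camera) measured from the baseline, the law of
    # sines gives the laser-to-point distance (values are illustrative only).
    def triangulate(b, alpha_deg, beta_deg):
        alpha, beta = math.radians(alpha_deg), math.radians(beta_deg)
        gamma = math.pi - alpha - beta          # angle at the measured point
        r = b * math.sin(beta) / math.sin(gamma)
        return r * math.cos(alpha), r * math.sin(alpha)   # (x, z) in the laser frame

    print(triangulate(100.0, 60.0, 70.0))       # baseline of 100 mm (assumed)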
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for 2-D imaging with a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. The imaging formulation is developed through Fourier integral processing, and the antenna array parameters, including the cross-range resolution, required size, and sampling interval, are also examined. In contrast to the spatially sequential procedure of inverse synthetic aperture radar (ISAR) imaging, which samples the scattered echoes over multiple snapshot illuminations, the proposed method uses a spatially parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided to test the proposed method.
Digital control of magnetic bearings in a cryogenic cooler
NASA Technical Reports Server (NTRS)
Feeley, J.; Law, A.; Lind, F.
1990-01-01
This paper describes the design of a digital control system for control of magnetic bearings used in a spaceborne cryogenic cooler. The cooler was developed by Philips Laboratories for the NASA Goddard Space Flight Center. Six magnetic bearing assemblies are used to levitate the piston, displacer, and counter-balance of the cooler. The piston and displacer are driven by linear motors in accordance with Stirling cycle thermodynamic principles to produce the desired cooling effect. The counter-balance is driven by a third linear motor to cancel motion induced forces that would otherwise be transmitted to the spacecraft. An analog control system is currently used for bearing control. The purpose of this project is to investigate the possibilities for improved performance using digital control. Areas for potential improvement include transient and steady state control characteristics, robustness, reliability, adaptability, alternate control modes, size, weight, and cost. The present control system is targeted for the Intel 80196 microcontroller family. The eventual introduction of application specific integrated circuit (ASIC) technology to this problem may produce a unique and elegant solution both here and in related industrial problems.
Lee, Yi Feng; Jöhnck, Matthias; Frech, Christian
2018-02-21
The efficiencies of mono gradient elution and dual salt-pH gradient elution for separation of six mAb charge and size variants on a preparative-scale ion exchange chromatographic resin are compared in this study. Results showed that opposite dual salt-pH gradient elution with increasing pH gradient and simultaneously decreasing salt gradient is best suited for the separation of these mAb charge and size variants on Eshmuno ® CPX. Besides giving high binding capacity, this type of opposite dual salt-pH gradient also provides better resolved mAb variant peaks and lower conductivity in the elution pools compared to single pH or salt gradients. To have a mechanistic understanding of the differences in mAb variants retention behaviors of mono pH gradient, parallel dual salt-pH gradient, and opposite dual salt-pH gradient, a linear gradient elution model was used. After determining the model parameters using the linear gradient elution model, 2D plots were used to show the pH and salt dependencies of the reciprocals of distribution coefficient, equilibrium constant, and effective ionic capacity of the mAb variants in these gradient elution systems. Comparison of the 2D plots indicated that the advantage of opposite dual salt-pH gradient system with increasing pH gradient and simultaneously decreasing salt gradient is the noncontinuous increased acceleration of protein migration. Furthermore, the fitted model parameters can be used for the prediction and optimization of mAb variants separation in dual salt-pH gradient and step elution. © 2018 American Institute of Chemical Engineers Biotechnol. Prog., 2018. © 2018 American Institute of Chemical Engineers.
Importance of elastic finite-size effects: Neutral defects in ionic compounds
NASA Astrophysics Data System (ADS)
Burr, P. A.; Cooper, M. W. D.
2017-09-01
Small system sizes are a well-known source of error in density functional theory (DFT) calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterized, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behavior and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behavior predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768, and 1500 atoms), and careful analysis determines that elastic, not electrostatic, effects are responsible. The spurious self-interaction was also observed in nonoxide ionic compounds irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g., hybrid functionals) or when modeling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.
NASA Astrophysics Data System (ADS)
Behera, Bhuban Mohan; Thirukumaran, V.; Soni, Aishwaraya; Mishra, Prasanta Kumar; Biswal, Tapas Kumar
2017-06-01
The Gangavalli (Brittle) Shear Zone (Fault) near Attur, Tamil Nadu exposes a nearly 50 km long and 1-3 km wide NNE-SSW trending linear belt of cataclasites and pseudotachylyte developed on charnockites of the Southern Granulite Terrane. The pseudotachylytes, as well as the country rock, bear evidence of conjugate strike-slip shearing along NNE-SSW and NW-SE directions, suggesting N-S compression. The Gangavalli Shear Zone represents the NNE-SSW fault of the conjugate system, along which right-lateral shear has produced seismic slip, giving rise to cataclasites and pseudotachylytes. Pseudotachylytes occur as veins of varying width, extending from hairline fracture fills to tens of meters in length. They carry quartz and feldspar clasts a few mm in diameter; the clast sizes show a modified power-law distribution, with the finer ones (<1000 μm²) deviating from linearity. The clasts show a high degree of roundness (>0.4) due to thermal decrepitation. In places, devitrification has occurred, producing albitic microlites that suggest the temperature of the pseudotachylyte melt was >1000 °C. Thus, the pseudotachylyte veins act as a proxy for understanding the genetic processes involved in the evolution of the shear zone and its tectonic setting.
Magnetic resonance imaging for precise radiotherapy of small laboratory animals.
Frenzel, Thorsten; Kaul, Michael Gerhard; Ernst, Thomas Michael; Salamon, Johannes; Jäckel, Maria; Schumacher, Udo; Krüll, Andreas
2017-03-01
Radiotherapy of small laboratory animals (SLA) is often not applied as precisely as in humans. Here we describe the use of a dedicated SLA magnetic resonance imaging (MRI) scanner for precise tumor volumetry, radiotherapy treatment planning, and diagnostic imaging in order to make the experiments more accurate. Different human cancer cells were injected at the lower trunk of pfp/rag2 and SCID mice to allow for local tumor growth. Data from cross-sectional MRI scans were transferred to a clinical treatment planning system (TPS) for humans. Manual palpation of the tumor size was compared with the tumor size calculated by the TPS and with the tumor weight at necropsy. As a feasibility study, MRI-based treatment plans were calculated for a clinical 6 MV linear accelerator using a micro multileaf collimator (μMLC). In addition, diagnostic MRI scans were used to investigate animals that did poorly clinically during the study. MRI is superior for precise tumor volume definition, whereas manual palpation underestimates tumor size. Cross-sectional MRI allows for treatment planning, so that conformal irradiation of mice with a clinical linear accelerator using a μMLC is in principle feasible. Several internal pathologies were detected during the experiment using the dedicated scanner. MRI is a key technology for precise radiotherapy of SLA, and the scanning protocols provided are suited for tumor volumetry, treatment planning, and diagnostic imaging. Copyright © 2016. Published by Elsevier GmbH.
Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Xiong, Y. Y.; Chen, S. Y., E-mail: sychen531@163.com
2016-04-15
The influence of the parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear simulation and nonlinear simulation. In the linear simulations, the growth rate of edge localized mode (ELM) can be increased by Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations in the linear phase. However, the ELM size is reduced by the parallel shear flow in the beginning of the turbulence phase, which is recognized as the P-B filaments' structure. Then during the turbulence phase, the ELM size is decreased by the shear flow.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function while radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear-approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25 meter resolution.
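A toy numerical sketch of the segmentation idea, approximating the quadratic azimuth phase by a small number of linear pieces (the chirp rate, aperture and segment count are made-up values, not the SEASAT-A design):

    import numpy as np

    Kr = 50.0                          # azimuth chirp rate in 1/s^2 (assumed)
    t = np.linspace(-0.5, 0.5, 2048)   # slow-time samples (assumed)
    phi = np.pi * Kr * t**2            # ideal quadratic focusing phase

    n_seg = 8                          # number of linear segments (assumed)
    edges = np.linspace(t[0], t[-1], n_seg + 1)
    phi_lin = np.empty_like(phi)
    for k in range(n_seg):
        sel = (t >= edges[k]) & (t <= edges[k + 1])
        # chord joining the exact phase values at the segment end points
        slope = np.pi * Kr * (edges[k + 1] + edges[k])
        phi_lin[sel] = np.pi * Kr * edges[k]**2 + slope * (t[sel] - edges[k])

    print("max phase error (rad):", np.abs(phi - phi_lin).max())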
Anomalous finite-size effects in the Battle of the Sexes
NASA Astrophysics Data System (ADS)
Cremer, J.; Reichenbach, T.; Frey, E.
2008-06-01
The Battle of the Sexes describes asymmetric conflicts in the mating behavior of males and females. Males can be philanderers or faithful, while females are either fast or coy, leading to cyclic dynamics. The adjusted replicator equation predicts stable coexistence of all four strategies. In this situation, we consider the effects of fluctuations stemming from a finite population size. We show that they unavoidably lead to the extinction of two strategies in the population. However, the typical time until extinction grows strongly with increasing system size. In the emerging time window, a quasi-stationary probability distribution forms that is anomalously flat in the vicinity of the coexistence state. This behavior originates in a vanishing linear deterministic drift near the fixed point. We provide numerical data as well as an analytical approach to the mean extinction time and the quasi-stationary probability distribution.
Wray, Lindsay S; Rnjak-Kovacina, Jelena; Mandal, Biman B; Schmidt, Daniel F; Gil, Eun Seok; Kaplan, David L
2012-12-01
In the field of tissue engineering and regenerative medicine there is significant unmet need for critically-sized, fully degradable biomaterial scaffold systems with tunable properties for optimizing tissue formation in vitro and tissue regeneration in vivo. To address this need, we have developed a silk-based scaffold platform that has tunable material properties, including localized and bioactive functionalization, degradation rate, and mechanical properties and that provides arrays of linear hollow channels for delivery of oxygen and nutrients throughout the scaffold bulk. The scaffolds can be assembled with dimensions that range from millimeters to centimeters, addressing the need for a critically-sized platform for tissue formation. We demonstrate that the hollow channel arrays support localized and confluent endothelialization. This new platform offers a unique and versatile tool for engineering 'tailored' scaffolds for a range of tissue engineering and regenerative medicine needs. Copyright © 2012 Elsevier Ltd. All rights reserved.
Bridgman growth of large-aperture yttrium calcium oxyborate crystal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Anhua, E-mail: wuanhua@mail.sic.ac.cn; Jiang, Linwen; Qian, Guoxing
2012-09-15
Highlights: YCOB is a novel non-linear optical crystal possessing good thermal, mechanical and nonlinear optical properties; growing large crystals is the key technological question for YCOB; YCOB crystals 3 in. in diameter were grown with a modified vertical Bridgman method; this is a more effective growth method for obtaining large, high-quality YCOB crystals. Abstract: Large-aperture yttrium calcium oxyborate YCa4O(BO3)3 (YCOB) crystals 3 in. in diameter were grown with a modified vertical Bridgman method, and a large crystal plate (63 mm × 68 mm × 20 mm) was harvested for a high-average-power frequency conversion system. Cracking, facet growth and spiral growth can be effectively controlled in the as-grown crystal, and the Bridgman method is more effective than the Czochralski technique in obtaining large, high-quality YCOB crystal plates.
[Detection of linear chromosomes and plasmids among 15 genera in the Actinomycetales].
Ma, Ning; Ma, Wei; Jiang, Chenglin; Fang, Ping; Qin, Zhongjun
2003-10-01
Bacterial chromosomes and plasmids are commonly circular; however, linear chromosomes and plasmids have been discovered in 5 genera of the Actinomycetales. Here, we use pulsed-field gel electrophoresis to study the genomes of 19 species belonging to 15 genera of the Actinomycetales. The chromosomes of all 19 species are linear DNA, and linear plasmids of different sizes and copy numbers are detected in 5 species. This work provides a basis for investigating possible novel functions of linear replicons beyond Streptomyces and also helps in developing Actinomycetales artificial linear chromosomes.
Algebraic multigrid preconditioners for two-phase flow in porous media with phase transitions
NASA Astrophysics Data System (ADS)
Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel
2018-04-01
Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase, multicomponent flow with miscible effects, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handling phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary-variable switching technique, the set of primary variables in this approach is fixed even when there is a phase transition. Not only does this improve the robustness of the nonlinear solver, it also opens up the possibility of using multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system takes on the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. We also show that the strategy is efficient and scales optimally with problem size.
Shade response of a full size TESSERA module
NASA Astrophysics Data System (ADS)
Slooff, Lenneke H.; Carr, Anna J.; de Groot, Koen; Jansen, Mark J.; Okel, Lars; Jonkman, Rudi; Bakker, Jan; de Gier, Bart; Harthoorn, Adriaan
2017-08-01
A full-size TESSERA shade-tolerant module has been made and tested under various shadow conditions. The results show that the dedicated electrical interconnection of cells results in an almost linear response under shading. Furthermore, the voltage at the maximum power point is almost independent of the shadow, which decreases the demand on the voltage range of the inverter. The increased shadow linearity results in a calculated increase in annual yield of about 4% for a typical Dutch house.
NASA Technical Reports Server (NTRS)
James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim
2017-01-01
Recently proposed modifications to ASTM E399 would provide a new size-insensitive approach to analyzing the force-displacement test record. The proposed size-insensitive linear-elastic fracture toughness, KIsi, targets a consistent 0.5mm crack extension for all specimen sizes by using an offset secant that is a function of the specimen ligament length. The KIsi evaluation also removes the Pmax/PQ criterion and increases the allowable specimen deformation. These latter two changes allow more plasticity at the crack tip, prompting the review undertaken in this work to ensure the validity of this new interpretation of the force-displacement curve. This paper provides a brief review of the proposed KIsi methodology and summarizes a finite element study into the effects of increased crack tip plasticity on the method given the allowance for additional specimen deformation. The study has two primary points of investigation: the effect of crack tip plasticity on compliance change in the force-displacement record and the continued validity of linear-elastic fracture mechanics to describe the crack front conditions. The analytical study illustrates that linear-elastic fracture mechanics assumptions remain valid at the increased deformation limit; however, the influence of plasticity on the compliance change in the test record is problematic. A proposed revision to the validity criteria for the KIsi test method is briefly discussed.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and the node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem for large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can easily be solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results for MASs by both overcoming their computational limits and extending the applicable scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity compared with existing approaches.
Herrera, Javier
2009-05-01
While pollinators may in general select for large, morphologically uniform floral phenotypes, drought stress has been proposed as a destabilizing force that may favour small flowers and/or promote floral variation within species. The general validity of this concept was checked by surveying a taxonomically diverse array of 38 insect-pollinated Mediterranean species. The interplay between fresh biomass investment, linear size and percentage corolla allocation was studied. Allometric relationships between traits were investigated by reduced major-axis regression, and qualitative correlates of floral variation explored using general linear-model MANOVA. Across species, flowers were perfectly isometrical with regard to corolla allocation (i.e. larger flowers were just scaled-up versions of smaller ones and vice versa). In contrast, linear size and biomass varied allometrically (i.e. there were shape variations, in addition to variations in size). Most floral variables correlated positively and significantly across species, except corolla allocation, which was largely determined by family membership and floral symmetry. On average, species with bilateral flowers allocated more to the corolla than those with radial flowers. Plant life-form was immaterial to all of the studied traits. Flower linear size variation was in general low among conspecifics (coefficients of variation around 10 %), whereas biomass was in general less uniform (e.g. 200-400 mg in Cistus salvifolius). Significant among-population differences were detected for all major quantitative floral traits. Flower miniaturization can allow an improved use of reproductive resources under prevailingly stressful conditions. The hypothesis that flower size reflects a compromise between pollinator attraction, water requirements and allometric constraints among floral parts is discussed.
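The reduced major-axis fits referred to above use the standard estimator slope = sign(r) × sd(y)/sd(x); a minimal Python sketch, assuming log-transformed trait values and invented data:

    import numpy as np

    def rma_fit(x, y):
        # Reduced (standardized) major-axis regression: slope = sign(r) * sd(y)/sd(x).
        x, y = np.asarray(x, float), np.asarray(y, float)
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
        intercept = y.mean() - slope * x.mean()
        return slope, intercept

    # Isometry in log-log space corresponds to a slope of 1 (data invented).
    print(rma_fit(np.log([1, 2, 4, 8]), np.log([2.0, 4.1, 7.8, 16.3])))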
NASA Astrophysics Data System (ADS)
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-01
We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
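The reduction of the l1-norm objective to a linear program uses the standard split of each modification into positive and negative parts; the sketch below shows that recast on a made-up two-constraint toy problem (the sensitivity matrix and relief targets are illustrative, not the Polish grid model):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, -2.0, 0.5],      # linearized flow sensitivities (assumed)
                  [0.0,  1.0, 1.0]])
    b = np.array([-1.0, -0.5])           # required flow relief (assumed)

    m, n = A.shape
    c = np.ones(2 * n)                   # minimize sum(u) + sum(v) = ||dx||_1 with dx = u - v
    A_ub = np.hstack([A, -A])            # A (u - v) <= b
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(0, None)] * (2 * n))
    dx = res.x[:n] - res.x[n:]
    print("line modifications:", dx)     # typically sparse, as noted above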
The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio).
Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi
2018-01-01
Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.
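For context, a one-way repeatability ICC of the kind quoted above can be computed from repeated trials as in the sketch below (the data and the ICC(1) form are illustrative; the study's own estimator may differ):

    import numpy as np

    # Rows: individual fish, columns: repeated maximal swim-speed trials (invented, cm/s).
    trials = np.array([[38.1, 40.2, 39.0],
                       [45.3, 44.0, 46.1],
                       [33.2, 35.0, 34.1],
                       [41.0, 42.5, 40.7]])

    n, k = trials.shape
    grand = trials.mean()
    ms_between = k * ((trials.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((trials - trials.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    print("ICC(1):", round(icc, 2))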
NASA Astrophysics Data System (ADS)
Uesaka, M.; Demachi, K.; Fujiwara, T.; Dobashi, K.; Fujisawa, H.; Chhatkuli, R. B.; Tsuda, A.; Tanaka, S.; Matsumura, Y.; Otsuki, S.; Kusano, J.; Yamamoto, M.; Nakamura, N.; Tanabe, E.; Koyama, K.; Yoshida, M.; Fujimori, R.; Yasui, A.
2015-06-01
We are developing compact electron linear accelerators (hereafter linacs) with a high X-band RF (radio frequency) frequency (9.3 GHz, wavelength 32.3 mm) and applying them to medicine and non-destructive testing. In particular, portable 950 keV and 3.95 MeV linac X-ray sources have been developed for on-site transmission testing at several industrial plants and civil infrastructures, including bridges. A 6 MeV linac has been made for pinpoint X-ray dynamic tracking cancer therapy. The length of the accelerating tube is ∼600 mm. The electron beam size at the X-ray target is less than 1 mm, and the X-ray spot size at the cancer is less than 3 mm. Several hardware and software components are under development for dynamic tracking therapy of moving lung cancer. Moreover, as an ultimately compact linac, we are designing and manufacturing a ∼1 MeV laser dielectric linac driven by a Yb fiber laser (283 THz, wavelength 1.06 μm). Since the wavelength is 1.06 μm, the length of one accelerating structure is tens of μm and the electron beam size is sub-micrometer. Since the sizes of a cell and its nucleus are about 10 and 1 μm, respectively, we plan to use this "on-chip" linac for radiation-induced DNA damage/repair analysis. We envision a system in which the DNA in a cell nucleus is hit by a ∼1 μm electron or X-ray beam and its repair by proteins and enzymes is observed in live cells in situ.
Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image
NASA Astrophysics Data System (ADS)
Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren
2012-01-01
The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments are made to test the proposed method.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
A prototype fully polarimetric 160-GHz bistatic ISAR compact radar range
NASA Astrophysics Data System (ADS)
Beaudoin, C. J.; Horgan, T.; DeMartinis, G.; Coulombe, M. J.; Goyette, T.; Gatesman, A. J.; Nixon, William E.
2017-05-01
We present a prototype bistatic compact radar range operating at 160 GHz and capable of collecting fully polarimetric radar cross-section and electromagnetic scattering measurements in a true far-field facility. The bistatic ISAR system incorporates two 90-inch focal length, 27-inch-diameter diamond-turned mirrors fed by 160 GHz transmit and receive horns to establish the compact range. The prototype radar range, with its modest-sized quiet zone, serves as a precursor to a fully developed compact radar range incorporating a larger quiet zone capable of collecting X-band bistatic RCS data and 3D imagery using 1/16th-scale objects. The millimeter-wave transmitter provides 20 GHz of swept bandwidth in a single linear (horizontal or vertical) polarization, while the millimeter-wave receiver, which is sensitive to linear horizontal and vertical polarization, possesses a 7 dB noise figure. We present the design of the compact radar range and report on test results collected to validate the system's performance.
NASA Astrophysics Data System (ADS)
Kothavale, Shantaram; Katariya, Santosh; Sekar, Nagaiyan
2018-01-01
Rigid pyrazino-phenanthroline based donor-π-acceptor-π-auxiliary acceptor type compounds have been studied for their linear and non-linear optical properties. The non-linear optical (NLO) behavior of these dyes was studied by calculating the static α, β and γ values using solvatochromic as well as computational methods. The results obtained by the solvatochromic method are correlated theoretically with Density Functional Theory (DFT) using the B3LYP/6-31G(d), CAM-B3LYP/6-31G(d), B3LYP/6-31++G(d,p) and CAM-B3LYP/6-31++G(d,p) methods. The results reveal that, among the four computational methods, CAM-B3LYP/6-31++G(d,p) performs well for the calculation of the linear polarizability (α) and the first-order hyperpolarizability (β), while CAM-B3LYP/6-31G(d,p) performs well for the calculation of the second-order hyperpolarizability (γ). Overall, the two-photon absorption (TPA) depends on the variation of molecular structure with increasing complexity and molecular weight, which implies that both the number of branches and the size of the π-framework are important factors for the molecular TPA in this chromophoric system. Generalized Mulliken-Hush (GMH) analysis is performed to study the effective charge transfer from donor to acceptor.
Performance Analysis of Local Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
Tong, Xin T.
2018-03-01
The ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of the EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; and (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will be accurate in the long run even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted in the operation of the LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are validated by numerical implementations of the LEnKF on a simple stochastic turbulence model in two dynamical regimes.
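A minimal sketch of the localization idea analyzed in the paper: each state component is updated with a stochastic EnKF using only the observations inside a hard cutoff radius (the dimensions, noise level and identity observation operator are illustrative assumptions, not the paper's setup):

    import numpy as np

    rng = np.random.default_rng(0)
    d, K, radius, R = 40, 20, 3, 0.5        # state dim, ensemble size, local radius, obs variance (assumed)
    ens = rng.normal(size=(d, K))           # forecast ensemble (toy)
    y = rng.normal(size=d)                  # one observation per component (toy, H = identity)

    xa = np.empty_like(ens)
    for i in range(d):
        loc = np.arange(max(0, i - radius), min(d, i + radius + 1))   # local window around component i
        Xf = ens[loc] - ens[loc].mean(axis=1, keepdims=True)
        P = Xf @ Xf.T / (K - 1)                                       # local sample covariance
        gain = P @ np.linalg.inv(P + R * np.eye(len(loc)))            # local Kalman gain
        pert = y[loc, None] + rng.normal(scale=R ** 0.5, size=(len(loc), K))
        xa[i] = (ens[loc] + gain @ (pert - ens[loc]))[np.searchsorted(loc, i)]

    print("analysis mean, first five components:", xa.mean(axis=1)[:5])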
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stiebel-Kalish, Hadas, E-mail: kalishhadas@gmail.com; Sackler School of Medicine, Tel Aviv University, Tel Aviv; Reich, Ehud
Purpose: Meningiomas threatening the anterior visual pathways (AVPs) and not amenable for surgery are currently treated with multisession stereotactic radiotherapy. Stereotactic radiotherapy is available with a number of devices. The most ubiquitous include the gamma knife, CyberKnife, tomotherapy, and isocentric linear accelerator systems. The purpose of our study was to describe a case series of AVP meningiomas treated with linear accelerator fractionated stereotactic radiotherapy (FSRT) using the multiple, noncoplanar, dynamic conformal rotation paradigm and to compare the success and complication rates with those reported for other techniques. Patients and Methods: We included all patients with AVP meningiomas followed up at our neuro-ophthalmology unit for a minimum of 12 months after FSRT. We compared the details of the neuro-ophthalmologic examinations and tumor size before and after FSRT and at the end of follow-up. Results: Of 87 patients with AVP meningiomas, 17 had been referred for FSRT. Of the 17 patients, 16 completed >12 months of follow-up (mean 39). Of the 16 patients, 11 had undergone surgery before FSRT and 5 had undergone FSRT as first-line management. Tumor control was achieved in 14 of the 16 patients, with three meningiomas shrinking in size after RT. Two meningiomas progressed, one in an area that was outside the radiation field. The visual function had improved in 6 or stabilized in 8 of the 16 patients (88%) and worsened in 2 (12%). Conclusions: Linear accelerator fractionated RT using the multiple noncoplanar dynamic rotation conformal paradigm can be offered to patients with meningiomas that threaten the anterior visual pathways as an adjunct to surgery or as first-line treatment, with results comparable to those reported for other stereotactic RT techniques.
Adsorption of polypropylene from dilute solutions on a zeolite column packing.
Macko, Tibor; Pasch, Harald; Denayer, Joeri F
2005-01-01
Faujasite type zeolite CBV-780 was tested as adsorbent for isotactic polypropylene by liquid chromatography. When cyclohexane, cyclohexanol, n-decanol, n-dodecanol, diphenylmethane, or methylcyclohexane was used as mobile phase, polypropylene was fully or partially retained within the column packing. This is the first series of sorbent-solvent systems to show a pronounced retention of isotactic polypropylene. According to the hydrodynamic volumes of polypropylene in solution, macromolecules of polypropylene should be fully excluded from the pore volume of the sorbent. Sizes of polypropylene macromolecules in linear conformations, however, correlate with the pore size of the column packing used. It is presumed that the polypropylene chains partially penetrate into the pores and are retained due to the high adsorption potential in the narrow pores.
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
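As a point of reference for the LQG comparison above, the sketch below computes a discrete-time LQR feedback gain by iterating the Riccati recursion; the system matrices are toy values, not the aquifer model, and a full LQG controller would pair this gain with a Kalman filter:

    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 0.95]])   # toy dynamics (assumed)
    B = np.array([[0.0], [0.1]])              # toy control input matrix (assumed)
    Q = np.eye(2)                             # state cost
    R = np.array([[0.01]])                    # control cost

    P = Q.copy()
    for _ in range(500):                      # iterate the Riccati recursion to a fixed point
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)

    print("LQR gain K:", K)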
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Chengshan
2017-10-01
The effect of minor-loop size on the magnetic stiffness has received little attention in experimental and theoretical studies of high-temperature superconductor (HTS) magnetic levitation systems. In this work, we numerically investigate the average magnetic stiffness obtained for minor-loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in the zero-field-cooling and field-cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. Compared with the average magnetic stiffness obtained for any single minor-loop traverse, these approximate values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained with the usual minor-loop traverses (1 or 2 mm) are presented as ratios of the approximate values to the average stiffness for different moving processes and the two typical cooling conditions. The results show that most of the average magnetic stiffness values are strongly influenced by the minor-loop size, which indicates that the magnetic stiffness obtained from a single minor-loop traverse Δz or Δx of, for example, 1 or 2 mm can generally involve a large deviation.
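The linear-extrapolation step used above to approximate the zero-traverse stiffness can be written compactly as below; the traverse sizes and stiffness values are invented for illustration:

    import numpy as np

    traverse = np.array([0.1, 0.5, 1.0, 2.0])       # minor-loop traverse, mm (invented)
    stiffness = np.array([28.5, 26.9, 24.8, 20.6])  # average stiffness, N/mm (invented)

    slope, intercept = np.polyfit(traverse, stiffness, 1)   # straight-line fit
    print("approximate zero-traverse stiffness:", intercept, "N/mm")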
Self-organizing biochemical cycle in dynamic feedback with soil structure
NASA Astrophysics Data System (ADS)
Vasilyeva, Nadezda; Vladimirov, Artem; Smirnov, Alexander; Matveev, Sergey; Tyrtyshnikov, Evgeniy; Yudina, Anna; Milanovskiy, Evgeniy; Shein, Evgeniy
2016-04-01
In the present study we perform a bifurcation analysis of a physically based mathematical model of self-organized structures in soil (Vasilyeva et al., 2015). The state variables in this model include microbial biomass, two organic matter types, oxygen, carbon dioxide, water content and capillary pore size. According to our previous experimental studies, the affinity of organic matter for water is an important property affecting soil structure; organic matter wettability was therefore taken as the principal distinction between organic matter types in this model. The model considers generally known biological feedbacks with soil physical properties, formulated as a system of parabolic-type non-linear partial differential equations with elements of discrete modeling for water and pore formation. The model shows complex behavior, involving the emergence of irregular temporal and spatial auto-oscillations from initially homogeneous distributions. The energy of the external impact on the system was defined by a constant oxygen level on the boundary. Non-linear, as opposed to linear, oxygen diffusion makes it possible to model the formation of anaerobic micro-zones (an organic matter conservation mechanism). For the current study we also introduced competition among three types of microorganisms differing in their mobility/feeding (diffusive, moving and fungal growth). The strongly non-linear system was solved and parameterized with a time-optimized algorithm combining explicit and implicit (matrix form of the Thomas algorithm) methods, with the time step chosen according to accuracy control. The integral flux of the CO2 state variable was used as a macroscopic parameter describing the system as a whole, and validation was carried out on temperature series of the moisture dependence of soil heterotrophic respiration data. Thus, soil heterotrophic respiration can be modeled naturally as the integral result of complex dynamics at the microscale, arising from biological processes formulated as sums of products of state variables, with no need to introduce saturation functions, such as Michaelis-Menten-type kinetics, into the model. The analyzed dynamic soil model is being further developed to describe soil structure formation and its effect on organic matter decomposition at the macro-scale, in order to predict changes under external perturbations. To link the micro- and macro-scales we additionally model the soil particle aggregation process. The results of the local biochemical soil organic matter cycle serve as inputs to the aggregation process, while the output aggregate size distributions define physical properties in the soil profile; these in turn serve as dynamic parameters in the local biochemical cycles. The additional formulation is a system of non-linear ordinary differential equations, including Smoluchowski-type equations for aggregation and reaction kinetics equations for coagulation/adsorption/adhesion processes. Vasilyeva N.A., Ingtem J.G., Silaev D.A. Nonlinear dynamical model of microbial growth in soil medium. Computational Mathematics and Modeling, vol. 49, pp. 31-44, 2015 (in Russian); an English version is expected in vol. 27, issue 2, 2016.
NASA Astrophysics Data System (ADS)
Nelson, Robert M.; Boryta, Mark D.; Hapke, Bruce W.; Manatt, Kenneth S.; Shkuratov, Yuriy; Psarev, V.; Vandervoort, Kurt; Kroner, Desire; Nebedum, Adaze; Vides, Christina L.; Quiñones, John
2018-03-01
We present reflectance and polarization phase curve measurements of highly reflective planetary regolith analogues having physical characteristics expected on atmosphereless solar system bodies (ASSBs) such as eucritic asteroids or icy satellites. We used a goniometric photopolarimeter (GPP) of novel design to study thirteen well-sorted particle size fractions of aluminum oxide (Al2O3). The sample suite included particle sizes larger than, approximately equal to, and smaller than the wavelength of the incident monochromatic radiation (λ = 635 nm). The observed phase angle α spanned 0.056° < α < 15°. These Al2O3 particulate samples have very high normal reflectance (> ∼95%). The incident radiation has a very high probability of being multiply scattered before being backscattered toward the incident direction or ultimately absorbed. The five smallest particle sizes exhibited extremely high void space (> ∼95%). The reflectance phase curves for all particle size fractions show a pronounced non-linear reflectance increase with decreasing phase angle at α < ∼3°. Our earlier studies suggest that the cause of this non-linear reflectance increase is constructive interference of counter-propagating waves in the medium by coherent backscattering (CB), a photonic analog of Anderson localization of electrons in solid state media. The polarization phase curves for particle size fractions with size parameter (particle radius/wavelength) r/λ < ∼1 show that the linear polarization rapidly decreases as α increases from 0°; it reaches a minimum near α = ∼2°. Longward of ∼2°, the negative polarization decreases as phase angle increases, becoming positive between 12° and at least 15° (probably ∼20°), depending on particle size. For size parameters r/λ > ∼1 we detect no polarization. This polarization behavior is distinct from that observed in low albedo solar system objects such as the Moon and asteroids and for absorbing materials in the laboratory. We suggest this behavior arises because backscattered photons have a high probability of having interacted with two or more particles, thus giving rise to the CB process. These results may explain the unusual negative polarization behavior observed near small phase angles, reported for several decades, on highly reflective ASSBs such as the asteroids 44 Nysa and 64 Angelina and the Galilean satellites Io, Europa and Ganymede. Our results suggest these ASSB regoliths scatter electromagnetic radiation as if they were extremely fine grained, with void space > ∼95% and grain sizes of order ≤ λ. This portends consequences for efforts to deploy landers on highly reflective ASSBs such as Europa. These results are also germane to the field of terrestrial geo-engineering, particularly to suggestions that Earth's radiation balance can be modified by injecting Al2O3 particulates into the stratosphere, thereby offsetting the effect of anthropogenic greenhouse gas emissions. The GPP used in this study was modified from our previous design so that the sample is presented with light that is alternatingly polarized perpendicular to and parallel to the scattering plane. There are no analyzers before the detector. This optical arrangement, following the Helmholtz Reciprocity Principle (HRP), produces a physically identical result to the traditional laboratory reflectance polarization measurements in which the incident light is unpolarized and the analyzers are placed before the detector. The results are identical in samples measured by both methods. We believe that ours is the first experimental demonstration of the HRP for polarized light, first proposed by Helmholtz in 1856.
NASA Astrophysics Data System (ADS)
Seneviratne, Sashieka
With the growth of smart phones, the demand for more broadband, data-centric technologies is being driven higher. As mobile operators worldwide plan and deploy 4th-generation (4G) networks such as LTE to support the relentless growth in mobile data demand, the need for strategically positioned pico-sized cellular base stations known as 'pico-cells' is gaining traction. In addition to having to fit a transceiver into a much more compact footprint, pico-cells must still face the technical challenges presented by the new 4G systems, such as reduced power consumption and linear amplification of the signals. The RF power amplifier (PA) that amplifies the output signals of 4G pico-cell systems faces challenges in minimizing size and achieving high average efficiencies and broader bandwidths while maintaining linearity and operating at higher frequencies. 4G standards such as LTE use non-constant-envelope modulation techniques with high peak-to-average ratios, so power amplifiers in such applications are forced to operate backed off from saturation. Therefore, in order to reduce power consumption, a high-efficiency PA design that can maintain its efficiency over a wider range of radio frequency signals is required. The primary focus of this thesis is to enhance the efficiency of a compact RF amplifier suitable for a 4G pico-cell base station. To this end, an integrated two-way Doherty amplifier design in a compact 10 mm x 11.5 mm monolithic microwave integrated circuit using GaN device technology is presented. Using non-linear GaN HFET models, the design achieves high efficiencies of over 50% in both the back-off and peak power regions without compromising the stringent linearity requirements of 4G LTE standards. This represents a 17% increase in power-added efficiency at 6 dB back-off from peak power compared to conventional Class AB amplifier performance. Performance optimization techniques to select between high-efficiency and high-linearity operation are also presented. Overall, this thesis demonstrates the feasibility of an integrated HFET Doherty amplifier for LTE band 7, which spans the frequencies from 2.62 to 2.69 GHz. The realization of the layout and various issues related to the PA design are discussed and addressed.
Nature of bonding and cooperativity in linear DMSO clusters: A DFT, AIM and NCI analysis.
Venkataramanan, Natarajan Sathiyamoorthy; Suvitha, Ambigapathy
2018-05-01
This study aims to cast light on the nature of interactions and cooperativity that exist in linear dimethyl sulfoxide (DMSO) clusters using dispersion-corrected density functional theory. In the linear clusters, DMSO molecules in the middle are bound more strongly than those at the terminal positions. Plots of the total binding energy of the clusters versus cluster size and of mean polarizability versus cluster size show excellent linearity, demonstrating the presence of a cooperativity effect. The computed incremental binding energy of the clusters remains nearly constant, implying that DMSO addition at the terminal site can continue to form an infinite chain. In the linear clusters, two σ-holes on the terminal DMSO molecules were found, and their values increase with increasing cluster size. The quantum theory of atoms in molecules topography shows the existence of hydrogen-bond and SO⋯S type interactions in the linear tetramer and larger clusters; in the dimer and trimer, SO⋯OS type interactions exist. In the 2D non-covalent interaction plots, additional peaks were observed in the regions that contribute to the stabilization of the clusters; these split in the trimer and intensify in the larger clusters. In the trimer and larger clusters, in addition to the blue patches due to hydrogen bonds, additional light-blue patches were seen between the hydrogen atoms of the methyl groups and the sulphur atom of the nearby DMSO molecule. Thus, in addition to the strong H-bonds, strong electrostatic interactions between the sulphur atom and the methyl hydrogens exist in the linear clusters. Copyright © 2018 Elsevier Inc. All rights reserved.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
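To make the copy-and-permute construction concrete, the sketch below lifts a small protograph base matrix into a quasi-cyclic parity-check matrix using circulant permutations. The base matrix, lift size, and shift values are arbitrary placeholders for illustration, not the code ensembles proposed in the paper (which also allow parallel edges and punctured nodes).

```python
# Illustrative copy-and-permute (lifting) of a 0/1 protograph base matrix into
# a quasi-cyclic parity-check matrix; parallel edges are omitted for brevity.
import numpy as np

def lift_protograph(base, Z, shifts):
    """base: small 0/1 matrix; Z: lift size; shifts[(i, j)]: circulant shift."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                # each protograph edge becomes a Z x Z circulant permutation
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[(i, j)], axis=1)
    return H

# Example: a 2 x 3 base graph lifted by Z = 4 (placeholder values).
base = np.array([[1, 1, 1],
                 [1, 1, 0]])
H = lift_protograph(base, 4, {(0, 0): 0, (0, 1): 1, (0, 2): 2, (1, 0): 3, (1, 1): 1})
```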
Development of a prototype sensor system for ultra-high-speed LDA-PIV
NASA Astrophysics Data System (ADS)
Griffiths, Jennifer A.; Royle, Gary J.; Bohndiek, Sarah E.; Turchetta, Renato; Chen, Daoyi
2008-04-01
Laser Doppler Anemometry (LDA) and Particle Image Velocimetry (PIV) are commonly used in the analysis of particulates in fluid flows. Despite the successes of these techniques, current instrumentation has placed limitations on the size and shape of the particles undergoing measurement, thus restricting the available data for the many industrial processes now utilising nano/micro particles. Data for spherical and irregularly shaped particles down to the order of 0.1 µm is now urgently required. Therefore, an ultra-fast LDA-PIV system is being constructed for the acquisition of this data. A key component of this instrument is the PIV optical detection system. Both the size and speed of the particles under investigation place challenging constraints on the system specifications: magnification is required within the system in order to visualise particles of the size of interest, but this restricts the corresponding field of view in a linearly inverse manner. Thus, for several images of a single particle in a fast fluid flow to be obtained, the image capture rate and sensitivity of the system must be sufficiently high. In order to fulfil the instrumentation criteria, the optical detection system chosen is a high-speed, lensed, digital imaging system based on state-of-the-art CMOS technology - the 'Vanilla' sensor developed by the UK based MI3 consortium. This novel Active Pixel Sensor is capable of high frame rates and sparse readout. When coupled with an image intensifier, it will have single photon detection capabilities. An FPGA based DAQ will allow real-time operation with minimal data transfer.
NASA Astrophysics Data System (ADS)
Åkerman, Björn
1997-04-01
DNA orientation measurements by linear dichroism (LD) spectroscopy and single molecule imaging by fluorescence microscopy are used to investigate the effect of DNA size (71-740 kilo base pairs) and field strength E (1-5.9 V/cm) on the conformation dynamics during the field-driven threading of DNA molecules through a set of parallel pores in agarose gels, with average pore radii between 380 Å and 1400 Å. Locally relaxed but globally oriented DNA molecules are subjected to a perpendicular field, and the observed LD time profile is compared with a recent theory for the threading [D. Long and J.-L. Viovy, Phys. Rev. E 53, 803 (1996)] which assumes the same initial state. As predicted, the DNA is driven by the ends into a U-form, leading to an overshoot in the LD. The overshoot time scales as E^(-1.2 to -1.4) as predicted, but grows more slowly with DNA size than the predicted linear dependence. For long molecules, loops form initially in the threading process but are finally consumed by the ends, and the process of transfer of DNA segments from the loops to the arms of the U leads to a shoulder in the LD as predicted. The critical size below which loops do not form (as indicated by the LD shoulder being absent) is between 71 and 105 kbp (0.5% agarose, 5.9 V/cm), and considerably larger than predicted, because in the initial state the DNA molecules are housed in gel cavities with effective pore sizes about four times larger than the average pore size. From the data, the separation of DNA by exploiting the threading dynamics in pulsed fields [D. Long et al., CR Acad. Sci. Paris, Ser. IIb 321, 239 (1995)] is shown to be feasible in principle in an agarose-based system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuteck, Michael D.; Jackson, Kevin L.; Santos, Richard A.
The Zimitar one-piece rotor primary structure is integrated, so balanced thrust and gravity loads flow through the hub region without transferring out of its composite material. Large inner rotor geometry is used since there is no need to neck down to a blade root region and pitch bearing. Rotor control is provided by a highly redundant, five flap system on each blade, sized so that easily handled standard electric linear actuators are sufficient.
2014-06-01
… the antenna beamwidth and R is the range distance. The antenna beamwidth is determined by the real aperture size and is given as θ ≈ λ / L_antenna, where λ is the wavelength and L_antenna is the physical length of the radar antenna; therefore, the cross-range resolution for a real-aperture radar is ΔCR ≈ R θ ≈ λ R / L_antenna. A value of 50 meters for cross-range resolution is rather high and signifies poor resolution. Under these conditions, obtaining …
Study of cavitating inducer instabilities
NASA Technical Reports Server (NTRS)
Young, W. E.; Murphy, R.; Reddecliff, J. M.
1972-01-01
An analytic and experimental investigation into the causes and mechanisms of cavitating inducer instabilities was conducted. Hydrofoil cascade tests were performed, during which cavity sizes were measured. The measured data were used, along with inducer data and potential flow predictions, to refine an analysis for the prediction of inducer blade suction surface cavitation cavity volume. Cavity volume predictions were incorporated into a linearized system model, and instability predictions for an inducer water test loop were generated. Inducer tests were conducted and instability predictions correlated favorably with measured instability data.
Effects of shock on hypersonic boundary layer stability
NASA Astrophysics Data System (ADS)
Pinna, F.; Rambaud, P.
2013-06-01
The design of hypersonic vehicles requires the estimate of the laminar to turbulent transition location for an accurate sizing of the thermal protection system. Linear stability theory is a fast scientific way to study the problem. Recent improvements in computational capabilities allow computing the flow around a full vehicle instead of using only simplified boundary layer equations. In this paper, the effect of the shock is studied on a mean flow provided by steady Computational Fluid Dynamics (CFD) computations and simplified boundary layer calculations.
Linear-sweep voltammetry of a soluble redox couple in a cylindrical electrode
NASA Technical Reports Server (NTRS)
Weidner, John W.
1991-01-01
An approach is described for using the linear sweep voltammetry (LSV) technique to study the kinetics of flooded porous electrodes by treating a porous electrode as a collection of identical, non-interconnected cylindrical pores filled with electrolyte. This assumption makes it possible to study the behavior of this ideal electrode as that of a single pore. Alternatively, for an electrode of a given pore-size distribution, it is possible to predict the performance of the different pore sizes and then combine the performance values.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III; Crossley, Edward A.; Miller, James B.; Jones, Irby W.; Davis, C. Calvin; Behun, Vaughn D.; Goodrich, Lewis R., Sr.
1995-01-01
Linear proof-mass actuator (LPMA) is friction-driven linear mass actuator capable of applying controlled force to structure in outer space to damp out oscillations. Capable of high accelerations and provides smooth, bidirectional travel of mass. Design eliminates gears and belts. LPMA strong enough to be used terrestrially where linear actuators needed to excite or damp out oscillations. High flexibility designed into LPMA by varying size of motors, mass, and length of stroke, and by modifying control software.
NASA Technical Reports Server (NTRS)
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large, tightly coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
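For readers unfamiliar with the multicolor point-implicit scheme, the sketch below shows one relaxation sweep in plain Python under an assumed block-sparse data layout (the names diag, cols, offblk, and colors are placeholders, not the solver's actual data structures). Rows sharing a color have no mutual couplings, which is the independence a GPU kernel would exploit in parallel.

```python
# Minimal sketch (not the authors' GPU code): one multicolor point-implicit
# sweep for a block-sparse system A x = b. Assumed layout: diag[i] is the
# dense (bs x bs) diagonal block of row i; cols[i] and offblk[i] hold the
# column indices and dense off-diagonal blocks of row i.
import numpy as np

def multicolor_point_implicit_sweep(diag, cols, offblk, b, x, colors):
    for color in colors:              # colors are processed sequentially
        for i in color:               # rows of one color are independent work
            r = b[i].copy()
            for j, blk in zip(cols[i], offblk[i]):
                r -= blk @ x[j]       # gather latest neighbor contributions
            x[i] = np.linalg.solve(diag[i], r)  # small dense block solve
    return x
```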
Xu, Suxin; Chen, Jiangang; Wang, Bijia; Yang, Yiqi
2015-11-15
Two predictive models are presented for the adsorption affinities and diffusion coefficients of disperse dyes in a polylactic acid matrix. A quantitative structure-sorption behavior relationship would not only provide insights into the sorption process, but also enable rational engineering for desired properties. The thermodynamic and kinetic parameters for three disperse dyes were measured. The predictive model for adsorption affinity was based on two linear relationships derived by interpreting the experimental measurements with molecular structural parameters and the compensation effect: ΔH° vs. dye size and ΔS° vs. ΔH°. Similarly, the predictive model for diffusion coefficient was based on two derived linear relationships: activation energy of diffusion vs. dye size and logarithm of the pre-exponential factor vs. activation energy of diffusion. The only required parameters for both models are temperature and the solvent-accessible surface area of the dye molecule. These two predictive models were validated by testing the adsorption and diffusion properties of new disperse dyes. The models offer fairly good predictive ability. The linkage between structural parameters of disperse dyes and sorption behaviors might be generalized and extended to other similar polymer-penetrant systems. Copyright © 2015 Elsevier Inc. All rights reserved.
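The chained linear relationships described above can be written out as a short calculation. The sketch below follows that structure with placeholder fitted coefficients (a1, b1, a2, b2, c1, d1, c2, d2 are illustrative, not the paper's regression values).

```python
# Minimal sketch of the two-step predictive chains described above; all fitted
# coefficients are placeholders, not the paper's values.
import math

R = 8.314  # gas constant, J/(mol K)

def affinity(sasa, T, a1, b1, a2, b2):
    dH = a1 + b1 * sasa          # ΔH° vs. dye size (solvent-accessible surface area)
    dS = a2 + b2 * dH            # ΔS° vs. ΔH° (compensation effect)
    dG = dH - T * dS             # standard affinity is −ΔG° = −(ΔH° − TΔS°)
    return -dG

def diffusion_coefficient(sasa, T, c1, d1, c2, d2):
    Ea = c1 + d1 * sasa          # activation energy of diffusion vs. dye size
    lnD0 = c2 + d2 * Ea          # log pre-exponential factor vs. activation energy
    return math.exp(lnD0) * math.exp(-Ea / (R * T))  # Arrhenius form
```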
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Deaths from stomach cancer remain common, and early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images were calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps were used for dimensionality reduction of the features: the high-dimensional features were reduced to lower dimensions using these methods. Artificial neural network (ANN) and Random Forest (RF) classifiers were then used to classify the stomach cancer images with these new, lower-dimensional feature sets. New computer-aided systems were developed to measure the effect of feature dimensionality by obtaining features of different dimensions with the dimensionality reduction methods. When all the methods developed are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
Effect of Cross-Linking on Free Volume Properties of PEG Based Thiol-Ene Networks
NASA Astrophysics Data System (ADS)
Ramakrishnan, Ramesh; Vasagar, Vivek; Nazarenko, Sergei
According to the Fox and Loshaek theory, in elastomeric networks free volume decreases linearly as cross-link density increases. The aim of this study is to determine whether poly(ethylene glycol) (PEG) based multicomponent thiol-ene elastomeric networks exhibit this model behavior. Networks with a broad cross-link density range were prepared by changing the ratio of the trithiol crosslinker to PEG dithiol and then UV curing with PEG diene while maintaining 1:1 thiol:ene stoichiometry. Pressure-volume-temperature (PVT) data for the networks were generated from high-pressure dilatometry experiments and fit using the Simha-Somcynsky equation-of-state analysis to obtain the fractional free volume of the networks. Using Positron Annihilation Lifetime Spectroscopy (PALS), the average free-volume hole size of the networks was also quantified. The fractional free volume and the average free-volume hole size showed a linear change with cross-link density, confirming that the Fox and Loshaek theory can be applied to this multicomponent system. Gas diffusivities of the networks showed a good correlation with free volume. A free-volume-based model was developed to describe the gas diffusivity trends as a function of cross-link density.
Somatotyping using 3D anthropometry: a cluster analysis.
Olds, Tim; Daniell, Nathan; Petkov, John; David Stewart, Arthur
2013-01-01
Somatotyping is the quantification of human body shape, independent of body size. Hitherto, somatotyping (including the most popular method, the Heath-Carter system) has been based on subjective visual ratings, sometimes supported by surface anthropometry. This study used data derived from three-dimensional (3D) whole-body scans as inputs for cluster analysis to objectively derive clusters of similar body shapes. Twenty-nine dimensions normalised for body size were measured on a purposive sample of 301 adults aged 17-56 years who had been scanned using a Vitus Smart laser scanner. K-means cluster analysis with v-fold cross-validation was used to determine shape clusters. Three male and three female clusters emerged, and were visualised using those scans closest to the cluster centroid and a caricature defined by doubling the difference between the average scan and the cluster centroid. The male clusters were decidedly endomorphic (high fatness), ectomorphic (high linearity), and endo-mesomorphic (a mixture of fatness and muscularity). The female clusters were clearly endomorphic, ectomorphic, and ecto-mesomorphic (a mixture of linearity and muscularity). An objective shape quantification procedure combining 3D scanning and cluster analysis yielded shape clusters strikingly similar to traditional somatotyping.
NASA Astrophysics Data System (ADS)
Zheng, Yuan-Fang
A three-dimensional, five link biped system is established. Newton-Euler state space formulation is employed to derive the equations of the system. The constraint forces involved in the equations can be eliminated by projection onto a smaller state space system for deriving advanced control laws. A model-referenced adaptive control scheme is developed to control the system. Digital computer simulations of point to point movement are carried out to show that the model-referenced adaptive control increases the dynamic range and speeds up the response of the system in comparison with linear and nonlinear feedback control. Further, the implementation of the controller is simpler. Impact effects of biped contact with the environment are modeled and studied. The instant velocity change at the moment of impact is derived as a function of the biped state and contact speed. The effects of impact on the state, as well as constraints are studied in biped landing on heels and toes simultaneously or on toes first. Rate and nonlinear position feedback are employed for stability of the biped after the impact. The complex structure of the foot is properly modeled. A spring and dashpot pair is suggested to represent the action of plantar fascia during the impact. This action prevents the arch of the foot from collapsing. A mathematical model of the skeletal muscle is discussed. A direct relationship between the stimulus rate and the active state is established. A piecewise linear relation between the length of the contractile element and the isometric force is considered. Hill's characteristic equation is maintained for determining the actual output force during different shortening velocities. A physical threshold model is proposed for recruitment which encompasses the size principle, its manifestations and exceptions to the size principle. Finally the role of spindle feedback in stability of the model is demonstrated by study of a pair of muscles.
Non-Linear Dynamics of Saturn's Rings
NASA Astrophysics Data System (ADS)
Esposito, L. W.
2015-12-01
Non-linear processes can explain why Saturn's rings are so active and dynamic. Some of this non-linearity is captured in a simple Predator-Prey Model: Periodic forcing from the moon causes streamline crowding; this damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average: 2-10x is possible. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo structure and spectroscopy: Cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect: this gives the halo morphology; this requires energetic collisions (v ≈ 10 m/sec, with throw distances about 200 km, implying objects of scale R ≈ 20 km); we propose 'straw', as observed by Cassini cameras. Transform to Duffing Eqn: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn's rings creates both high velocity dispersion and large aggregates at these distances, explaining both small and large particles observed there. This confirms the triple architecture of ring particles: a broad size distribution of particles; these aggregate into temporary rubble piles; coated by a regolith of dust. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn's rings a chaotic non-linear driven system?
Snow mapping and land use studies in Switzerland
NASA Technical Reports Server (NTRS)
Haefner, H. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A system was developed for operational snow and land use mapping, based on a supervised classification method using various classification algorithms and representation of the results in maplike form on color film with a photomation system. Land use mapping, under European conditions, was achieved with a stepwise linear discriminant analysis using additional ratio variables. On fall images, signatures of built-up areas were often not separable from wetlands. Two different methods were tested to correlate the size of settlements and the population, with an accuracy for the densely populated Swiss Plateau between +2% and -12%.
Nucleation and growth in one dimension
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Krapivsky, P. L.
1996-10-01
We study statistical properties of the Kolmogorov-Avrami-Johnson-Mehl nucleation-and-growth model in one dimension. We obtain exact results for the gap density as well as the island distribution. When all nucleation events occur simultaneously, we show that the island distribution has discontinuous derivatives on the rays x_n(t) = nt, n = 1, 2, 3, ... . We introduce an accelerated growth mechanism with growth rate increasing linearly with the island size. We solve for the interisland gap density and show that the system reaches complete coverage in a finite time and that the near-critical behavior of the system is robust; i.e., it is insensitive to details such as the nucleation mechanism.
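As a point of reference for the simultaneous-nucleation case, the uncovered (gap) fraction follows from a standard KJMA-style argument; the short statement below is a textbook illustration consistent with the setup above, not the paper's exact gap-density result.

```latex
% For seeds placed at Poisson-distributed points of density \rho at t = 0,
% each island growing at speed v in both directions, a point remains
% uncovered at time t iff no seed lies within a distance vt of it, so
S(t) = \Pr(\text{uncovered}) = e^{-2 \rho v t},
% i.e. the gap fraction decays exponentially in one dimension.
```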
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
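The core idea, that the state transition matrix advances the solution exactly regardless of step size, is easiest to see for the special case of a linear time-invariant system. The sketch below illustrates only that special case, with scipy's matrix exponential standing in for the transition matrix; it is not the paper's variable-order method.

```python
# Minimal sketch of state-transition-matrix propagation for an LTI system
# x' = A x: the exact update over any step h is x(t+h) = expm(A h) @ x(t),
# so accuracy is not limited by the step size.
import numpy as np
from scipy.linalg import expm

def propagate(A, x0, h, n_steps):
    Phi = expm(A * h)                 # state transition matrix over one step
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(Phi @ xs[-1])       # exact for LTI systems, any h
    return np.array(xs)
```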
Schwandt, E F; Wagner, J J; Engle, T E; Bartle, S J; Thomson, D U; Reinhardt, C D
2016-03-01
Crossbred yearling steers (n = 360; 395 ± 33.1 kg initial BW) were used to evaluate the effects of dry-rolled corn (DRC) particle size in diets containing 20% wet distiller's grains plus solubles on feedlot performance, carcass characteristics, and starch digestibility. Steers were used in a randomized complete block design and allocated to 36 pens (9 pens/treatment, with 10 animals/pen). Treatments were coarse DRC (4,882 μm), medium DRC (3,760 μm), fine DRC (2,359 μm), and steam-flaked corn (0.35 kg/L; SFC). Final BW and ADG were not affected by treatment (P > 0.05). Dry matter intake was greater and G:F was lower (P < 0.05) for steers fed DRC vs. steers fed SFC. There was a linear decrease (P < 0.05) in DMI in the final 5 wk on feed with decreasing DRC particle size. Fecal starch decreased (linear, P < 0.01) as DRC particle size decreased. In situ starch disappearance was lower for DRC vs. SFC (P < 0.05) and linearly increased (P < 0.05) with decreasing particle size at 8 and 24 h. Reducing DRC particle size did not influence growth performance but increased starch digestion and influenced DMI of cattle on finishing diets. No differences (P > 0.10) were observed among treatments for any of the carcass traits measured. Results indicate improved ruminal starch digestibility, reduced fecal starch concentration, and reduced DMI with decreasing DRC particle size in feedlot diets containing 20% wet distiller's grains on a DM basis.
Adsorption of Poly(methyl methacrylate) on Concave Al2O3 Surfaces in Nanoporous Membranes
Nunnery, Grady; Hershkovits, Eli; Tannenbaum, Allen; Tannenbaum, Rina
2009-01-01
The objective of this study was to determine the influence of polymer molecular weight and surface curvature on the adsorption of polymers onto concave surfaces. Poly(methyl methacrylate) (PMMA) of various molecular weights was adsorbed onto porous aluminum oxide membranes having various pore sizes, ranging from 32 to 220 nm. The surface coverage, expressed as repeat units per unit surface area, was observed to vary linearly with molecular weight for molecular weights below ~120 000 g/mol. The coverage was independent of molecular weight above this critical molar mass, as was previously reported for the adsorption of PMMA on convex surfaces. Furthermore, the coverage varied linearly with pore size. A theoretical model was developed to describe curvature-dependent adsorption by considering the density gradient that exists between the surface and the edge of the adsorption layer. According to this model, the density gradient of the adsorbed polymer segments scales inversely with particle size, while the total coverage scales linearly with particle size, in good agreement with experiment. These results show that the details of the adsorption of polymers onto concave surfaces with cylindrical geometries can be used to calculate molecular weight (below a critical molecular weight) if pore size is known. Conversely, pore size can also be determined with similar adsorption experiments. Most significantly, for polymers above a critical molecular weight, the precise molecular weight need not be known in order to determine pore size. Moreover, the adsorption model developed and validated in this work can also be used to predict coverage on surfaces with different geometries. PMID:19415910
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
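For reference, the two crystallographic relations named above have standard textbook forms; the expressions below are those generic statements (with empirical constants a, μ₂, α, n₀), not the coefficients fitted to the granulation data.

```latex
% Aboav-Weaire law: the mean number of sides m_n of the cells neighbouring
% an n-sided cell satisfies
n\, m_n = (6 - a)\, n + 6a + \mu_2 ,
% where a is an empirical constant and \mu_2 is the second moment of the
% side-number distribution. Lewis's law: the mean area of n-sided cells
% grows linearly with n,
\langle A_n \rangle = \alpha \,(n - n_0).
```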
Pakes, D; Boulding, E G
2010-08-01
Empirical estimates of selection gradients caused by predators are common, yet no one has quantified how these estimates vary with predator ontogeny. We used logistic regression to investigate how selection on gastropod shell thickness changed with predator size. Only small and medium purple shore crabs (Hemigrapsus nudus) exerted a linear selection gradient for increased shell-thickness within a single population of the intertidal snail (Littorina subrotundata). The shape of the fitness function for shell thickness was confirmed to be linear for small and medium crabs but was humped for large male crabs, suggesting no directional selection. A second experiment using two prey species to amplify shell thickness differences established that the selection differential on adult snails decreased linearly as crab size increased. We observed differences in size distribution and sex ratios among three natural shore crab populations that may cause spatial and temporal variation in predator-mediated selection on local snail populations.
Firm Size, a Self-Organized Critical Phenomenon: Evidence from the Dynamical Systems Theory
NASA Astrophysics Data System (ADS)
Chandra, Akhilesh
This research draws upon a recent innovation in the dynamical systems literature called the theory of self -organized criticality (SOC) (Bak, Tang, and Wiesenfeld 1988) to develop a computational model of a firm's size by relating its internal and the external sub-systems. As a holistic paradigm, the theory of SOC implies that a firm as a composite system of many degrees of freedom naturally evolves to a critical state in which a minor event starts a chain reaction that can affect either a part or the system as a whole. Thus, the global features of a firm cannot be understood by analyzing its individual parts separately. The causal framework builds upon a constant capital resource to support a volume of production at the existing level of efficiency. The critical size is defined as the production level at which the average product of a firm's factors of production attains its maximum value. The non -linearity is inferred by a change in the nature of relations at the border of criticality, between size and the two performance variables, viz., the operating efficiency and the financial efficiency. The effect of breaching the critical size is examined on the stock price reactions. Consistent with the theory of SOC, it is hypothesized that the temporal response of a firm breaching the level of critical size should behave as a flicker noise (1/f) process. The flicker noise is characterized by correlations extended over a wide range of time scales, indicating some sort of cooperative effect among a firm's degrees of freedom. It is further hypothesized that a firm's size evolves to a spatial structure with scale-invariant, self-similar (fractal) properties. The system is said to be self-organized inasmuch as it naturally evolves to the state of criticality without any detailed specifications of the initial conditions. In this respect, the critical state is an attractor of the firm's dynamics. Another set of hypotheses examines the relations between the size and the performance variables during the sub-critical (below the critical size) and the supra-critical (above the critical size) states. Since the dynamics of any two firms is likely to be different, the analysis is performed individually for each company within the Pharmaceuticals and the Perfume industries. The statistical results of this study provide evidence in support of the hypotheses. The size of a firm is found to be a self-organized critical phenomenon. The presence of 1/f noise and the spatial power-law behavior is taken as an evidence of the firm's size as a self-organized critical phenomenon. (Abstract shortened by UMI.).
NASA Astrophysics Data System (ADS)
Sedukhin, Andrey G.; Poleshchuk, Alexander G.
2018-01-01
A method is proposed for efficient, rotationally symmetric, tight mirror focusing of laser beams that is optimally matched to their thin-film linear-to-radial polarization conversion by a constant near-Brewster angle of incidence of the beams onto a polarizing element. Two optical systems and their modifications are considered that are based on this method and on the use of Toraldo filters. If focusing components of these systems operate in media with refractive indices equal to that of the focal region, they take the form of an axicon and an annular reflector generated by the revolution of an inclined parabola around the optical axis. Vectorial formulas for calculating the diffracted field near the focus of these systems are derived. Also presented are the results of designing a thin-film obliquely illuminated polarizer and a numerical simulation of deep UV laser beams generated by one of the systems and focused in an immersion liquid. The transverse and axial sizes of a needle longitudinally polarized field generated by the system with a simplest phase Toraldo filter were found to be 0.39 λ and 10.5 λ, with λ being the wavelength in the immersion liquid.
The Non-linear Health Consequences of Living in Larger Cities.
Rocha, Luis E C; Thorson, Anna E; Lambiotte, Renaud
2015-10-01
Urbanization promotes economy, mobility, access, and availability of resources, but on the other hand generates higher levels of pollution, violence, crime, and mental distress. The health consequences of the agglomeration of people living close together are not fully understood. In particular, it remains unclear how variations in population size across cities impact the health of the population. We analyze the deviations from linearity of the scaling of several health-related quantities, such as the incidence and mortality of diseases, external causes of death, wellbeing, and health care availability, with respect to the population size of cities in Brazil, Sweden, and the USA. We find that deaths by non-communicable diseases tend to be relatively less common in larger cities, whereas the per capita incidence of infectious diseases is relatively larger for increasing population size. Healthier lifestyle and availability of medical support are disproportionately higher in larger cities. The results are connected with the optimization of human and physical resources and with the non-linear effects of social networks in larger populations. An urban advantage in terms of health is not evident, and using rates as indicators to compare cities with different population sizes may be insufficient.
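The deviations from linearity discussed here are conventionally quantified with the standard urban-scaling form; the expression below is that generic statement, not the exponents estimated in the paper.

```latex
% Urban scaling of a quantity Y with city population N:
Y(N) = Y_0\, N^{\beta},
% \beta > 1 (superlinear): the per-capita rate Y/N rises with city size;
% \beta < 1 (sublinear): it falls; \beta = 1 is the linear, size-neutral case.
```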
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
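One way to read the constant-angular-velocity condition above is through the optical-flow ratio of speed to height; the relation below is our illustrative reading of that statement, not a derivation from the paper.

```latex
% If the camera-measured angular rate of a ground feature is
\omega = \frac{v}{h},
% then regulating \omega to a constant value forces
v = \omega\, h ,
% so the speed decays linearly with height and both approach zero together,
% which is the smooth-landing behaviour described in the abstract.
```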
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kiwoo; Natsui, Takuya; Hirai, Shunsuke
2011-06-01
One of the advantages of applying an X-band linear accelerator (Linac) is the compact size of the whole system, which opens the possibility of on-site systems such as a customs inspection system in an airport. As an X-ray source, we have developed an X-band Linac and achieved a maximum X-ray energy of 950 keV using a low-power magnetron (250 kW) with a 2 μs pulse length. The whole size of the Linac system is 1x1x1 m³, which is realized by introducing the X-band system. In addition, we have designed a two-fold scintillator detector based on the dual-energy X-ray concept. The Monte Carlo N-Particle transport (MCNP) code was used to design the sensor part with two scintillators, CsI and CdWO4. The customs inspection system is composed of two components, the 950 keV X-band Linac and the two-fold scintillator, and they are operated simulating a real situation such as baggage checking in an airport. We present the results of experiments performed with metal samples, iron and lead, as targets under several conditions.
Herrera, Javier
2009-01-01
Background and Aims: While pollinators may in general select for large, morphologically uniform floral phenotypes, drought stress has been proposed as a destabilizing force that may favour small flowers and/or promote floral variation within species. Methods: The general validity of this concept was checked by surveying a taxonomically diverse array of 38 insect-pollinated Mediterranean species. The interplay between fresh biomass investment, linear size and percentage corolla allocation was studied. Allometric relationships between traits were investigated by reduced major-axis regression, and qualitative correlates of floral variation explored using general linear-model MANOVA. Key Results: Across species, flowers were perfectly isometrical with regard to corolla allocation (i.e. larger flowers were just scaled-up versions of smaller ones and vice versa). In contrast, linear size and biomass varied allometrically (i.e. there were shape variations, in addition to variations in size). Most floral variables correlated positively and significantly across species, except corolla allocation, which was largely determined by family membership and floral symmetry. On average, species with bilateral flowers allocated more to the corolla than those with radial flowers. Plant life-form was immaterial to all of the studied traits. Flower linear size variation was in general low among conspecifics (coefficients of variation around 10 %), whereas biomass was in general less uniform (e.g. 200–400 mg in Cistus salvifolius). Significant among-population differences were detected for all major quantitative floral traits. Conclusions: Flower miniaturization can allow an improved use of reproductive resources under prevailingly stressful conditions. The hypothesis that flower size reflects a compromise between pollinator attraction, water requirements and allometric constraints among floral parts is discussed. PMID:19258340
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes in real time the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
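To illustrate the dip-finding step in the intensity profile, here is a minimal host-side sketch in Python; it is a stand-in for the camera's analog and digital circuitry, and the optional background normalization is our own assumption.

```python
# Minimal sketch (not the smart camera's on-chip logic): locate the shock
# shadowgraph as the deepest dip in a one-dimensional line-scan profile.
import numpy as np

def shock_pixel(profile, background=None):
    p = np.asarray(profile, dtype=float)
    if background is not None:
        p = p / np.asarray(background, dtype=float)  # flatten laser-sheet illumination
    return int(np.argmin(p))  # index of the darkest pixel = shadowgraph location
```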
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
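The two weighting schemes recommended above are the standard fixed-effects formulas used in common GWAS meta-analysis tools; the sketch below restates them generically and assumes each study supplies its own Z-score or log-odds effect estimate with standard error.

```python
# Standard fixed-effects meta-analysis formulas for the two schemes named in
# the abstract; inputs are per-study summary statistics (illustrative only).
import numpy as np

def sample_size_meta(z, n_cases, n_controls):
    # effective sample size per study, then weighted combination of Z-scores
    n_eff = 4.0 / (1.0 / np.asarray(n_cases) + 1.0 / np.asarray(n_controls))
    w = np.sqrt(n_eff)
    return float(np.sum(w * np.asarray(z)) / np.sqrt(np.sum(w ** 2)))

def inverse_variance_meta(beta_logodds, se_logodds):
    # inverse-variance weighting of allelic effects on the log-odds scale
    w = 1.0 / np.asarray(se_logodds) ** 2
    beta = float(np.sum(w * np.asarray(beta_logodds)) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return beta, se
```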
Garamszegi, Sara; Franzosa, Eric A.; Xia, Yu
2013-01-01
A central challenge in host-pathogen systems biology is the elucidation of general, systems-level principles that distinguish host-pathogen interactions from within-host interactions. Current analyses of host-pathogen and within-host protein-protein interaction networks are largely limited by their resolution, treating proteins as nodes and interactions as edges. Here, we construct a domain-resolved map of human-virus and within-human protein-protein interaction networks by annotating protein interactions with high-coverage, high-accuracy, domain-centric interaction mechanisms: (1) domain-domain interactions, in which a domain in one protein binds to a domain in a second protein, and (2) domain-motif interactions, in which a domain in one protein binds to a short, linear peptide motif in a second protein. Analysis of these domain-resolved networks reveals, for the first time, significant mechanistic differences between virus-human and within-human interactions at the resolution of single domains. While human proteins tend to compete with each other for domain binding sites by means of sequence similarity, viral proteins tend to compete with human proteins for domain binding sites in the absence of sequence similarity. Independent of their previously established preference for targeting human protein hubs, viral proteins also preferentially target human proteins containing linear motif-binding domains. Compared to human proteins, viral proteins participate in more domain-motif interactions, target more unique linear motif-binding domains per residue, and contain more unique linear motifs per residue. Together, these results suggest that viruses surmount genome size constraints by convergently evolving multiple short linear motifs in order to effectively mimic, hijack, and manipulate complex host processes for their survival. Our domain-resolved analyses reveal unique signatures of pleiotropy, economy, and convergent evolution in viral-host interactions that are otherwise hidden in the traditional binary network, highlighting the power and necessity of high-resolution approaches in host-pathogen systems biology. PMID:24339775
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
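To make the rule structure concrete, the sketch below evaluates a toy first-order Takagi-Sugeno-Kang model with Gaussian memberships and linear consequents; the membership shapes and rule parameters are illustrative placeholders, not the rainfall-runoff models developed in the study.

```python
# Toy first-order TSK model: each rule pairs a fuzzy membership on the input
# with a linear consequent; the output is the membership-weighted average.
import numpy as np

def gaussmf(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(x, rules):
    # rules: list of (centre, spread, (a, b)) with consequent y = a * x + b
    w = np.array([gaussmf(x, c, s) for c, s, _ in rules])
    y = np.array([a * x + b for _, _, (a, b) in rules])
    return float(np.sum(w * y) / np.sum(w))

# Example with two placeholder rules.
rules = [(0.0, 1.0, (0.2, 0.1)), (5.0, 2.0, (0.8, -0.5))]
print(tsk_predict(2.5, rules))
```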
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyuan; Zhang, Hui; Cao, Dongpu; Fang, Zongde
2015-06-01
An integrated motor-transmission (IMT) powertrain system with a directly coupled motor and gearbox is a good choice for electric commercial vehicles (e.g., pure electric buses) due to its potential for motor size reduction and energy efficiency improvement. However, the controller design for powertrain oscillation damping becomes challenging due to the elimination of damping components. On the other hand, as the controller area network (CAN) is commonly adopted in modern vehicle systems, the network-induced time-varying delays caused by bandwidth limitation will further lead to powertrain vibration or even destabilize the powertrain control system. Therefore, in this paper, a robust energy-to-peak controller is proposed for the IMT powertrain system to address the oscillation damping problem and also attenuate the external disturbance. The control law adopted here is based on a multivariable PI control, which ensures the applicability and performance of the proposed controller in engineering practice. With the linearized delay uncertainties characterized by polytopic inclusions, a delay-free closed-loop augmented system is established for the IMT powertrain system under a discrete-time framework. The proposed controller design problem is then converted to a static output feedback (SOF) controller design problem, where the feedback control gains are obtained by solving a set of linear matrix inequalities (LMIs). The effectiveness as well as robustness of the proposed controller is demonstrated by comparing its performance against that of a conventional PI controller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanyi, James A.; Nitzling, Kevin D.; Lodwick, Camille J.
2011-02-15
Purpose: Assessment of the fundamental dosimetric characteristics of a novel gated fiber-optic-coupled dosimetry system for clinical electron beam irradiation. Methods: The response of the fiber-optic-coupled dosimetry system to clinical electron beams, with a nominal energy range of 6-20 MeV, was evaluated for reproducibility, linearity, and output dependence on dose rate, dose per pulse, energy, and field size. The validity of the detector system's response was assessed in correspondence with a reference ionization chamber. Results: The fiber-optic-coupled dosimetry system showed little dependence on dose rate variations (coefficient of variation ±0.37%) and dose per pulse changes (within 0.54% of reference chamber measurements). The reproducibility of the system was ±0.55% for dose fractions of ∼100 cGy. Energy dependence was within ±1.67% relative to the reference ionization chamber for the 6-20 MeV nominal electron beam energy range. The system exhibited excellent linear response (R² = 1.000) compared to the reference ionization chamber in the dose range of 1-1000 cGy. The output factors were within ±0.54% of the corresponding reference ionization chamber measurements. Conclusions: The dosimetric properties of the gated fiber-optic-coupled dosimetry system compare favorably to the corresponding reference ionization chamber measurements and show considerable potential for applications in clinical electron beam radiotherapy.
Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers
NASA Astrophysics Data System (ADS)
Theodorakis, P. E.; Avgeropoulos, A.; Freire, J. J.; Kosmas, M.; Vlahos, C.
2007-11-01
The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as the gold standard, errors in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Errors in ablation contact force map gap measurements were significantly smaller than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.
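The contact-force gap measurement described above amounts to interpolating the recorded force along the intended line and measuring the span below the 5 g cut-off; the sketch below is our illustration of that idea (tag positions and forces are assumed inputs), not the study's analysis code.

```python
# Rough sketch: estimate gap length from ablation-tag positions (mm along the
# intended line, sorted ascending) and their contact-force readings (grams).
import numpy as np

def gap_length_mm(tag_pos_mm, tag_force_g, cutoff_g=5.0, step_mm=0.1):
    s = np.arange(min(tag_pos_mm), max(tag_pos_mm), step_mm)
    f = np.interp(s, tag_pos_mm, tag_force_g)     # linear interpolation between tags
    return float(np.sum(f < cutoff_g) * step_mm)  # total span below the cut-off
```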
Efficient Construction of Mesostate Networks from Molecular Dynamics Trajectories.
Vitalis, Andreas; Caflisch, Amedeo
2012-03-13
The coarse-graining of data from molecular simulations yields conformational space networks that may be used for predicting the system's long time scale behavior, to discover structural pathways connecting free energy basins in the system, or simply to represent accessible phase space regions of interest and their connectivities in a two-dimensional plot. In this contribution, we present a tree-based algorithm to partition conformations of biomolecules into sets of similar microstates, i.e., to coarse-grain trajectory data into mesostates. On account of utilizing an architecture similar to that of established tree-based algorithms, the proposed scheme operates in near-linear time with data set size. We derive expressions needed for the fast evaluation of mesostate properties and distances when employing typical choices for measures of similarity between microstates. Using both a pedagogically useful and a real-world application, the algorithm is shown to be robust with respect to tree height, which in addition to mesostate threshold size is the main adjustable parameter. It is demonstrated that the derived mesostate networks can preserve information regarding the free energy basins and barriers by which the system is characterized.
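The following is a simplified stand-in for the coarse-graining step, assuming synthetic 3-D frame features and an arbitrary distance threshold: a single-pass leader-style clustering that groups similar frames into mesostates. The paper's tree-based scheme adds a hierarchy on top of this kind of threshold grouping to reach near-linear scaling; this sketch shows only the basic partitioning idea.

```python
# Simplified stand-in for coarse-graining trajectory frames into mesostates:
# single-pass "leader" clustering with a distance threshold (not the authors' tree algorithm).
import numpy as np

def leader_cluster(frames: np.ndarray, threshold: float):
    """Assign each frame to the first mesostate centre within `threshold`, else open a new one."""
    centres, labels = [], []
    for x in frames:
        d = [np.linalg.norm(x - c) for c in centres]
        if d and min(d) < threshold:
            labels.append(int(np.argmin(d)))
        else:
            centres.append(x)
            labels.append(len(centres) - 1)
    return np.array(labels), np.array(centres)

rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(0, 0.3, (500, 3)), rng.normal(2, 0.3, (500, 3))])
labels, centres = leader_cluster(frames, threshold=1.0)
print(len(centres), "mesostates found")
```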
Berman, Marcie; Bozsik, Frances; Shook, Robin P; Meissen-Sebelius, Emily; Markenson, Deborah; Summar, Shelly; DeWit, Emily; Carlson, Jordan A
2018-02-22
Policy, systems, and environmental approaches are recommended for preventing childhood obesity. The objective of our study was to evaluate the Healthy Lifestyles Initiative, which aimed to strengthen community capacity for policy, systems, and environmental approaches to healthy eating and active living among children and families. The Healthy Lifestyles Initiative was developed through a collaborative process and facilitated by community organizers at a local children's hospital. The initiative supported 218 partners from 170 community organizations through training, action planning, coalition support, one-on-one support, and the dissemination of materials and sharing of resources. Eighty initiative partners completed a brief online survey on implementation strategies engaged in, materials used, and policy, systems, and environmental activities implemented. In accordance with frameworks for implementation science, we assessed associations among the constructs by using linear regression to identify whether and which of the implementation strategies were associated with materials used and implementation of policy, systems, and environmental activities targeted by the initiative. Each implementation strategy was engaged in by 30% to 35% of the 80 survey respondents. The most frequently used materials were educational handouts (76.3%) and posters (66.3%). The most frequently implemented activities were developing or continuing partnerships (57.5%) and reviewing organizational wellness policies (46.3%). Completing an action plan and the number of implementation strategies engaged in were positively associated with implementation of targeted activities (action plan, effect size = 0.82; number of strategies, effect size = 0.51) and materials use (action plan, effect size = 0.59; number of strategies, effect size = 0.52). Materials use was positively associated with implementation of targeted activities (effect size = 0.35). Community-capacity-building efforts can be effective in supporting community organizations to engage in policy, systems, and environmental activities for healthy eating and active living. Multiple implementation strategies are likely needed, particularly strategies that involve a high level of engagement, such as training community organizations and working with them on structured action plans.
Comparative analysis of linear motor geometries for Stirling coolers
NASA Astrophysics Data System (ADS)
R, Rajesh V.; Kuzhiveli, Biju T.
2017-12-01
Compared to rotary-motor-driven Stirling coolers, linear motor coolers are characterized by small volume and long life, making them more suitable for space and military applications. The motor design and operational characteristics have a direct effect on the operation of the cooler. From this perspective, there is ample scope for understanding the behaviour of linear motor systems. In the present work, the authors compare and analyze different moving magnet linear motor geometries to finalize the most favourable one for Stirling coolers. The required axial force in the linear motors is generated by the interaction between the magnetic field of a current-carrying coil and that of a permanent magnet. The compact size, the commercial availability of permanent magnets, and the low weight requirement of the system are among the main design constraints. The finite element analysis performed using Maxwell software serves as the basic tool to analyze the magnet movement, flux distribution in the air gap and the magnetic saturation levels on the core. A number of material combinations are investigated for the core before finalizing the design. The effect of varying the core geometry on the flux produced in the air gap is also analyzed. The electromagnetic analysis of the motor indicates that the permanent magnet height ought to be chosen such that, in the balanced position, the magnet is under the influence of the electromagnetic field of the current-carrying coil as well as the outer core. This is necessary so that sufficient thrust force is developed by efficient utilisation of the air-gap flux density. Also, the outer core ends need to be designed to leave enough room for the magnet movement under operating conditions.
Fiber Optic Sensor Embedment Study for Multi-Parameter Strain Sensing
Drissi-Habti, Monssef; Raman, Venkadesh; Khadour, Aghiad; Timorian, Safiullah
2017-01-01
Fiber optic sensors (FOSs) are commonly used in large-scale structure monitoring systems for their small size, immunity to noise, and low electrical risk. Embedded FOSs, however, can lead to micro-damage in composite structures; the damage generation threshold depends on the coating material of the FOSs and their diameter. In addition, embedded FOSs are aligned parallel to the reinforcement fibers to avoid creating micro-damage. This linear positioning of distributed FOSs fails to provide all strain parameters. We suggest a novel sinusoidal sensor positioning to overcome this issue. This method provides multi-parameter strains over a large surface area. The effectiveness of sinusoidal FOS positioning over linear FOS positioning is studied by both numerical and experimental methods. This study demonstrates the advantages of the sinusoidal positioning method for FOSs bonded in composite materials. PMID:28333117
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kishimoto, S., E-mail: syunji.kishimoto@kek.jp; Haruki, R.; Mitsui, T.
We developed a silicon avalanche photodiode (Si-APD) linear-array detector for time-resolved X-ray scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and a depletion depth of 10 μm. The multichannel scaler counted X-ray pulses over 2046 continuous time bins of 0.5 ns each and recorded a time spectrum at each pixel with a time resolution of 0.5 ns (FWHM) for 8.0 keV X-rays. Using the detector system, we were able to observe X-ray peaks clearly separated by a 2 ns interval in the multibunch-mode operation of the Photon Factory ring. Small-angle X-ray scattering from a polyvinylidene fluoride film was also observed with the detector.
An implicit-iterative solution of the heat conduction equation with a radiation boundary condition
NASA Technical Reports Server (NTRS)
Williams, S. D.; Curry, D. M.
1977-01-01
For the problem of predicting one-dimensional heat transfer between conducting and radiating media by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary condition. The results of these methods in predicting the surface temperature of the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique caused the surface temperature to be bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
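A minimal one-dimensional sketch of the "linearized boundary condition" variant, assuming made-up material properties and heating rate rather than the shuttle TPS model: interior nodes are treated fully implicitly, and the surface radiation term T⁴ is linearized about the previous time step.

```python
# 1-D implicit conduction with a linearized radiation boundary condition.
# All material properties, heating rate, and grid values are illustrative.
import numpy as np

k, rho, cp = 0.05, 200.0, 1000.0          # W/m-K, kg/m^3, J/kg-K (hypothetical)
eps, sigma = 0.8, 5.670e-8                # emissivity, Stefan-Boltzmann constant
q_in = 5.0e4                              # incident surface heating, W/m^2
L, N, dt, steps = 0.05, 51, 0.5, 600      # slab thickness, nodes, time step, step count
dx = L / (N - 1)
alpha = k / (rho * cp)
T = np.full(N, 300.0)                     # initial temperature, K

for _ in range(steps):
    A = np.zeros((N, N))
    b = np.zeros(N)
    # Surface node 0: rho*cp*(dx/2)*dT/dt = q_in - eps*sigma*T0^4 + k*(T1 - T0)/dx,
    # with T0^4 linearized as T_old^4 + 4*T_old^3*(T0_new - T_old).
    c = rho * cp * dx / (2.0 * dt)
    A[0, 0] = c + k / dx + 4.0 * eps * sigma * T[0] ** 3
    A[0, 1] = -k / dx
    b[0] = c * T[0] + q_in + 3.0 * eps * sigma * T[0] ** 4
    # Interior nodes: fully implicit conduction.
    r = alpha * dt / dx ** 2
    for i in range(1, N - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
        b[i] = T[i]
    # Back face: held at the initial temperature.
    A[N - 1, N - 1] = 1.0
    b[N - 1] = 300.0
    T = np.linalg.solve(A, b)

print(f"Surface temperature after {steps * dt:.0f} s: {T[0]:.1f} K")
```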
Extended linear detection range for optical tweezers using image-plane detection scheme
NASA Astrophysics Data System (ADS)
Hajizadeh, Faegheh; Masoumeh Mousavi, S.; Khaksar, Zeinab S.; Reihani, S. Nader S.
2014-10-01
The ability to measure forces in the pico- and femtonewton range using optical tweezers (OT) relies strongly on the sensitivity of the detection system. We show that the commonly used back-focal-plane detection method provides a linear response range which is shorter than that of the restoring force of OT for large beads. This limits the measurable force range of OT. We show, both theoretically and experimentally, that utilizing a second laser beam for tracking could solve the problem. We also propose a new detection scheme in which the quadrant photodiode is positioned at the plane optically conjugate to the object plane (image plane). This method solves the problem without the need for a second laser beam for the bead sizes that are commonly used in force spectroscopy applications of OT, such as biopolymer stretching.
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Meisch, A. J.
1972-01-01
Data for the system n-pentane/n-heptane on porous Chromosorb-102 adsorbent were obtained at 150, 175, and 200 C for mixtures containing zero to 100% n-pentane by weight. Prior results showing limitations on superposition of pure component data to predict multicomponent chromatograms were verified. The thermodynamic parameter MR0 was found to be a linear function of sample composition. A nonporous adsorbent failed to separate the system because of large input sample dispersions. A proposed automated data processing scheme involving magnetic tape recording of the detector signals and processing by a minicomputer was rejected because of resolution limitations of the available a/d converters. Preliminary data on porosity and pore size distributions of the adsorbents were obtained.
Number of Transition Frequencies of a System Containing an Arbitrary Number of Gas Bubbles
NASA Astrophysics Data System (ADS)
Ida, Masato
2002-05-01
“Transition frequencies” of a system containing an arbitrary number of bubbles levitated in a liquid are discussed. Using a linear coupled-oscillator model, it is shown theoretically that when the system contains N bubbles of different sizes, each bubble has 2N - 1 (or less) transition frequencies which make the phase difference between an external sound and a bubble’s pulsation π / 2. Furthermore, we discuss a discrepancy appearing between the present result regarding the transition frequencies and existing ones for the resonance frequencies in a two-bubble case, and show that the transition frequency, defined as above, and the resonance frequency have a different physical meaning when N ≥ 2, while they are consistent for N = 1.
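A rough numerical sketch of the coupled-oscillator picture (a simplified stand-in for the paper's model): N bubbles driven at a common frequency are coupled through terms proportional to R0j²/(Dij R0i), and transition frequencies are located where the in-phase part of each bubble's response changes sign (a π/2 phase difference with the drive). Radii, damping values, separations, and the Minnaert-style frequency estimate are all illustrative assumptions.

```python
# Coupled-bubble frequency response: count sign changes of the in-phase response component.
import numpy as np

R0 = np.array([10e-6, 20e-6, 50e-6])            # equilibrium bubble radii, m (assumed)
w0 = 2 * np.pi * 3.26 / R0                      # Minnaert-style estimate f0*R0 ~ 3.26 m/s
delta = 0.1 * w0                                # damping coefficients (assumed)
D = np.array([[np.inf, 1e-3, 2e-3],
              [1e-3, np.inf, 1.5e-3],
              [2e-3, 1.5e-3, np.inf]])          # bubble separations, m (assumed)

def response(w):
    """Complex pulsation amplitudes for a unit-amplitude harmonic drive at frequency w."""
    n = len(R0)
    A = np.zeros((n, n), dtype=complex)
    for i in range(n):
        A[i, i] = -w**2 + w0[i]**2 + 1j * delta[i] * w
        for j in range(n):
            if j != i:
                A[i, j] = -w**2 * R0[j]**2 / (D[i, j] * R0[i])
    b = -1.0 / R0                               # drive term ~ -P/(rho*R0), constants dropped
    return np.linalg.solve(A, b)

ws = np.linspace(0.2 * w0.min(), 2.0 * w0.max(), 20000)
in_phase = np.array([response(w).real for w in ws])
for i in range(len(R0)):
    crossings = int(np.sum(np.diff(np.sign(in_phase[:, i])) != 0))
    print(f"bubble {i}: {crossings} transition frequencies (theory allows up to {2 * len(R0) - 1})")
```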
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel
2018-02-06
Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase multicomponent flow with miscible effects, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handle phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary variable switching technique, the set of primary variables in this approach is fixed even when there is a phase transition. Not only does this improve the robustness of the nonlinear solver, it opens up the possibility to use multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system has the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. In conclusion, we also show that the strategy is efficient and scales optimally with problem size.
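A toy illustration of the complementarity reformulation mentioned above (not the paper's solver or flow model): the conditions 0 ≤ x ⟂ F(x) ≥ 0 are rewritten componentwise with the Fischer-Burmeister function and handed to a generic root finder; a semismooth Newton method and multigrid-friendly linearizations would replace this in a real simulator.

```python
# Fischer-Burmeister reformulation of a tiny nonlinear complementarity problem.
# The linear F below is purely illustrative.
import numpy as np
from scipy.optimize import root

M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])

def F(x):
    return M @ x + q

def fischer_burmeister(x):
    a, b = x, F(x)
    # Zero componentwise iff a >= 0, b >= 0, and a*b = 0.
    return np.sqrt(a**2 + b**2) - a - b

sol = root(fischer_burmeister, x0=np.ones(2), method="hybr")
x = sol.x
print("x =", x, " F(x) =", F(x), " complementarity x*F(x) =", x * F(x))
```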
Durejko, Tomasz; Aniszewska, Justyna; Ziętala, Michał; Antolak-Dudka, Anna; Czujko, Tomasz; Varin, Robert A; Paserin, Vlad
2018-05-18
The water-atomized ATOMET 28, 1001, 4701, and 4801 powders, manufactured by Rio Tinto Metal Powders, were used for additive manufacturing by a laser engineered net shaping (LENS) technique. Their overall morphology was globular and rounded with a size distribution from about 20 to 200 µm. Only the ATOMET 28 powder was characterized by a strong inhomogeneity of particle size and irregular polyhedral shape of powder particles with sharp edges. The powders were pre-sieved to a size distribution from 40 to 150 µm before LENS processing. One particular sample, LENS-fabricated from the ATOMET 28 powder, was characterized by the largest cross-sectional (2D) porosity of 4.2% and bulk porosity of 3.9%, the latter determined by microtomography measurements. In contrast, the cross-sectional porosities of bulk, solid, nearly cubic LENS-fabricated samples from the other ATOMET powders exhibited very low porosities within the range 0.03-0.1%. Unexpectedly, the solid sample LENS-fabricated from the reference, purely spherical Fe 99.8 powder, exhibited a porosity of 1.1%, the second largest after that of the pre-sieved, nonspherical ATOMET 28 powder. Vibrations incorporated mechanically into the LENS powder feeding system substantially improved the flow rate vs. feeding rate dependence, making it completely linear with an excellent coefficient of fit, R² = 0.99. In comparison, the reference powder Fe 99.8 always exhibited a linear dependence of the powder flow rate vs. feeding rate, regardless of vibrations.
Development of a multichannel hyperspectral imaging probe for food property and quality assessment
NASA Astrophysics Data System (ADS)
Huang, Yuping; Lu, Renfu; Chen, Kunjie
2017-05-01
This paper reports on the development, calibration and evaluation of a new multipurpose, multichannel hyperspectral imaging probe for property and quality assessment of food products. The new multichannel probe consists of a 910 μm fiber as a point light source and 30 light receiving fibers of three sizes (i.e., 50 μm, 105 μm and 200 μm) arranged in a special pattern to enhance signal acquisitions over the spatial distances of up to 36 mm. The multichannel probe allows simultaneous acquisition of 30 spatially-resolved reflectance spectra of food samples with either flat or curved surface over the spectral region of 550-1,650 nm. The measured reflectance spectra can be used for estimating the optical scattering and absorption properties of food samples, as well as for assessing the tissues of the samples at different depths. Several calibration procedures that are unique to this probe were carried out; they included linearity calibrations for each channel of the hyperspectral imaging system to ensure consistent linear responses of individual channels, and spectral response calibrations of individual channels for each fiber size group and between the three groups of different size fibers. Finally, applications of this new multichannel probe were demonstrated through the optical property measurement of liquid model samples and tomatoes of different maturity levels. The multichannel probe offers new capabilities for optical property measurement and quality detection of food and agricultural products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yost, Shane R.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2016-08-07
In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSD of time series generated by nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least squares fitting curve calculated using these fundamental periods essentially reproduces the underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
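A small sketch of the PSD step, assuming a synthetic weekly case-count series in place of real measles notifications: estimate the power spectral density with Welch's method and fit the exponential gradient of log PSD versus frequency.

```python
# Estimate a PSD and its exponential gradient for a surrogate case-count series.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
t = np.arange(0, 520)                                   # weekly samples, ~10 years
series = (100 + 80 * np.sin(2 * np.pi * t / 52.0)       # annual cycle
          + 40 * np.sin(2 * np.pi * t / 104.0)          # biennial component
          + 10 * rng.standard_normal(t.size))           # observation noise

freqs, psd = welch(series, fs=1.0, nperseg=256)
mask = freqs > 0
slope, intercept = np.polyfit(freqs[mask], np.log(psd[mask]), 1)
print(f"exponential gradient of the PSD: {-slope:.2f} (per cycle/week)")
```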
van Aggelen, Helen; Verstichel, Brecht; Bultinck, Patrick; Van Neck, Dimitri; Ayers, Paul W; Cooper, David L
2011-02-07
Variational second order density matrix theory under "two-positivity" constraints tends to dissociate molecules into unphysical fractionally charged products with too low energies. We aim to construct a qualitatively correct potential energy surface for F(3)(-) by applying subspace energy constraints on mono- and diatomic subspaces of the molecular basis space. Monoatomic subspace constraints do not guarantee correct dissociation: the constraints are thus geometry dependent. Furthermore, the number of subspace constraints needed for correct dissociation does not grow linearly with the number of atoms. The subspace constraints do impose correct chemical properties in the dissociation limit and size-consistency, but the structure of the resulting second order density matrix method does not exactly correspond to a system of noninteracting units.
NASA Technical Reports Server (NTRS)
Chao, Luen-Yuan; Shetty, Dinesh K.
1992-01-01
Statistical analysis and correlation between pore-size distribution and fracture strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. Fracture strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
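An illustrative Monte Carlo version of the extreme-value argument, with an assumed lognormal pore population, fracture toughness, and geometry factor standing in for the paper's measured values: each simulated specimen fails from its largest pore, converted to strength with sigma_f = K_Ic / (Y sqrt(pi c)).

```python
# Extreme-value pore sizes converted to a strength distribution via a simple LEFM relation.
# K_Ic, Y, and the pore-size distribution are assumed, not measured values.
import numpy as np

rng = np.random.default_rng(2)
K_Ic = 5.0e6           # fracture toughness, Pa*sqrt(m) (assumed)
Y = 1.26               # geometry factor for an annular crack around a pore (assumed)
pores_per_specimen = 5000

strengths = []
for _ in range(2000):
    diam = rng.lognormal(mean=np.log(5e-6), sigma=0.5, size=pores_per_specimen)  # m
    c = diam.max() / 2.0                       # largest pore radius ~ critical flaw size
    strengths.append(K_Ic / (Y * np.sqrt(np.pi * c)))

strengths = np.array(strengths) / 1e6          # MPa
print(f"median strength {np.median(strengths):.0f} MPa, "
      f"5th-95th percentile {np.percentile(strengths, 5):.0f}-{np.percentile(strengths, 95):.0f} MPa")
```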
On remote sensing of small aerosol particles with polarized light
NASA Astrophysics Data System (ADS)
Sun, W.
2012-12-01
The CALIPSO satellite mission consistently measures a volume (molecular plus particulate) light depolarization ratio of ~2% for smoke, compared to ~1% for marine aerosols and ~15% for dust. The observed ~2% smoke depolarization ratio comes primarily from the nonspherical habits of particles in the smoke at certain particle sizes. The depolarization of linearly polarized light by small sphere aggregates and irregular Gaussian-shaped particles is studied to reveal the physics relating the depolarization of linearly polarized light to aerosol shape and size. It is found that randomly oriented nonspherical particles have some common depolarization properties as functions of scattering angle and size parameter. This may be very useful information for active remote sensing of small nonspherical aerosols using polarized light. We also show that the depolarization ratio from the CALIPSO measurements could be used to derive smoke aerosol particle size. The mean particle size of South African smoke is estimated to be about half of the 532 nm wavelength of the CALIPSO lidar.
NASA Astrophysics Data System (ADS)
Lu, Bin; Li, Ji-Guang; Sakka, Yoshio
2013-12-01
Synthesis of (Gd0.95-xLnxEu0.05)2O3 (Ln = Y and Lu, x = 0-0.95) powders via ammonium hydrogen carbonate (AHC) precipitation has been systematically studied. The best synthesis parameters are found to be an AHC/total cation molar ratio of 4.5 and an ageing time of 3 h. The effects of Y3+ and Lu3+ substitution for Gd3+, on the nucleation kinetics of the precursors and structural features and optical properties of the oxides, have been investigated. The results show that (i) different nucleation kinetics exist in the Gd-Y-Eu and Gd-Lu-Eu ternary systems, which lead to various morphologies and particle sizes of the precipitated precursors. The (Gd,Y)2O3:Eu precursors display spherical particle morphologies and the particle sizes increase along with more Y3+ addition. The (Gd,Lu)2O3:Eu precursors, on the other hand, are hollow spheres and the particle sizes increase with increasing Lu3+ incorporation, (ii) the resultant oxide powders are ultrafine, narrow in size distribution, well dispersed and rounded in particle shape, (iii) lattice parameters of the two kinds of oxide solid solutions linearly decrease at a higher Y3+ or Lu3+ content. Their theoretical densities linearly decrease with increasing Y3+ incorporation, but increase along with more Lu3+ addition and (iv) the two kinds of phosphors exhibit typical red emissions at ˜613 nm and their charge-transfer bands blue shift at a higher Y3+ or Lu3+ content. Photoluminescence/photoluminescence excitation intensities and external quantum efficiency are found to decrease with increasing value of x, and the fluorescence lifetime mainly depends on the specific surface areas of the powders.
Additive scales in degenerative disease--calculation of effect sizes and clinical judgment.
Riepe, Matthias W; Wilkinson, David; Förstl, Hans; Brieden, Andreas
2011-12-16
The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings that rely on p values, which can report chance findings. Hence, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding efficacy. We simulate the therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations of effect size calculations for additive scales, which are proven mathematically. We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that, with progressive diseases, cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach that avoids the error imposed by over-analysis of cumulative global scores from additive scales.
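A toy simulation in the spirit of the argument above, with entirely made-up sub-functions and noise levels: two sub-scores decline non-linearly at different rates, the cumulative score is their sum, and the same underlying treatment shift produces different Cohen's d values at different disease severities.

```python
# Cohen's d on a cumulative additive score depends on severity when subscales decline non-linearly.
import numpy as np

rng = np.random.default_rng(3)

def subscore(severity, shift=0.0):
    """Bounded, non-linear decline of two sub-functions with severity (0 = healthy)."""
    s = np.clip(severity - shift, 0.0, None)
    memory = 30.0 / (1.0 + np.exp(4.0 * (s - 1.0)))   # floors early
    attention = 30.0 * np.exp(-0.3 * s)               # declines slowly
    return memory + attention

def cohens_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled

for severity in (0.5, 1.5, 3.0):                      # mild, moderate, severe cohorts
    base = severity + 0.2 * rng.standard_normal(500)
    treated = subscore(base, shift=0.3) + rng.standard_normal(500)
    placebo = subscore(base, shift=0.0) + rng.standard_normal(500)
    print(f"severity {severity}: effect size d = {cohens_d(treated, placebo):.2f}")
```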
Performance evaluation of the CT component of the IRIS PET/CT preclinical tomograph
NASA Astrophysics Data System (ADS)
Panetta, Daniele; Belcari, Nicola; Tripodi, Maria; Burchielli, Silvia; Salvadori, Piero A.; Del Guerra, Alberto
2016-01-01
In this paper, we evaluate the physical performance of the CT component of the IRIS scanner, a novel combined PET/CT scanner for preclinical imaging. The performance assessment is based on phantom measurement for the determination of image quality parameters (spatial resolution, linearity, geometric accuracy, contrast to noise ratio) and reproducibility in dynamic (4D) imaging. The CTDI100 has been measured free in air with a pencil ionization chamber, and the animal dose was calculated using Monte Carlo derived conversion factors taken from the literature. The spatial resolution at the highest quality protocol was 6.9 lp/mm at 10% of the MTF, using the smallest reconstruction voxel size of 58.8 μm. The accuracy of the reconstruction voxel size was within 0.1%. The linearity of the CT numbers as a function of the concentration of iodine was very good, with R2>0.996 for all the tube voltages. The animal dose depended strongly on the scanning protocol, ranging from 158 mGy for the highest quality protocol (2 min, 80 kV) to about 12 mGy for the fastest protocol (7.3 s, 80 kV). In 4D dynamic modality, the maximum scanning rate reached was 3.1 frames per minute, using a short-scan protocol with 7.3 s of scan time per frame at the isotropic voxel size of 235 μm. The reproducibility of the system was high throughout the 10 frames acquired in dynamic modality, with a standard deviation of the CT values of all frames <8 HU and an average spatial reproducibility within 30% of the voxel size across all the field of view. Example images obtained during animal experiments are also shown.
Infrared laser spectroscopy of the linear C13 carbon cluster
NASA Technical Reports Server (NTRS)
Giesen, T. F.; Van Orden, A.; Hwang, H. J.; Fellers, R. S.; Provencal, R. A.; Saykally, R. J.
1994-01-01
The infrared absorption spectrum of a linear, 13-atom carbon cluster (C13) has been observed by using a supersonic cluster beam-diode laser spectrometer. Seventy-six rovibrational transitions were measured near 1809 wave numbers and assigned to an antisymmetric stretching fundamental in the ¹Σg⁺ ground state of C13. This definitive structural characterization of a carbon cluster in the intermediate size range between C10 and C20 is in apparent conflict with theoretical calculations, which predict that clusters of this size should exist as planar monocyclic rings.
Evaluation of a new breast-shaped compensation filter for a newly built breast imaging system
NASA Astrophysics Data System (ADS)
Cai, Weixing; Ning, Ruola; Zhang, Yan; Conover, David
2007-03-01
A new breast-shaped compensation filter has been designed and fabricated for breast imaging using our newly built breast imaging (CBCTBI) system, which is able to scan an uncompressed breast with pendant geometry. The shape of this compensation filter is designed based on an average-sized breast phantom. Unlike conventional bow-tie compensation filters, its cross-sectional profile varies along the chest wall-to-nipple direction for better compensation for the shape of a breast. Breast phantoms of three different sizes are used to evaluate the performance of this compensation filter. The reconstruction image quality was studied and compared to that obtained without the compensation filter in place. The uniformity of linear attenuation coefficient and the uniformity of noise distribution are significantly improved, and the contrast-to-noise ratios (CNR) of small lesions near the chest wall are increased as well. Multi-normal image method is used in the reconstruction process to correct compensation flood field and to reduce ring artifacts.
Universality and robustness of revivals in the transverse field XY model
NASA Astrophysics Data System (ADS)
Häppölä, Juho; Halász, Gábor B.; Hamma, Alioscia
2012-03-01
We study the structure of the revivals in an integrable quantum many-body system, the transverse field XY spin chain, after a quantum quench. The time evolutions of the Loschmidt echo, the magnetization, and the single-spin entanglement entropy are calculated. We find that the revival times for all of these observables are given by integer multiples of Trev≃L/vmax, where L is the linear size of the system and vmax is the maximal group velocity of quasiparticles. This revival structure is universal in the sense that it does not depend on the initial state and the size of the quench. Applying nonintegrable perturbations to the XY model, we observe that the revivals are robust against such perturbations: they are still visible at time scales much larger than the quasiparticle lifetime. We therefore propose a generic connection between the revival structure and the locality of the dynamics, where the quasiparticle speed vmax generalizes into the Lieb-Robinson speed vLR.
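A back-of-the-envelope check of the T_rev ≃ L/v_max estimate, assuming one common convention for the transverse-field XY quasiparticle dispersion (J = 1): compute the dispersion on a fine k-grid, take the maximal group velocity numerically, and form the first revival time for an illustrative chain length.

```python
# Revival time from the maximal quasiparticle group velocity of a transverse-field XY chain.
# Dispersion convention and parameter values are illustrative assumptions.
import numpy as np

gamma, h, L = 1.0, 0.5, 200
k = np.linspace(-np.pi, np.pi, 200001)
eps = 2.0 * np.sqrt((h - np.cos(k))**2 + (gamma * np.sin(k))**2)
v = np.gradient(eps, k)                  # group velocity d(eps)/dk
v_max = np.abs(v).max()
print(f"v_max = {v_max:.3f}, first revival time T_rev ~ L / v_max = {L / v_max:.1f}")
```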
Thermal conductivity of nanocrystalline SiGe alloys using molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Abs da Cruz, Carolina; Katcho, Nebil A.; Mingo, Natalio; Veiga, Roberto G. A.
2013-10-01
We have studied the effect of nanocrystalline microstructure on the thermal conductivity of SiGe alloys using molecular dynamics simulations. Nanograins are modeled using both the coincidence site lattice and the Voronoi tessellation methods, and the thermal conductivity is computed using the Green-Kubo formalism. We analyze the dependence of the thermal conductivity on temperature, grain size L, and misorientation angle. We find a power-law dependence of the thermal conductivity on grain size, varying as L^(1/4), instead of the linear dependence shown by non-alloyed nanograined systems. This dependence can be derived analytically and underlines the important role that disorder scattering plays even when the grains are of the order of a few nm. This is in contrast to non-alloyed systems, where phonon transport is governed mainly by boundary scattering. The temperature dependence is weak, in agreement with experimental measurements. The effect of misorientation angle is also small, which stresses the main role played by disorder scattering.
Optical inspection system for cylindrical objects
Brenden, Byron B.; Peters, Timothy J.
1989-01-01
In the inspection of cylindrical objects, particularly O-rings, the object is translated through a field of view and a linear light trace is projected on its surface. An image of the light trace is projected on a mask, which has a size and shape corresponding to the size and shape which the image would have if the surface of the object were perfect. If there is a defect, light will pass the mask and be sensed by a detector positioned behind the mask. Preferably, two masks and associated detectors are used, one mask being convex to pass light when the light trace falls on a projection from the surface and the other concave, to pass light when the light trace falls on a depression in the surface. The light trace may be either dynamic, formed by a scanned laser beam, or static, formed by such a beam focussed by a cylindrical lens. Means are provided to automatically keep the illuminating receiving systems properly aligned.
Advanced microwave radiometer antenna system study
NASA Technical Reports Server (NTRS)
Kummer, W. H.; Villeneuve, A. T.; Seaton, A. F.
1976-01-01
The practicability of a multi-frequency antenna for spaceborne microwave radiometers was considered in detail. The program consisted of a comparative study of various antenna systems, both mechanically and electronically scanned, in relation to specified design goals and desired system performance. The study involved several distinct tasks: definition of candidate antennas that are lightweight and that, at the specified frequencies of 5, 10, 18, 22, and 36 GHz, can provide conical scanning, dual linear polarization, and simultaneous multiple frequency operation; examination of various feed systems and phase-shifting techniques; detailed analysis of several key performance parameters such as beam efficiency, sidelobe level, and antenna beam footprint size; and conception of an antenna/feed system that could meet the design goals. Candidate antennas examined include phased arrays, lenses, and optical reflector systems. Mechanical, electrical, and performance characteristics of the various systems were tabulated for ease of comparison.
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.
2000-01-01
DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
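For orientation, a minimal random-breakage baseline (the limit that the randomly-located-clusters formalism generalizes): breaks are scattered uniformly along a single chromosome and fragment sizes are tallied. The chromosome length and mean break number are illustrative, and clustered track structure is deliberately left out.

```python
# Random-breakage Monte Carlo: uniform break positions yield an approximately
# exponential fragment-size distribution. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
genome_mbp = 250.0                  # one chromosome, Mbp (assumed)
mean_breaks = 40                    # average DSBs per chromosome per cell (assumed)

fragments = []
for _ in range(5000):
    n = rng.poisson(mean_breaks)
    cuts = np.sort(rng.uniform(0.0, genome_mbp, size=n))
    edges = np.concatenate(([0.0], cuts, [genome_mbp]))
    fragments.extend(np.diff(edges))

fragments = np.array(fragments)
print(f"mean fragment {fragments.mean():.2f} Mbp, "
      f"fraction < 1 Mbp: {(fragments < 1.0).mean():.2f}")
```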
Nonlocal theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models
NASA Astrophysics Data System (ADS)
Zozulya, V. V.
2017-09-01
New models for plane curved rods based on linear nonlocal theory of elasticity have been developed. The 2-D theory is developed from general 2-D equations of linear nonlocal elasticity using a special curvilinear system of coordinates related to the middle line of the rod along with special hypothesis based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity including nonlocal constitutive relations have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of local elasticity, a system of differential equations in terms of displacements for Fourier coefficients has been obtained. First and second order approximations have been considered in detail. Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and the 2-D equations of linear nonlocal theory of elasticity which are considered in a special curvilinear system of coordinates related to the middle line of the rod. The obtained equations can be used to calculate stress-strain and to model thin walled structures in micro- and nanoscales when taking into account size dependent and nonlocal effects.
The Effect of Primary School Size on Academic Achievement
ERIC Educational Resources Information Center
Gershenson, Seth; Langbein, Laura
2015-01-01
Evidence on optimal school size is mixed. We estimate the effect of transitory changes in school size on the academic achievement of fourth-and fifth-grade students in North Carolina using student-level longitudinal administrative data. Estimates of value-added models that condition on school-specific linear time trends and a variety of…
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.
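A compact sketch of the projection idea behind this kind of model order reduction, using a random symmetric matrix as a stand-in for the propagator and an assumed imaginary broadening: snapshots (A - wI)^-1 b computed at a few sample shifts are orthonormalized into a reduced basis, and the response at other shifts is obtained from the small projected system.

```python
# Snapshot-based reduced-order model for a spectrally shifted linear system.
# The test matrix, shifts, and broadening are illustrative, not an electronic-structure propagator.
import numpy as np

rng = np.random.default_rng(5)
n = 300
evals = np.sort(rng.uniform(0.0, 10.0, n))            # dense stand-in spectrum
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(evals) @ Q.T                          # symmetric test matrix
b = rng.standard_normal(n)
eta = 0.1                                             # imaginary broadening (assumed)

def solve_full(w):
    return np.linalg.solve(A - (w + 1j * eta) * np.eye(n), b)

sample_ws = np.linspace(2.0, 3.0, 16)                 # shifts solved exactly ("snapshots")
V, _ = np.linalg.qr(np.array([solve_full(w) for w in sample_ws]).T)

A_r, b_r = V.conj().T @ A @ V, V.conj().T @ b         # projected (reduced) operator and rhs

w_test = 2.53                                         # a shift not in the sample set
x_exact = solve_full(w_test)
x_rom = V @ np.linalg.solve(A_r - (w_test + 1j * eta) * np.eye(V.shape[1]), b_r)
print("relative error of the reduced model:",
      np.linalg.norm(x_rom - x_exact) / np.linalg.norm(x_exact))
```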
NASA Astrophysics Data System (ADS)
Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya
2017-05-01
Two-dimensional code marks are often used for production management. In particular, in production lines for liquid-crystal-display panels and other devices, data on fabrication processes such as production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized for code mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, the development of a low-cost exposure system using light emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly desired. In previous research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Building on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. At most 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120°C for 7 min. Fiber sizes were homogeneous within 496+/-4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays necessary for printing code marks will be obtained by stacking the newly fabricated linear arrays.
Improved Linear-Ion-Trap Frequency Standard
NASA Technical Reports Server (NTRS)
Prestage, John D.
1995-01-01
Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.
Passive acoustic measurement of bedload grain size distribution using self-generated noise
NASA Astrophysics Data System (ADS)
Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien
2018-01-01
Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists to assess the stability of rivers and hydraulic structures. Various methods for sediment transport process description were proposed using conventional or surrogate measurement techniques. This paper addresses the topic of the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to bedload grain sizes involved in elastic impacts with the river bed, treated as a massive slab. The bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectral density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise using a linear system of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least-squares optimization and solution regularization methods. The result of the inversion leads directly to the estimation of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustics experiments realized on the Isère River, in France. The inversion of in situ measured spectra reveals good estimations of the grain size distribution, fairly close to what was estimated by physical sampling instruments. These results illustrate the potential of the hydrophone technique to be used as a standalone method that could ensure high spatial and temporal resolution measurements of sediment transport in rivers.
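A sketch of the regularized inversion step, with generic log-Gaussian elementary spectra standing in for the paper's analytic sphere-on-slab impact spectra and an assumed peak-frequency-versus-diameter scaling: the measured PSD is modeled as a non-negative mixture over grain sizes and the weights are recovered by Tikhonov-regularized non-negative least squares.

```python
# Regularized NNLS inversion of a (synthetic) bedload-noise PSD for a grain size distribution.
# The elementary spectra and their peak-frequency scaling are assumptions, not the paper's model.
import numpy as np
from scipy.optimize import nnls

freqs = np.logspace(2, 4.5, 200)                             # Hz
diams = np.logspace(np.log10(2e-3), np.log10(128e-3), 25)    # grain diameters, m

def elementary_spectrum(d):
    f_peak = 200.0 / d                                       # peak frequency ~ 1/diameter (assumed)
    return np.exp(-0.5 * (np.log(freqs / f_peak) / 0.4) ** 2)

G = np.column_stack([elementary_spectrum(d) for d in diams])

# Synthetic "measured" PSD from a known log-normal GSD plus noise.
rng = np.random.default_rng(6)
w_true = np.exp(-0.5 * (np.log(diams / 0.02) / 0.5) ** 2)
psd = G @ w_true + 0.01 * rng.standard_normal(freqs.size)

lam = 0.1                                                    # Tikhonov regularization weight
G_aug = np.vstack([G, lam * np.eye(diams.size)])
psd_aug = np.concatenate([psd, np.zeros(diams.size)])
w_est, _ = nnls(G_aug, psd_aug)

d50_true = diams[np.argmax(np.cumsum(w_true) >= 0.5 * w_true.sum())]
d50_est = diams[np.argmax(np.cumsum(w_est) >= 0.5 * w_est.sum())]
print(f"true D50 ~ {1e3 * d50_true:.0f} mm, recovered D50 ~ {1e3 * d50_est:.0f} mm")
```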
SU-F-T-240: EPID-Based Quality Assurance for Dosimetric Credentialing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miri, N; Lehmann, J; Vial, P
Purpose: We propose a novel dosimetric audit method for clinical trials using EPID measurements at each center and a standardized EPID to dose conversion algorithm. The aim of this work is to investigate the applicability of the EPID method to different linear accelerator, EPID and treatment planning system (TPS) combinations. Methods: The combinations of delivery and planning systems were three Varian linacs, including one Pinnacle and two Eclipse TPS, and two ELEKTA linacs, including one Pinnacle and one Monaco TPS. All Varian linacs had the same EPID structure, and similarly for the ELEKTA linacs. Initially, dose response of the EPIDs was investigated by acquiring integrated pixel value (IPV) of the central area of 10 cm2 images versus MUs, 5-400 MU. Then, the EPID to dose conversion was investigated for different system combinations. Square field size images, 2, 3, 4, 6, 10, 15, 20, 25 cm2, acquired by all systems were converted to dose at the isocenter of a virtual flat phantom, and the dose was compared to the corresponding TPS dose. Results: All EPIDs showed a relatively linear behavior versus MU except at low MUs, which showed irregularities probably due to initial inaccuracies of irradiation. Furthermore, for all the EPID models, the model predicted TPS dose with a mean dose difference percentage of 1.3. However, the model showed a few inaccuracies for ELEKTA EPID images at field sizes larger than 20 cm2. Conclusion: The EPIDs demonstrated similar behavior versus MU and the model was relatively accurate for all the systems. Therefore, the model could be employed as a global dosimetric method to audit clinical trials. Funding has been provided from Department of Radiation Oncology, TROG Cancer Research and the University of Newcastle. Narges Miri is a recipient of a University of Newcastle postgraduate scholarship.
NASA Astrophysics Data System (ADS)
Ahmed, M. F.; Shrestha, N.; Schnell, E.; Ahmad, S.; Akselrod, M. S.; Yukihara, E. G.
2016-11-01
This work evaluates the dosimetric properties of newly developed optically stimulated luminescence (OSL) films, fabricated with either Al2O3:C or Al2O3:C,Mg, using a prototype laser scanning reader, a developed image reconstruction algorithm, and a 6 MV therapeutic photon beam. Packages containing OSL films (Al2O3:C and Al2O3:C,Mg) and a radiochromic film (Gafchromic EBT3) were irradiated using a 6 MV photon beam using different doses, field sizes, with and without wedge filter. Dependence on film orientation of the OSL system was also tested. Diode-array (MapCHECK) and ionization chamber measurements were performed for comparison. The OSLD film doses agreed with the MapCHECK and ionization chamber data within the experimental uncertainties (<2% at 1.5 Gy). The system background and minimum detectable dose (MDD) were <0.5 mGy, and the dose response was approximately linear from the MDD up to a few grays (the linearity correction was <10% up to ~2-4 Gy), with no saturation up to 30 Gy. The dose profiles agreed with those obtained using EBT3 films (analyzed using the triple channel method) in the high dose regions of the images. In the low dose regions, the dose profiles from the OSLD films were more reproducible than those from the EBT3 films. We also demonstrated that the OSL film data are independent on scan orientation and field size over the investigated range. The results demonstrate the potential of OSLD films for 2D dosimetry, particularly for the characterization of small fields, due to their wide dynamic range, linear response, resolution and dosimetric properties. The negligible background and potential simple calibration make these OSLD films suitable for remote audits. The characterization presented here may motivate further commercial development of a 2D dosimetry system based on the OSL from Al2O3:C or Al2O3:C,Mg.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, G
2016-06-15
Purpose: Recently a new 2.5 megavoltage imaging beam has become available in a TrueBeam linear accelerator for image guidance. There is limited information available related to the beam characteristics. Commissioning dosimetric data of the new imaging beam is necessary for configuration of the beam in a treatment planning system in order to calculate imaging doses to patients resulting from this new imaging beam. The purpose of this study is to provide measured commissioning data recommended for a beam configuration in a treatment planning system. Methods: A recently installed TrueBeam linear accelerator is equipped with a new low energy photon beam with a nominal energy of 2.5 MV which provides better image quality in addition to other therapeutic megavoltage beams. Dosimetric characteristics of the 2.5 MV beam are measured for commissioning. An ionization chamber was used to measure dosimetric data including depth-dose curves and dose profiles at different depths for field sizes ranging from 5×5 cm² to 40×40 cm². Results: Although the new 2.5 MV beam is a flattening-filter-free (FFF) beam, its dose profiles are much flatter compared to a 6 MV FFF beam. The dose decrease at 20 cm away from the central axis is less than 30% for a 40×40 cm² field. This moderately lower dose at off-axis distances benefits the imaging quality. The values of percentage depth-dose (PDD) curves are 53% and 63% for 10×10 cm² and 40×40 cm² fields respectively. The measured beam output is 0.85 cGy/MU for a reference field size at depth 5 cm obtained according to the AAPM TG-51 protocol. Conclusion: This systematically measured commissioning data is useful for configuring the new imaging beam in a treatment planning system for patient imaging dose calculations resulting from the application of this 2.5 MV beam which is commonly set as a default in imaging procedures.
NASA Astrophysics Data System (ADS)
Andrinopoulos, Lampros; Hine, Nicholas; Haynes, Peter; Mostofi, Arash
2010-03-01
The placement of organic molecules such as CuPc (copper phthalocyanine) on wurtzite ZnO (zinc oxide) charged surfaces has been proposed as a way of creating photovoltaic solar cells [G.D. Sharma et al., Solar Energy Materials & Solar Cells 90, 933 (2006)]; optimising their performance may be aided by computational simulation. Electronic structure calculations provide high accuracy at modest computational cost but two challenges are encountered for such layered systems. First, the system size is at or beyond the limit of traditional cubic-scaling Density Functional Theory (DFT). Second, traditional exchange-correlation functionals do not account for van der Waals (vdW) interactions, crucial for determining the structure of weakly bonded systems. We present an implementation of recently developed approaches [P.L. Silvestrelli, P.R.L. 100, 102 (2008)] to include vdW in DFT within ONETEP [C.-K. Skylaris, P.D. Haynes, A.A. Mostofi and M.C. Payne, J.C.P. 122, 084119 (2005)], a linear-scaling package for performing DFT calculations using a basis of localised functions. We have applied this methodology to simple planar organic molecules, such as benzene and pentacene, on ZnO surfaces.
NASA Astrophysics Data System (ADS)
García-Aldea, David; Alvarellos, J. E.
2009-03-01
We present several nonlocal exchange energy density functionals that reproduce the linear response function of the free electron gas. These nonlocal functionals are constructed following a procedure similar to that used previously for nonlocal kinetic energy density functionals by Chacón-Alvarellos-Tarazona, García-González et al., Wang-Govind-Carter and García-Aldea-Alvarellos. The exchange response function is not known, so we have used the approximate response function developed by Utsumi and Ichimaru, though we must remark that the same ansatz can be used to reproduce any other response function with the same scaling properties. We have developed two families of new nonlocal functionals: one is constructed with a mathematical structure based on the LDA approximation (the Dirac functional for the exchange), and for the second one the structure of the second-order gradient expansion approximation is taken as a model. The functionals are constructed in such a way that they can be used in localized systems (using real-space calculations) and in extended systems (using momentum space, and achieving a quasilinear scaling with the system size if a constant reference electron density is defined).
Force system generated by elastic archwires with vertical V bends: a three-dimensional analysis.
Upadhyay, Madhur; Shah, Raja; Peterson, Donald; Asaki, Takafumi; Yadav, Sumit; Agarwal, Sachin
2017-04-01
Our previous understanding of V-bend mechanics comes primarily from two-dimensional (2D) analysis of archwire-bracket interactions in the second order. These analyses do not take into consideration the three-dimensional (3D) nature of orthodontic appliances involving the third order. The aim was to quantify the force system generated in a 3D two-bracket setup involving the molar and incisors with vertical V-bends. Maxillary molar and incisor brackets were arranged in a dental arch form and attached to load cells capable of measuring forces and moments in all three planes (x, y, and z) of space. Symmetrical V-bends (right and left sides) were placed at 11 different locations along rectangular beta-titanium archwires of various sizes at an angle of 150 degrees. Each wire was evaluated for the 11 bend positions. Specifically, the vertical forces (Fz) and anteroposterior moments (Mx) were analysed. Descriptive statistics were used to interpret the results. With increasing archwire size, Fz and Mx increased at the two brackets (P < 0.05). The vertical forces were linear and symmetric in nature, increasing in magnitude as the bends moved closer to either bracket. The Mx curves were asymmetric and non-linear, displaying higher magnitudes for the molar bracket. As the bends were moved closer to either bracket, a distinct flattening of the incisor Mx curve was noted, implying no change in its magnitude. This article provides critical information on V-bend mechanics involving second-order and third-order archwire-bracket interactions. A model for determining this force system is described that might allow for easier translation to actual clinical practice.
Statistics and Machine Learning based Outlier Detection Techniques for Exoplanets
NASA Astrophysics Data System (ADS)
Goel, Amit; Montgomery, Michele
2015-08-01
Architectures of planetary systems are observable snapshots in time that can indicate the formation and dynamical evolution of planets. The observable key parameters that we consider are planetary mass and orbital period. If planet masses are significantly less than their host star masses, then Keplerian motion is defined by P^2 = a^3, where P is the orbital period in units of years and a is the semi-major axis of the orbit in units of Astronomical Units (AU). Keplerian motion works on small scales such as the size of the Solar System but not on large scales such as the size of the Milky Way Galaxy. In this work, for confirmed exoplanets of known stellar mass, planetary mass, orbital period, and stellar age, we analyze the Keplerian motion of systems based on stellar age to see whether Keplerian motion has an age dependency and to identify outliers. For detecting outliers, we apply several techniques based on statistical and machine learning methods such as probabilistic, linear, and proximity-based models. In probabilistic and statistical models of outliers, the parameters of a closed-form probability distribution are learned in order to detect the outliers. Linear models use regression-analysis-based techniques for detecting outliers. Proximity-based models use distance-based algorithms such as k-nearest neighbour, clustering algorithms such as k-means, or density-based algorithms such as kernel density estimation. In this work, we use unsupervised learning algorithms with only the proximity-based models. In addition, we explore the relative strengths and weaknesses of the various techniques by validating the outliers. The validation criterion for the outliers is whether the ratio of planetary mass to stellar mass is less than 0.001. In this work, we present our statistical analysis of the outliers thus detected.
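As an illustration of the proximity-based models mentioned above, the following sketch scores systems by their mean distance to the k nearest neighbours in a (log mass, log period) feature space and flags the most isolated points as candidate outliers. The feature choice, the synthetic data and the threshold are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    """Mean Euclidean distance to the k nearest neighbours (larger = more outlying)."""
    # Full pairwise distance matrix; fine for the few thousand confirmed exoplanets.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]          # k smallest distances per point
    return knn.mean(axis=1)

# Illustrative data: planetary mass (Jupiter masses) and orbital period (days).
rng = np.random.default_rng(0)
mass = rng.lognormal(mean=0.0, sigma=1.0, size=200)
period = rng.lognormal(mean=3.0, sigma=1.5, size=200)
X = np.column_stack([np.log10(mass), np.log10(period)])

scores = knn_outlier_scores(X, k=5)
outliers = np.argsort(scores)[-5:]           # flag the 5 most isolated systems
print("candidate outlier indices:", outliers)
```

A density-based variant would replace the mean k-NN distance with a kernel density estimate and flag the lowest-density points instead.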
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E.; Patel, Bhargav A.; Ambudkar, Suresh V.; Talele, Tanaji T.
2014-01-01
Multidrug resistance (MDR) caused by the ATP-binding cassette (ABC) transporter P-glycoprotein (P-gp), through extrusion of anticancer drugs from cells, is a major cause of failure of cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors and these were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate 2-3 moderately sized molecules at the drug binding pocket. Our in silico analysis based on the homology model of human P-gp spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug-binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affect its activity, previously an unexplored concept. Among these oligomers, lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 = 1.5 μM). With the cyclic trimer and linear trimer being equipotent, future studies can focus on non-cyclic counterparts of cyclic peptides maintaining the linear trimer length. A binding model of the linear trimer (13) within the drug-binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the non-cyclic form. PMID:24288265
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E; Patel, Bhargav A; Ambudkar, Suresh V; Talele, Tanaji T
2014-01-03
Multidrug resistance caused by ATP binding cassette transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure in cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors and were also used for co-crystallization with mouse P-gp, which has 87 % homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate two to three moderately sized molecules at the drug binding pocket. Our in silico analysis, based on the homology model of human P-gp, spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity, and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affects its activity, previously an unexplored concept. Among these oligomers, lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 =1.5 μM). As the cyclic trimer and linear trimer compounds are equipotent, future studies should focus on noncyclic counterparts of cyclic peptides maintaining linear trimer length. A binding model of the linear trimer 13 within the drug binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the noncyclic form.
SU-F-207-16: CT Protocols Optimization Using Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, H; Fan, J; Kupinski, M
2015-06-15
Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of estimating the size and contrast of different iodine concentration rods inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to deliver the same dose level (CTDIvol) but use different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head-size phantom and a body-size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, which output the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulated images with four different sizes and five different contrasts of iodine objects. For each type of object, 500 images (100 × 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which can be applied to estimate both the size and the contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both the head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance from different CT protocols. Different protocols at the same CTDIvol setting can result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.
Body size and lower limb posture during walking in humans.
Hora, Martin; Soumar, Libor; Pontzer, Herman; Sládek, Vladimír
2017-01-01
We test whether locomotor posture is associated with body mass and lower limb length in humans and explore how body size and posture affect net joint moments during walking. We acquired gait data for 24 females and 25 males using a three-dimensional motion capture system and pressure-measuring insoles. We employed the general linear model and commonality analysis to assess the independent effect of body mass and lower limb length on flexion angles at the hip, knee, and ankle while controlling for sex and velocity. In addition, we used inverse dynamics to model the effect of size and posture on net joint moments. At early stance, body mass has a negative effect on knee flexion (p < 0.01), whereas lower limb length has a negative effect on hip flexion (p < 0.05). Body mass uniquely explains 15.8% of the variance in knee flexion, whereas lower limb length uniquely explains 5.4% of the variance in hip flexion. Both of the detected relationships between body size and posture are consistent with the moment moderating postural adjustments predicted by our model. At late stance, no significant relationship between body size and posture was detected. Humans of greater body size reduce the flexion of the hip and knee at early stance, which results in the moderation of net moments at these joints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldib, A; Chibani, O; Chen, L
Purpose: Tremendous technological developments have been made in conformal therapy techniques with linear accelerators, while less attention has been paid to cobalt-60 units. The aim of the current study is to explore the dosimetric benefits of a novel rotating gamma ray system enhanced with interchangeable source sizes and a multi-leaf collimator (MLC). Material and Methods: CybeRT is a novel rotating gamma ray machine with a ring gantry that ensures an iso-center accuracy of less than 0.3 mm. The new machine has a 70 cm source-to-axis distance, allowing for an improved penumbra compared to conventional machines. MCBEAM was used to simulate cobalt-60 beams from the CybeRT head, while the MCPLAN code was used for modeling the MLC and for phantom/patient dose calculation. The CybeRT collimation will incorporate a system allowing for interchanging source sizes. In this work we created phase space files for 1 cm and 2 cm source sizes. Evaluation of the system was done by comparing CybeRT beams with 6 MV beams in a water phantom and in patient geometry. Treatment plans were compared based on isodose distributions and dose volume histograms. Results: Profiles for the 1 cm source were comparable to those from 6 MV, on the order of 6 mm for a 10×10 cm² field size at the depth of maximum dose. This could be ascribed to cobalt-60 beams producing lower-energy secondary electrons. Although the 2 cm source has a larger penumbra, it could still be used for large targets with a proportionally increased dose rate. For large lung targets, the difference between cobalt and 6 MV plans is clinically insignificant. Our preliminary results showed that interchanging source sizes will allow cobalt beams to be used for volumetric arc therapy of both small lesions and large tumors. Conclusion: The CybeRT system will be a cost-effective machine capable of performing advanced radiation therapy treatments of both small tumors and large target volumes.
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications. PMID:27806075
Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.
NASA Astrophysics Data System (ADS)
Chen, Yu-Wen; Wang, Yetmen; Chang, Liang-Cheng
2017-04-01
Groundwater resources play a vital role in regional water supply. To avoid irreversible environmental impacts such as land subsidence, characterizing the groundwater system is crucial before sustainable management of the groundwater resource. This study proposes a signal processing approach to identify the character of groundwater systems based on long-term hydrologic observations, including groundwater level and rainfall. The study process contains two steps. First, a linear signal model (LSM) is constructed and calibrated to simulate the variation of underground hydrology based on the time series of groundwater levels and rainfall. The mass balance equation of the proposed LSM contains three major terms: the net rate of horizontal exchange, the rate of rainfall recharge, and the rate of pumpage; four parameters require calibration. Because reliable records of pumpage are rare, the time-variant groundwater amplitudes of daily frequency (P) calculated by STFT are used as linear indicators of pumpage instead of pumpage records. Time series obtained from 39 observation wells and 50 rainfall stations in and around the study area, the Pingtung Plain, are paired for model construction. Second, the well-calibrated parameters of the linear signal model can be used to interpret the characteristics of the groundwater system. For example, the rainfall recharge coefficient (γ) is the transform ratio between rainfall and groundwater level rise. The area around an observation well with higher γ means that the saturated zone there is easily affected by rainfall events and the material of the unsaturated zone might be gravel or coarse sand with a high infiltration ratio. Considering the spatial distribution of γ, the values of γ decrease from the upstream to the downstream of major rivers and are also correlated with the spatial distribution of the grain size of surface soil. Via the time series of groundwater levels and rainfall, the well-calibrated parameters of the LSM are thus able to identify the characteristics of the aquifer.
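A minimal numerical sketch of the kind of linear signal model described above, assuming a daily mass-balance update in which the groundwater level change is a linear combination of horizontal exchange toward a reference level, rainfall recharge (coefficient γ) and a pumpage indicator. The parameter names, values and forward-simulation form are illustrative assumptions, not the calibrated model from the study.

```python
import numpy as np

def simulate_lsm(rain, pump_idx, h0, h_ref, alpha, gamma, beta):
    """Forward-simulate groundwater level with a linear signal (mass balance) model.

    Daily update: h[t+1] = h[t] + alpha*(h_ref - h[t]) + gamma*rain[t] - beta*pump_idx[t]
    """
    h = np.empty(len(rain) + 1)
    h[0] = h0
    for t in range(len(rain)):
        h[t + 1] = h[t] + alpha * (h_ref - h[t]) + gamma * rain[t] - beta * pump_idx[t]
    return h

# Illustrative daily forcing: rainfall (mm/day) and a pumpage indicator.
rng = np.random.default_rng(1)
rain = rng.exponential(scale=5.0, size=365)
pump_idx = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(365) / 365)

levels = simulate_lsm(rain, pump_idx, h0=10.0, h_ref=10.0,
                      alpha=0.05, gamma=0.02, beta=0.1)
print("simulated level range:", levels.min(), levels.max())
```

In practice the handful of model parameters would be calibrated against the observed level series, for example by least squares, before being interpreted physically.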
Intense beams at the micron level for the Next Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seeman, J.T.
1991-08-01
High brightness beams with sub-micron dimensions are needed to produce a high luminosity for electron-positron collisions in the Next Linear Collider (NLC). To generate these small beam sizes, a large number of issues dealing with intense beams have to be resolved. Over the past few years many have been successfully addressed but most need experimental verification. Some of these issues are beam dynamics, emittance control, instrumentation, collimation, and beam-beam interactions. Recently, the Stanford Linear Collider (SLC) has proven the viability of linear collider technology and is an excellent test facility for future linear collider studies.
Conditions for Stabilizability of Linear Switched Systems
NASA Astrophysics Data System (ADS)
Minh, Vu Trieu
2011-06-01
This paper investigates some conditions that can provide stabilizability for linear switched systems with polytopic uncertainties via their closed-loop linear quadratic state feedback regulator. The closed-loop switched systems can stabilize unstable open-loop systems, or stable open-loop systems for which there is no solution for a common Lyapunov matrix. For continuous-time switched linear systems, we show that if there exists a solution to an associated Riccati equation for the closed-loop systems sharing one common Lyapunov matrix, the switched linear systems are stable. For discrete-time switched systems, we derive a Linear Matrix Inequality (LMI) to calculate a common Lyapunov matrix and a solution for the stable closed-loop feedback systems. These closed-loop linear quadratic state feedback regulators guarantee global asymptotic stability for any switched linear systems with any switching signal sequence.
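A common quadratic Lyapunov function can be searched for numerically as an LMI feasibility problem. The sketch below, using the cvxpy modelling package, checks whether two example subsystem matrices admit a common Lyapunov matrix; it is a generic continuous-time check under illustrative matrices and tolerances, not the Riccati-based or discrete-time formulation derived in the paper.

```python
import numpy as np
import cvxpy as cp

# Two stable subsystems of a switched linear system (illustrative matrices).
A1 = np.array([[-1.0, 2.0], [0.0, -3.0]])
A2 = np.array([[-2.0, 0.5], [-1.0, -1.0]])

n = 2
eps = 1e-6
P = cp.Variable((n, n), symmetric=True)

# LMI feasibility: P > 0 and Ai' P + P Ai < 0 for every subsystem.
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status == "optimal":
    print("common Lyapunov matrix found:\n", P.value)
else:
    print("no common quadratic Lyapunov function found:", prob.status)
```

If the problem is infeasible, the subsystems may still be stabilizable by switching, but not certified by a single quadratic Lyapunov function.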
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
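A minimal sketch of the recursive least-squares update mentioned above, applied to estimating linear-prediction coefficients from a signal one sample at a time. The model order, forgetting factor and test signal are illustrative assumptions, and the excitation/codebook part of the scheme is omitted.

```python
import numpy as np

def rls_linear_predictor(x, order=4, lam=0.99, delta=100.0):
    """Recursively estimate coefficients a so that x[n] ~ a . [x[n-1], ..., x[n-order]]."""
    a = np.zeros(order)
    P = delta * np.eye(order)                 # inverse correlation matrix estimate
    for n in range(order, len(x)):
        phi = x[n - order:n][::-1]            # most recent samples first
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        e = x[n] - a @ phi                    # a priori prediction error
        a = a + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return a

# Illustrative test signal: a decaying sinusoid plus noise.
rng = np.random.default_rng(2)
t = np.arange(2000)
x = np.exp(-t / 800.0) * np.sin(0.2 * t) + 0.01 * rng.standard_normal(t.size)

coeffs = rls_linear_predictor(x, order=4)
print("estimated predictor coefficients:", coeffs)
```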
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
A high-speed imaging facility is important and necessary for building a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a lens of long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval time between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum value of about 1 s. Two images, each with a size of 1024×1024 pixels, can be captured simultaneously with the developed camera. In addition, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.
A fourth gradient to overcome slice dependent phase effects of voxel-sized coils in planar arrays.
Bosshard, John C; Eigenbrodt, Edwin P; McDougall, Mary P; Wright, Steven M
2010-01-01
The signals from an array of densely spaced, long and narrow receive coils for MRI are complicated when the voxel size is of comparable dimension to the coil size. The RF coil causes a phase gradient across each voxel, which depends on the distance from the coil, resulting in a slice-dependent shift of k-space. A fourth gradient coil has been implemented and used with the system's gradient set to create a gradient field which varies with slice. The gradients are pulsed together to impart a slice-dependent phase gradient that compensates for the slice-dependent phase due to the RF coils. However, the non-linearity in the fourth gradient, which creates the desired slice dependency, also results in a through-slice phase ramp, which disturbs normal slice refocusing and leads to additional signal cancellation and a reduced field of view. This paper discusses the benefits and limitations of using a fourth gradient coil to compensate for the phase due to RF coils.
Deviation of Zipf's and Heaps' Laws in Human Languages with Limited Dictionary Sizes
Lü, Linyuan; Zhang, Zi-Ke; Zhou, Tao
2013-01-01
Zipf's law on word frequency and Heaps' law on the growth of distinct words are observed in the Indo-European language family, but they do not hold for languages like Chinese, Japanese and Korean. These languages consist of characters and have very limited dictionary sizes. Extensive experiments show that: (i) The character frequency distribution follows a power law with exponent close to one, at which the corresponding Zipf's exponent diverges. Indeed, the character frequency decays exponentially in the Zipf plot. (ii) The number of distinct characters grows with the text length in three stages: it grows linearly in the beginning, then turns to a logarithmic form, and eventually saturates. A theoretical model for the writing process is proposed, which embodies the rich-get-richer mechanism and the effects of limited dictionary size. Experiments, simulations and analytical solutions agree well with each other. This work refines the understanding of Zipf's and Heaps' laws in human language systems. PMID:23378896
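The two empirical regularities discussed above are easy to measure on any text. The sketch below computes the character rank-frequency relation (Zipf) and the growth of the number of distinct characters with text length (Heaps) for an arbitrary string; the sample string is a placeholder for a real corpus.

```python
from collections import Counter

def zipf_and_heaps(text):
    """Return (rank, frequency) pairs and the distinct-character growth curve."""
    counts = Counter(text)
    freqs = sorted(counts.values(), reverse=True)   # Zipf: frequency vs. rank
    seen, growth = set(), []
    for ch in text:                                  # Heaps: vocabulary vs. text length
        seen.add(ch)
        growth.append(len(seen))
    return list(enumerate(freqs, start=1)), growth

sample = "this is a small placeholder corpus; a real study would use a large text"
zipf, heaps = zipf_and_heaps(sample)
print("top ranks (rank, freq):", zipf[:5])
print("distinct characters after 10, 30, all chars:", heaps[9], heaps[29], heaps[-1])
```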
A cohesive granular material with tunable elasticity
Hemmerle, Arnaud; Schröter, Matthias; Goehring, Lucas
2016-01-01
By mixing glass beads with a curable polymer we create a well-defined cohesive granular medium, held together by solidified, and hence elastic, capillary bridges. This material has a geometry similar to a wet packing of beads, but with an additional control over the elasticity of the bonds holding the particles together. We show that its mechanical response can be varied over several orders of magnitude by adjusting the size and stiffness of the bridges, and the size of the particles. We also investigate its mechanism of failure under unconfined uniaxial compression in combination with in situ x-ray microtomography. We show that a broad linear-elastic regime ends at a limiting strain of about 8%, whatever the stiffness of the agglomerate, which corresponds to the beginning of shear failure. The possibility to finely tune the stiffness, size and shape of this simple material makes it an ideal model system for investigations on, for example, fracturing of porous rocks, seismology, or root growth in cohesive porous media. PMID:27774988
A cohesive granular material with tunable elasticity.
Hemmerle, Arnaud; Schröter, Matthias; Goehring, Lucas
2016-10-24
By mixing glass beads with a curable polymer we create a well-defined cohesive granular medium, held together by solidified, and hence elastic, capillary bridges. This material has a geometry similar to a wet packing of beads, but with an additional control over the elasticity of the bonds holding the particles together. We show that its mechanical response can be varied over several orders of magnitude by adjusting the size and stiffness of the bridges, and the size of the particles. We also investigate its mechanism of failure under unconfined uniaxial compression in combination with in situ x-ray microtomography. We show that a broad linear-elastic regime ends at a limiting strain of about 8%, whatever the stiffness of the agglomerate, which corresponds to the beginning of shear failure. The possibility to finely tune the stiffness, size and shape of this simple material makes it an ideal model system for investigations on, for example, fracturing of porous rocks, seismology, or root growth in cohesive porous media.
Brunet-Derrida Behavior of Branching-Selection Particle Systems on the Line
NASA Astrophysics Data System (ADS)
Bérard, Jean; Gouéré, Jean-Baptiste
2010-09-01
We consider a class of branching-selection particle systems on $\mathbb{R}$ similar to the one considered by E. Brunet and B. Derrida in their 1997 paper "Shift in the velocity of a front due to a cutoff". Based on numerical simulations and heuristic arguments, Brunet and Derrida showed that, as the population size N of the particle system goes to infinity, the asymptotic velocity of the system converges to a limiting value at the unexpectedly slow rate $(\log N)^{-2}$. In this paper, we give a rigorous mathematical proof of this fact, for the class of particle systems we consider. The proof makes use of ideas and results by R. Pemantle, and by N. Gantert, Y. Hu and Z. Shi, and relies on a comparison of the particle system with a family of N independent branching random walks killed below a linear space-time barrier.
Clonal Selection Based Artificial Immune System for Generalized Pattern Recognition
NASA Technical Reports Server (NTRS)
Huntsberger, Terry
2011-01-01
The last two decades have seen a rapid increase in the application of AIS (Artificial Immune Systems), modeled after the human immune system, to a wide range of areas including network intrusion detection, job shop scheduling, classification, pattern recognition, and robot control. JPL (Jet Propulsion Laboratory) has developed an integrated pattern recognition/classification system called AISLE (Artificial Immune System for Learning and Exploration) based on biologically inspired models of B-cell dynamics in the immune system. When used for unsupervised or supervised classification, the method scales linearly with the number of dimensions, has performance that is relatively independent of the total size of the dataset, and has been shown to perform as well as traditional clustering methods. When used for pattern recognition, the method efficiently isolates the appropriate matches in the data set. The paper presents the underlying structure of AISLE and the results from a number of experimental studies.
Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines
del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano
2015-01-01
Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392
Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.
del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano
2015-06-17
Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.
Keeping speed and distance for aligned motion.
Farkas, Illés J; Kun, Jeromos; Jin, Yi; He, Gaoqi; Xu, Mingliang
2015-01-01
The cohesive collective motion (flocking, swarming) of autonomous agents is ubiquitously observed and exploited in both natural and man-made settings, thus minimal models for its description are essential. In a model with continuous space and time we find that if two particles arrive symmetrically in a plane at a large angle, then (i) radial repulsion and (ii) linear self-propelling toward a fixed preferred speed are sufficient for them to depart at a smaller angle. For this local gain of momentum explicit velocity alignment is not necessary, nor are adhesion or attraction, inelasticity or anisotropy of the particles, or nonlinear drag. With many particles obeying these microscopic rules of motion we find that their spatial confinement to a square with periodic boundaries (which is an indirect form of attraction) leads to stable macroscopic ordering. As a function of the strength of added noise we see, at finite system sizes, a critical slowing down close to the order-disorder boundary and a discontinuous transition. After varying the density of particles at constant system size and varying the size of the system with constant particle density, we predict that in the infinite system size (or density) limit the hysteresis loop disappears and the transition becomes continuous. We note that animals, humans, drones, etc., tend to move asynchronously and are often more responsive to motion than to positions. Thus, for them velocity-based continuous models can provide higher precision than coordinate-based models. An additional characteristic and realistic feature of the model is that convergence to the ordered state is fastest at a finite density, which is in contrast to models applying (discontinuous) explicit velocity alignments and discretized time. To summarize, we find that the investigated model can provide a minimal description of flocking.
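A minimal sketch of the microscopic rules described above: particles in a periodic square experience radial repulsion from near neighbours and a linear relaxation of their speed toward a preferred value, plus noise, with no explicit velocity alignment. All parameter values are illustrative assumptions, not those used in the paper.

```python
import numpy as np

L, N, dt, steps = 10.0, 100, 0.05, 2000
v0, r_cut = 1.0, 1.0                 # preferred speed and repulsion range
k_rep, k_speed, noise = 2.0, 1.0, 0.1

rng = np.random.default_rng(3)
pos = rng.uniform(0, L, size=(N, 2))
vel = rng.normal(0, 1, size=(N, 2))

for _ in range(steps):
    # Pairwise displacements with periodic (minimum image) boundaries.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(dist, np.inf)
    mask = dist < r_cut
    dist_safe = np.where(mask, dist, 1.0)   # avoid division issues outside the cutoff
    # Radial repulsion from neighbours within r_cut.
    rep = mask[..., None] * (d / dist_safe[..., None]) * (r_cut - dist_safe)[..., None]
    f_rep = k_rep * rep.sum(axis=1)
    # Linear self-propulsion toward the preferred speed v0.
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    f_prop = k_speed * (v0 - speed) * vel / np.maximum(speed, 1e-9)
    vel += dt * (f_rep + f_prop) + noise * np.sqrt(dt) * rng.normal(size=(N, 2))
    pos = (pos + dt * vel) % L

# Polar order parameter: 1 = fully aligned, 0 = disordered.
phi = np.linalg.norm((vel / np.linalg.norm(vel, axis=1, keepdims=True)).sum(axis=0)) / N
print("polar order parameter:", phi)
```

Sweeping the noise amplitude and system size in such a toy model is the natural way to probe the order-disorder transition discussed in the abstract.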
Streicher, Jeffrey W; Cox, Christian L; Birchard, Geoffrey F
2012-04-01
Although well documented in vertebrates, correlated changes between metabolic rate and cardiovascular function of insects have rarely been described. Using the very large cockroach species Gromphadorhina portentosa, we examined oxygen consumption and heart rate across a range of body sizes and temperatures. Metabolic rate scaled positively and heart rate negatively with body size, but neither scaled linearly. The response of these two variables to temperature was similar. This correlated response to endogenous (body mass) and exogenous (temperature) variables is likely explained by a mutual dependence on similar metabolic substrate use and/or coupled regulatory pathways. The intraspecific scaling for oxygen consumption rate showed an apparent plateauing at body masses greater than about 3 g. An examination of cuticle mass across all instars revealed isometric scaling with no evidence of an ontogenetic shift towards proportionally larger cuticles. Published oxygen consumption rates of other Blattodea species were also examined and, as in our intraspecific examination of G. portentosa, the scaling relationship was found to be non-linear with a decreasing slope at larger body masses. The decreasing slope at very large body masses in both intraspecific and interspecific comparisons may have important implications for future investigations of the relationship between oxygen transport and maximum body size in insects.
Nonequilibrium dynamic critical scaling of the quantum Ising chain.
Kolodrubetz, Michael; Clark, Bryan K; Huse, David A
2012-07-06
We solve for the time-dependent finite-size scaling functions of the one-dimensional transverse-field Ising chain during a linear-in-time ramp of the field through the quantum critical point. We then simulate Mott-insulating bosons in a tilted potential, an experimentally studied system in the same equilibrium universality class, and demonstrate that universality holds for the dynamics as well. We find qualitatively athermal features of the scaling functions, such as negative spin correlations, and we show that they should be robustly observable within present cold atom experiments.
DLP NIRscan Nano: an ultra-mobile DLP-based near-infrared Bluetooth spectrometer
NASA Astrophysics Data System (ADS)
Gelabert, Pedro; Pruett, Eric; Perrella, Gavin; Subramanian, Sreeram; Lakshminarayanan, Aravind
2016-02-01
The DLP NIRscan Nano is an ultra-portable spectrometer evaluation module utilizing DLP technology to achieve lower cost, smaller size, and higher performance than traditional architectures. The replacement of a linear array detector with a DLP digital micromirror device (DMD), in conjunction with a single-point detector, adds the functionality of programmable spectral filters and sampling techniques that were not previously available on NIR spectrometers. This paper presents the hardware, software, and optical systems of the DLP NIRscan Nano and its design considerations for the implementation of a DLP-based spectrometer.
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
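As a small illustration of the step-size issue mentioned above, the sketch below compares forward and central finite-difference estimates of a response derivative over a range of step sizes. The response function is an arbitrary stand-in for a structural response, not an example from the report.

```python
import numpy as np

def response(x):
    """Stand-in for a structural response quantity (e.g., a displacement)."""
    return np.sin(x) + 0.1 * x**3

def fd_sensitivities(x0, steps):
    exact = np.cos(x0) + 0.3 * x0**2          # analytic derivative for comparison
    rows = []
    for h in steps:
        fwd = (response(x0 + h) - response(x0)) / h
        ctr = (response(x0 + h) - response(x0 - h)) / (2 * h)
        rows.append((h, abs(fwd - exact), abs(ctr - exact)))
    return rows

for h, err_fwd, err_ctr in fd_sensitivities(1.0, [1e-1, 1e-3, 1e-6, 1e-10]):
    print(f"h={h:.0e}  forward error={err_fwd:.2e}  central error={err_ctr:.2e}")
```

The output shows truncation error shrinking and round-off error growing as the step size decreases, which is the trade-off behind step-size selection.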
Panyabut, Teerawat; Sirirat, Natnicha; Siripinyanond, Atitaya
2018-02-13
Electrothermal atomic absorption spectrometry (ETAAS) was applied to investigate the atomization behaviors of gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) in order to relate them to particle size information. At atomization temperatures from 1400 °C to 2200 °C, the time-dependent atomic absorption peak profiles of AuNPs and AgNPs with sizes varying from 5 nm to 100 nm were examined. With increasing particle size, the maximum absorbance was observed at a later time. The time at maximum absorbance was found to increase linearly with increasing particle size, suggesting that ETAAS can be applied to provide the size information of nanoparticles. At an atomization temperature of 1600 °C, mixtures of nanoparticles containing two particle sizes, i.e., 5 nm tannic acid-stabilized AuNPs with 60, 80, or 100 nm citrate-stabilized AuNPs, were investigated and bimodal peaks were observed. The particle-size-dependent atomization behaviors of nanoparticles show the potential application of ETAAS for providing size information of nanoparticles. The calibration plot between the time at maximum absorbance and the particle size was applied to estimate the particle size of in-house synthesized AuNPs and AgNPs, and the results obtained were in good agreement with those from flow field-flow fractionation (FlFFF) and transmission electron microscopy (TEM) techniques. Furthermore, a linear relationship between the activation energy and the particle size was observed.
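The calibration described above amounts to a straight-line fit of the time at maximum absorbance against particle size, which is then inverted to size an unknown sample. The sketch below uses illustrative (time, size) calibration pairs, not the measured data from the paper.

```python
import numpy as np

# Illustrative calibration data: particle size (nm) vs. time at maximum absorbance (s).
sizes = np.array([5.0, 20.0, 40.0, 60.0, 80.0, 100.0])
t_max = np.array([1.10, 1.25, 1.46, 1.65, 1.88, 2.05])

slope, intercept = np.polyfit(sizes, t_max, deg=1)   # t_max = slope*size + intercept

def estimate_size(t_observed):
    """Invert the linear calibration to estimate particle size from t_max."""
    return (t_observed - intercept) / slope

print(f"calibration: t_max = {slope:.4f}*size + {intercept:.3f}")
print("estimated size for t_max = 1.55 s:", round(estimate_size(1.55), 1), "nm")
```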
Determining the effect of grain size and maximum induction upon coercive field of electrical steels
NASA Astrophysics Data System (ADS)
Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel
2011-10-01
Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square root inverse proportionality. Results also differ with regard to the slope of the reciprocal of grain size-coercive field relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). Coercive field was measured along the rolling direction and found to depend linearly on reciprocal of grain size with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space, using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
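In the discrete-state LMDP setting referred to above, exponentiating the value function, z(s) = exp(-v(s)), turns the Bellman equation into the linear fixed-point relation z = diag(exp(-q)) P z, where q is the state cost and P the passive dynamics; for a first-exit problem the desirability z is fixed at the goal state. The sketch below solves a toy chain by fixed-point iteration; the chain, costs and passive dynamics are illustrative assumptions, not the robot task from the paper.

```python
import numpy as np

n = 6                      # states 0..5; state 5 is the absorbing goal
q = np.full(n, 0.5)        # state cost per step
q[5] = 0.0                 # terminal cost at the goal

# Passive dynamics: random walk that stays put or moves to a neighbour.
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, max(i - 1, 0)] += 1 / 3
    P[i, i] += 1 / 3
    P[i, i + 1] += 1 / 3
P[5, 5] = 1.0

z = np.ones(n)
z[5] = np.exp(-q[5])                 # desirability fixed at the goal
for _ in range(500):                 # fixed-point iteration of the linear equation
    z_new = np.exp(-q) * (P @ z)
    z_new[5] = np.exp(-q[5])
    z = z_new

v = -np.log(z)                       # optimal value function
# Optimal controlled transition probabilities: u*(j|i) proportional to P(j|i) * z(j).
u = P * z[None, :]
u /= u.sum(axis=1, keepdims=True)
print("value function:", np.round(v, 3))
```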
Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Han, Jiequn; Wang, Han; Car, Roberto; E, Weinan
2018-04-01
We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.
Production Planning and Planting Pattern Scheduling Information System for Horticulture
NASA Astrophysics Data System (ADS)
Vitadiar, Tanhella Zein; Farikhin; Surarso, Bayu
2018-02-01
This paper presents production planning and planting-pattern scheduling for horticulture farmers using two methods. A fuzzy time series method is used to predict demand based on sales data, while linear programming is used to assist horticulture farmers in making production planning decisions and determining the schedule of cropping patterns in accordance with the demand predicted by the fuzzy time series method. The variables used in this paper are the size of the planting areas, the production advantage, the amount of seeds and the age of the plants. This research results in a production planning and planting pattern scheduling information system whose outputs are a recommended planting schedule, a harvest schedule and the number of seeds to be planted.
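A minimal sketch of the linear-programming side of such a system, using scipy.optimize.linprog: allocate planting area between two illustrative crops to maximize profit subject to land and seed-stock limits and demand caps (which, in the paper's setup, would come from the fuzzy time series forecast). All crop names, coefficients and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: planted area (ha) of crop A and crop B.
profit = np.array([3.0, 4.0])           # profit per hectare
c = -profit                             # linprog minimizes, so negate for maximization

A_ub = np.array([
    [1.0, 1.0],                         # total land:  xA + xB <= 10 ha
    [2.0, 3.0],                         # seed stock:  2*xA + 3*xB <= 24 units
])
b_ub = np.array([10.0, 24.0])

# Demand forecast caps how much of each crop is worth planting.
bounds = [(0, 7.0), (0, 6.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal areas (ha):", np.round(res.x, 2), " profit:", round(-res.fun, 2))
```

Planting and harvest dates would then follow from the chosen areas and the plant-age variable mentioned in the abstract.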
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization $S^{-1} = ZZ^{*}$ of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
Nichols, J.M.; Link, W.A.; Murphy, K.D.; Olson, C.C.
2010-01-01
This work discusses a Bayesian approach to approximating the distribution of parameters governing nonlinear structural systems. Specifically, we use a Markov Chain Monte Carlo method for sampling the posterior parameter distributions, thus producing both point and interval estimates for parameters. The method is first used to identify both linear and nonlinear parameters in multiple degree-of-freedom structural systems using free-decay vibrations. The approach is then applied to the problem of identifying the location, size, and depth of delamination in a model composite beam. The influence of additive Gaussian noise on the response data is explored with respect to the quality of the resulting parameter estimates.
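A minimal Metropolis-style sketch of the MCMC sampling described above, here estimating the posterior of a single natural-frequency parameter of a lightly damped free-decay response from noisy synthetic data. The model, prior, noise level and proposal width are illustrative assumptions, not the structures studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 5, 200)

def free_decay(omega, zeta=0.02, x0=1.0):
    """Free-decay displacement of a lightly damped single-DOF oscillator."""
    wd = omega * np.sqrt(1 - zeta**2)
    return x0 * np.exp(-zeta * omega * t) * np.cos(wd * t)

# Synthetic measurement with additive Gaussian noise; true natural frequency 6 rad/s.
sigma = 0.05
data = free_decay(6.0) + sigma * rng.standard_normal(t.size)

def log_post(omega):
    if not (1.0 < omega < 20.0):          # flat prior on a plausible range
        return -np.inf
    r = data - free_decay(omega)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the posterior.
samples, omega, lp = [], 5.0, log_post(5.0)
for _ in range(20000):
    prop = omega + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        omega, lp = prop, lp_prop
    samples.append(omega)

post = np.array(samples[5000:])           # discard burn-in
print(f"posterior mean = {post.mean():.3f} rad/s, 95% interval = "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```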
Farr, J B; Dessy, F; De Wilde, O; Bietzer, O; Schönenberg, D
2013-07-01
The purpose of this investigation was to compare and contrast the measured fundamental properties of two new types of modulated proton scanning systems. This provides a basis for clinical expectations based on the scanned beam quality and a benchmark for computational models. Because the relatively small beam and fast scanning posed challenges to the characterization, a secondary purpose was to develop and apply new approaches where necessary to do so. The following performance aspects of the proton scanning systems were investigated: beamlet alignment, static in-air beamlet size and shape, scanned in-air penumbra, scanned fluence map accuracy, geometric alignment of the scanning system to isocenter, maximum field size, lateral and longitudinal field uniformity of a 1 l cubic uniform field, output stability over time, gantry angle invariance, monitoring system linearity, and reproducibility. A range of detectors was used: film, ionization chambers, lateral multielement and longitudinal multilayer ionization chambers, and a scintillation screen combined with a digital video camera. Characterization of the scanned fluence maps was performed with a software analysis tool. The resulting measurements and analysis indicated that the two types of delivery systems performed within specification for the aspects investigated. Significant differences were observed between the two types of scanning systems: one type exhibits a smaller spot size and associated penumbra than the other. The differential is smallest at maximum energy and increases as the energy decreases. Additionally, the large-spot system showed an increase in dose precision to a static target with layer rescanning, whereas the small-spot system did not. The measured results from the two types of modulated scanning systems were consistent with their designs under the conditions tested. The most significant difference between the types of system was their proton spot size and associated resolution, factors of the magnetic optics and vacuum length. The need for and benefit of multielement detectors and high-resolution sensors were also shown. The use of a fluence map analytical software tool was particularly effective in characterizing the dynamic proton energy-layer scanning.
Self-Assembly of Emulsion Droplets into Polymer Chains
NASA Astrophysics Data System (ADS)
Bargteil, Dylan; McMullen, Angus; Brujic, Jasna
We experimentally investigate 'beads-on-a-string' models of polymers using the spontaneous assembly of emulsion droplets into linear chains. Droplets functionalized with surface-mobile DNA allow for programmable 'monomers' through which we can influence the three-dimensional structure of the assembled 'polymer'. Such model polymers can be used to study conformational changes of polypeptides and the principles governing protein folding. In our system, we find that droplets bind via complementary DNA strands that are recruited into adhesion patches. Recruitment is driven by the DNA hybridization energy, and is limited by the energy cost of surface deformation and the entropy loss of the mobile linkers, yielding adhesion patches of a characteristic size with a given number of linkers. By tuning the initial surface coverage of linkers, we control valency between the droplets to create linear or branched polymer chains. We additionally control the flexibility of the model polymers by varying the salt concentration and study their dynamics between extended and collapsed states. This system opens the possibility of programming stable three-dimensional structures, such as those found within folded proteins.
Parallel iterative solution for h and p approximations of the shallow water equations
Barragy, E.J.; Walters, R.A.
1998-01-01
A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using the Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field-scale problem where up to 512 processors are used.
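A small serial sketch of the ILUT-preconditioned Krylov idea described above, using SciPy's incomplete LU with a drop tolerance as the preconditioner for a biconjugate-gradient-type solve on a sparse test matrix. The parallel subdomain decomposition and the Helmholtz structure of the real problem are not reproduced; the test matrix is an arbitrary stand-in.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, bicgstab, LinearOperator

# Sparse non-symmetric test system (stand-in for a linearized subproblem).
n = 2000
rng = np.random.default_rng(5)
main = 4.0 + rng.uniform(0, 1, n)
A = sp.diags([main, -1.0 * np.ones(n - 1), -1.2 * np.ones(n - 1)],
             offsets=[0, -1, 1], format="csc")
b = rng.standard_normal(n)

# Incomplete LU with thresholding (ILUT-style) used as a preconditioner.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = bicgstab(A, b, M=M, maxiter=200)
print("converged" if info == 0 else f"info={info}",
      " residual:", np.linalg.norm(A @ x - b))
```

In the parallel setting described in the abstract, each subdomain would hold its own ILUT factorization and the block system would be relaxed across subdomain boundaries.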
Spatial filtering self-velocimeter for vehicle application using a CMOS linear image sensor
NASA Astrophysics Data System (ADS)
He, Xin; Zhou, Jian; Nie, Xiaoming; Long, Xingwu
2015-03-01
The idea of using a spatial filtering velocimeter (SFV) to measure the velocity of a vehicle for an inertial navigation system is put forward. The presented SFV is based on a CMOS linear image sensor with a high-speed data rate, large pixel size, and built-in timing generator. These advantages make the image sensor suitable for measuring vehicle velocity. The power spectrum of the output signal is obtained by fast Fourier transform and is corrected by a frequency-spectrum correction algorithm. This velocimeter was used to measure the velocity of a conveyor belt driven by a rotary table, and the measurement uncertainty is ~0.54%. Furthermore, it was also installed on a vehicle together with a laser Doppler velocimeter (LDV) to measure self-velocity. The measurement result of the designed SFV is compared with that of the LDV. It is shown that the measurement result of the SFV is consistent with that of the LDV. Therefore, the designed SFV is suitable for a vehicle self-contained inertial navigation system.
Quadratic canonical transformation theory and higher order density matrices.
Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic
2009-03-28
Canonical transformation (CT) theory provides a rigorously size-extensive description of dynamic correlation in multireference systems, with an accuracy superior to and cost scaling lower than complete active space second order perturbation theory. Here we expand our previous theory by investigating (i) a commutator approximation that is applied at quadratic, as opposed to linear, order in the effective Hamiltonian, and (ii) incorporation of the three-body reduced density matrix in the operator and density matrix decompositions. The quadratic commutator approximation improves CT's accuracy when used with a single-determinant reference, repairing the previous formal disadvantage of the single-reference linear CT theory relative to singles and doubles coupled cluster theory. Calculations on the BH and HF binding curves confirm this improvement. In multireference systems, the three-body reduced density matrix increases the overall accuracy of the CT theory. Tests on the H2O and N2 binding curves yield results highly competitive with expensive state-of-the-art multireference methods, such as the multireference Davidson-corrected configuration interaction (MRCI+Q), averaged coupled pair functional, and averaged quadratic coupled cluster theories.
Improving Strategies via SMT Solving
NASA Astrophysics Data System (ADS)
Gawlitza, Thomas Martin; Monniaux, David
We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of Gawlitza and Seidl [17]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and performs, for contrived examples, exponentially many strategy improvement steps in the worst case; this is unsurprising, since we show that the associated abstract reachability problem is $\Pi_2^p$-complete.
Xu, Qingsong
2013-05-01
Limited-angle rotary micropositioning stages are required in precision engineering applications where ultrahigh-precision rotational motion within a restricted range is needed. This paper presents the design, fabrication, and control of a compliant rotary micropositioning stage dedicated to such applications. To tackle the challenge of achieving both a large rotational range and a compact size, a new idea of a multi-stage compound radial flexure is proposed. A compact rotary stage is devised to deliver an over 10° rotational range while possessing a negligible magnitude of center shift. The stage is driven by a linear voice coil motor and its output motion is measured by laser displacement sensors. Analytical models are derived to facilitate the parametric design, which is validated by conducting finite element analysis. The actuation and sensing issues are addressed to guarantee the stage performance. A prototype is fabricated and a proportional-integral-derivative control is implemented to achieve precise positioning. Experimental results demonstrate a resolution of 2 μrad over a 10° rotational range as well as a low level of center shift of the rotary micropositioning system.
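A minimal discrete-time PID loop of the kind mentioned above, driving a crude first-order model of the stage toward a target angle. The plant model, gains and sample time are illustrative assumptions, not the identified dynamics of the actual voice-coil-driven stage.

```python
class PID:
    """Discrete-time proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Crude first-order plant: angle responds to the control effort with a lag tau.
dt, tau, gain = 0.001, 0.05, 1.0
pid = PID(kp=8.0, ki=40.0, kd=0.002, dt=dt)
angle, target = 0.0, 5.0            # degrees

for step in range(2000):
    u = pid.update(target, angle)
    angle += dt * (gain * u - angle) / tau
    if step % 500 == 0:
        print(f"t={step*dt:.2f} s  angle={angle:.3f} deg")
```

On the real stage the measurement would come from the laser displacement sensors and the output would drive the voice coil amplifier; gains would be retuned for the identified plant.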
High frequency ultrasound: a new frontier for ultrasound.
Shung, K; Cannata, Jonathan; Zhou, Qifa; Lee, Jungwoo
2009-01-01
High frequency ultrasonic imaging is considered by many to be the next frontier in ultrasonic imaging because higher frequencies yield much improved spatial resolution at the cost of penetration depth. It has many clinical applications, including visualizing blood vessel walls, the anterior segment of the eye, and the skin. Another application is small animal imaging. Ultrasound is especially attractive for imaging the heart of a small animal such as the mouse, whose heart is only a few millimeters in size and beats at more than 600 beats per minute. The majority of current commercial high frequency scanners, often termed ultrasonic backscatter microscopes (UBM), acquire images by scanning single-element transducers at frequencies between 50 and 80 MHz with a frame rate lower than 40 frames/s, making them less suitable for this application. High frequency linear arrays and linear-array-based ultrasonic imaging systems at frequencies higher than 30 MHz are being developed. The engineering of such arrays and the development of high frequency imaging systems have proven to be highly challenging. High frequency ultrasound may find other significant biomedical applications; the development of acoustic tweezers for manipulating microparticles is one such example.
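The resolution/penetration trade-off follows from the inverse relation between frequency and wavelength; a quick back-of-the-envelope calculation (assuming the usual soft-tissue sound speed of about 1540 m/s, a textbook value not stated in the abstract) shows the scale:

```python
# Quick arithmetic behind the resolution/penetration trade-off: wavelength
# shrinks as frequency rises (c = 1540 m/s is the common soft-tissue value,
# assumed here for illustration).
c = 1540.0  # m/s
for f_mhz in (5, 30, 50, 80):
    wavelength_um = c / (f_mhz * 1e6) * 1e6
    print(f"{f_mhz:3d} MHz -> wavelength ~ {wavelength_um:.0f} um")
# 5 MHz ~ 308 um versus 50 MHz ~ 31 um: roughly an order of magnitude finer
# resolution scale at the high-frequency end.
```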
NASA Astrophysics Data System (ADS)
Hassanzadeh, H.; Jafari Raad, S. M.
2017-12-01
Linear stability analysis is conducted to study the onset of buoyancy-driven convection involved in solubility trapping of CO2 in deep fractured aquifers. The effect of the physical properties of the fracture network on the stability criteria in a brine-rich fractured porous layer is investigated using the dual-porosity concept, for both single and variable matrix block size distributions. The linear stability results show that both the fracture interporosity flow coefficient and the fracture storativity factor play an important role in the stability behavior of the system. It is shown that a diffusive boundary layer under the gravity field in a fractured rock with lower fracture storativity and/or higher fracture interporosity flow coefficient is more stable. Scaling relations are presented for the onset of convective instability in fractured aquifers. These findings improve our understanding of buoyancy-driven flow in fractured aquifers and are particularly important for estimating potential storage capacity, risk assessment, and the characterization and screening of storage sites.
Keywords: CO2 sequestration; fractured rock; buoyancy-driven convection; stability analysis
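For reference only, the onset of such convection in a simple (single-porosity) porous layer is commonly expressed through a solutal Rayleigh number compared against the classical critical value of about 4π²; the dual-porosity criteria studied in the paper additionally involve fracture storativity and interporosity flow and are not reproduced here. The property values in the sketch below are illustrative assumptions, not data from the paper.

```python
import math

# Reference-only sketch: the classical single-porosity solutal Rayleigh number
# often used for the onset of CO2 convective mixing. All values are assumed.
def rayleigh(k, d_rho, g, H, phi, mu, D):
    return k * d_rho * g * H / (phi * mu * D)

Ra = rayleigh(k=1e-13,      # permeability, m^2
              d_rho=10.0,   # density increase of CO2-saturated brine, kg/m^3
              g=9.81,       # gravity, m/s^2
              H=50.0,       # layer thickness, m
              phi=0.2,      # porosity
              mu=5e-4,      # brine viscosity, Pa.s
              D=2e-9)       # molecular diffusivity, m^2/s
print(Ra, Ra > 4 * math.pi**2)  # onset expected when Ra exceeds ~39.5
```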
Contact angle of sessile drops in Lennard-Jones systems.
Becker, Stefan; Urbassek, Herbert M; Horsch, Martin; Hasse, Hans
2014-11-18
Molecular dynamics simulations are used for studying the contact angle of nanoscale sessile drops on a planar solid wall in a system interacting via the truncated and shifted Lennard-Jones potential. The entire range between total wetting and dewetting is investigated by varying the solid-fluid dispersive interaction energy. The temperature is varied between the triple point and the critical temperature. A correlation is obtained for the contact angle as a function of the temperature and the dispersive interaction energy. Size effects are studied by varying the number of fluid particles at otherwise constant conditions, using up to 150,000 particles. For particle numbers below 10,000, a decrease of the contact angle is found. This is attributed to a dependence of the solid-liquid surface tension on the droplet size. A convergence to a constant contact angle is observed for larger system sizes. The influence of the wall model is studied by varying the density of the wall. The effective solid-fluid dispersive interaction energy at a contact angle of θ = 90° is found to be independent of temperature and to decrease linearly with the solid density. A correlation is developed that describes the contact angle as a function of the dispersive interaction, the temperature, and the solid density. The density profile of the sessile drop and the surrounding vapor phase is described by a correlation combining a sigmoidal function and an oscillation term.
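For completeness, the truncated-and-shifted Lennard-Jones potential named above has the standard form sketched below (reduced units; the 2.5σ cutoff is a common choice assumed here for illustration and is not necessarily the one used in the paper).

```python
# Truncated-and-shifted Lennard-Jones potential in reduced units
# (epsilon = sigma = 1); the cutoff rc = 2.5 is an assumed, common choice.
def lj(r):
    return 4.0 * (r**-12 - r**-6)

def lj_truncated_shifted(r, rc=2.5):
    # Shift the potential so it goes continuously to zero at the cutoff.
    return lj(r) - lj(rc) if r < rc else 0.0

print(lj_truncated_shifted(2**(1/6)))  # shifted minimum: about -0.9837 instead of -1
```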
NASA Astrophysics Data System (ADS)
Divya, S.; Nampoori, V. P. N.; Radhakrishnan, P.; Mujeeb, A.
2014-08-01
TiN nanoparticles with an average size of 55 nm were investigated for their optical non-linear properties. During the experiment, the irradiating laser wavelength coincided with the surface plasmon resonance (SPR) peak of the nanoparticles. The large non-linearity of the nanoparticles is attributed to the plasmon resonance, which strongly enhances the local field within the nanoparticle. Both open- and closed-aperture Z-scan experiments were performed and the corresponding optical constants were extracted. The post-excitation absorption spectra revealed photofragmentation, leading to a blue shift of the band gap and a red shift of the SPR. The results are discussed in terms of enhanced interparticle interaction accompanying the size reduction. Notably, the optical constants, although nominally intrinsic to a given sample, change with laser intensity; the dependence of χ(3) is discussed in terms of the size variation caused by photofragmentation. The studies indicate that TiN nanoparticles are promising candidates for photonics applications, offering wide scope for further investigation.
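As a hedged illustration of how the nonlinear absorption coefficient β (and hence χ(3)) is typically extracted from an open-aperture Z-scan, the sketch below evaluates the standard low-absorption approximation for the normalized transmittance; all numerical values are hypothetical and not taken from the paper.

```python
# Open-aperture Z-scan, low-absorption approximation:
#   T(z) ~ 1 - q0 / (2^(3/2) * (1 + x^2)),  x = z/z0,  q0 = beta * I0 * L_eff.
# Beam, sample, and intensity values below are hypothetical.
def open_aperture_T(z, z0, beta, I0, L_eff):
    q0 = beta * I0 * L_eff
    x = z / z0
    return 1.0 - q0 / (2.0**1.5 * (1.0 + x * x))

for z_mm in (-10, -5, 0, 5, 10):
    T = open_aperture_T(z_mm * 1e-3, z0=2e-3, beta=1e-10, I0=5e12, L_eff=1e-3)
    print(z_mm, round(T, 4))
# The transmittance dip at the focus (z = 0) deepens with intensity; fitting
# its depth is how beta is usually extracted.
```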
NASA Technical Reports Server (NTRS)
Cheatwood, F. McNeil; Swanson, Gregory T.; Johnson, R. Keith; Hughes, Stephen; Calomino, Anthony; Gilles, Brian; Anderson, Paul; Bond, Bruce
2016-01-01
Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameter from 3m to 6m, with cone angles of 60 and 70 deg. To meet NASA and commercial near-term objectives, the HIAD team must scale the current technology up to 12-15m in diameter. The HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6m to a 15m-class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15m-class HIAD system. These include layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6m aeroshell (the largest HIAD built to date), a 12m aeroshell has four times the cross-sectional area, and a 15m one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components. This will affect a variety of steps in the manufacturing process, such as stacking the tori during assembly, stitching the structural webbing, initial inflation of tori, and stitching of F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15m-class system. Two complications in working with handmade textile structures are the non-linearity of the materials and the role of human accuracy during fabrication. Larger, more capable HIAD structures should see much larger operational loads, potentially bringing the structural response of the materials out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15m-class HIAD. In this paper, the challenges and associated mitigation plans related to scaling up the HIAD stacked-torus aeroshell to a 15m-class system will be discussed. In addition, the benefits of enlarging the structure will be further explored.
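The area-scaling figures quoted above follow directly from the square-law growth of frontal area with diameter, as the quick check below confirms.

```python
# Cross-sectional (frontal) area grows with the square of the aeroshell diameter.
import math

a6 = math.pi * (6 / 2) ** 2
for d in (12, 15):
    ratio = (math.pi * (d / 2) ** 2) / a6
    print(f"{d} m aeroshell: {ratio:.2f} x the 6 m cross-sectional area")
# 12 m -> 4.00x and 15 m -> 6.25x, matching the "four times" and
# "over six times" figures in the abstract.
```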
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlinks. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of the sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of the Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding, a two-dimensional interpolation prediction and adaptive Golomb-Rice coding are applied. The method supports progressive decompression using resolution scaling while maintaining superior performance in terms of speed and complexity. The coding efficiency and compression speed enlarge the effective capacity of the signal transmission channels, which allows onboard hardware to be reduced by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
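As a minimal illustration of the entropy-coding building block named above, the sketch below encodes and decodes non-negative prediction residuals with plain Golomb-Rice codes; the adaptive parameter selection and the two-dimensional interpolation predictor of the actual system are not reproduced.

```python
# Minimal Golomb-Rice coding of a non-negative prediction residual:
# unary-coded quotient followed by a k-bit binary remainder.
def rice_encode(value, k):
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")                      # count the leading ones
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

for v in (0, 3, 9, 20):
    code = rice_encode(v, k=2)
    print(v, code, rice_decode(code, k=2))   # round-trips each residual
```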