Sample records for simple linear combination

  1. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  2. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201

  3. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum, and, in general, the i-th order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.

  4. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the combined cost of the indicators for the three third order linear reconstructions, and is therefore too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. To overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.

  5. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    ERIC Educational Resources Information Center

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…

  6. User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods

    NASA Technical Reports Server (NTRS)

    Desilva, B. M. E.; Medan, R. T.

    1979-01-01

    Sufficient instructions are provided for interfacing the Mangler-Smith, leading edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher order, singularity linear analysis for computing the vortex effects for simple canard wing combinations.

  7. Amplitude Frequency Response Measurement: A Simple Technique

    ERIC Educational Resources Information Center

    Satish, L.; Vora, S. C.

    2010-01-01

    A simple method is described to combine a modern function generator and a digital oscilloscope to configure a setup that can directly measure the amplitude frequency response of a system. This is achieved by synchronously triggering both instruments, with the function generator operated in the "Linear-Sweep" frequency mode, while the oscilloscope…

  8. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    PubMed Central

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641

  9. A refinement of the combination equations for evaporation

    USGS Publications Warehouse

    Milly, P.C.D.

    1991-01-01

Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation, having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A, 193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. © 1991 Kluwer Academic Publishers.
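
The systematic underestimate follows from convexity: a tangent-line expansion lies below a convex curve on both sides of the expansion point. A minimal sketch, using the Magnus formula as an assumed stand-in for the saturation vapor-pressure curve (the abstract does not specify a parameterization):

```python
import math

def e_sat(T):
    """Saturation vapor pressure (Pa) via the Magnus formula; T in deg C.
    This parameterization is an illustrative assumption, not the paper's."""
    return 610.94 * math.exp(17.625 * T / (T + 243.04))

def e_sat_linear(T, T_air):
    """First-order (Penman-style) expansion of e_sat around the air temperature."""
    h = 1e-4
    slope = (e_sat(T_air + h) - e_sat(T_air - h)) / (2 * h)  # numerical derivative
    return e_sat(T_air) + slope * (T - T_air)

# Because e_sat is convex, the tangent line undershoots the true curve
# whenever the surface temperature differs from the air temperature.
T_air, T_surf = 20.0, 24.0
gap = e_sat(T_surf) - e_sat_linear(T_surf, T_air)  # positive
```

The gap grows with the surface-air temperature difference, which is why the linear combination equations bias evaporation low.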

  10. A Technique of Treating Negative Weights in WENO Schemes

    NASA Technical Reports Server (NTRS)

    Shi, Jing; Hu, Changqing; Shu, Chi-Wang

    2000-01-01

High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without a need to get rid of them.
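
The splitting idea can be sketched in a few lines: each signed linear weight is decomposed into a difference of two positive weights, and two standard WENO reconstructions are then recombined. The split with the factor 3 below follows the technique as commonly described; the sample weights are hypothetical:

```python
def split_weights(gamma):
    """Split linear weights (which may be negative) into two positive groups,
    gamma_i = gp_i - gm_i with gp_i, gm_i >= 0. The factor 3 keeps both
    groups bounded away from zero even for positive gamma_i."""
    gp = [0.5 * (g + 3.0 * abs(g)) for g in gamma]
    gm = [p - g for p, g in zip(gp, gamma)]
    sp, sm = sum(gp), sum(gm)
    # Normalized positive weight groups, plus the scale factors needed to
    # recombine the two WENO reconstructions: u = sp*u_plus - sm*u_minus.
    return [p / sp for p in gp], [m / sm for m in gm], sp, sm

gamma = [0.7, -0.3, 0.6]          # hypothetical linear weights summing to 1
wp, wm, sp, sm = split_weights(gamma)
```

Each group now sums to one and is strictly positive, so the usual nonlinear WENO weighting can be applied to each group separately before recombining.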

  11. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress

    NASA Astrophysics Data System (ADS)

    Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-01

A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression and shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ∼3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
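
The superposition step can be illustrated with a toy sketch. The per-component response functions below are invented placeholders for the tabulated CI-NEB results; only the additive structure is taken from the abstract:

```python
# Hypothetical per-component barrier responses dE_k(s), in eV, with s in GPa.
# These quadratic forms are invented purely for illustration.
def dE_uniaxial_xx(s): return -0.020 * s + 0.001 * s * s
def dE_shear_xy(s):    return  0.005 * s * s

def barrier_change(stress):
    """Linear Combination of Stress States: the barrier change under a
    complex stress is assembled as the sum of the stored simple-stress
    effects, one term per stress component."""
    return dE_uniaxial_xx(stress["xx"]) + dE_shear_xy(stress["xy"])

combined = barrier_change({"xx": 1.5, "xy": 0.8})
separate = dE_uniaxial_xx(1.5) + dE_shear_xy(0.8)
```

In the actual method each stored function comes from an atomistic calculation at a single simple stress state, and the sum replaces a full simulation of the combined state.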

  12. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress.

    PubMed

    Tchitchekova, Deyana S; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-21

A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression and shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ∼3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.

  13. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
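
The factor-of-N claim is easy to check numerically. A minimal sketch, modeling each classifier's boundary estimate as an unbiased, uncorrelated Gaussian around the Bayes-optimal boundary (the distributional choice is an assumption for illustration):

```python
import random
import statistics

random.seed(7)

def boundary_estimate(true_b=0.0, sigma=1.0):
    """One classifier's noisy, unbiased estimate of the decision boundary."""
    return random.gauss(true_b, sigma)

def added_error_variance(n_classifiers, trials=20000):
    """Variance of the N-classifier averaged boundary around the optimum."""
    means = [statistics.fmean(boundary_estimate() for _ in range(n_classifiers))
             for _ in range(trials)]
    return statistics.pvariance(means)

v1, v8 = added_error_variance(1), added_error_variance(8)
ratio = v1 / v8   # roughly 8 for uncorrelated, unbiased classifiers
```

Correlated or biased classifiers break the 1/N scaling, which is exactly the regime the chapter's later expressions address.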

  14. Principal components colour display of ERTS imagery

    NASA Technical Reports Server (NTRS)

    Taylor, M. M.

    1974-01-01

    In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.

  15. A three-dimensional FEM-DEM technique for predicting the evolution of fracture in geomaterials and concrete

    NASA Astrophysics Data System (ADS)

    Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio

    2018-07-01

    This paper extends to three dimensions (3D), the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner based on combining the finite element method and the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.

  16. Multiwavelength observations of magnetic fields and related activity on XI Bootis A

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.; Huovelin, J.; Linsky, Jeffrey L.; Giampapa, Mark S.; Jordan, Carole

    1988-01-01

    Preliminary results of coordinated observations of magnetic fields and related activity on the active dwarf, Xi Boo A, are presented. Combining the magnetic fluxes with the linear polarization data, a simple map of the stellar active regions is constructed.

  17. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.

  18. Direct localization of poles of a meromorphic function from measurements on an incomplete boundary

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Ando, Shigeru

    2010-01-01

    This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.

  19. Atomistic Structure and Dynamics of the Solvation Shell Formed by Organic Carbonates around Lithium Ions via Infrared Spectroscopies

    NASA Astrophysics Data System (ADS)

    Kuroda, Daniel; Fufler, Kristen

    Lithium-ion batteries have become ubiquitous to the portable energy storage industry, but efficiency issues still remain. Currently, most technological and scientific efforts are focused on the electrodes with little attention on the electrolyte. For example, simple fundamental questions about the lithium ion solvation shell composition in commercially used electrolytes have not been answered. Using a combination of linear and non-linear IR spectroscopies and theoretical calculations, we have carried out a thorough investigation of the solvation structure and dynamics of the lithium ion in various linear and cyclic carbonates at common battery electrolyte concentrations. Our studies show that carbonates coordinate the lithium ion tetrahedrally. They also reveal that linear and cyclic carbonates have contrasting dynamics in which cyclic carbonates present the most ordered structure. Finally, our experiments demonstrate that simple structural modifications in the linear carbonates impact significantly the microscopic interactions of the system. The stark differences in the solvation structure and dynamics among different carbonates reveal previously unknown details about the molecular level picture of these systems.

  20. Multi-Window Controllers for Autonomous Space Systems

    NASA Technical Reports Server (NTRS)

Lurie, B. J.; Hadaegh, F. Y.

    1997-01-01

Multi-window controllers select between elementary linear controllers using nonlinear windows based on the amplitude and frequency content of the feedback error. The controllers are relatively simple to implement and perform much better than linear controllers. The commanders for such controllers only order the destination point and are freed from generating the command time-profiles. Robotic missions rely heavily on the tasks of acquisition and tracking. For autonomous and optimal control of the spacecraft, the control bandwidth must be larger while the feedback can (and, therefore, must) be reduced. Combining linear compensators via a multi-window nonlinear summer guarantees the minimum-phase character of the combined transfer function. It is shown that the solution may require using several parallel branches and windows. Several examples of multi-window nonlinear controller applications are presented.

  1. Generalized concentration addition: a method for examining mixtures containing partial agonists.

    PubMed

    Howard, Gregory J; Webster, Thomas F

    2009-08-07

    Environmentally relevant toxic exposures often consist of simultaneous exposure to multiple agents. Methods to predict the expected outcome of such combinations are critical both to risk assessment and to an accurate judgment of whether combinations are synergistic or antagonistic. Concentration addition (CA) has commonly been used to assess the presence of synergy or antagonism in combinations of similarly acting chemicals, and to predict effects of combinations of such agents. CA has the advantage of clear graphical interpretation: Curves of constant joint effect (isoboles) must be negatively sloped straight lines if the mixture is concentration additive. However, CA cannot be directly used to assess combinations that include partial agonists, although such agents are of considerable interest. Here, we propose a natural extension of CA to a functional form that may be applied to mixtures including full agonists and partial agonists. This extended definition, for which we suggest the term "generalized concentration addition," encompasses linear isoboles with slopes of any sign. We apply this approach to the simple example of agents with dose-response relationships described by Hill functions with slope parameter n=1. The resulting isoboles are in all cases linear, with negative, zero and positive slopes. Using simple mechanistic models of ligand-receptor systems, we show that the same isobole pattern and joint effects are generated by modeled combinations of full and partial agonists. Special cases include combinations of two full agonists and a full agonist plus a competitive antagonist.
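
For Hill curves with slope n = 1, the generalized-concentration-addition condition, the sum over components of c_i / f_i^{-1}(E) equal to 1, can be solved for the joint effect E in closed form. A minimal sketch under that setup (parameter values hypothetical):

```python
def gca_effect(components):
    """Generalized concentration addition for Hill curves with slope n = 1.
    Each component is (conc, K, alpha): EC50 K and maximal effect alpha
    (alpha = 1 full agonist, 0 < alpha < 1 partial agonist, alpha = 0 pure
    competitive antagonist). Inverting f(c) = alpha*c/(K + c) and imposing
    sum_i c_i / f_i^{-1}(E) = 1 yields the closed form below."""
    num = sum(c * a / K for c, K, a in components)
    den = 1.0 + sum(c / K for c, K, a in components)
    return num / den

# A single full agonist reduces to its own Hill curve, c/(K + c):
E_full = gca_effect([(2.0, 1.0, 1.0)])
# Adding a pure antagonist (alpha = 0) lowers the joint effect:
E_mix = gca_effect([(2.0, 1.0, 1.0), (1.0, 1.0, 0.0)])
```

The special cases in the abstract fall out directly: with two full agonists the isoboles are the classical negatively sloped lines, while a full agonist plus antagonist gives positively sloped ones.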

  2. [Relation between Body Height and Combined Length of Manubrium and Mesosternum of Sternum Measured by CT-VRT in Southwest Han Population].

    PubMed

    Luo, Ying-zhen; Tu, Meng; Fan, Fei; Zheng, Jie-qian; Yang, Ming; Li, Tao; Zhang, Kui; Deng, Zhen-hua

    2015-06-01

To establish the linear regression equation between body height and combined length of manubrium and mesosternum of the sternum measured by CT volume rendering technique (CT-VRT) in southwest Han population. One hundred and sixty subjects, including 80 males and 80 females, were selected from southwest Han population for routine CT-VRT (reconstruction thickness 1 mm) examination. The lengths of both manubrium and mesosternum were recorded, and the combined length of manubrium and mesosternum was equal to the algebraic sum of the two. The sex-specific linear regression equations between the combined length of manubrium and mesosternum and the real body height of each subject were deduced. The sex-specific simple linear regression equations between the combined length of manubrium and mesosternum (x3) and body height (y) were established (male: y = 135.000+2.118 x3 and female: y = 120.790+2.808 x3). Both equations showed statistical significance (P < 0.05) with a 100% predictive accuracy. CT-VRT is an effective method for measurement of the index of sternum. The combined length of manubrium and mesosternum from CT-VRT can be used for body height estimation in southwest Han population.
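
The two reported equations are simple enough to apply directly. A small sketch (units assumed to be centimetres, which the abstract does not state explicitly):

```python
def height_from_sternum(combined_length_cm, sex):
    """Body height estimate from the combined length of manubrium and
    mesosternum, using the sex-specific regression equations reported
    for the southwest Han sample (units assumed cm)."""
    if sex == "male":
        return 135.000 + 2.118 * combined_length_cm
    if sex == "female":
        return 120.790 + 2.808 * combined_length_cm
    raise ValueError("sex must be 'male' or 'female'")

# A combined sternum length of 15 cm gives plausible statures:
male_est = height_from_sternum(15.0, "male")      # 166.77
female_est = height_from_sternum(15.0, "female")  # 162.91
```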

  3. Measurements of PANs during the New England Air Quality Study 2002

    NASA Astrophysics Data System (ADS)

    Roberts, J. M.; Marchewka, M.; Bertman, S. B.; Sommariva, R.; Warneke, C.; de Gouw, J.; Kuster, W.; Goldan, P.; Williams, E.; Lerner, B. M.; Murphy, P.; Fehsenfeld, F. C.

    2007-10-01

Measurements of peroxycarboxylic nitric anhydrides (PANs) were made during the New England Air Quality Study 2002 cruise of the NOAA R/V Ronald H. Brown. The four compounds observed, PAN, peroxypropionic nitric anhydride (PPN), peroxymethacrylic nitric anhydride (MPAN), and peroxyisobutyric nitric anhydride (PiBN), were compared with results from other continental and Gulf of Maine sites. Systematic changes in the PPN/PAN ratio, due to differential thermal decomposition rates, were related quantitatively to air mass aging. At least one early morning period was observed when O3 seemed to have been lost, probably due to NO3 and N2O5 chemistry. The highest O3 episode was observed in the combined plume of isoprene sources and anthropogenic volatile organic compounds (VOCs) and NOx sources from the greater Boston area. A simple linear combination model showed that the organic precursors leading to elevated O3 were roughly half from the biogenic and half from the anthropogenic VOC regimes. An explicit chemical box model confirmed that the chemistry in the Boston plume is well represented by the simple linear combination model. This degree of biogenic hydrocarbon involvement in the production of photochemical ozone has significant implications for air quality control strategies in this region.
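
A "simple linear combination model" of this kind amounts to a two-source least-squares fit of an observed profile against biogenic and anthropogenic basis profiles. A minimal sketch with invented placeholder profiles (not the campaign's data):

```python
def fit_two_sources(obs, bio, anthro):
    """Least-squares coefficients (a, b) for obs ~ a*bio + b*anthro,
    solved from the 2x2 normal equations."""
    s11 = sum(x * x for x in bio)
    s22 = sum(x * x for x in anthro)
    s12 = sum(x * y for x, y in zip(bio, anthro))
    r1 = sum(x * y for x, y in zip(bio, obs))
    r2 = sum(x * y for x, y in zip(anthro, obs))
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

bio    = [1.0, 0.2, 0.0, 0.4]          # hypothetical source signatures
anthro = [0.1, 1.0, 0.8, 0.3]
obs    = [0.55, 0.60, 0.40, 0.35]      # an exactly half-and-half mixture
a, b = fit_two_sources(obs, bio, anthro)
```

With real measurements the recovered coefficients quantify the biogenic versus anthropogenic split, which is how the roughly half-and-half attribution in the abstract would be obtained.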

  4. Virasoro constraints and polynomial recursion for the linear Hodge integrals

    NASA Astrophysics Data System (ADS)

    Guo, Shuai; Wang, Gehao

    2017-04-01

    The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.

  5. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

A numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. It has been practiced in a variety of computer languages on large and small computers; for sufficiently simple circuits, programmable hand calculators can be used. Although some combinations of circuit elements make the numerical solutions diverge, the technique enables quick identification of divergence and correction of circuit models to make the solutions converge.
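
A minimal instance of the difference-equation approach, stepping an RC low-pass circuit forward with forward-Euler updates (component values arbitrary):

```python
# RC low-pass driven by a step input, iterated as a difference equation:
# v[n+1] = v[n] + dt/(RC) * (Vin - v[n])
R, C, dt = 1e3, 1e-6, 1e-5        # ohms, farads, seconds
v_in, v = 5.0, 0.0                # step input, initial capacitor voltage

history = []
for _ in range(2000):
    v += dt / (R * C) * (v_in - v)
    history.append(v)
# With dt well below RC the iteration converges toward v_in; choosing
# dt > 2*RC produces exactly the kind of divergence the abstract mentions,
# flagged by the samples growing without bound.
```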

  6. A new adaptively central-upwind sixth-order WENO scheme

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-03-01

In this paper, we propose a new sixth-order WENO scheme for solving one dimensional hyperbolic conservation laws. The new WENO reconstruction has three properties: (1) it is central in smooth regions for low dissipation, and is upwind near discontinuities for numerical stability; (2) it is a convex combination of four linear reconstructions, in which one linear reconstruction is sixth order and the others are third order; (3) its linear weights can be any positive numbers with the requirement that their sum equals one. Furthermore, we propose a simple smoothness indicator for the sixth-order linear reconstruction; this smoothness indicator can not only distinguish smooth regions from discontinuities exactly, but also reduces the computational cost, making it more efficient than the classical one.

  7. Adaptive nonlinear control for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Black, William S.

    We present the background and motivation for ground vehicle autonomy, and focus on uses for space-exploration. Using a simple design example of an autonomous ground vehicle we derive the equations of motion. After providing the mathematical background for nonlinear systems and control we present two common methods for exactly linearizing nonlinear systems, feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performances of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.

  8. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  9. Nonlinear random response prediction using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.

    1993-01-01

    An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.

  10. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data arriving online, the performance of every candidate sub-model is monitored over the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution and hence maximal computational efficiency. In addition, at each time step the model prediction is chosen as either the resultant multiple-model output or the best sub-model, whichever performs better. Simulation results are given in comparison with typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
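
    The closed-form, sum-to-one-constrained combination step can be sketched as follows; the data and sub-model predictions are illustrative, and the small regularisation term is an implementation convenience not mentioned in the abstract:

```python
import numpy as np

def combine_predictions(P, y):
    """Closed-form weights minimising ||y - P w||^2 subject to sum(w) = 1,
    via a Lagrange multiplier. P: (n, M) predictions of the M selected
    sub-models over a recent window; y: (n,) observed outputs."""
    M = P.shape[1]
    G = P.T @ P + 1e-10 * np.eye(M)      # regularised Gram matrix
    ones = np.ones(M)
    Gb = np.linalg.solve(G, P.T @ y)     # unconstrained normal-equation part
    G1 = np.linalg.solve(G, ones)
    lam = (ones @ Gb - 1.0) / (ones @ G1)  # enforce the sum-to-one constraint
    return Gb - lam * G1

rng = np.random.default_rng(0)
p1, p2 = rng.standard_normal(50), rng.standard_normal(50)
y = 0.3 * p1 + 0.7 * p2                  # target is a convex combination
w = combine_predictions(np.column_stack([p1, p2]), y)
print(w)
```

    Because the constrained problem is quadratic with one linear equality, the weights come out of two linear solves, with no iterative optimisation.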

  11. Influence of smooth temperature variation on hotspot ignition

    NASA Astrophysics Data System (ADS)

    Reinbacher, Fynn; Regele, Jonathan David

    2018-01-01

    Autoignition in thermally stratified reactive mixtures originates in localised hotspots. The ignition behaviour is often characterised using linear temperature gradients and, more recently, constant temperature plateaus combined with temperature gradients. Acoustic timescale characterisation of plateau regions has been successfully used to characterise the type of mechanical disturbance that will be created from a plateau core ignition. This work combines linear temperature gradients with superelliptic cores in order to more accurately account for a local temperature maximum of finite size and the smooth temperature variation contained inside realistic hotspot centres. A one-step Arrhenius reaction is used to model an H2-air reactive mixture. Using the superelliptic approach, a range of behaviours for temperature distributions is investigated by varying the temperature profile between the gradient-only and plateau-and-gradient bounding cases. Each superelliptic case is compared to a respective plateau-and-gradient case where simple acoustic timescale characterisation may be performed. It is shown that hot spots with excitation-to-acoustic timescale ratios sufficiently greater than unity exhibit behaviour very similar to a simple plateau-gradient model. However, for larger hot spots with timescale ratios sufficiently less than unity, the reaction behaviour is highly dependent on the smooth temperature profile contained within the core region.

  12. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
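
    A minimal least-squares sketch of the method of least squares described above; the sample data are illustrative:

```python
def simple_linear_regression(x, y):
    """Least-squares estimates of intercept b0 and slope b1 in y ~ b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)                       # sum of squares of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))    # cross products
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

# Illustrative data: a roughly linear relationship.
b0, b1 = simple_linear_regression([1, 2, 3, 4, 5], [2.1, 3.9, 6.0, 8.1, 9.9])
print(b0, b1)
```

    The regression line always passes through the point of means (mean of x, mean of y), which is how the intercept is recovered from the slope.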

  13. Digitally gain controlled linear high voltage amplifier for laboratory applications.

    PubMed

    Koçum, C

    2011-08-01

    The design of a digitally gain-controlled high-voltage non-inverting bipolar linear amplifier is presented. This cost-efficient and relatively simple circuit has a stable operating range from DC to 90 kHz under a load of 10 kΩ and 39 pF. The amplifier can swing up to 360 V(pp) under these conditions and has a 2.5 μs rise time. The gain can be changed with the aid of JFETs. The amplifiers were realized using a combination of operational amplifiers and high-voltage discrete bipolar junction transistors. The circuit details and performance characteristics are discussed.

  14. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283

  15. More memory under evolutionary learning may lead to chaos

    NASA Astrophysics Data System (ADS)

    Diks, Cees; Hommes, Cars; Zeppini, Paolo

    2013-02-01

    We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.

  16. Does linear separability really matter? Complex visual search is explained by simple search

    PubMed Central

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822

  17. Non-planar vibrations of slightly curved pipes conveying fluid in simple and combination parametric resonances

    NASA Astrophysics Data System (ADS)

    Czerwiński, Andrzej; Łuczko, Jan

    2018-01-01

    The paper summarises the experimental investigations and numerical simulations of non-planar parametric vibrations of a statically deformed pipe. Underpinning the theoretical analysis is a 3D dynamic model of a curved pipe. The pipe motion is governed by four non-linear partial differential equations with periodically varying coefficients. The Galerkin method was applied, the shape functions being those governing the beam's natural vibrations. Experiments were conducted in the range of simple and combination parametric resonances, evidencing the possibility of in-plane and out-of-plane vibrations as well as fully non-planar vibrations in the combination resonance range. It is demonstrated that sub-harmonic and quasi-periodic vibrations are likely to be excited. The method suggested allows the spatial modes to be determined based on results recorded at selected points of the pipe. Results are summarised in the form of time histories, phase trajectory plots and spectral diagrams. Dedicated video materials give a better insight into the investigated phenomena.

  18. The deflection of circular mirrors of linearly varying thickness supported along a central hole and free along the outer edge.

    PubMed

    Prevenslik, T V

    1968-10-01

    Most cassegrainian mirrors supported along the central hole are designed for deflection tolerances using the theory for solid, constant thickness plates. Where tolerances are critical, the mirror is usually made thicker, thereby reducing the deflection but also increasing the weight of the mirror. Weight can be reduced by using a honeycomb design; however, manufacturing problems result because of the inherent complexity. To circumvent the disadvantages of excessive weight in the solid, constant thickness design and the complexity of the honeycomb design, a lightweight yet simple design would be desirable: a solid mirror of linearly varying thickness, decreasing in thickness from the center to the outer edge. As mirrors of linearly varying thickness may provide the best solution under combined deflection and weight restraints, a design basis is required and is found in small deflection plate theory. The work of H. Conway was extended to account for pressure loading proportional to mirror density for the case when Poisson's ratio is one-third. Closed form solutions for the slope of the linearly varying thickness mirrors were obtained for fixed and simply supported boundary conditions along the central hole. Maximum deflections were obtained by numerical integration and compared with the results for comparable constant thickness mirrors.

  19. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features and construct an M × N feature matrix from M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights with which the representative features combine at each pixel. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way while accurately localizing region boundaries. Finally, experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
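
    A hedged sketch of the factorization step, using plain Lee-Seung-style multiplicative updates on a synthetic low-rank feature matrix. The paper's actual method combines SVD with nonnegative matrix factorization; this shows only the NMF part, under illustrative data:

```python
import numpy as np

def factorize(Y, r, iters=500, seed=0):
    """Nonnegative factorisation Y ~ Z @ W via multiplicative updates.
    Y: (M, N) feature matrix; Z: (M, r) representative features;
    W: (r, N) per-pixel combination weights."""
    rng = np.random.default_rng(seed)
    M, N = Y.shape
    Z = rng.random((M, r)) + 0.1
    W = rng.random((r, N)) + 0.1
    for _ in range(iters):
        W *= (Z.T @ Y) / (Z.T @ Z @ W + 1e-12)   # update weights
        Z *= (Y @ W.T) / (Z @ W @ W.T + 1e-12)   # update representatives
    return Z, W

rng = np.random.default_rng(1)
Y = rng.random((6, 2)) @ rng.random((2, 40))     # exactly rank-2, nonnegative
Z, W = factorize(Y, 2)
err = np.linalg.norm(Y - Z @ W) / np.linalg.norm(Y)
print(err)
```

    In the segmentation setting, the column of W with the largest weight at a pixel indicates which representative (i.e. which texture region) that pixel belongs to.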

  20. Correlators in tensor models from character calculus

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2017-11-01

    We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.

  1. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach for analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model and reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed on benchmark test signals. An energy-ratio metric is employed to quantify spectral performance; the results show that the proposed Sparse-BMFLC method attains a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
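
    A hedged sketch of the idea: build a band-limited Fourier dictionary and estimate its coefficients by sparse linear regression. The ISTA solver and all signal parameters below are illustrative stand-ins, not the paper's exact algorithm:

```python
import numpy as np

def bmflc_dictionary(t, f_lo, f_hi, df):
    """Band-limited Fourier dictionary: sine and cosine columns on a fixed
    frequency grid (the BMFLC signal model)."""
    freqs = np.arange(f_lo, f_hi + 1e-9, df)
    cols = [np.sin(2 * np.pi * f * t) for f in freqs] + \
           [np.cos(2 * np.pi * f * t) for f in freqs]
    return np.column_stack(cols), freqs

def ista(A, y, lam=0.01, iters=500):
    """Plain ISTA for the l1-regularised least-squares (sparse regression) step."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

t = np.arange(0.0, 1.0, 1 / 200.0)       # 1 s sampled at 200 Hz
y = 0.8 * np.sin(2 * np.pi * 9.0 * t)    # single 9 Hz tone (illustrative)
A, freqs = bmflc_dictionary(t, 6.0, 14.0, 1.0)
w = ista(A, y)
peak = int(np.argmax(np.abs(w)))
print(freqs[peak % len(freqs)])          # dominant dictionary frequency
```

    With a single tone present, the l1 penalty drives all but the matching dictionary coefficient toward zero, which is the sparsity the abstract exploits.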

  2. Simple pre-distortion schemes for improving the power efficiency of SOA-based IR-UWB over fiber systems

    NASA Astrophysics Data System (ADS)

    Taki, H.; Azou, S.; Hamie, A.; Al Housseini, A.; Alaeddine, A.; Sharaiha, A.

    2017-01-01

    In this paper, we investigate the use of an SOA for reach extension of an impulse radio over fiber system. Operating in the saturated regime introduces strong nonlinearities and spectral distortions, which degrade the power efficiency of the propagated pulses. After studying the SOA response under various operating conditions, we enhance the system performance by applying simple analog pre-distortion schemes to various derivatives of the Gaussian pulse and to their combination. A novel pulse shape has also been designed by linearly combining three basic Gaussian pulses, offering very good spectral efficiency (> 55%) at a high power (0 dBm) at the amplifier input. Furthermore, the potential of our technique has been examined for 1.5 Gbps OOK and 0.75 Gbps PPM modulation schemes. Pre-distortion proved advantageous for a large extension of the optical link (150 km), with inline amplification via SOA at 40 km.

  3. Applications of Probabilistic Combiners on Linear Feedback Shift Register Sequences

    DTIC Science & Technology

    2016-12-01

    ... on the resulting output strings show a drastic increase in complexity, while simultaneously passing the stringent randomness tests required by the ... a three-variable function. ... Decryption of a message that has been encrypted using bitwise XOR is quite simple, since each bit is its own additive inverse.

  4. Influence of smooth temperature variation on hotspot ignition

    DOE PAGES

    Reinbacher, Fynn; Regele, Jonathan David

    2017-10-06

    Autoignition in thermally stratified reactive mixtures originates in localised hotspots. The ignition behaviour is often characterised using linear temperature gradients and, more recently, constant temperature plateaus combined with temperature gradients. Acoustic timescale characterisation of plateau regions has been successfully used to characterise the type of mechanical disturbance that will be created from a plateau core ignition. This work combines linear temperature gradients with superelliptic cores in order to more accurately account for a local temperature maximum of finite size and the smooth temperature variation contained inside realistic hotspot centres. A one-step Arrhenius reaction is used to model an H2-air reactive mixture. Using the superelliptic approach, a range of behaviours for temperature distributions is investigated by varying the temperature profile between the gradient-only and plateau-and-gradient bounding cases. Each superelliptic case is compared to a respective plateau-and-gradient case where simple acoustic timescale characterisation may be performed. It is shown that hot spots with excitation-to-acoustic timescale ratios sufficiently greater than unity exhibit behaviour very similar to a simple plateau-gradient model. Furthermore, for larger hot spots with timescale ratios sufficiently less than unity, the reaction behaviour is highly dependent on the smooth temperature profile contained within the core region.

  6. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
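
    The simple linear ("checkerboard") mixing model can be inverted per pixel by least squares. The endmember spectra and fractions below are hypothetical, not laboratory data:

```python
import numpy as np

# Hypothetical endmember spectra (rows: spectral bands; columns: rock,
# soil, vegetation). Values are illustrative only.
E = np.array([[0.30, 0.10, 0.60],
              [0.40, 0.20, 0.50],
              [0.50, 0.30, 0.20],
              [0.55, 0.35, 0.80]])

def unmix(pixel, E):
    """Least-squares fractions f for the linear mixing model pixel ~ E @ f."""
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return f

truth = np.array([0.5, 0.2, 0.3])   # 50% rock, 20% soil, 30% vegetation
pixel = E @ truth                   # simulated mixed-pixel spectrum
print(unmix(pixel, E))
```

    Subtracting the vegetation component estimated this way is one route to the "vegetation removal" the abstract describes before rock/soil identification.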

  7. The terminator "toy" chemistry test: A simple tool to assess errors in transport schemes

    DOE PAGES

    Lauritzen, P. H.; Conley, A. J.; Lamarque, J. -F.; ...

    2015-05-04

    This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple but non-linear "toy" chemistry that represents combination (X + X → X2) and dissociation (X2 → X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum XT = X + 2X2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection-transport scheme preserves linear correlations. Chemistry-transport (physics-dynamics) coupling can also be studied with this test. Examples of the consequences of this test are shown for illustration.
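
    The conservation property is easy to verify numerically: for the toy chemistry, d(X + 2X2)/dt = 0 exactly, and even a forward-Euler step preserves the weighted sum. Rate constants and initial values below are illustrative:

```python
# Forward-Euler integration of the "toy" chemistry 2X <-> X2:
# combination X + X -> X2 at rate k1*X^2, dissociation X2 -> X + X at
# rate k2*X2. Rate constants and initial values are illustrative.
k1, k2, dt = 1.0, 0.5, 1e-3
X, X2 = 0.8, 0.1
XT0 = X + 2 * X2                 # the conserved weighted sum XT = X + 2*X2
for _ in range(10000):
    r = k1 * X * X - k2 * X2     # net combination rate
    X, X2 = X + dt * (-2 * r), X2 + dt * r
print(X + 2 * X2)                # still equals XT0 up to round-off
```

    A transport scheme that fails the terminator test breaks exactly this linear invariant, even though the chemistry itself cannot.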

  8. Second Law of Thermodynamics Applied to Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Nigam, R.; Liang, S.

    2003-01-01

    We present a simple algorithm based on linear programming that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent the mass conservation and energy feasibility conditions widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained from flux balance analysis for thermodynamic feasibility and, if they are infeasible, modify them so that they satisfy the law of entropy. We illustrate our method by applying it to the network for the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
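
    A minimal sketch of the two checks the algorithm combines, on a hypothetical three-reaction network: Kirchhoff's flux law (mass conservation, S v = 0) and the entropy condition that every carried flux runs down its free-energy gradient. All values are illustrative:

```python
import numpy as np

# Toy network: R1: -> A, R2: A -> B, R3: B ->  (columns = reactions).
S = np.array([[1, -1,  0],     # metabolite A
              [0,  1, -1]])    # metabolite B

def mass_balanced(v, tol=1e-9):
    """Kirchhoff flux law at steady state: S @ v = 0."""
    return bool(np.all(np.abs(S @ v) < tol))

def thermodynamically_feasible(v, dG):
    """Second-law check: each nonzero flux v_i must oppose the sign of the
    reaction free-energy change dG_i (reactions run downhill)."""
    return all(vi == 0 or vi * gi < 0 for vi, gi in zip(v, dG))

v = np.array([2.0, 2.0, 2.0])         # candidate flux distribution
dG = np.array([-5.0, -3.0, -1.0])     # illustrative free-energy changes
print(mass_balanced(v), thermodynamically_feasible(v, dG))
```

    Fluxes from flux balance analysis that fail the second check are the ones the paper's method adjusts until both conditions hold.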

  9. Active disturbance rejection controller for chemical reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Both, Roxana; Dulf, Eva H.; Muresan, Cristina I., E-mail: roxana.both@aut.utcluj.ro

    2015-03-10

    In the petrochemical industry, the synthesis of 2-ethyl-hexanol (an oxo alcohol used as a plasticizer alcohol) is of high importance, being achieved through hydrogenation of 2-ethyl-hexenal inside catalytic trickle-bed three-phase reactors. For this type of process, advanced control strategies are suitable due to the nonlinear behavior and extreme sensitivity to load changes and other disturbances. Given the complexity of the mathematical model, one approach was to use a simple linear model of the process in combination with an advanced control algorithm, such as robust control, that accounts for model uncertainties, disturbances and command signal limitations. However, the resulting controller is complex and requires costly hardware. This paper proposes a simple integer-order control scheme using a linear model of the process, based on the active disturbance rejection method. By treating the model dynamics as a common disturbance and actively rejecting it, active disturbance rejection control (ADRC) can achieve the desired response. Simulation results are provided to demonstrate the effectiveness of the proposed method.

  10. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, termed EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally; the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method suppresses the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
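
    A hedged sketch contrasting DAS with the pairwise DMAS combination on already-delayed channel samples. This is a common textbook formulation of DMAS; the EIBMV-DMAS combination itself is beyond this sketch:

```python
import numpy as np

def das(sig):
    """Delay and sum: plain sum of already-delayed channel samples."""
    return float(np.sum(sig))

def dmas(sig):
    """Delay multiply and sum: combine every channel pair (i, j), i < j,
    through a sign-preserving square root of the pairwise product."""
    out = 0.0
    for i in range(len(sig)):
        for j in range(i + 1, len(sig)):
            p = sig[i] * sig[j]
            out += np.sign(p) * np.sqrt(abs(p))
    return float(out)

coherent = np.ones(4)                          # focused point target
incoherent = np.array([1.0, -1.0, 1.0, -1.0])  # off-focus clutter
print(das(coherent), dmas(coherent), dmas(incoherent))
```

    The pairwise products reward coherence across channels and penalise sign-alternating clutter, which is the mechanism behind DMAS's lower sidelobes.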

  11. Blocked Force and Loading Calculations for LaRC THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.

    2007-01-01

    An analytic approach is developed to predict the performance of LaRC THUNDER actuators under load and under blocked conditions. The problem is treated with the von Karman non-linear analysis combined with a simple Rayleigh-Ritz calculation. From this, the shape and displacement under combined load and voltage are calculated. A method is found to calculate the blocked force versus voltage and the spring force versus distance. It is found that under certain conditions the blocked force and displacement are almost linear with voltage. It is also found that the spring force is multivalued and has at least one bifurcation point. This bifurcation point is where the device collapses under load and locks into a different bending solution, which occurs at a particular critical load. It is shown that this other bending solution has a reduced amplitude that is proportional to the original amplitude times the square of the aspect ratio.

  12. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    A simple technique was developed using conventional finite element analysis to determine the stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. The technique involves calculating crack tip stresses using non-singular finite elements; these stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by computing the K's for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading; the correlation between the exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
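
    The regression idea can be illustrated in one dimension: fit a stress intensity factor K to near-tip stresses through the 1/sqrt(2*pi*r) basis. The data below are synthetic, not from the paper:

```python
import numpy as np

# Near-tip stress field sigma(r) ~ K / sqrt(2*pi*r); fit K by linear
# regression on the 1/sqrt(2*pi*r) basis (synthetic one-parameter case).
r = np.array([0.01, 0.02, 0.03, 0.04, 0.05])    # distances from the crack tip
K_true = 12.0
sigma = K_true / np.sqrt(2 * np.pi * r)          # "computed" nodal stresses
basis = 1.0 / np.sqrt(2 * np.pi * r)
K_fit = float(basis @ sigma / (basis @ basis))   # least-squares coefficient
print(K_fit)
```

    In the paper's mixed-mode setting two such basis functions are fitted simultaneously, yielding K1 and K2 from one regression.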

  13. Solid phase microextraction of diclofenac using molecularly imprinted polymer sorbent in hollow fiber combined with fiber optic-linear array spectrophotometry.

    PubMed

    Pebdani, Arezou Amiri; Shabani, Ali Mohammad Haji; Dadfarnia, Shayessteh; Khodadoust, Saeid

    2015-08-05

    A simple solid phase microextraction method based on a molecularly imprinted polymer sorbent in a hollow fiber (MIP-HF-SPME), combined with fiber optic-linear array spectrophotometry, has been applied for the extraction and determination of diclofenac in environmental and biological samples. The effects of different parameters, such as pH, extraction time, type and volume of the organic solvent, stirring rate and donor phase volume, on the extraction efficiency of diclofenac were investigated and optimized. Under the optimal conditions, the calibration graph was linear (r(2) = 0.998) in the range of 3.0-85.0 μg L(-1), with a detection limit of 0.7 μg L(-1) for preconcentration of 25.0 mL of sample and a relative standard deviation (n = 6) of less than 5%. The method was applied successfully to the extraction and determination of diclofenac in different matrices (water, urine and plasma), and its accuracy was examined through recovery experiments.

  14. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and achieving unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are used to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability where applicable. The derivation of the method is straightforward, and the algorithm can thus be easily implemented into a software package.
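
    A minimal sketch of the core idea: build the output sensitivity matrix and detect linear dependence among its columns; the null-space vector exposes the non-identifiable parameter combination. The model y = a*b*exp(-t), in which only the product a*b is identifiable, is a hypothetical example:

```python
import numpy as np

# Hypothetical model y(t) = a*b*exp(-t): only the product a*b is identifiable.
a, b = 1.5, 2.0
t = np.linspace(0.0, 2.0, 20)
# Output sensitivity matrix: columns are dy/da and dy/db at the sample times.
S = np.column_stack([b * np.exp(-t),    # dy/da
                     a * np.exp(-t)])   # dy/db
rank = np.linalg.matrix_rank(S, tol=1e-10)  # rank deficiency => non-identifiable
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[-1]   # null-space direction: the dependent parameter combination
print(rank, null_vec)
```

    Here the two columns are proportional, so the rank is 1 and the null-space vector satisfies b*n1 + a*n2 = 0, pointing at the combination a*b as the only identifiable quantity.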

  15. Holographic Waveguide Array Rollable Display.

    DTIC Science & Technology

    1997-04-01

    scale lithography for fabrication. Projection systems offer large images, in the range of 40-60 inches diagonal, and both front-view and rear-view ... Boulder, CO, and a 1-D array of digital micromirrors (DMD) from Texas Instruments. The linear format permits simple driving electronics and high ... TI's DMD, or a CMOS-SLM. A collimated laser beam (combining three colors) or a collimated white light beam from a high-intensity halogen lamp can be

  16. Group Theory and Crystal Field Theory: A Simple and Rigorous Derivation of the Spectroscopic Terms Generated by the t[subscript 2g][superscript 2] Electronic Configuration in a Strong Octahedral Field

    ERIC Educational Resources Information Center

    Morpurgo, Simone

    2007-01-01

    The principles of symmetry and group theory are applied to the zero-order wavefunctions associated with the strong-field t[subscript 2g][superscript 2] configuration, and the symmetry-adapted linear combinations (SALC) associated with the generated energy terms are derived. This approach will enable students to better understand the use of…

  17. Simplified, inverse, ejector design tool

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1993-01-01

    A simple lumped parameter based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations coupled with the specified design constraints are combined to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back substitution. Initial comparisons with experimental and analogous one-dimensional methods show good agreement. Thus, this simple inverse design code provides an analytically based, preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.

  18. Flow-induced immobilization of glucose oxidase in nonionic micellar nanogels for glucose sensing.

    PubMed

    Cardiel, Joshua J; Zhao, Ya; Tonggu, Lige; Wang, Liguo; Chung, Jae-Hyun; Shen, Amy Q

    2014-10-21

    A simple microfluidic platform was utilized to immobilize glucose oxidase (GOx) in a nonionic micellar scaffold. The immobilization of GOx was verified by using a combination of cryogenic electron microscopy (cryo-EM), scanning electron microscopy (SEM), and ultraviolet spectroscopy (UV) techniques. Chronoamperometric measurements were conducted on nanogel-GOx scaffolds under different glucose concentrations, exhibiting linear amperometric responses. Without impacting the lifetime and denaturation of GOx, the nonionic nanogel provides a favorable microenvironment for GOx in biological media. This flow-induced immobilization method in a nonionic nanogel host matrix opens up new pathways for designing a simple, fast, biocompatible, and cost-effective process to immobilize biomolecules that are averse to ionic environments.

  19. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprised of the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  20. A higher order panel method for linearized supersonic flow

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.

    1979-01-01

    The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linear varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero thickness cambered wings. For three dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations led to a stable method providing good agreement with experiment. A panel system with all edges contiguous resulted from dividing the basic four point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on a simple nacelle and on an airplane model having engine inlets, with excellent results.

  1. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
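
    A small simulation (all numbers invented) illustrates the bias the abstract refers to, though not the authors' correction itself:

```python
import numpy as np

# Illustration of the bias only (not the authors' correction): simulate
# a quantitative trait with a known genetic effect, apply two-tail
# extreme selection on the trait, and compare naive regression slopes.
rng = np.random.default_rng(0)
n, beta_true = 20_000, 0.5

x = rng.binomial(2, 0.3, size=n).astype(float)    # additive genotype
y = beta_true * x + rng.normal(0.0, 2.0, size=n)  # quantitative trait

def slope(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# Two-tail extreme selection: genotype only the top and bottom 20%.
lo, hi = np.quantile(y, [0.2, 0.8])
sel = (y < lo) | (y > hi)

slope_full = slope(x, y)            # close to the true effect, 0.5
slope_sel = slope(x[sel], y[sel])   # inflated by the selection

print(slope_full, slope_sel)
```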

  2. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate the simple linear regression. The first one is based on the famous Galton's data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error, R², residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
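
    The same workflow can be reproduced outside R; the sketch below (synthetic data standing in for Galton's) recovers the quantities an lm summary reports:

```python
import numpy as np

# Synthetic stand-in for a heredity-style data set (not Galton's data).
rng = np.random.default_rng(1)
x = rng.uniform(60, 75, size=100)             # e.g. parent height (in)
y = 25 + 0.65 * x + rng.normal(0, 2, size=100)

X = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # (intercept, slope)

resid = y - X @ beta
n, p = X.shape
sigma = np.sqrt(resid @ resid / (n - p))      # residual standard error
r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

print(beta)    # near [25, 0.65]
print(sigma)   # near 2
print(r2)
```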

  3. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
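
    A condensed sketch of the scheme (the window size, pattern labels, and data are illustrative choices, not the paper's exact definitions): label each window by a coarse regression pattern, then count transitions between consecutive labels as weighted directed edges.

```python
import numpy as np
from scipy import stats
from collections import Counter

# Two series whose relationship flips sign halfway through.
rng = np.random.default_rng(2)
t = np.arange(400)
x = np.sin(t / 30) + 0.1 * rng.normal(size=t.size)
y = np.where(t < 200, 1.5 * x, -1.5 * x) + 0.1 * rng.normal(size=t.size)

def pattern(xw, yw, alpha=0.05):
    """Coarse regression pattern for one window: slope sign + significance."""
    res = stats.linregress(xw, yw)
    if res.pvalue >= alpha:
        return "ns"                       # not significant
    return "pos" if res.slope > 0 else "neg"

window = 50
labels = [pattern(x[i:i + window], y[i:i + window])
          for i in range(0, t.size - window, window)]

# Directed, weighted edges: frequency of each pattern-to-pattern move.
edges = Counter(zip(labels, labels[1:]))
print(labels)
print(dict(edges))
```

    On this toy input the label sequence switches from "pos" to "neg" windows, so the network has a heavy self-loop on each pattern plus a single transition edge between them.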

  4. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    PubMed

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.

  5. The algebra of complex 2 × 2 matrices and a general closed Baker-Campbell-Hausdorff formula

    NASA Astrophysics Data System (ADS)

    Foulis, D. L.

    2017-07-01

    We derive a closed formula for the Baker-Campbell-Hausdorff series expansion in the case of complex 2×2 matrices. For arbitrary matrices A and B, and a matrix Z such that exp(Z) = exp(A)exp(B), our result expresses Z as a linear combination of A and B, their commutator [A, B], and the identity matrix I. The coefficients in this linear combination are functions of the traces and determinants of A and B, and the trace of their product. The derivation proceeds purely via algebraic manipulations of the given matrices and their products, making use of relations developed here, based on the Cayley-Hamilton theorem, as well as a characterization of the consequences of [A, B] and/or its determinant being zero or otherwise. As a corollary of our main result we also derive a closed formula for the Zassenhaus expansion. We apply our results to several special cases, most notably the parametrization of the product of two SU(2) matrices and a verification of the recent result of Van-Brunt and Visser (2015 J. Phys. A: Math. Theor. 48 225207) for complex 2×2 matrices, in this latter case deriving also the related Zassenhaus formula which turns out to be quite simple. We then show that this simple formula should be valid for all matrices and operators.
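
    The decomposition the result describes can be extracted numerically: solve Z = aA + bB + c[A, B] + dI by least squares over the flattened matrices (random matrices, scaled to keep the logarithm on its principal branch; this recovers the coefficients numerically rather than via the paper's closed formula in traces and determinants):

```python
import numpy as np
from scipy.linalg import expm, logm

# Random complex 2x2 matrices, scaled down so the BCH series converges
# and logm stays on the principal branch.
rng = np.random.default_rng(3)
A = 0.3 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
B = 0.3 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

Z = logm(expm(A) @ expm(B))
comm = A @ B - B @ A

# Solve Z = a*A + b*B + c*[A,B] + d*I in least squares.
basis = np.column_stack([M.ravel() for M in (A, B, comm, np.eye(2))])
coeffs, *_ = np.linalg.lstsq(basis, Z.ravel(), rcond=None)
residual = np.linalg.norm(basis @ coeffs - Z.ravel())

print(coeffs)     # numerical values of a, b, c, d
print(residual)   # ~ 0: Z is exactly such a linear combination
```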

  6. Matrix Fatigue Cracking Mechanisms of Alpha(2) TMC for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Gayda, John

    1994-01-01

    The objective of this work was to understand matrix cracking mechanisms in a unidirectional alpha(sub 2) TMC in possible hypersonic applications. A (0)(sub 8) SCS-6/Ti-24Al-11Nb (at. percent) TMC was first subjected to a variety of simple isothermal and nonisothermal fatigue cycles to evaluate the damage mechanisms in simple conditions. A modified ascent mission cycle test was then performed to evaluate the combined effects of loading modes. This cycle mixes mechanical cycling at 150 and 483 C, sustained loads, and a slow thermal cycle to 815 C. At low cyclic stresses and strains more common in hypersonic applications, environment-assisted surface cracking limited fatigue resistance. This damage mechanism was most acute for out-of-phase nonisothermal cycles having extended cycle periods and the ascent mission cycle. A simple linear fraction damage model was employed to help understand this damage mechanism. Time-dependent environmental damage was found to strongly influence out-of-phase and mission life, with mechanical cycling damage due to the combination of external loading and CTE mismatch stresses playing a smaller role. The mechanical cycling and sustained loads in the mission cycle also had a smaller role.

  7. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean alpha·D + beta·D², where D was dose and alpha and beta were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
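
    The last step of the derivation can be checked directly: with lethal lesions Poisson-distributed, survival is the zero-lesion probability (the alpha, beta, and dose values below are illustrative, not from the paper):

```python
import numpy as np

# If lethal lesions ~ Poisson(alpha*D + beta*D^2), then survival is the
# probability of zero lesions: S(D) = exp(-(alpha*D + beta*D^2)),
# i.e. the linear-quadratic equation. Compare closed form vs Monte Carlo.
rng = np.random.default_rng(4)
alpha, beta = 0.2, 0.05          # 1/Gy and 1/Gy^2, made-up values
doses = np.array([0.0, 1.0, 2.0, 4.0])

mean_lesions = alpha * doses + beta * doses ** 2
surv_closed = np.exp(-mean_lesions)

# Empirical survival: fraction of simulated cells with zero lesions.
n_cells = 200_000
surv_mc = np.array([(rng.poisson(m, n_cells) == 0).mean()
                    for m in mean_lesions])

print(surv_closed)
print(surv_mc)    # agrees with the closed form to MC accuracy
```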

  8. Application of thermal model for pan evaporation to the hydrology of a defined medium, the sponge

    NASA Technical Reports Server (NTRS)

    Trenchard, M. H.; Artley, J. A. (Principal Investigator)

    1981-01-01

    A technique is presented which estimates pan evaporation from the commonly observed values of daily maximum and minimum air temperatures. These two variables are transformed to saturation vapor pressure equivalents which are used in a simple linear regression model. The model provides reasonably accurate estimates of pan evaporation rates over a large geographic area. The derived evaporation algorithm is combined with precipitation to obtain a simple moisture variable. A hypothetical medium with a capacity of 8 inches of water is initialized at 4 inches. The medium behaves like a sponge: it absorbs all incident precipitation, with runoff or drainage occurring only after it is saturated. Water is lost from this simple system through evaporation just as from a Class A pan, but at a rate proportional to its degree of saturation. The content of the sponge is a moisture index calculated from only the maximum and minimum temperatures and precipitation.
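
    The sponge bookkeeping reduces to a few lines; the daily precipitation and pan-evaporation inputs below are invented for illustration:

```python
# Sponge model sketch: an 8-inch store initialized at 4 inches absorbs
# all precipitation (excess runs off once saturated) and evaporates at
# the pan rate scaled by its degree of saturation.
capacity, content = 8.0, 4.0

precip   = [0.0, 1.5, 0.0, 0.0, 6.0, 0.0]   # inches/day (invented)
pan_evap = [0.3, 0.2, 0.3, 0.4, 0.1, 0.3]   # inches/day (invented)

history = []
for p, e in zip(precip, pan_evap):
    content += p                       # absorb all precipitation...
    runoff = max(0.0, content - capacity)
    content -= runoff                  # ...spilling any excess
    content -= e * content / capacity  # evaporation ~ saturation
    history.append(round(content, 3))

print(history)   # the moisture index, day by day
```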

  9. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    PubMed

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp² (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  10. A simple way to synthesize large-scale Cu2O/Ag nanoflowers for ultrasensitive surface-enhanced Raman scattering detection

    NASA Astrophysics Data System (ADS)

    Zou, Junyan; Song, Weijia; Xie, Weiguang; Huang, Bo; Yang, Huidong; Luo, Zhi

    2018-03-01

    Here, we report a simple strategy to prepare highly sensitive surface-enhanced Raman spectroscopy (SERS) substrates based on Ag decorated Cu2O nanoparticles by combining two common techniques, viz., thermal oxidation growth of Cu2O nanoparticles and magnetron sputtering fabrication of a Ag nanoparticle film. Methylene blue is used as the Raman analyte for the SERS study, and the substrates fabricated under optimized conditions have very good sensitivity (analytical enhancement factor ~10^8), stability, and reproducibility. A linear dependence of the SERS intensities on the concentration was obtained with an R² value >0.9. These excellent properties indicate that the substrate has great potential in the detection of biological and chemical substances.

  11. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Papior, Nick R.; Calogero, Gaetano; Brandbyge, Mads

    2018-06-01

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp² (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  12. Development and Integration of an Advanced Stirling Convertor Linear Alternator Model for a Tool Simulating Convertor Performance and Creating Phasor Diagrams

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2013-01-01

    A simple model of the Advanced Stirling Convertors (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.

  13. JMOSFET: A MOSFET parameter extractor with geometry-dependent terms

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Moore, B. T.

    1985-01-01

    Parameters must be extracted from the metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips by a method that is simple but comprehensive enough for wafer acceptance, and sufficiently accurate for use in integrated circuits. A set of MOSFET parameter extraction procedures that are directly linked to the MOSFET model equations and that facilitate the use of simple, direct curve-fitting techniques is developed. In addition, the major physical effects that affect MOSFET operation in the linear and saturation regions of operation for devices fabricated in 1.2 to 3.0 µm CMOS technology are included. The fitting procedures were designed to establish single values for such parameters as threshold voltage and transconductance and to provide for slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four different sizes of transistors that cover a rectangular-shaped region of the channel length-width plane are analyzed.
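
    As a sketch of the linear-region fitting step (a simplified textbook MOSFET model with synthetic, noise-free data, not the JMOSFET geometry-dependent equations):

```python
import numpy as np

# Simplified linear-region model: Id = k * (Vgs - Vt) * Vds for small
# Vds. A straight-line fit of Id vs Vgs then yields the gain factor k
# from the slope and the threshold voltage Vt from the x-intercept.
# Device values below are synthetic, not CRRES measurements.
k_true, vt_true, vds = 2e-4, 0.8, 0.1        # A/V^2, V, V
vgs = np.linspace(1.0, 3.0, 9)
ids = k_true * (vgs - vt_true) * vds         # idealized, noise-free

slope, intercept = np.polyfit(vgs, ids, 1)
k_est = slope / vds
vt_est = -intercept / slope                  # where the line crosses zero

print(k_est)    # recovers 2e-4
print(vt_est)   # recovers 0.8
```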

  14. Flight evaluation of a simple total energy-rate system with potential wind-shear application

    NASA Technical Reports Server (NTRS)

    Ostroff, A. J.; Hueschen, R. M.; Hellbaum, R. F.; Creedon, J. F.

    1981-01-01

    Wind shears can create havoc during aircraft terminal area operations and have been cited as the primary cause of several major aircraft accidents. A simple sensor, potentially having application to the wind-shear problem, was developed to rapidly measure aircraft total energy relative to the air mass. Combining this sensor with either a variometer or a rate-of-climb indicator provides a total energy-rate system which was successfully applied in soaring flight. The measured rate of change of aircraft energy can potentially be used on display/control systems of powered aircraft to reduce glide-slope deviations caused by wind shear. The experimental flight configuration and evaluations of the energy-rate system are described. Two mathematical models are developed: the first describes operation of the energy probe in a linear design region and the second model is for the nonlinear region. The calculated total energy rate is compared with measured signals for many different flight tests. Time history plots show the two curves to be almost the same for the linear operating region and very close for the nonlinear region.

  15. Exploiting symmetries in the modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Andersen, Carl M.; Tanner, John A.

    1987-01-01

    A simple and efficient computational strategy for reducing both the size of a tire model and the cost of the analysis of tires in the presence of symmetry-breaking conditions (unsymmetry in the tire material, geometry, or loading) is presented. The strategy is based on approximating the unsymmetric response of the tire with a linear combination of symmetric and antisymmetric global approximation vectors (or modes). Details are presented for the three main elements of the computational strategy, which include: use of special three-field mixed finite-element models, use of operator splitting, and substantial reduction in the number of degrees of freedom. The proposed computational strategy is applied to three quasi-symmetric problems of tires: linear analysis of anisotropic tires through use of semianalytic finite elements, nonlinear analysis of anisotropic tires through use of two-dimensional shell finite elements, and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry (and their combinations) exhibited by the tire response are identified.
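
    The heart of the strategy, splitting an unsymmetric response into symmetric and antisymmetric parts, can be sketched in a few lines (the grid and response below are illustrative, not a tire model):

```python
import numpy as np

# Any response u sampled on a mirror-symmetric grid splits uniquely as
# u = u_sym + u_anti with P u_sym = u_sym and P u_anti = -u_anti,
# where P is the reflection operator.
theta = np.linspace(-np.pi, np.pi, 9)
u = np.cos(theta) + 0.3 * np.sin(theta)   # unsymmetric response

Pu = u[::-1]                  # reflection theta -> -theta
u_sym = 0.5 * (u + Pu)        # even part: picks out cos(theta)
u_anti = 0.5 * (u - Pu)       # odd part: picks out 0.3*sin(theta)

print(np.allclose(u_sym + u_anti, u))   # the split is exact
```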

  16. Simple taper: Taper equations for the field forester

    Treesearch

    David R. Larsen

    2017-01-01

    "Simple taper" is a set of linear equations based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...
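
    In the spirit of the abstract (the measurement heights and diameters are invented), a taper rate and the resulting linear diameter prediction look like:

```python
# Linear taper rate from diameter outside bark at two stem heights,
# then a linear prediction of diameter at any other height. The
# measurements below are invented, not from the paper.
h1, d1 = 4.5, 20.0    # ft, inches (e.g. breast height)
h2, d2 = 17.3, 16.8   # ft, inches (second measurement point)

rate = (d2 - d1) / (h2 - h1)     # inches of diameter per foot of height

def diameter(h):
    """Linear taper prediction of diameter outside bark at height h."""
    return d1 + rate * (h - h1)

print(rate)             # -0.25 in/ft for these numbers
print(diameter(30.0))
```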

  17. Using Simple Linear Regression to Assess the Success of the Montreal Protocol in Reducing Atmospheric Chlorofluorocarbons

    ERIC Educational Resources Information Center

    Nelson, Dean

    2009-01-01

    Following the Guidelines for Assessment and Instruction in Statistics Education (GAISE) recommendation to use real data, an example is presented in which simple linear regression is used to evaluate the effect of the Montreal Protocol on atmospheric concentration of chlorofluorocarbons. This simple set of data, obtained from a public archive, can…

  18. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
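
    A one-dimensional sketch of the idea (a standard-normal reference density, a Gaussian likelihood, and a truncation degree all chosen for illustration): expand the likelihood in polynomials orthonormal under the reference via linear least squares; the constant-term coefficient is then the model evidence.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Illustrative 1-D setup: reference (prior) N(0, 1), Gaussian likelihood.
rng = np.random.default_rng(5)

def likelihood(x, data=1.0, noise=0.5):
    return np.exp(-0.5 * ((data - x) / noise) ** 2) / (
        noise * np.sqrt(2 * np.pi))

# Probabilists' Hermite polynomials He_k / sqrt(k!) are orthonormal
# under N(0, 1); fit expansion coefficients by least squares on
# samples drawn from the reference density.
xs = rng.standard_normal(50_000)
deg = 10
fact = np.cumprod(np.concatenate([[1.0], np.arange(1, deg + 1)]))
Psi = np.column_stack([He.hermeval(xs, np.eye(deg + 1)[k]) /
                       np.sqrt(fact[k]) for k in range(deg + 1)])
coeffs, *_ = np.linalg.lstsq(Psi, likelihood(xs), rcond=None)

# Evidence = E_prior[likelihood] = coefficient of the constant term.
evidence_spectral = coeffs[0]
# Closed form for this conjugate Gaussian case, for comparison.
evidence_exact = np.exp(-0.5 * 1.0 ** 2 / (1 + 0.5 ** 2)) / np.sqrt(
    2 * np.pi * (1 + 0.5 ** 2))
print(evidence_spectral)
print(evidence_exact)    # the two agree closely
```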

  19. Surface Plasmon Coupling and Control Using Spherical Cap Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Yu; Joly, Alan G.; Zhang, Xin

    2017-06-05

    Propagating surface plasmons (PSPs) launched from a protruded silver spherical cap structure are investigated using photoemission electron microscopy (PEEM) and finite difference time domain (FDTD) calculations. Our combined experimental and theoretical findings reveal that PSP coupling efficiency is comparable to conventional etched-in plasmonic coupling structures. Additionally, plasmon propagation direction can be varied by a linear rotation of the driving laser polarization. A simple geometric model is proposed in which the plasmon direction selectivity is proportional to the projection of the linear laser polarization on the surface normal. An application for the spherical cap coupler as a gate device is proposed. Overall, our results indicate that protruded cap structures hold great promise as elements in emerging surface plasmon applications.

  20. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to a simpler, identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result, a reduced analog for the interrupter technique, is especially noteworthy, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial, and elastic properties, can be seen as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.

  1. Improving EMG based classification of basic hand movements using EMD.

    PubMed

    Sapsanis, Christos; Georgoulas, George; Tzes, Anthony; Lymberopoulos, Dimitrios

    2013-01-01

    This paper presents a pattern recognition approach for the identification of basic hand movements using surface electromyographic (EMG) data. The EMG signal is decomposed using Empirical Mode Decomposition (EMD) into Intrinsic Mode Functions (IMFs) and subsequently a feature extraction stage takes place. Various combinations of feature subsets are tested using a simple linear classifier for the detection task. Our results suggest that the use of EMD can increase the discrimination ability of the conventional feature sets extracted from the raw EMG signal.
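
    A sketch of the final stage only, time-domain features fed to a simple linear classifier, on synthetic two-class signals; the EMD/IMF decomposition itself is omitted, and the feature set (mean absolute value, zero-crossing rate, variance) is a common EMG choice rather than necessarily the paper's exact one:

```python
import numpy as np

# Synthetic "EMG-like" two-class problem: the classes differ in signal
# power. All data and choices here are illustrative.
rng = np.random.default_rng(6)

def make_signal(label, n=512):
    amp = 1.0 if label == 0 else 2.5
    return amp * rng.normal(size=n)

def features(sig):
    mav = np.mean(np.abs(sig))                # mean absolute value
    zc = np.mean(np.diff(np.sign(sig)) != 0)  # zero-crossing rate
    return np.array([mav, zc, np.var(sig)])

labels = np.array([0, 1] * 100)
X = np.vstack([features(make_signal(l)) for l in labels])
y = 2.0 * labels - 1.0                        # -1 / +1 targets

# Least-squares linear classifier with a bias term.
Xb = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = np.sign(Xb @ w)

accuracy = np.mean(pred == y)
print(accuracy)     # separable classes -> high training accuracy
```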

  2. The application of finite volume methods for modelling three-dimensional incompressible flow on an unstructured mesh

    NASA Astrophysics Data System (ADS)

    Lonsdale, R. D.; Webster, R.

    This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model three-dimensional flow on a mesh of linear eight-node bricks (hexahedra). Simulations are performed for a wide range of flow problems, some in excess of 94,000 nodes. The resulting computer code ASTEC, which incorporates these procedures, is described.

  3. Analytical Studies on the Synchronization of a Network of Linearly-Coupled Simple Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Sivaganesh, G.; Arulgnanam, A.; Seethalakshmi, A. N.; Selvaraj, S.

    2018-05-01

    We present explicit generalized analytical solutions for a network of linearly-coupled simple chaotic systems. Analytical solutions are obtained for the normalized state equations of a network of linearly-coupled systems driven by a common chaotic drive system. Two-parameter bifurcation diagrams revealing the various hidden synchronization regions, such as complete, phase and phase-lag synchronization, are identified using the analytical results. The synchronization dynamics and their stability are studied using phase portraits and the master stability function, respectively. Further, experimental results for linearly-coupled simple chaotic systems are presented to confirm the analytical results. The synchronization dynamics of a network of chaotic systems studied analytically is reported for the first time.

  4. Simple method for the determination of personal care product ingredients in lettuce by ultrasound-assisted extraction combined with solid-phase microextraction followed by GC-MS.

    PubMed

    Cabrera-Peralta, Jerónimo; Peña-Alvarez, Araceli

    2018-05-01

    A simple method for the simultaneous determination of personal care product ingredients: galaxolide, tonalide, oxybenzone, 4-methylbenzyliden camphor, padimate-o, 2-ethylhexyl methoxycinnamate, octocrylene, triclosan, and methyl triclosan in lettuce by ultrasound-assisted extraction combined with solid-phase microextraction followed by gas chromatography with mass spectrometry was developed. Lettuce was directly extracted by ultrasound-assisted extraction with methanol, this extract was combined with water, extracted by solid-phase microextraction in immersion mode, and analyzed by gas chromatography with mass spectrometry. Good linear relationships (25-250 ng/g, R² > 0.9702) and low detection limits (1.0-25 ng/g) were obtained for analytes along with acceptable precision for almost all analytes (RSDs < 20%). The validated method was applied for the determination of personal care product ingredients in commercial lettuce and lettuces grown in soil and irrigated with the analytes, identifying the target analytes in leaves and roots of the latter. This procedure is a miniaturized and environmentally friendly proposal which can be a useful tool for quality analysis in lettuce. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Combined tension and bending testing of tapered composite laminates

    NASA Astrophysics Data System (ADS)

    O'Brien, T. Kevin; Murri, Gretchen B.; Hagemeier, Rick; Rogers, Charles

    1994-11-01

    A simple beam element used at Bell Helicopter was incorporated in the Computational Mechanics Testbed (COMET) finite element code at the Langley Research Center (LaRC) to analyze the response of tapered laminates typical of flexbeams in composite rotor hubs. This beam element incorporated the influence of membrane loads on the flexural response of the tapered laminate configurations modeled and tested in a combined axial tension and bending (ATB) hydraulic load frame designed and built at LaRC. The moments generated from the finite element model were used in a tapered laminated plate theory analysis to estimate axial stresses on the surface of the tapered laminates due to combined bending and tension loads. Surface strains were calculated and compared to surface strains measured using strain gages mounted along the laminate length. The strain distributions correlated reasonably well with the analysis. The analysis was then used to examine the surface strain distribution in a nonlinear tapered laminate, where a similarly good correlation was obtained. Results indicate that simple finite element beam models may be used to identify tapered laminate configurations best suited for simulating the response of a composite flexbeam in a full-scale rotor hub.

  6. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
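
    The estimation steps the chapter describes can be sketched in a few lines. The data below are invented for illustration (a hypothetical optical-density-versus-time series), and the sketch relies on the fact that, for simple linear regression, R² equals the squared Pearson correlation:

```python
import numpy as np

# Hypothetical example: optical density (y) vs. incubation time in hours
# (x) for a bacterial culture -- illustrative data, not from the chapter.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.12, 0.25, 0.41, 0.50, 0.66, 0.75])

# Pearson correlation between the two continuous variables.
r = np.corrcoef(x, y)[0, 1]

# Simple linear regression: least-squares slope and intercept.
slope = r * y.std(ddof=1) / x.std(ddof=1)
intercept = y.mean() - slope * x.mean()

# Model fit: R^2 from residual and total sums of squares.
y_hat = intercept + slope * x
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot  # equals r**2 for simple regression

print(round(r, 3), round(slope, 3), round(r_squared, 3))
```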

  7. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program, specifically for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that simple linear regression had the best performance of the models compared because it best recovered parameters and converged consistently. Conversely, simple linear regression did the worst job of estimating population size in a given year. The state-space models did not estimate trends well, but estimated population sizes best when they converged. Overall, a simple linear regression performed better than the more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
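
    The aggregation-then-regression setting can be sketched as follows. The monitoring design, trend, and noise levels below are invented assumptions, not the study's USGS data; the point is only that averaging across sites mixes sampling and process variation into one annual index before the trend is fit:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design: 10 sites sampled over 15 years, a true linear
# process trend of 0.5 units/year, plus site-level sampling noise.
years = np.arange(15)
true_trend = 0.5
site_obs = 100 + true_trend * years + rng.normal(0.0, 2.0, size=(10, 15))

# Aggregation step: average across sites, confounding sampling and
# process variation in a single annual index.
annual_mean = site_obs.mean(axis=0)

# Simple linear regression of the aggregated index on year.
slope, intercept = np.polyfit(years, annual_mean, 1)
print(round(slope, 2))
```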

  8. Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math.

    PubMed

    Raizada, Rajeev D S; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D; Ansari, Daniel; Kuhl, Patricia K

    2010-05-15

    A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain-behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps such as feature selection of voxels, pre-selection of activated regions, or principal components analysis. We show here that strong brain-behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain-behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. Copyright (c) 2010 Elsevier Inc. All rights reserved.
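
    A toy stand-in for the whole-brain approach: each "volume" is a flat vector of voxels, and plain least-squares regression onto the condition labels (±1) serves as the classifier, with no voxel selection or data reduction before fitting. The dimensions and synthetic data are assumptions for illustration, not the paper's fMRI data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "volumes": a shared condition pattern buried in voxel noise.
n_train, n_test, n_voxels = 40, 20, 500
signal = rng.normal(0.0, 1.0, n_voxels)  # condition-linked pattern

def make_set(n):
    labels = rng.choice([-1.0, 1.0], size=n)
    volumes = labels[:, None] * signal + rng.normal(0.0, 4.0, (n, n_voxels))
    return volumes, labels

X_train, y_train = make_set(n_train)
X_test, y_test = make_set(n_test)

# Minimum-norm least-squares fit over *all* voxels at once -- each
# entire volume is treated as a single multi-voxel pattern.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Classify held-out volumes by the sign of the regression output.
accuracy = np.mean(np.sign(X_test @ w) == y_test)
print(accuracy)
```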

  9. Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math

    PubMed Central

    Raizada, Rajeev D.S.; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D.; Ansari, Daniel; Kuhl, Patricia K.

    2010-01-01

    A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain–behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps such as feature selection of voxels, pre-selection of activated regions, or principal components analysis. We show here that strong brain–behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain–behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. PMID:20132896

  10. [Phonological characteristics and rehabilitation training of abnormal velar in children with functional articulation disorders].

    PubMed

    Lina, Xu; Feng, Li; Yanyun, Zhang; Nan, Gao; Mingfang, Hu

    2016-12-01

    To explore the phonological characteristics and rehabilitation training of abnormal velar articulation in patients with functional articulation disorders (FAD). Eighty-seven patients with FAD were observed for the phonological characteristics of velar articulation. Seventy-two patients with abnormal velar articulation accepted speech training. Correlation and simple linear regression analyses were carried out on abnormal velar articulation and age. The articulation disorder of /g/ mainly showed replacement by /d/, /b/, or omission; /k/ mainly showed replacement by /d/, /t/, /g/, /p/, /b/; /h/ mainly showed replacement by /g/, /f/, /p/, /b/, or omission. The common erroneous articulation forms of /g/, /k/, /h/ were fronting of the tongue and replacement by bilabial consonants. When velars combined with vowels containing /a/ and /e/, the main error was fronting of the tongue. When velars combined with vowels containing /u/, the errors tended to be replacement by bilabial consonants. After 3 to 10 sessions of speech training, the number of erroneous words decreased from 40.28±6.08 before training to 6.24±2.61 after; the difference was statistically significant (Z=-7.379, P=0.000). The number of erroneous words was negatively correlated with age (r=-0.691, P=0.000). Simple linear regression gave a coefficient of determination of 0.472. The articulation disorder of velars mainly shows replacement and varies with the accompanying vowels. The targeted rehabilitation training established here is significantly effective. Age plays an important role in the outcome of velar training.

  11. Detecting natural occlusion boundaries using local cues

    PubMed Central

    DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.

    2012-01-01

    Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731

  12. A new and simple resonance Rayleigh scattering method for human serum albumin using graphite oxide as probe.

    PubMed

    Wang, Shengmian; Xu, Lili; Wang, Lisheng; Liang, Aihui; Jiang, Zhiliang

    2013-01-01

    Graphite oxide (GO) was prepared by the Hummers procedure and can be dispersed into a stable colloid solution by ultrasonication. The GO exhibited an absorption peak at 313 nm and a resonance Rayleigh scattering (RRS) peak at 490 nm. In pH 4.6 HAc-NaAc buffer solution, human serum albumin (HSA) combined with the GO probe to form large HSA-GO particles that increased the RRS peak at 490 nm. The increased RRS intensity was linearly related to HSA concentration in the range 0.50-200 µg/mL. Thus, a new and simple RRS method was proposed for the determination of HSA in samples, with a recovery of 98.1-104%. Copyright © 2012 John Wiley & Sons, Ltd.
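
    The linear-calibration step behind an RRS-type method can be sketched as below: fit the increased scattering intensity against standard concentrations, invert the fit to quantify an unknown, and check spike recovery as in method validation. All intensity values and the spike error are invented for illustration:

```python
import numpy as np

# Hypothetical calibration standards within the method's linear range.
conc = np.array([0.5, 25.0, 50.0, 100.0, 150.0, 200.0])        # µg/mL
delta_i = np.array([4.0, 130.0, 262.0, 518.0, 770.0, 1030.0])  # ΔI(RRS), a.u.

# Linear calibration: ΔI = slope * c + intercept.
slope, intercept = np.polyfit(conc, delta_i, 1)

# Quantify an "unknown" sample from its measured signal.
conc_unknown = (390.0 - intercept) / slope

# Spike-recovery check: a 75 µg/mL spike read with a small mock error.
spiked, error = 75.0, 3.0
signal_spike = slope * spiked + intercept + error  # simulated reading
measured = (signal_spike - intercept) / slope
recovery = 100.0 * measured / spiked

print(round(conc_unknown, 1), round(recovery, 1))
```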

  13. A Simple Simulation Technique for Nonnormal Data with Prespecified Skewness, Kurtosis, and Covariance Matrix.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2016-01-01

    We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.
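
    A minimal sketch of the IG idea follows: nonnormal data are built as linear combinations of independent nonnormal generators, with the covariance controlled exactly (in expectation) by the mixing matrix. Matching *prespecified* skewness and kurtosis requires solving for the generators' higher moments, which this sketch skips; the generators here are simply standardized chi-square variables, chosen as an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target covariance matrix for the simulated data.
target_cov = np.array([[1.0, 0.5],
                       [0.5, 1.0]])
A = np.linalg.cholesky(target_cov)  # mixing matrix: cov(A g) = A A^T

# Independent generators: standardized chi-square(4), so mean 0,
# variance 1, and positive skewness.
n, df = 200_000, 4
g = (rng.chisquare(df, size=(2, n)) - df) / np.sqrt(2 * df)

# Each column of A @ g is one nonnormal observation with the target
# covariance and a non-Gaussian copula.
X = (A @ g).T
sample_cov = np.cov(X, rowvar=False)
print(np.round(sample_cov, 2))
```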

  14. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions, but using the same simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. Both methods were tested on various patch test problems and passed them successfully. The methods were then applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is particularly attractive, as it is simple, accurate, and robust.

  15. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification.

    PubMed

    Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B

    2015-06-01

    The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals that account for inter-trial variability, suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small size and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.

  16. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
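
    The ParaExp splitting can be illustrated on a toy linear system x' = Ax + f(t): on each subinterval a particular solution with zero initial condition is computed (independently across subintervals, hence parallelizable), and the homogeneous parts are propagated to the final time with the matrix exponential. The oscillator, forcing, and step counts below are assumptions for illustration, not the paper's electromagnetic problem:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # toy harmonic oscillator
f = lambda t: np.array([0.0, np.cos(0.5 * t)])
x0 = np.array([1.0, 0.0])
T, n_sub, steps = 2.0, 4, 200

def expm(M):
    # Dense matrix exponential via eigendecomposition (M diagonalizable).
    vals, vecs = np.linalg.eig(M)
    return ((vecs * np.exp(vals)) @ np.linalg.inv(vecs)).real

def rk4(x, t0, t1, steps):
    # Classical RK4 for x' = A x + f(t) on [t0, t1].
    h = (t1 - t0) / steps
    rhs = lambda t, x: A @ x + f(t)
    for i in range(steps):
        t = t0 + i * h
        k1 = rhs(t, x); k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2); k4 = rhs(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

edges = np.linspace(0.0, T, n_sub + 1)

# Particular solutions: zero initial condition on each subinterval.
endpoints = [rk4(np.zeros(2), a, b, steps) for a, b in zip(edges[:-1], edges[1:])]

# Homogeneous propagation: carry x0 and each particular endpoint to T.
x_T = expm(A * T) @ x0
for k, xp in enumerate(endpoints):
    x_T = x_T + expm(A * (T - edges[k + 1])) @ xp

# Reference: plain sequential integration over [0, T].
x_ref = rk4(x0.copy(), 0.0, T, n_sub * steps)
print(np.round(x_T - x_ref, 8))
```

The agreement follows from the variation-of-constants formula: splitting the convolution integral over the subintervals gives exactly the propagated particular endpoints.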

  17. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…

  18. Optimized Spectral Editing of 13C MAS NMR Spectra of Rigid Solids Using Cross-Polarization Methods

    NASA Astrophysics Data System (ADS)

    Sangill, R.; Rastrupandersen, N.; Bildsoe, H.; Jakobsen, H. J.; Nielsen, N. C.

    Combinations of ¹³C magic-angle spinning (MAS) NMR experiments employing cross polarization (CP), cross polarization-depolarization (CPD), and cross polarization-depolarization-repolarization are analyzed quantitatively to derive simple and general procedures for optimized spectral editing of ¹³C CP/MAS NMR spectra of rigid solids by separation of the ¹³C resonances into CHₙ subspectra (n = 0, 1, 2, and 3). Special attention is devoted to a differentiation by CPD/MAS of CH and CH₂ resonances since these groups behave quite similarly during spin lock under Hartmann-Hahn match and are therefore generally difficult to distinguish unambiguously. A general procedure for the design of subexperiments and linear combinations of their spectra to provide optimized signal-to-noise ratios for the edited subspectra is described. The technique is illustrated by a series of edited ¹³C CP/MAS spectra for a number of rigid solids ranging from simple organic compounds (sucrose and l-menthol) to complex pharmaceutical products (calcipotriol monohydrate and vitamin D₃) and polymers (polypropylene, polyvinyl alcohol, polyvinyl chloride, and polystyrene).

  19. Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.

    PubMed

    Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan

    2015-01-01

    Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways, and the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa by a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'-each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
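
    The three-stage model described above can be sketched directly: (1) an LN stage (linear Gabor filter followed by a static nonlinearity), (2) temporal integration, and (3) linear combination across features. The Gabor parameters, the squared-rectification nonlinearity, the weights, and the toy song envelopes are illustrative assumptions, not the values fitted by the genetic algorithm:

```python
import numpy as np

def gabor(t, sigma=5.0, freq=0.15, phase=0.0):
    # Gabor filter: Gaussian envelope times a cosine carrier.
    return np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * freq * t + phase)

taps = np.arange(-15, 16)
filters = [gabor(taps, phase=0.0), gabor(taps, phase=np.pi / 2)]  # quadrature pair

def preference(envelope, weights=(0.6, 0.4)):
    features = []
    for h in filters:
        lin = np.convolve(envelope, h, mode="same")  # L: linear filtering
        nonlin = np.maximum(lin, 0.0) ** 2           # N: static nonlinearity
        features.append(nonlin.mean())               # temporal integration
    return float(np.dot(weights, features))          # linear combination

# A pulse-train "song" whose rate matches the filters scores higher than
# a flat envelope with the same mean.
t = np.arange(400)
pulse_song = (np.sin(2 * np.pi * 0.15 * t) > 0.8).astype(float)
flat_song = np.full(400, pulse_song.mean())
print(preference(pulse_song) > preference(flat_song))
```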

  20. Liquid density analysis of sucrose and alcoholic beverages using polyimide guided Love-mode acoustic wave sensors

    NASA Astrophysics Data System (ADS)

    Turton, Andrew; Bhattacharyya, Debabrata; Wood, David

    2006-02-01

    A liquid density sensor using Love-mode acoustic waves has been developed which is suitable for use in the food and drinks industries. The sensor has an open flat surface allowing immersion into a sample and simple cleaning. A polyimide waveguide layer allows cheap and simple fabrication combined with a robust, chemically resistant surface. The low shear modulus of polyimide allows thin guiding layers, giving a high sensitivity. A dual structure, with a smooth reference device exhibiting viscous coupling with the wave and a patterned sense area that traps the liquid to cause mass loading, allows discrimination of the liquid density from the square root of the density-viscosity product (ρη)^0.5. Frequency shift and insertion loss change were proportional to (ρη)^0.5, with a non-linear response due to the non-Newtonian nature of viscous liquids at high frequencies. Measurements were made with sucrose solutions up to 50% and different alcoholic drinks. A maximum sensitivity of 0.13 µg cm⁻³ Hz⁻¹ was achieved, with a linear frequency response to density. This is the highest liquid density sensitivity obtained for acoustic mode sensors to the best of our knowledge.

  1. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.
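
    The globally linear regression step can be sketched as follows: learn a linear mapping W from nonfrontal "images" to their frontal counterparts using training pairs, then predict a virtual frontal view for a new input (LLR applies the same fit patch-by-patch). The random vectors below stand in for vectorized face images and are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for vectorized nonfrontal/frontal image pairs,
# related by an approximately linear mapping plus small noise.
d_in, d_out, n_pairs = 60, 60, 200
W_true = rng.normal(0.0, 1.0, (d_out, d_in)) / np.sqrt(d_in)
X = rng.normal(0.0, 1.0, (n_pairs, d_in))                      # nonfrontal views
Y = X @ W_true.T + rng.normal(0.0, 0.01, (n_pairs, d_out))     # frontal views

# Globally linear regression: least-squares estimate of the mapping,
# formulated as a prediction problem.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict the virtual frontal view for a new nonfrontal input.
x_new = rng.normal(0.0, 1.0, d_in)
virtual_frontal = x_new @ W_hat
print(np.round(np.abs(virtual_frontal - x_new @ W_true.T).max(), 3))
```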

  2. Exact linearized Coulomb collision operator in the moment expansion

    DOE PAGES

    Ji, Jeong -Young; Held, Eric D.

    2006-10-05

    In the moment expansion, the Rosenbluth potentials, the linearized Coulomb collision operators, and the moments of the collision operators are analytically calculated for any moment. The explicit calculation of Rosenbluth potentials converts the integro-differential form of the Coulomb collision operator into a differential operator, which enables one to express the collision operator in a simple closed form for arbitrary mass and temperature ratios. In addition, it is shown that gyrophase averaging the collision operator acting on arbitrary distribution functions is the same as the collision operator acting on the corresponding gyrophase-averaged distribution functions. The moments of the collision operator are linear combinations of the fluid moments with collision coefficients parametrized by mass and temperature ratios. Furthermore, useful forms involving the small mass-ratio approximation are easily found since the collision operators and their moments are expressed in terms of the mass ratio. As an application, the general moment equations are explicitly written and the higher order heat flux equation is derived.

  3. Experimental and numerical analysis of pre-compressed masonry walls in two-way-bending with second order effects

    NASA Astrophysics Data System (ADS)

    Milani, Gabriele; Olivito, Renato S.; Tralli, Antonio

    2014-10-01

    The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at the structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is accounted for by a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint, reduced-scale square walls simply supported at the base and on one vertical edge, exhibiting the classical Rondelet's mechanism. The results obtained are compared with those provided by the numerical model.

  4. A single-phase axially-magnetized permanent-magnet oscillating machine for miniature aerospace power sources

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi

    2017-05-01

    A single-phase axially-magnetized permanent-magnet (PM) oscillating machine which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. Machine structure, operating principle and detent force characteristic are elaborately studied. With the sinusoidal speed characteristic of the mover considered, the proposed machine is designed by 2D finite-element analysis (FEA), and some main structural parameters such as air gap diameter, dimensions of PMs, pole pitches of both stator and mover, and the pole-pitch combinations, etc., are optimized to improve both the power density and force capability. Compared with the three-phase PM linear machines, the proposed single-phase machine features less PM use, simple control and low controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than the three-phase axially-magnetized PM linear machine.

  5. Optical Measurement of Radiocarbon below Unity Fraction Modern by Linear Absorption Spectroscopy.

    PubMed

    Fleisher, Adam J; Long, David A; Liu, Qingnan; Gameson, Lyn; Hodges, Joseph T

    2017-09-21

    High-precision measurements of radiocarbon (¹⁴C) near or below a fraction modern ¹⁴C of 1 (F¹⁴C ≤ 1) are challenging and costly. An accurate, ultrasensitive linear absorption approach to detecting ¹⁴C would provide a simple and robust benchtop alternative to off-site accelerator mass spectrometry facilities. Here we report the quantitative measurement of ¹⁴C in gas-phase samples of CO₂ with F¹⁴C < 1 using cavity ring-down spectroscopy in the linear absorption regime. Repeated analysis of CO₂ derived from the combustion of either biogenic or petrogenic sources revealed a robust ability to differentiate samples with F¹⁴C < 1. With a combined uncertainty of ¹⁴C/¹²C = 130 fmol/mol (F¹⁴C = 0.11), initial performance of the calibration-free instrument is sufficient to investigate a variety of applications in radiocarbon measurement science, including the study of biofuels and bioplastics, illicitly traded specimens, bomb dating, and atmospheric transport.

  6. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  7. Magnetically suspended stepping motors for clean room and vacuum environments

    NASA Technical Reports Server (NTRS)

    Higuchi, Toshiro

    1994-01-01

    To answer the growing need for super-clean or contact-free actuators for use in clean rooms, vacuum chambers, and space, innovative actuators which combine the functions of stepping motors and magnetic bearings in one body were developed. The rotor of the magnetically suspended stepping motor is suspended like a magnetic bearing and rotated and positioned like a stepping motor. An important trait of the motor is that it is not a simple mixture or combination of a stepping motor and a conventional magnetic bearing, but an amalgam of the two. Owing to optimal design and feedback control, a toothed stator and rotor are all that are needed structurally for stable suspension. More than ten types of motors, such as a linear type, a high-accuracy rotary type, a two-dimensional type, and a high-vacuum type, were built and tested. This paper describes the structure and design of these motors and their performance in such applications as a precise-positioning rotary table, a linear conveyor system, and a theta-zeta positioner for clean room and high vacuum use.

  8. Comparative study between different simple methods manipulating ratio spectra for the analysis of alogliptin and metformin co-formulated with highly different concentrations

    NASA Astrophysics Data System (ADS)

    Zaghary, Wafaa A.; Mowaka, Shereen; Hassan, Mostafa A.; Ayoub, Bassam M.

    2017-11-01

    Different simple spectrophotometric methods were developed for the simultaneous determination of alogliptin and metformin by manipulating their ratio spectra, with successful application to the recently approved combination, Kazano® tablets. Spiking was implemented to detect alogliptin despite its low proportion in the pharmaceutical formulation relative to metformin. Linearity was acceptable over the concentration ranges of 2.5-25.0 μg/mL and 2.5-15.0 μg/mL for alogliptin and metformin, respectively, using derivative ratio, ratio subtraction coupled with extended ratio subtraction, and spectrum subtraction coupled with constant multiplication. The optimized methods were compared using one-way analysis of variance (ANOVA) and proved to be accurate for the assay of the investigated drugs in their pharmaceutical dosage form.

  9. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of our proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.

  10. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
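The concepts reviewed in this tutorial — Pearson's r for linear association, Spearman's rho for monotonic association, and a simple linear regression fit — can be sketched in a few lines of NumPy. The data below are invented for illustration and are not from the cited study.

```python
import numpy as np

# Hypothetical predictor x and outcome y (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

# Pearson correlation: strength of the *linear* relationship.
pearson_r = np.corrcoef(x, y)[0, 1]

# Spearman rho: Pearson correlation of the ranks, which captures any
# monotonic (possibly nonlinear) relationship.
def ranks(a):
    order = a.argsort()
    r = np.empty_like(order, dtype=float)
    r[order] = np.arange(1, len(a) + 1)
    return r

spearman_rho = np.corrcoef(ranks(x), ranks(y))[0, 1]

# Simple linear regression y = b0 + b1*x by ordinary least squares.
b1, b0 = np.polyfit(x, y, 1)

print(pearson_r, spearman_rho, b1)
```

Because the toy data are perfectly monotonic, Spearman's rho is exactly 1, while Pearson's r is slightly below 1, reflecting the small departures from an exact line.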

  11. Lithographically patterned electrodeposition of gold, silver, and nickel nanoring arrays with widely tunable near-infrared plasmonic resonances.

    PubMed

    Halpern, Aaron R; Corn, Robert M

    2013-02-26

    A novel low-cost nanoring array fabrication method that combines the process of lithographically patterned nanoscale electrodeposition (LPNE) with colloidal lithography is described. Nanoring array fabrication was accomplished in three steps: (i) a thin (70 nm) sacrificial nickel or silver film was first vapor-deposited onto a plasma-etched packed colloidal monolayer; (ii) the polymer colloids were removed from the surface, a thin film of positive photoresist was applied, and a backside exposure of the photoresist was used to create a nanohole electrode array; (iii) this array of nanoscale cylindrical electrodes was then used for the electrodeposition of gold, silver, or nickel nanorings. Removal of the photoresist and sacrificial metal film yielded a nanoring array in which all of the nanoring dimensions were set independently: the inter-ring spacing was fixed by the colloidal radius, the radius of the nanorings was controlled by the plasma etching process, and the width of the nanorings was controlled by the electrodeposition process. A combination of scanning electron microscopy (SEM) measurements and Fourier transform near-infrared (FT-NIR) absorption spectroscopy was used to characterize the nanoring arrays. Nanoring arrays with radii from 200 to 400 nm exhibited a single strong NIR plasmonic resonance with an absorption maximum wavelength that varied linearly from 1.25 to 3.33 μm, as predicted by a simple standing-wave linear antenna model. This simple yet versatile nanoring array fabrication method was also used to electrodeposit concentric double gold nanoring arrays that exhibited multiple NIR plasmonic resonances.

  12. Teaching the Concept of Breakdown Point in Simple Linear Regression.

    ERIC Educational Resources Information Center

    Chan, Wai-Sum

    2001-01-01

    Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…

  13. Circuit models and three-dimensional electromagnetic simulations of a 1-MA linear transformer driver stage

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.

    2010-09-01

    A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim et al., Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched impedance load, in good agreement with an idealized circuit model.

  14. Linear analysis of auto-organization in Hebbian neural networks.

    PubMed

    Carlos Letelier, J; Mpodozis, J

    1995-01-01

    The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system that contains a non-linearity mimicking a metabolic constraint is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
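As a concrete, much-simplified illustration of Hebbian growth stabilized by a metabolic-like constraint, the sketch below uses Oja's rule — a classic normalized Hebbian update for a single linear neuron — which converges to the leading eigenvector of the input covariance. The covariance matrix, learning rate, and iteration count are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical input covariance for a two-input linear neuron.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])
L = np.linalg.cholesky(C)

w = np.array([1.0, 0.0])        # initial synaptic weights
lr = 0.01                       # learning rate
for _ in range(20000):
    x = L @ rng.normal(size=2)  # input sample with covariance C
    y = w @ x                   # linear neuron output
    # Oja's rule: Hebbian growth (lr*y*x) plus a decay term (-lr*y^2*w)
    # that acts like a metabolic constraint, keeping |w| bounded.
    w += lr * y * (x - y * w)

# The weights align with the principal eigenvector of C.
eigvals, eigvecs = np.linalg.eigh(C)
principal = eigvecs[:, -1]
alignment = abs(w @ principal) / np.linalg.norm(w)
print(alignment)
```

Without the decay term the purely Hebbian update grows without bound; the nonlinear constraint is what produces a stable, "ordered" weight configuration.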

  15. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
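The core issue — neurons sampled from the same animal are correlated, so the animal rather than the neuron is the independent unit — can be shown with a small simulation. This is a hypothetical sketch in plain NumPy rather than a full mixed-effects fit, with invented animal and neuron counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 8 animals, 10 neurons per animal. An animal-level
# random effect induces intra-class correlation among neurons that share
# an animal.
n_animals, n_neurons = 8, 10
animal_effect = rng.normal(0.0, 1.0, n_animals)               # between-animal spread
y = animal_effect[:, None] + rng.normal(0.0, 0.5, (n_animals, n_neurons))

# "Simple linear model" view: treat all 80 neurons as independent.
naive_se = y.std(ddof=1) / np.sqrt(y.size)

# Mixed-effects-style view: aggregate to animal means, since the animal
# is the independent sampling unit.
animal_means = y.mean(axis=1)
cluster_se = animal_means.std(ddof=1) / np.sqrt(n_animals)

# With positive intra-class correlation, the naive standard error is
# biased downward, which inflates false positives.
print(naive_se, cluster_se)
```

In practice one would fit an actual mixed-effects model (e.g. with animal as a random intercept) rather than aggregating, but the direction of the bias — naive standard errors too small — is the same phenomenon the paper documents.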

  16. Surgery for left ventricular aneurysm: early and late survival after simple linear repair and endoventricular patch plasty.

    PubMed

    Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig

    2004-09-01

    Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. Surgical risk was lower and long-term survival was higher after endoventricular patch plasty than after simple linear repair. Differences in outcome should be interpreted with care because of the retrospective study design and the chronology of the 2 repair methods.

  17. Oxidatively-Stable Linear Poly(propylenimine)-Containing Adsorbents for CO2 Capture from Ultra-Dilute Streams.

    PubMed

    Pang, Simon H; Lively, Ryan P; Jones, Christopher W

    2018-05-29

    Aminopolymer-based solid sorbents have been widely investigated for CO2 capture from dilute streams such as flue gas or ambient air. However, the oxidative stability of the most well-studied aminopolymer, poly(ethylenimine) (PEI), is limited, causing it to lose its CO2 capture capacity after exposure to oxygen at elevated temperatures. Here we demonstrate the use of linear poly(propylenimine) (PPI), synthesized via a simple cationic ring-opening polymerization, as a more oxidatively-stable alternative to PEI with high CO2 capacity and amine efficiency. The performance of linear PPI/SBA-15 composites is investigated over a range of CO2 capture conditions (CO2 partial pressure, adsorption temperature) to examine the trade-off between adsorption capacity and sorption site accessibility, which may be expected to be more limited in linear polymers relative to the prototypical hyperbranched PEI. Linear PPI/SBA-15 composites are more efficient at CO2 capture and retain 65-83% of their CO2 capacity after exposure to a harsh oxidative treatment, compared to 20-40% retention for linear PEI. Additionally, we demonstrate long-term stability of linear PPI sorbents over 50 adsorption/desorption cycles with no loss in performance. Combined with other strategies for improving oxidative stability and adsorption kinetics, linear PPI may play a role as a component of stable, solid adsorbents in commercial applications for CO2 capture. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
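The variance-inflation-factor test described above is easy to compute: regress each component on all the others and take VIF = 1/(1 - R²). The sketch below uses invented load data (not the balance calibration from the paper) to show three independent components passing the threshold-of-five test while a near-dependent fourth component fails it.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        yj = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(yj)), others])
        beta, *_ = np.linalg.lstsq(A, yj, rcond=None)
        resid = yj - A @ beta
        r2 = 1.0 - resid.var() / yj.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)

# Hypothetical load set: three independently applied components ...
independent = rng.normal(size=(50, 3))
# ... plus a fourth that is nearly a linear combination of the first two.
dependent = np.column_stack(
    [independent, independent[:, 0] + independent[:, 1] + rng.normal(0, 0.05, 50)]
)

print(vif(independent).max(), vif(dependent).max())
```

For the independent set the maximum VIF stays near 1 (well under the literature threshold of five); adding the near-dependent component drives the maximum VIF far above five, flagging the set as unsuitable for a unique load-prediction mapping.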

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, J.D.; Woan, G.

    Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.
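The principle behind this result can be shown with a toy calculation: when one dominant noise term enters several data streams through a common coupling, the small-eigenvalue eigenvectors of the noise covariance matrix are precisely the noise-insensitive data combinations. The three-stream model below is a cartoon for illustration, not the actual LISA response or TDI construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: three data streams share one dominant "laser" noise term
# through coupling vector c, plus small independent instrument noise.
c = np.array([1.0, 1.0, 1.0])
n_samples = 20000
laser = rng.normal(0.0, 10.0, n_samples)        # dominant common noise
instr = rng.normal(0.0, 0.1, (3, n_samples))    # small independent noise
data = c[:, None] * laser + instr

# Principal component analysis of the noise covariance matrix.
cov = np.cov(data)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues ascending

# The two smallest-eigenvalue eigenvectors are (nearly) orthogonal to c:
# linear combinations of the streams that cancel the dominant noise,
# analogous to the TDI-like variables of the paper.
quiet = eigvecs[:, :2]
print(quiet.T @ c)
```

The largest eigenvalue carries essentially all the laser-noise power, while the "quiet" eigenvectors define a subspace in which inference on source parameters can proceed without that noise.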

  20. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    PubMed

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak-Signal-to-Noise Ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to the local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance with an average 0.79 dB higher PSNR, and can be used for scale factors of 3 or higher.

  1. Secretory immunoglobulin purification from whey by chromatographic techniques.

    PubMed

    Matlschweiger, Alexander; Engelmaier, Hannah; Himmler, Gottfried; Hahn, Rainer

    2017-08-15

    Secretory immunoglobulins (SIg) are a major fraction of the mucosal immune system and represent potential drug candidates. So far, platform technologies for their purification do not exist. SIg from animal whey was used as a model to develop a simple, efficient and potentially generic chromatographic purification process. Several chromatographic stationary phases were tested. A combination of two anion-exchange steps resulted in the highest purity. The key step was the use of a small-porous anion exchanger operated in flow-through mode. Diffusion of SIg into the resin particles was significantly hindered, while the main impurities, IgG and serum albumin, were bound. In this step, initial purity was increased from 66% to 89% with a step yield of 88%. In a second anion-exchange step using giga-porous material, SIg was captured and purified by step or linear gradient elution to obtain fractions with purities >95%. For the step gradient elution, the yield of highly pure SIg was 54%. Elution of SIgA and SIgM with a linear gradient resulted in step yields of 56% and 35%, respectively. Overall yields for both anion-exchange steps were 43% for the combination of flow-through and step elution mode. Combination of flow-through and linear gradient elution mode resulted in a yield of 44% for SIgA and 39% for SIgM. The proposed process allows the purification of biologically active SIg from animal whey at preparative scale. For future applications, the process can easily be adapted for purification of recombinant secretory immunoglobulin species. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Effects of Inter- and Intra-aggregate Pore Space on the Soil-Gas Diffusivity Behavior in Unsaturated, Undisturbed Volcanic Ash Soils

    NASA Astrophysics Data System (ADS)

    Resurreccion, A. C.; Kawamoto, K.; Komatsu, T.; Moldrup, P.

    2006-12-01

    Volcanic ash soils (Andisols) have a unique dual-porosity structure that results in good drainage and high soil-water retention. Despite the complicated and highly developed soil structure, recent studies have reported a simple, highly linear relation between the soil-gas diffusion coefficient, Dp, and the soil-air content, ɛ, for several Japanese Andisols. In this study, we explain the linear Dp(ɛ) behavior from the effects of the inter- and intra-aggregate pore-size distributions. We couple the bimodal van Genuchten soil-water retention model with a general Dp(ɛ) model, ɛ^{X}, allowing the tortuosity-connectivity factor X to vary with pF (= log(-ψ), where ψ is the soil-water matric potential in cm H2O). Measured data suggest that the tortuosity-connectivity parameter X is at a minimum at pF 3 (where X ~ 2, following Buckingham, 1904), the water retention point where a separation of inter- and intra-aggregate effects on Dp is observed. At pF < 3, the X values increased as pF decreased because of inactive/remote air-filled pore space entrapped by the inter-connected water films between inter-aggregate pore spaces. At pF > 3, X increased to a high value at very dry conditions due to remote air-filled space inside the intra-aggregate pores. By combining the complex dual-porosity soil-water retention model with the power-law gas diffusivity model using a parabolic X(pF) function, the surprisingly simple linear behavior of Dp with ɛ was captured, while the variation of Dp with pF followed a dual s-shaped curve similar to the water retention curve. A simple linear model to predict Dp(ɛ) is suggested, with slope C and threshold soil-air content, ɛth, calculated from the power-law model ɛ^{X} at pF 2 (near field capacity) and at pF 4.1 (near wilting point) using the same X value (= 2.3) at both pF points, in agreement with measured data. This linear Dp(ɛ) model performed better, especially at dry conditions, compared to traditionally used predictive models when tested against several independent Andisol datasets from the literature.
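The two-point construction of the linear model is simple enough to sketch. The abstract anchors the line at pF 2 and pF 4.1 with a shared exponent X = 2.3; the soil-air contents used below are invented for illustration, since the actual ɛ values depend on the soil.

```python
import numpy as np

# Shared tortuosity-connectivity exponent from the abstract.
X = 2.3

# Hypothetical soil-air contents at the two anchor points
# (pF 2, near field capacity; pF 4.1, near wilting point).
eps_pf2, eps_pf41 = 0.30, 0.55

# Power-law gas diffusivity Dp/D0 = eps**X at each anchor.
dp1 = eps_pf2 ** X
dp2 = eps_pf41 ** X

# Two-point linear model Dp = C * (eps - eps_th): slope and threshold
# air content follow directly from the two anchor values.
C = (dp2 - dp1) / (eps_pf41 - eps_pf2)
eps_th = eps_pf2 - dp1 / C

# The line reproduces both anchor points exactly.
for eps, dp in [(eps_pf2, dp1), (eps_pf41, dp2)]:
    assert abs(C * (eps - eps_th) - dp) < 1e-12
print(C, eps_th)
```

The threshold ɛth is where the extrapolated line predicts zero diffusivity — physically, the inactive air-filled pore space that does not contribute to gas transport.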

  3. Feedback control of combustion instabilities from within limit cycle oscillations using H∞ loop-shaping and the ν-gap metric

    PubMed Central

    Morgans, Aimee S.

    2016-01-01

    Combustion instabilities arise owing to a two-way coupling between acoustic waves and unsteady heat release. Oscillation amplitudes successively grow, until nonlinear effects cause saturation into limit cycle oscillations. Feedback control, in which an actuator modifies some combustor input in response to a sensor measurement, can suppress combustion instabilities. Linear feedback controllers are typically designed, using linear combustor models. However, when activated from within limit cycle, the linear model is invalid, and such controllers are not guaranteed to stabilize. This work develops a feedback control strategy guaranteed to stabilize from within limit cycle oscillations. A low-order model of a simple combustor, exhibiting the essential features of more complex systems, is presented. Linear plane acoustic wave modelling is combined with a weakly nonlinear describing function for the flame. The latter is determined numerically using a level set approach. Its implication is that the open-loop transfer function (OLTF) needed for controller design varies with oscillation level. The difference between the mean and the rest of the OLTFs is characterized using the ν-gap metric, providing the minimum required ‘robustness margin’ for an H∞ loop-shaping controller. Such controllers are designed and achieve stability both for linear fluctuations and from within limit cycle oscillations. PMID:27493558

  4. Plate and butt-weld stresses beyond elastic limit, material and structural modeling

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1991-01-01

    Ultimate safety factors of high-performance structures depend on stress behavior beyond the elastic limit, a region not well understood. An analytical modeling approach was developed to gain fundamental insights into the inelastic responses of simple structural elements. Nonlinear material properties were expressed in engineering stress and strain variables and combined with strength-of-materials stress and strain equations, similar to the numerical piece-wise linear method. Integrations are continuous, which allows for more detailed solutions. Included with interesting results are the classical combined axial tension and bending load model and the strain gauge conversion to stress beyond the elastic limit. Material discontinuity stress factors in butt-welds were derived. This is a working-type document with analytical methods and results applicable to all industries of high-reliability structures.

  5. Validation of a spectrophotometric assay method for bisoprolol using picric acid.

    PubMed

    Panainte, Alina-Diana; Bibire, Nela; Tântaru, Gladiola; Apostu, M; Vieriu, Mădălina

    2013-01-01

    Bisoprolol is a drug belonging to the beta-blocker class, used primarily for the treatment of cardiovascular diseases. A spectrophotometric method for the quantitative determination of bisoprolol was developed based on the formation of a complex combination between bisoprolol and picric acid. The complex of bisoprolol and picric acid has a maximum absorbance peak at 420 nm. Optimum working conditions were established and the method was validated. The method presented good linearity in the concentration range 5-120 microg/ml (regression coefficient r2 = 0.9992). The RSD for the precision of the method was 1.74 and for the intermediate precision 1.43, and recovery values ranged between 98.25-101.48%. The proposed and validated spectrophotometric method for the determination of bisoprolol is simple and cost-effective.
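A linearity check of this kind reduces to fitting a calibration line of absorbance against concentration and inspecting the regression coefficient. The sketch below uses invented absorbance values over the stated 5-120 μg/mL range, not the paper's data.

```python
import numpy as np

# Hypothetical calibration data: concentrations in ug/mL and measured
# absorbances at 420 nm (values invented for illustration).
conc = np.array([5.0, 20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
absorbance = np.array([0.042, 0.161, 0.318, 0.482, 0.640, 0.801, 0.962])

# Ordinary least-squares calibration line: A = slope * c + intercept.
slope, intercept = np.polyfit(conc, absorbance, 1)

# Coefficient of determination r^2 as a linearity criterion.
pred = slope * conc + intercept
ss_res = ((absorbance - pred) ** 2).sum()
ss_tot = ((absorbance - absorbance.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot

print(slope, intercept, r2)
```

A validated method of this type expects r² very close to 1 over the working range; an unknown sample's concentration is then recovered as (A - intercept) / slope.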

  6. Anthraquinones quinizarin and danthron unwind negatively supercoiled DNA and lengthen linear DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verebová, Valéria; Adamcik, Jozef; Danko, Patrik

    2014-01-31

    Highlights: • Anthraquinones quinizarin and danthron unwind negatively supercoiled DNA. • Anthraquinones quinizarin and danthron lengthen linear DNA. • Anthraquinones quinizarin and danthron possess moderate binding affinity to DNA. • Anthraquinones quinizarin and danthron interact with DNA by an intercalating mode. - Abstract: Intercalating drugs possess a planar aromatic chromophore unit by which they insert between DNA bases, causing distortion of the classical B-DNA form. The planar tricyclic structure of anthraquinones belongs to this group of chromophore units and enables anthraquinones to bind to DNA in an intercalating mode. The interactions of simple derivatives of anthraquinone, quinizarin (1,4-dihydroxyanthraquinone) and danthron (1,8-dihydroxyanthraquinone), with negatively supercoiled and linear DNA were investigated using a combination of electrophoretic methods, fluorescence spectrophotometry, and a single-molecule technique, atomic force microscopy. The detection of the topological change of negatively supercoiled plasmid DNA (unwinding of negatively supercoiled DNA, corresponding to the appearance of DNA topoisomers with low superhelicity) and an increase of the contour length of linear DNA in the presence of quinizarin and danthron indicate the binding of both anthraquinones to DNA in an intercalating mode.

  7. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance type analysis using a pre-defined pair of "pre-post" data, in which pre (baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Coupling-Induced Bipartite Pointer States in Arrays of Electron Billiards: Quantum Darwinism in Action?

    NASA Astrophysics Data System (ADS)

    Brunner, R.; Akis, R.; Ferry, D. K.; Kuchar, F.; Meisels, R.

    2008-07-01

    We discuss a quantum system coupled to the environment, composed of an open array of billiards (dots) in series. Besides pointer states occurring in individual dots, we observe sets of robust states which arise only in the array. We define these new states as bipartite pointer states, since they cannot be described in terms of simple linear combinations of robust single-dot states. The classical existence of bipartite pointer states is confirmed by comparing the quantum-mechanical and classical results. The ability of the robust states to create “offspring” indicates that quantum Darwinism is in action.

  9. Coupling-induced bipartite pointer states in arrays of electron billiards: quantum Darwinism in action?

    PubMed

    Brunner, R; Akis, R; Ferry, D K; Kuchar, F; Meisels, R

    2008-07-11

    We discuss a quantum system coupled to the environment, composed of an open array of billiards (dots) in series. Besides pointer states occurring in individual dots, we observe sets of robust states which arise only in the array. We define these new states as bipartite pointer states, since they cannot be described in terms of simple linear combinations of robust single-dot states. The classical existence of bipartite pointer states is confirmed by comparing the quantum-mechanical and classical results. The ability of the robust states to create "offspring" indicates that quantum Darwinism is in action.

  10. Cell growth, division, and death in cohesive tissues: A thermodynamic approach

    NASA Astrophysics Data System (ADS)

    Yabunaka, Shunsuke; Marcq, Philippe

    2017-08-01

    Cell growth, division, and death are defining features of biological tissues that contribute to morphogenesis. In hydrodynamic descriptions of cohesive tissues, their occurrence implies a nonzero rate of variation of cell density. We show how linear nonequilibrium thermodynamics allows us to express this rate as a combination of relevant thermodynamic forces: chemical potential, velocity divergence, and activity. We illustrate the resulting effects of the nonconservation of cell density on simple examples inspired by recent experiments on cell monolayers, considering first the velocity of a spreading front, and second an instability leading to mechanical waves.

  11. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  12. Linear Models for Systematics and Nuisances

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.

    2017-12-01

    The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (e.g., stellar time-series, high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data (and often the science data themselves) can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
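    The linear-nuisance idea can be sketched with ordinary linear algebra. Under a Gaussian prior w ~ N(0, lam2·I) on the weights, the MAP weights solve a ridge-regularized normal equation, and subtracting the fitted nuisance leaves the small signal behind. The toy "systematic trend", predictors, and variance values below are all invented for illustration, not the Note's K2 pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a small sinusoidal "signal" on top of a smooth systematic trend.
t = np.linspace(0, 1, 200)
systematic = 0.8 * t + 0.3 * t**2
y = systematic + 0.05 * np.sin(40 * t) + rng.normal(0, 0.01, t.size)

# Design matrix of simple predictor functions (here: polynomials in t).
A = np.vander(t, 4, increasing=True)       # shape (200, 4)
sigma2, lam2 = 0.01**2, 10.0**2            # assumed noise and prior variances

# With w ~ N(0, lam2 I), the MAP weights solve a ridge-regularized
# normal equation; marginalizing w gives the same point estimate here.
w = np.linalg.solve(A.T @ A + (sigma2 / lam2) * np.eye(4), A.T @ y)
residual = y - A @ w                       # systematics subtracted
```

    The smooth polynomial basis absorbs the trend but not the high-frequency sinusoid, so the residual is dominated by the "signal" of interest.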

  13. Comparative study between different simple methods manipulating ratio spectra for the analysis of alogliptin and metformin co-formulated with highly different concentrations.

    PubMed

    Zaghary, Wafaa A; Mowaka, Shereen; Hassan, Mostafa A; Ayoub, Bassam M

    2017-11-05

    Different simple spectrophotometric methods were developed for simultaneous determination of alogliptin and metformin manipulating their ratio spectra, with successful application to the recently approved combination Kazano® tablets. Spiking was implemented to detect alogliptin despite its low contribution to the pharmaceutical formulation relative to metformin. Linearity was acceptable over the concentration ranges of 2.5-25.0 μg/mL and 2.5-15.0 μg/mL for alogliptin and metformin, respectively, using derivative ratio, ratio subtraction coupled with extended ratio subtraction, and spectrum subtraction coupled with constant multiplication. The optimized methods were compared using one-way analysis of variance (ANOVA) and proved to be accurate for assay of the investigated drugs in their pharmaceutical dosage form.

  14. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
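    A minimal sketch of the delta-function quadrature idea: two equally weighted Dirac deltas placed at mean ± sqrt(variance) satisfy a cell's zeroth, first, and second integral moment constraints exactly. This is an illustrative construction under assumed cell statistics, not the actual FDS subgrid model.

```python
import numpy as np

def two_point_pdf(mean, var):
    """Two equally weighted Dirac deltas at mean +/- sqrt(var) reproduce a
    cell's zeroth, first, and second moments exactly (illustrative sketch,
    not the actual FDS partially-stirred reactor model)."""
    sigma = np.sqrt(var)
    nodes = np.array([mean - sigma, mean + sigma])
    weights = np.array([0.5, 0.5])
    return nodes, weights

# Example cell: mean mixture fraction 0.3, subgrid variance 0.01 (assumed).
nodes, weights = two_point_pdf(0.3, 0.01)
mean_check = weights @ nodes                 # recovers the cell mean
var_check = weights @ (nodes - 0.3) ** 2     # recovers the cell variance
```

    A mean chemical source term would then be evaluated by applying the rate law at each node and summing with these weights, which is the quadrature closure the abstract describes.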

  15. Capacitive touch sensing: signal and image processing algorithms

    NASA Astrophysics Data System (ADS)

    Baharav, Zachi; Kakarala, Ramakrishna

    2011-03-01

    Capacitive touch sensors have been in use for many years, and recently gained center stage with the ubiquitous use in smart-phones. In this work we will analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we will formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: circular finger on a wire grid, and square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up-table, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
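    The parabolic curve fitting mentioned above is easy to sketch for a one-dimensional row of sensors: take the strongest reading and fit a parabola through it and its two neighbours to estimate a sub-electrode finger position. The Gaussian "finger" profile below is synthetic, and this is a generic version of the technique, not the paper's specific algorithm.

```python
import numpy as np

def parabolic_peak(readings, pitch=1.0):
    """Estimate finger position from capacitance readings on a 1-D row of
    sensors: take the strongest sensor, then refine with a parabola fit
    through it and its two neighbours (a common sub-electrode
    interpolation trick; not the paper's specific algorithm)."""
    i = int(np.argmax(readings))
    y0, y1, y2 = readings[i - 1], readings[i], readings[i + 1]
    # Vertex of the parabola through the three samples, as an offset
    # (in sensor pitches) from the strongest sensor.
    offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (i + offset) * pitch

# Synthetic Gaussian "finger" centred between sensors 3 and 4.
x = np.arange(8)
readings = np.exp(-((x - 3.4) ** 2) / 2.0)
estimate = parabolic_peak(readings)
```

    Because the true profile is Gaussian rather than parabolic, the estimate carries a small systematic bias toward the strongest sensor, which is one motivation for the Gaussian fit and look-up-table variants the paper discusses.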

  16. Vacillations induced by interference of stationary and traveling planetary waves

    NASA Technical Reports Server (NTRS)

    Salby, Murry L.; Garcia, Rolando R.

    1987-01-01

    The interference pattern produced when a traveling planetary wave propagates over a stationary forced wave is explored, examining the interference signature in a variety of diagnostics. The wave field is first restricted to a diatomic spectrum consisting of two components: a single stationary wave and a single monochromatic traveling wave. A simple barotropic normal mode propagating over a simple stationary plane wave is considered, and closed form solutions are obtained. The wave fields are then restricted spatially, providing more realistic structures without sacrificing the advantages of an analytical solution. Both stationary and traveling wave fields are calculated numerically with the linearized Primitive Equations in a realistic basic state. The mean flow reaction to the fluctuating eddy forcing which results from interference is derived. Synoptic geopotential behavior corresponding to the combined wave and mean flow fields is presented, and the synoptic signature in potential vorticity on isentropic surfaces is examined.

  17. Action Centered Contextual Bandits.

    PubMed

    Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan

    2017-12-01

    Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.

  18. A powerful and flexible approach to the analysis of RNA sequence count data.

    PubMed

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A

    2011-10-01

    A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternative methods for handling RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
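    For reference, the beta-binomial distribution underlying approach (i) can be written down directly: a binomial count whose success probability is itself Beta(a, b) distributed, which inflates the variance relative to a plain binomial. The sketch below just evaluates the mass function via log-gamma functions; the parameter values are arbitrary, and BBSeq itself estimates them per gene within a GLM.

```python
from math import lgamma, exp

def betabinom_pmf(k, n, a, b):
    """Beta-binomial probability mass: C(n,k) * B(k+a, n-k+b) / B(a,b),
    i.e., a binomial whose success probability is Beta(a, b) distributed.
    Minimal sketch; BBSeq fits the parameters per gene via a GLM."""
    logc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    logp = (lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))
    return exp(logc + logp)

# The pmf sums to one, with mean n*a/(a+b) but heavier tails
# (overdispersion) than Binomial(n, a/(a+b)).
total = sum(betabinom_pmf(k, 20, 2.0, 5.0) for k in range(21))
```
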

  19. A simple fabrication of plasmonic surface-enhanced Raman scattering (SERS) substrate for pesticide analysis via the immobilization of gold nanoparticles on UF membrane

    NASA Astrophysics Data System (ADS)

    Hong, Jangho; Kawashima, Ayato; Hamada, Noriaki

    2017-06-01

    In this study, we developed a facile fabrication method to access a highly reproducible plasmonic surface-enhanced Raman scattering substrate via the immobilization of gold nanoparticles on an ultrafiltration (UF) membrane using a suction technique. This was combined with a simple and rapid analyte concentration and detection method utilizing portable Raman spectroscopy. The minimum detectable concentrations for aqueous thiabendazole standard solution and thiabendazole in orange extract are 0.01 μg/mL and 0.125 μg/g, respectively. The partial least squares (PLS) regression plot shows a good linear relationship between 0.001 and 100 μg/mL of analyte, with a root mean square error of prediction (RMSEP) of 0.294 and a correlation coefficient (R2) of 0.976 for the thiabendazole standard solution. Meanwhile, the PLS plot also shows a good linear relationship between 0.0 and 2.5 μg/g of analyte, with an RMSEP value of 0.298 and an R2 value of 0.993 for the orange peel extract. In addition to the detection of other types of pesticides in agricultural products, this highly uniform plasmonic substrate has great potential for application in various environmentally-related areas.

  20. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
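    The flavor of such rate-equation models can be sketched with a minimal mass-action scheme: breaks B bind a repair enzyme E into a complex C that resolves into repaired DNA R. All rate constants and initial amounts below are invented for illustration, not the paper's fitted values; the point is that enzyme saturation alone produces an apparently biphasic (fast-then-slow) rejoining curve, echoing the abstract's observation.

```python
# Minimal mass-action scheme for DSB processing through a DNA-enzyme
# complex (illustrative rates and amounts, NOT the paper's fitted values):
#   B + E -> C (binding, k_on);  C -> B + E (k_off);  C -> R + E (k_cat)
k_on, k_off, k_cat = 0.5, 0.05, 0.2      # assumed per-hour rate constants
B, E, C, R = 40.0, 10.0, 0.0, 0.0        # breaks, free enzyme, complex, repaired
dt, T = 0.001, 36.0                      # Euler step (h) and total time (h)
unrejoined = []
for _ in range(int(T / dt)):
    bind = k_on * B * E * dt
    unbind = k_off * C * dt
    repair = k_cat * C * dt
    B += unbind - bind
    E += unbind + repair - bind
    C += bind - unbind - repair
    R += repair
    unrejoined.append(B + C)             # unrejoined DSB = free + complexed
# While the enzyme is saturated, B + C falls at a nearly constant rate;
# once breaks become scarce the decay slows, mimicking the apparent fast
# and slow rejoining components without any explicit "two populations".
```
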

  1. Bet-hedging as a complex interaction among developmental instability, environmental heterogeneity, dispersal, and life-history strategy.

    PubMed

    Scheiner, Samuel M

    2014-02-01

    One potential evolutionary response to environmental heterogeneity is the production of randomly variable offspring through developmental instability, a type of bet-hedging. I used an individual-based, genetically explicit model to examine the evolution of developmental instability. The model considered both temporal and spatial heterogeneity alone and in combination, the effect of migration pattern (stepping stone vs. island), and life-history strategy. I confirmed that temporal heterogeneity alone requires a threshold amount of variation to select for a substantial amount of developmental instability. For spatial heterogeneity only, the response to selection on developmental instability depended on the life-history strategy and the form and pattern of dispersal, with the greatest response for island migration when selection occurred before dispersal. Both spatial and temporal variation alone select for similar amounts of instability, but in combination result in substantially more instability than either alone. Local adaptation traded off against bet-hedging, but not in a simple linear fashion. I found higher-order interactions between life-history patterns, dispersal rates, dispersal patterns, and environmental heterogeneity that are not explainable by simple intuition. We need additional modeling efforts to understand these interactions and empirical tests that explicitly account for all of these factors.

  2. A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Leonov, Arkady I.

    2002-01-01

    The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are some reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of the thermo-rheological complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.

  3. An experimental and analytical investigation of stall effects on flap-lag stability in forward flight

    NASA Technical Reports Server (NTRS)

    Nagabhushanam, J.; Gaonkar, Gopal H.; Mcnulty, Michael J.

    1987-01-01

    Experiments have been performed with a 1.62 m diameter hingeless rotor in a wind tunnel to investigate flap-lag stability of isolated rotors in forward flight. The three-bladed rotor model closely approaches the simple theoretical concept of a hingeless rotor as a set of rigid, articulated flap-lag blades with offset and spring restrained flap and lag hinges. Lag regressing mode stability data was obtained for advance ratios as high as 0.55 for various combinations of collective pitch and shaft angle. The prediction includes quasi-steady stall effects on rotor trim and Floquet stability analyses. Correlation between data and prediction is presented and is compared with that of an earlier study based on a linear theory without stall effects. While the results with stall effects show marked differences from the linear theory results, the stall theory still falls short of adequate agreement with the experimental data.

  4. Microfluidic breakups of confined droplets against a linear obstacle: The importance of the viscosity contrast

    NASA Astrophysics Data System (ADS)

    Salkin, Louis; Courbin, Laurent; Panizza, Pascal

    2012-09-01

    Combining experiments and theory, we investigate the break-up dynamics of deformable objects, such as drops and bubbles, against a linear micro-obstacle. Our experiments bring the role of the viscosity contrast Δη between dispersed and continuous phases to light: the evolution of the critical capillary number to break a drop as a function of its size is either nonmonotonic (Δη>0) or monotonic (Δη≤0). In the case of positive viscosity contrasts, experiments and modeling reveal the existence of an unexpected critical object size for which the critical capillary number for breakup is minimum. Using simple physical arguments, we derive a model that well describes observations, provides diagrams mapping the four hydrodynamic regimes identified experimentally, and demonstrates that the critical size originating from confinement solely depends on geometrical parameters of the obstacle.

  5. A Simplified Theory of Coupled Oscillator Array Phase Control

    NASA Technical Reports Server (NTRS)

    Pogorzelski, R. J.; York, R. A.

    1997-01-01

    Linear and planar arrays of coupled oscillators have been proposed as means of achieving high power rf sources through coherent spatial power combining. In such applications, a uniform phase distribution over the aperture is desired. However, it has been shown that by detuning some of the oscillators away from the oscillation frequency of the ensemble of oscillators, one may achieve other useful aperture phase distributions. Notable among these are linear phase distributions resulting in steering of the output rf beam away from the broadside direction. The theory describing the operation of such arrays of coupled oscillators is quite complicated since the phenomena involved are inherently nonlinear. This has made it difficult to develop an intuitive understanding of the impact of oscillator tuning on phase control and has thus impeded practical application. In this work a simplified theory is developed which facilitates intuitive understanding by establishing an analog of the phase control problem in terms of electrostatics.

  6. Pole-placement Predictive Functional Control for under-damped systems with real numbers algebra.

    PubMed

    Zabet, K; Rossiter, J A; Haber, R; Abdullah, M

    2017-11-01

    This paper presents the new algorithm of PP-PFC (Pole-placement Predictive Functional Control) for stable, linear under-damped higher-order processes. It is shown that while conventional PFC aims to get first-order exponential behavior, this is not always straightforward with significant under-damped modes and hence a pole-placement PFC algorithm is proposed which can be tuned more precisely to achieve the desired dynamics, but exploits complex number algebra and linear combinations in order to deliver guarantees of stability and performance. Nevertheless, practical implementation is easier by avoiding complex number algebra and hence a modified formulation of the PP-PFC algorithm is also presented which utilises just real numbers while retaining the key attributes of simple algebra, coding and tuning. The potential advantages are demonstrated with numerical examples and real-time control of a laboratory plant.

  7. A dimensionally split Cartesian cut cell method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Gokhale, Nandan; Nikiforakis, Nikos; Klein, Rupert

    2018-07-01

    We present a dimensionally split method for solving hyperbolic conservation laws on Cartesian cut cell meshes. The approach combines local geometric and wave speed information to determine a novel stabilised cut cell flux, and we provide a full description of its three-dimensional implementation in the dimensionally split framework of Klein et al. [1]. The convergence and stability of the method are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. When compared to the cut cell flux of Klein et al., it was found that the new flux alleviates the problem of oscillatory boundary solutions produced by the former at higher Courant numbers, and also enables the computation of more accurate solutions near stagnation points. Being dimensionally split, the method is simple to implement and extends readily to multiple dimensions.

  8. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.

  9. A modal aeroelastic analysis scheme for turbomachinery blading. M.S. Thesis - Case Western Reserve Univ. Final Report

    NASA Technical Reports Server (NTRS)

    Smith, Todd E.

    1991-01-01

    An aeroelastic analysis is developed which has general application to all types of axial-flow turbomachinery blades. The approach is based on linear modal analysis, where the blade's dynamic response is represented as a linear combination of contributions from each of its in-vacuum free vibrational modes. A compressible linearized unsteady potential theory is used to model the flow over the oscillating blades. The two-dimensional unsteady flow is evaluated along several stacked axisymmetric strips along the span of the airfoil. The unsteady pressures at the blade surface are integrated to result in the generalized force acting on the blade due to simple harmonic motions. The unsteady aerodynamic forces are coupled to the blade normal modes in the frequency domain using modal analysis. An iterative eigenvalue problem is solved to determine the stability of the blade when the unsteady aerodynamic forces are included in the analysis. The approach is demonstrated by applying it to a high-energy subsonic turbine blade from a rocket engine turbopump power turbine. The results indicate that this turbine could undergo flutter in an edgewise mode of vibration.

  10. Experimental and numerical analysis of pre-compressed masonry walls in two-way-bending with second order effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Olivito, Renato S.; Tralli, Antonio

    2014-10-06

    The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is accounted for by a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint, in-scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet’s mechanism. The results obtained are compared with those provided by the numerical model.

  11. Preliminary study of the association between the elimination parameters of phenytoin and phenobarbital.

    PubMed

    Methaneethorn, Janthima; Panomvana, Duangchit; Vachirayonstien, Thaveechai

    2017-09-26

    Therapeutic drug monitoring is essential for both phenytoin and phenobarbital therapy given their narrow therapeutic indexes. Nevertheless, the measurement of either phenytoin or phenobarbital concentrations might not be available in some rural hospitals. Information assisting individualized phenytoin and phenobarbital combination therapy is important. This study's objective was to determine the relationship between the maximum rate of metabolism of phenytoin (Vmax) and phenobarbital clearance (CLPB), which can serve as a guide to individualized drug therapy. Data on phenytoin and phenobarbital concentrations of 19 epileptic patients concurrently receiving both drugs were obtained from medical records. Phenytoin and phenobarbital pharmacokinetic parameters were studied at steady-state conditions. The relationship between the elimination parameters of both drugs was determined using simple linear regression. A high correlation coefficient between Vmax and CLPB was found [r=0.744; p<0.001 for Vmax (mg/kg/day) vs. CLPB (L/kg/day)]. Such a relatively strong linear relationship between the elimination parameters of both drugs indicates that Vmax might be predicted from CLPB and vice versa. Regression equations were established for estimating Vmax from CLPB, and vice versa in patients treated with combination of phenytoin and phenobarbital. These proposed equations can be of use in aiding individualized drug therapy.
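    A simple linear regression of this kind reduces to a one-line fit. The paired values below are fabricated solely to illustrate the calculation (they are not the study's patient data), and the regression form Vmax = a·CLPB + b mirrors the kind of predictive equation the authors propose.

```python
import numpy as np

# Hypothetical paired elimination parameters (illustrative numbers only,
# not the study's data): phenytoin Vmax (mg/kg/day) and phenobarbital
# clearance CLPB (L/kg/day) for 8 imaginary patients.
vmax = np.array([5.1, 6.0, 6.8, 7.5, 8.2, 9.1, 9.9, 10.6])
clpb = np.array([0.080, 0.092, 0.101, 0.118, 0.121, 0.138, 0.149, 0.160])

# Simple linear regression Vmax = a * CLPB + b, plus Pearson's r.
a, b = np.polyfit(clpb, vmax, 1)
r = np.corrcoef(clpb, vmax)[0, 1]

# Use the fitted line to predict Vmax from a measured CLPB of 0.110.
predicted_vmax = a * 0.110 + b
```
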

  12. The brain adjusts grip forces differently according to gravity and inertia: a parabolic flight experiment

    PubMed Central

    White, Olivier

    2015-01-01

    In everyday life, one of the most frequent activities involves accelerating and decelerating an object held in precision grip. In many contexts, humans scale and synchronize their grip force (GF), normal to the finger/object contact, in anticipation of the expected tangential load force (LF), resulting from the combination of the gravitational and the inertial forces. In such contexts, GF and LF are linearly coupled. A few studies have examined how we adjust the parameters (gain and offset) of this linear relationship. However, the question remains open as to how the brain adjusts GF when LF is generated by different combinations of weight and inertia. Here, we designed conditions to generate equivalent magnitudes of LF by independently varying mass and movement frequency. In a control experiment, we directly manipulated gravity in parabolic flights, while other factors remained constant. We show with a simple computational approach that, to adjust GF, the brain is sensitive to how LFs are produced at the fingertips. This provides clear evidence that the analysis of the origin of LF is performed centrally, and not only at the periphery. PMID:25717293

  13. Non-Linear Dynamics of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Esposito, L. W.

    2016-12-01

    Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. Stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response that pushes the system across thresholds leading to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding; this damps the relative velocity. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo morphology and spectroscopy: cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect: this gives the halo morphology; this requires energetic collisions (v ≈ 10 m/sec, with throw distances about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing Equation: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping explains both small and large particles at resonances. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain.
Approximating it as an asymmetric random walk with reflecting boundaries determines the power law index, using results of numerical simulations in the tidal environment. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn's rings a chaotic non-linear driven system?
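
The limit-cycle behaviour described in the abstract can be illustrated with a generic forced predator-prey system. This is a schematic sketch, not Esposito's actual ring equations: M stands in for aggregate mass (prey), V for velocity dispersion (predator), and the sinusoidal term is an ad hoc stand-in for once-per-orbit resonant forcing; all coefficients are illustrative assumptions.

```python
import math

def simulate(steps=20000, dt=0.001, eps=0.2):
    """Euler-integrate a toy forced predator-prey system.

    M: 'prey' (aggregate mass), V: 'predator' (velocity dispersion).
    The sinusoid plays the role of periodic moon forcing; none of the
    coefficients are fitted to ring data.
    """
    M, V = 1.0, 0.5
    history = []
    for i in range(steps):
        t = i * dt
        forcing = 1.0 + eps * math.sin(2 * math.pi * t)  # once per 'orbit'
        dM = M * (1.0 - V)             # aggregates grow when stirring is low
        dV = V * (M - 1.0) * forcing   # collisions pump velocity dispersion
        M += dt * dM
        V += dt * dV
        history.append((M, V))
    return history

hist = simulate()
Ms = [m for m, _ in hist]
Vs = [v for _, v in hist]
```

The trajectory cycles around the equilibrium, with the "velocity" variable swinging between low and high values each cycle, qualitatively like the damping/stirring loop the abstract describes.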

  14. Non-Linear Dynamics of Saturn’s Rings

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.

    2015-11-01

Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. We find that stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response, pushing the system across thresholds that lead to persistent states. Some of this non-linearity is captured in a simple predator-prey model: periodic forcing from the moon causes streamline crowding; this damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity, and the limit cycle repeats each orbit. Summary of halo results: A predator-prey model for ring dynamics produces transient structures like ‘straw’ that can explain the halo structure and spectroscopy. This requires energetic collisions (v ≈ 10 m/s, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing equation: With the coordinate transformation z = M^(2/3), the predator-prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn's rings creates both high velocity dispersion and large aggregates at these distances, explaining both small and large particles observed there. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power-law index from results of numerical simulations in the tidal environment surrounding Saturn.
Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: are Saturn's rings a chaotic non-linear driven system?

  15. Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?

    NASA Astrophysics Data System (ADS)

    Halide, Halmar

    2017-01-01

We apply a simple linear multiple regression model called IndOzy for predicting ENSO at up to 7 seasonal lead times. The model uses five predictors, past seasonal Niño 3.4 ENSO indices selected using chaos theory, and is rolling-validated to give a one-step-ahead forecast. Model skill was evaluated against data from the May-June-July (MJJ) 2003 season to the November-December-January (NDJ) 2015/2016 season. Three skill measures, Pearson correlation, RMSE, and Euclidean distance, were used for forecast verification. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled on the IRI (International Research Institute) website. The simple model produced useful ENSO predictions only up to 3 seasonal leads, while the IRI statistical and dynamical models remained useful up to 4 and 6 seasonal leads, respectively. Even with its short-range prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature; both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. To improve its long-range skill, it is suggested that the simple IndOzy model incorporate a nonlinear technique such as an artificial neural network.
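
A minimal sketch of such a lagged-predictor regression follows. The five-lag choice mirrors the abstract; everything else (the synthetic AR(1) "index", window lengths) is an assumption, not the real Niño 3.4 data or the IndOzy implementation.

```python
import numpy as np

def lagged_forecast(series, n_lags=5, train_len=60):
    """Rolling one-step-ahead forecast from n_lags past values (OLS).

    A schematic stand-in for an IndOzy-style regression: at each step t,
    refit on all history before t, then predict series[t].
    """
    preds, actuals = [], []
    for t in range(train_len, len(series) - 1):
        # design matrix of lagged windows over the history before t
        X = np.array([series[i - n_lags:i] for i in range(n_lags, t)])
        y = series[n_lags:t]
        coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        x_new = np.r_[series[t - n_lags:t], 1.0]
        preds.append(float(x_new @ coef))
        actuals.append(series[t])
    return np.array(preds), np.array(actuals)

rng = np.random.default_rng(0)
x = np.zeros(120)
for i in range(1, 120):            # persistent AR(1) toy 'ENSO index'
    x[i] = 0.9 * x[i - 1] + 0.1 * rng.standard_normal()
preds, actuals = lagged_forecast(x)
```

On a persistent toy series the one-step forecasts correlate strongly with the truth; skill at longer leads would decay, as the abstract reports for the real model.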

  16. New simple spectrophotometric method for determination of the binary mixtures (atorvastatin calcium and ezetimibe; candesartan cilexetil and hydrochlorothiazide) in tablets.

    PubMed

    Belal, Tarek S; Daabees, Hoda G; Abdel-Khalek, Magdi M; Mahrous, Mohamed S; Khamis, Mona M

    2013-04-01

A new simple spectrophotometric method was developed for the determination of binary mixtures without prior separation. The method is based on the generation of ratio spectra of compound X by using a standard spectrum of compound Y as a divisor. The peak to trough amplitudes between two selected wavelengths in the ratio spectra are proportional to the concentration of X without interference from Y. The method was demonstrated by determination of two drug combinations. The first consists of the two antihyperlipidemics: atorvastatin calcium (ATV) and ezetimibe (EZE), and the second comprises the antihypertensives: candesartan cilexetil (CAN) and hydrochlorothiazide (HCT). For mixture 1, ATV was determined using 10 μg/mL EZE as the divisor to generate the ratio spectra, and the peak to trough amplitudes between 231 and 276 nm were plotted against ATV concentration. Similarly, by using 10 μg/mL ATV as divisor, the peak to trough amplitudes between 231 and 276 nm were found proportional to EZE concentration. Calibration curves were linear in the range 2.5-40 μg/mL for both drugs. For mixture 2, divisor concentration was 7.5 μg/mL for both drugs. CAN was determined using its peak to trough amplitudes at 251 and 277 nm, while HCT was estimated using the amplitudes between 251 and 276 nm. The measured amplitudes were linearly correlated to concentration in the ranges 2.5-50 and 1-30 μg/mL for CAN and HCT, respectively. The proposed spectrophotometric method was validated and successfully applied for the assay of both drug combinations in several laboratory-prepared mixtures and commercial tablets.
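
The ratio-spectra manipulation can be sketched with synthetic Gaussian absorption bands. The spectra below are hypothetical, not ATV/EZE data; only the 231/276 nm wavelength pair and the 10 μg/mL divisor concentration are taken from the abstract.

```python
import numpy as np

wl = np.linspace(200, 320, 500)                      # wavelength grid, nm

def band(center, width):                             # toy absorption band
    return np.exp(-((wl - center) / width) ** 2)

eps_X = band(240, 15) + 0.4 * band(290, 20)          # hypothetical spectra,
eps_Y = band(260, 18)                                # not real drug data

def amplitude(c_x, c_y, divisor_conc=10.0, lam1=231, lam2=276):
    """Peak-to-trough amplitude of the ratio spectrum at two wavelengths."""
    mixture = c_x * eps_X + c_y * eps_Y              # Beer-Lambert mixture
    ratio = mixture / (divisor_conc * eps_Y)         # divide by Y standard
    i1, i2 = np.abs(wl - lam1).argmin(), np.abs(wl - lam2).argmin()
    return ratio[i1] - ratio[i2]

# Y's contribution divides out to a constant, which the two-wavelength
# difference cancels, so the amplitude tracks c_x regardless of c_y:
a1 = amplitude(c_x=5.0, c_y=3.0)
a2 = amplitude(c_x=10.0, c_y=8.0)
```

Doubling the X concentration doubles the amplitude even though the Y concentration also changed, which is the interference-free linearity the method relies on.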

  17. New simple spectrophotometric method for determination of the binary mixtures (atorvastatin calcium and ezetimibe; candesartan cilexetil and hydrochlorothiazide) in tablets

    PubMed Central

    Belal, Tarek S.; Daabees, Hoda G.; Abdel-Khalek, Magdi M.; Mahrous, Mohamed S.; Khamis, Mona M.

    2012-01-01

    A new simple spectrophotometric method was developed for the determination of binary mixtures without prior separation. The method is based on the generation of ratio spectra of compound X by using a standard spectrum of compound Y as a divisor. The peak to trough amplitudes between two selected wavelengths in the ratio spectra are proportional to concentration of X without interference from Y. The method was demonstrated by determination of two drug combinations. The first consists of the two antihyperlipidemics: atorvastatin calcium (ATV) and ezetimibe (EZE), and the second comprises the antihypertensives: candesartan cilexetil (CAN) and hydrochlorothiazide (HCT). For mixture 1, ATV was determined using 10 μg/mL EZE as the divisor to generate the ratio spectra, and the peak to trough amplitudes between 231 and 276 nm were plotted against ATV concentration. Similarly, by using 10 μg/mL ATV as divisor, the peak to trough amplitudes between 231 and 276 nm were found proportional to EZE concentration. Calibration curves were linear in the range 2.5–40 μg/mL for both drugs. For mixture 2, divisor concentration was 7.5 μg/mL for both drugs. CAN was determined using its peak to trough amplitudes at 251 and 277 nm, while HCT was estimated using the amplitudes between 251 and 276 nm. The measured amplitudes were linearly correlated to concentration in the ranges 2.5–50 and 1–30 μg/mL for CAN and HCT, respectively. The proposed spectrophotometric method was validated and successfully applied for the assay of both drug combinations in several laboratory-prepared mixtures and commercial tablets. PMID:29403805

  18. Multi-Mode Analysis of Dual Ridged Waveguide Systems for Material Characterization

    DTIC Science & Technology

    2015-09-17

characterization is the process of determining the dielectric, magnetic, and magnetoelectric properties of a material. For simple (i.e., linear ...field expressions in terms of elementary functions (sines, cosines, exponentials and Bessel functions) and corresponding propagation constants of the...with material parameters ε0 and µ0. • The MUT is simple (linear, isotropic, homogeneous), and the sample has a uniform thickness. • The waveguide

  19. WE-G-18A-02: Calibration-Free Combined KV/MV Short Scan CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M; Loo, B; Bazalova, M

Purpose: To combine orthogonal kilo-voltage (kV) and mega-voltage (MV) projection data for short scan cone-beam CT to reduce imaging time on current radiation treatment systems, using a calibration-free gain correction method. Methods: Combining two orthogonal projection data sets for kV and MV imaging hardware can reduce the scan angle to as small as 110° (90° + fan) such that the total scan time is ∼18 seconds, or within a breath hold. To obtain an accurate reconstruction, the MV projection data is first corrected using linear regression on the redundant data from the start and end of the sinogram, and then the combined data is reconstructed using the FDK method. To correct for the different changes of attenuation coefficients in kV/MV between soft tissue and bone, the forward projections of the segmented bone and soft tissue from the first reconstruction in the redundant region are added to the linear regression model. The MV data is corrected again using the additional information from the segmented image, and combined with kV for a second FDK reconstruction. We simulated polychromatic 120 kVp (conventional a-Si EPID with CsI) and 2.5 MVp (prototype high-DQE MV detector) projection data with Poisson noise using the XCAT phantom. The gain correction and combined kV/MV short scan reconstructions were tested with head and thorax cases, and simple contrast-to-noise ratio measurements were made in a low-contrast pattern in the head. Results: The FDK reconstruction using the proposed gain correction method can effectively reduce artifacts caused by the differences of attenuation coefficients in the kV/MV data. The CNRs of the short scans for kV, MV, and kV/MV are 5.0, 2.6, and 3.4, respectively. The proposed gain correction method also works with truncated projections. Conclusion: A novel gain correction and reconstruction method was developed to generate short scan CBCT from orthogonal kV/MV projections.
This work is supported by NIH Grant 5R01CA138426-05.
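
The first, purely linear step of such a gain correction can be sketched as an ordinary least-squares fit over redundant rays; the subsequent bone/soft-tissue refinement described in the abstract is not reproduced. The data below are synthetic stand-ins, not real projections.

```python
import numpy as np

def gain_correct(mv, kv):
    """Fit kv ≈ a*mv + b on redundant rays, then map mv into the kv scale.

    'mv' and 'kv' are matched line integrals from the angular overlap at
    the start/end of the short scan (synthetic here, for illustration).
    """
    A = np.c_[mv, np.ones_like(mv)]
    coef, *_ = np.linalg.lstsq(A, kv, rcond=None)
    a, b = coef
    return a * mv + b

rng = np.random.default_rng(1)
kv = rng.uniform(0.5, 3.0, 200)                         # kV line integrals
mv = 0.55 * kv - 0.1 + 0.01 * rng.standard_normal(200)  # mismatched MV gain
mv_corr = gain_correct(mv, kv)
```

After the fit, the MV data agree with the kV scale up to the noise level, so the two sinograms can be concatenated before the FDK reconstruction.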

  20. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
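
The nested-loop-with-pruning idea can be sketched as follows. This is a simplified reading of the algorithm, not the authors' exact implementation: the outlier score is the distance to the k-th nearest neighbour, and an example is abandoned as soon as its running k-NN distance falls below the current top-n cutoff.

```python
import random

def top_outliers(points, k=3, n_out=2):
    """Nested-loop distance-based outlier search with a simple pruning rule."""
    data = points[:]
    random.shuffle(data)                  # random order makes pruning kick in fast
    cutoff = 0.0                          # score of the weakest current top outlier
    top = []                              # list of (score, point)
    for x in data:
        neighbors = []                    # k smallest distances seen so far
        pruned = False
        for y in data:
            if y is x:
                continue
            d = sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
            neighbors.append(d)
            neighbors.sort()
            neighbors = neighbors[:k]
            # the running k-NN distance only decreases; once it is below
            # the cutoff, x can never be a top outlier
            if len(neighbors) == k and neighbors[-1] < cutoff:
                pruned = True
                break
        if not pruned:
            top.append((neighbors[-1], x))
            top.sort(key=lambda t: t[0], reverse=True)
            top = top[:n_out]
            if len(top) == n_out:
                cutoff = top[-1][0]
    return [p for _, p in top]

random.seed(0)
cluster = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)]
data = cluster + [(5.0, 5.0), (-6.0, 2.0)]
outs = top_outliers(data, k=3, n_out=2)
```

Most cluster points are pruned after a handful of distance computations, which is the source of the near-linear average-case behaviour the abstract describes.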

  1. The role of model dynamics in ensemble Kalman filter performance for chaotic systems

    USGS Publications Warehouse

    Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.

    2011-01-01

The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model (corresponding to short-term growth) and Lyapunov vectors (corresponding to long-term growth). Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long- and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
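
For reference, a textbook stochastic EnKF analysis step (perturbed observations) in a linear-Gaussian toy setting. This is the generic update, unrelated to the specific chaotic models or divergence diagnostics studied in the paper.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_members); H: linear observation operator.
    The gain uses the ensemble sample covariance in place of the true one.
    """
    n_state, n_mem = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_mem - 1)                    # sample covariance
    S = H @ P @ H.T + obs_var * np.eye(H.shape[0])
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    perturbed = y_obs[:, None] + np.sqrt(obs_var) * rng.standard_normal(
        (H.shape[0], n_mem))
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(2)
ens = rng.standard_normal((2, 100)) + np.array([[5.0], [0.0]])  # prior far from obs
H = np.array([[1.0, 0.0]])                                       # observe x[0] only
y = np.array([0.0])
post = enkf_update(ens, y, H, obs_var=0.25, rng=rng)
```

The posterior ensemble mean of the observed component moves from the prior value toward the observation, weighted by the ratio of observation to ensemble variance.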

  2. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    NASA Astrophysics Data System (ADS)

    Cepowski, Tomasz

    2017-06-01

The paper presents mathematical relationships for forecasting the estimated main engine power of new container ships, based on data for vessels built in 2005-2015. The presented approximations estimate engine power from the length between perpendiculars and the number of containers the ship will carry, and were developed using simple linear regression and multivariate linear regression analysis. These relations have practical application for estimating the container ship engine power needed in preliminary parametric design of the ship. The analysis shows that multiple linear regression predicts the main engine power of a container ship more accurately than simple linear regression.
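
The comparison can be reproduced schematically with made-up data. The Lpp/TEU/power relationships below are assumptions for illustration, not the paper's fitted coefficients; the point is that the multiple regression, which nests the simple one, fits at least as well in-sample.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical container-ship data: Lpp [m], TEU capacity, power [kW]
lpp = rng.uniform(150, 400, 40)
teu = 0.12 * lpp ** 2 + rng.normal(0, 500, 40)
power = 55.0 * lpp + 4.0 * teu + rng.normal(0, 1500, 40)

def fit(X, y):
    """Least-squares fit with intercept; returns in-sample predictions."""
    A = np.c_[X, np.ones(len(y))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

pred_simple = fit(lpp[:, None], power)        # simple regression: Lpp only
pred_multi = fit(np.c_[lpp, teu], power)      # multiple regression: Lpp + TEU

def rmse(pred, y):
    return float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because TEU carries information about power that Lpp alone does not, the two-predictor model has the smaller residual error, mirroring the paper's conclusion.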

  3. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

The objective is to develop analytical tools capable of economically evaluating the cyclic, time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.
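
The "power law creep equation" referred to is presumably the standard Norton-Bailey form (an assumption; the report's exact constitutive law is not quoted here):

```latex
\dot{\varepsilon}_c = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right)
```

where $A$ and $n$ are material constants, $\sigma$ is the stress, $Q$ an activation energy, $R$ the gas constant, and $T$ the absolute temperature, consistent with the temperature-dependent material properties mentioned in the abstract.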

  4. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  5. Algebraic Bethe ansatz for the sℓ (2) Gaudin model with boundary

    NASA Astrophysics Data System (ADS)

    Cirilo António, N.; Manojlović, N.; Ragoucy, E.; Salom, I.

    2015-04-01

Following Sklyanin's proposal in the periodic case, we derive the generating function of the Gaudin Hamiltonians with boundary terms. Our derivation is based on the quasi-classical expansion of the linear combination of the transfer matrix of the XXX Heisenberg spin chain and the central element, the so-called Sklyanin determinant. The corresponding Gaudin Hamiltonians with boundary terms are obtained as the residues of the generating function. By defining the appropriate Bethe vectors, which yield a strikingly simple off-shell action of the generating function, we fully implement the algebraic Bethe ansatz, obtaining the spectrum of the generating function and the corresponding Bethe equations.

  6. Competing Thermodynamic and Dynamic Factors Select Molecular Assemblies on a Gold Surface

    NASA Astrophysics Data System (ADS)

    Haxton, Thomas K.; Zhou, Hui; Tamblyn, Isaac; Eom, Daejin; Hu, Zonghai; Neaton, Jeffrey B.; Heinz, Tony F.; Whitelam, Stephen

    2013-12-01

    Controlling the self-assembly of surface-adsorbed molecules into nanostructures requires understanding physical mechanisms that act across multiple length and time scales. By combining scanning tunneling microscopy with hierarchical ab initio and statistical mechanical modeling of 1,4-substituted benzenediamine (BDA) molecules adsorbed on a gold (111) surface, we demonstrate that apparently simple nanostructures are selected by a subtle competition of thermodynamics and dynamics. Of the collection of possible BDA nanostructures mechanically stabilized by hydrogen bonding, the interplay of intermolecular forces, surface modulation, and assembly dynamics select at low temperature a particular subset: low free energy oriented linear chains of monomers and high free energy branched chains.

  7. Combining two-directional synthesis and tandem reactions. Part 21: Exploitation of a dimeric macrocycle for chain terminus differentiation and synthesis of an sp(3)-rich library.

    PubMed

    Storr, Thomas E; Cully, Sarah J; Rawling, Michael J; Lewis, William; Hamza, Daniel; Jones, Geraint; Stockman, Robert A

    2015-06-01

    The application of a tandem condensation/cyclisation/[3+2]-cycloaddition/elimination reaction gives an sp(3)-rich tricyclic pyrazoline scaffold with two ethyl esters in a single step from a simple linear starting material. The successive hydrolysis and cyclisation (with Boc anhydride) of these 3-dimensional architectures, generates unprecedented 16-membered macrocyclic bisanhydrides (characterised by XRD). Selective amidations could then be achieved by ring opening with a primary amine followed by HATU-promoted amide coupling to yield an sp(3)-rich natural product-like library. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Inhomogeneous structure in the chromospheres of dwarf M stars

    NASA Technical Reports Server (NTRS)

    Turner, N. J.; Cram, L. E.; Robinson, R. D.

    1991-01-01

    Linear combinations of observed spectra of the H-alpha and Ca-II resonance and IR lines from the chromospheres of a quiet (Gl 1) and an active (Gl 735) dwarf-M star are compared with the corresponding spectra from a star of intermediate activity (Gl 887). It is shown that the intermediate spectra cannot be explained as a simple juxtaposition of the extreme chromospheric states. It is concluded that the range of observed strengths of chromospheric activity indicators in dwarf-M stars is due, at least in part, to changes in the radial structure of the chromospheric heating function and not to changes in the area filling factor.
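
The juxtaposition test amounts to a one-parameter least-squares fit of the intermediate spectrum against the two extremes. A toy version with synthetic line profiles (illustrative only, not the Gl 1/Gl 735/Gl 887 data):

```python
import numpy as np

wl = np.linspace(0, 1, 200)
quiet = 1.0 - 0.8 * np.exp(-((wl - 0.5) / 0.05) ** 2)   # toy absorption line
active = 1.0 + 0.6 * np.exp(-((wl - 0.5) / 0.05) ** 2)  # toy emission line

def best_alpha(observed):
    """Least-squares weight for observed ≈ a*quiet + (1-a)*active."""
    d = quiet - active
    return float(np.dot(observed - active, d) / np.dot(d, d))

# a spectrum that truly is a juxtaposition is recovered exactly ...
mix = 0.7 * quiet + 0.3 * active
a = best_alpha(mix)

# ... while a profile with different structure (here, a broader line)
# leaves a residual no linear combination can remove
other = 1.0 - 0.2 * np.exp(-((wl - 0.5) / 0.15) ** 2)
a_other = best_alpha(other)
resid = other - (a_other * quiet + (1 - a_other) * active)
```

A nonzero best-fit residual is the kind of evidence the abstract invokes: the intermediate star's spectrum is not a simple area-weighted mix of the quiet and active states.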

  9. Spatial-mode switchable ring fiber laser based on low mode-crosstalk all-fiber mode MUX/DEMUX

    NASA Astrophysics Data System (ADS)

    Ren, Fang; Yu, Jinyi; Wang, Jianping

    2018-05-01

We report an all-fiber ring laser that emits linearly polarized (LP) modes based on the intracavity all-fiber mode multiplexer/demultiplexer (MUX/DEMUX). Multiple LP modes in the ring fiber laser are generated by taking advantage of the mode MUX/DEMUX. The all-fiber mode MUX/DEMUX are composed of cascaded mode-selective couplers (MSCs). The output lasing mode of the ring fiber laser can be switched among the three lowest-order LP modes by employing a combination of a mode MUX and a simple N × 1 optical switch. The slope efficiencies, optical spectra, and mode profiles are measured.

  10. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask whether occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  11. Estimation of Thalamocortical and Intracortical Network Models from Joint Thalamic Single-Electrode and Cortical Laminar-Electrode Recordings in the Rat Barrel System

    PubMed Central

    Blomquist, Patrick; Devor, Anna; Indahl, Ulf G.; Ulbert, Istvan; Einevoll, Gaute T.; Dale, Anders M.

    2009-01-01

    A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation function are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different. 
While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population. PMID:19325875
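
The stated link between exponential-kernel integral equations and the differential firing-rate form is the standard one (written schematically; the paper's exact kernels and activation functions are not reproduced):

```latex
r(t)=\int_{-\infty}^{t}\frac{1}{\tau}\,e^{-(t-s)/\tau}\,F\!\big(h(s)\big)\,ds
\quad\Longleftrightarrow\quad
\tau\,\frac{dr}{dt}=-r(t)+F\!\big(h(t)\big)
```

Differentiating the integral form recovers the ODE directly, which is why exponentially decaying coupling kernels allow the "straightforward transformation" to the differential-equation formulation mentioned above.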

  12. Commande optimale minimisant la consommation d'energie d'un drone utilise comme relai de communication

    NASA Astrophysics Data System (ADS)

    Mechirgui, Monia

The purpose of this project is to implement an optimal control regulator, particularly the linear quadratic regulator, to control the position of an unmanned aerial vehicle known as a quadrotor. This type of UAV has a symmetrical and simple structure; thus, its control is relatively easy compared to conventional helicopters. Optimal control can be shown to be an ideal way to reconcile tracking performance and energy consumption. In practice the linearity requirements are not met, but elaborations of the linear quadratic regulator have been used in many nonlinear applications with good results. The linear quadratic controller used in this thesis is presented in two forms: simple, and adapted to the state of charge of the battery. Based on the traditional structure of the linear quadratic regulator, we introduce a new criterion which relies on the state of charge of the battery in order to optimize energy consumption. This command is intended to monitor and maintain the desired trajectory during several maneuvers while minimizing energy consumption. Both the simple and the adapted linear quadratic controllers are implemented in Simulink in discrete time. The model simulates the dynamics and control of a quadrotor. Performance and stability of the system are analyzed with several tests, from simple hover to complex closed-loop trajectories.
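
A generic discrete-time LQR, of the kind the thesis builds on, can be sketched via backward Riccati iteration on a toy double-integrator. The quadrotor model and the battery-state-of-charge weighting are not reproduced; the plant and weights below are assumptions.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via backward Riccati iteration.

    Minimizes sum of x'Qx + u'Ru for x_{k+1} = A x_k + B u_k, u = -K x.
    Generic textbook recursion, iterated to (approximate) convergence.
    """
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# double-integrator toy plant (position, velocity), dt = 0.1
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# closed loop x_{k+1} = (A - B K) x_k drives the state to the origin
x = np.array([1.0, 0.0])
for _ in range(200):
    x = (A - B @ K) @ x
```

A battery-adapted variant, as in the thesis, would modulate Q and R with the state of charge; the recursion itself is unchanged.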

  13. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
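
Schematically, such a mixed formulation pairs the momentum equation with a pressure rate equation (notation and signs assumed here; see the paper for the precise stabilized form):

```latex
\rho\,\dot{\boldsymbol{v}}=\nabla\cdot\boldsymbol{\sigma}(\boldsymbol{u},p)+\rho\,\boldsymbol{b},
\qquad
\frac{\dot{p}}{\kappa}+\nabla\cdot\boldsymbol{v}=0
```

with $\kappa$ the bulk modulus; as $\kappa\to\infty$ the second equation enforces the incompressibility constraint $\nabla\cdot\boldsymbol{v}=0$, which is why the pressure must be treated as an independent, stabilized unknown on linear tetrahedra.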

  14. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi

Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  15. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  16. A simple scheme for magnetic balance in four-component relativistic Kohn-Sham calculations of nuclear magnetic resonance shielding constants in a Gaussian basis.

    PubMed

    Olejniczak, Małgorzata; Bast, Radovan; Saue, Trond; Pecul, Magdalena

    2012-01-07

We report the implementation of nuclear magnetic resonance (NMR) shielding tensors within the four-component relativistic Kohn-Sham density functional theory including non-collinear spin magnetization and employing London atomic orbitals to ensure gauge origin independent results, together with a new and efficient scheme for assuring correct balance between the large and small components of a molecular four-component spinor in the presence of an external magnetic field (simple magnetic balance). To test our formalism we have carried out calculations of NMR shielding tensors for the HX series (X = F, Cl, Br, I, At), the Xe atom, and the Xe dimer. The advantage of the simple magnetic balance scheme combined with the use of London atomic orbitals is the fast convergence of results (when compared with restricted kinetic balance) and the elimination of linear dependencies in the basis set (when compared to unrestricted kinetic balance). The effect of including spin magnetization in the description of the NMR shielding tensor has been found to be important for hydrogen atoms in heavy HX molecules, causing an increase of about 10% in isotropic values, but negligible for heavy atoms.

  17. A powerful and flexible approach to the analysis of RNA sequence count data

    PubMed Central

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.

    2011-01-01

    Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
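
    The beta-binomial generalized linear model at the heart of approach (i) can be sketched in a few lines. This is a minimal illustration with invented counts and a logit link, not the BBSeq implementation (which additionally models the overdispersion as a function of the mean):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# invented toy data: reads for one gene (y) out of per-sample totals (n)
y = np.array([12, 15, 9, 30, 28, 35])
n = np.array([1000, 1100, 900, 1000, 1050, 980])
group = np.array([0, 0, 0, 1, 1, 1])        # two-condition design factor

def negloglik(params):
    b0, b1, log_phi = params
    pi = 1.0 / (1.0 + np.exp(-(b0 + b1 * group)))   # logit link for the mean
    phi = np.exp(log_phi)                           # overdispersion > 0
    # beta-binomial parameterized through mean pi and dispersion phi
    return -betabinom.logpmf(y, n, pi / phi, (1.0 - pi) / phi).sum()

fit = minimize(negloglik, x0=[-4.0, 0.0, np.log(0.01)], method="Nelder-Mead")
b0_hat, b1_hat, _ = fit.x                   # b1_hat: log-odds group effect
```

With these toy counts the fitted group effect is positive, reflecting the higher proportions in the second condition; a likelihood-ratio or Wald test on b1 would give the per-gene significance.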

  18. Rational design and dynamics of self-propelled colloidal bead chains: from rotators to flagella.

    PubMed

    Vutukuri, Hanumantha Rao; Bet, Bram; van Roij, René; Dijkstra, Marjolein; Huck, Wilhelm T S

    2017-12-01

    The quest for designing new self-propelled colloids is fuelled by the demand for simple experimental models to study the collective behaviour of their more complex natural counterparts. Most synthetic self-propelled particles move by converting the input energy into translational motion. In this work we address the question of whether simple self-propelled spheres can assemble into more complex structures that exhibit rotational motion, possibly coupled with translational motion as in flagella. We exploit a combination of induced dipolar interactions and a bonding step to create permanent linear bead chains, composed of self-propelled Janus spheres, with a well-controlled internal structure. Next, we study how flexibility between individual swimmers in a chain affects its swimming behaviour. Permanent rigid chains showed only active rotational or spinning motion, whereas longer semi-flexible chains showed both translational and rotational motion, resembling flagella-like motion, in the presence of the fuel. Moreover, we are able to reproduce our experimental results using numerical calculations with a minimal model, which includes full hydrodynamic interactions with the fluid. Our method is general and opens a new way to design novel self-propelled colloids with complex swimming behaviours, using different complex starting building blocks in combination with the flexibility between them.

  19. Simultaneous Determination of Eight Hypotensive Drugs of Various Chemical Groups in Pharmaceutical Preparations by HPLC-DAD.

    PubMed

    Stolarczyk, Mariusz; Hubicka, Urszula; Żuromska-Witek, Barbara; Krzek, Jan

    2015-01-01

    A new sensitive, simple, rapid, and precise HPLC method with diode array detection has been developed for the separation and simultaneous determination of hydrochlorothiazide, furosemide, torasemide, losartan, quinapril, valsartan, spironolactone, and canrenone in combined pharmaceutical dosage forms. The chromatographic analysis of the tested drugs was performed on an ACE C18, 100 Å, 250×4.6 mm, 5 μm particle size column with a 0.05 M phosphate buffer (pH=3.00)-acetonitrile-methanol (30+20+50 v/v/v) mobile phase at a flow rate of 1.0 mL/min. The column was thermostatted at 25°C. UV detection was performed at 230 nm. Analysis time was 10 min. The developed method meets the acceptance criteria for specificity, linearity, sensitivity, accuracy, and precision. The proposed method was successfully applied for the determination of the studied drugs in the selected combined dosage forms.

  20. Experimental demonstration of time- and mode-division multiplexed passive optical network

    NASA Astrophysics Data System (ADS)

    Ren, Fang; Li, Juhao; Tang, Ruizhi; Hu, Tao; Yu, Jinyi; Mo, Qi; He, Yongqi; Chen, Zhangyuan; Li, Zhengbin

    2017-07-01

    A time- and mode-division multiplexed passive optical network (TMDM-PON) architecture is proposed, in which each optical network unit (ONU) communicates with the optical line terminal (OLT) independently, utilizing both different time slots and switched optical linearly polarized (LP) spatial modes. A combination of a mode multiplexer/demultiplexer (MUX/DEMUX) and a simple N × 1 optical switch is employed to select the specific LP mode in each ONU. A mode-insensitive power splitter is used for signal broadcast/combination between the OLT and ONUs. We propose a dynamic mode and time-slot assignment scheme for TMDM-PON based on inter-ONU priority rating, and investigate the resulting time delay and packet-loss ratio by simulation. Moreover, we experimentally demonstrate 2-mode TMDM-PON transmission over 10 km of FMF with a 10-Gb/s on-off keying (OOK) signal and direct detection.

  1. Comprehensive Chemical Fingerprinting of High-Quality Cocoa at Early Stages of Processing: Effectiveness of Combined Untargeted and Targeted Approaches for Classification and Discrimination.

    PubMed

    Magagna, Federico; Guglielmetti, Alessandro; Liberto, Erica; Reichenbach, Stephen E; Allegrucci, Elena; Gobino, Guido; Bicchi, Carlo; Cordero, Chiara

    2017-08-02

    This study investigates the chemical information in the volatile fractions of high-quality cocoa (Theobroma cacao L., Malvaceae) from different origins (Mexico, Ecuador, Venezuela, Colombia, Java, Trinidad, and São Tomé) produced for fine chocolate. This study explores the evolution of the entire pattern of volatiles in relation to cocoa processing (raw, roasted, steamed, and ground beans). Advanced chemical fingerprinting (e.g., combined untargeted and targeted fingerprinting) with comprehensive two-dimensional gas chromatography coupled with mass spectrometry allows advanced pattern recognition for classification, discrimination, and sensory-quality characterization. The entire data set is analyzed for 595 reliable two-dimensional peak regions, including 130 known analytes and 13 potent odorants. Multivariate analysis with unsupervised exploration (principal component analysis) and simple supervised discrimination methods (Fisher ratios and linear regression trees) reveals informative patterns of similarities and differences and identifies characteristic compounds related to sample origin and manufacturing step.

  2. Bio-Inspired Asynchronous Pixel Event Tricolor Vision Sensor.

    PubMed

    Lenero-Bardallo, Juan Antonio; Bryn, D H; Hafliger, Philipp

    2014-06-01

    This article investigates the potential of the first-ever prototype of a vision sensor that combines tricolor stacked photodiodes with the bio-inspired asynchronous pixel event communication protocol known as Address Event Representation (AER). The stacked photodiodes are implemented in a 22 × 22 pixel array in a standard STM 90 nm CMOS process. Dynamic range is larger than 60 dB and the pixel fill factor is 28%. The pixels employ either simple pulse frequency modulation (PFM) or a Time-to-First-Spike (TFS) mode. A heuristic linear combination of the chip's inherent pseudo colors serves to approximate an RGB color representation. Furthermore, the sensor outputs can be processed to represent radiation in the near-infrared (NIR) band without employing external filters, and to color-encode direction of motion due to an asymmetry in the update rates of the different diode layers.
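
    A heuristic linear combination of this kind amounts to a fixed 3 × 3 mixing matrix applied per pixel. A minimal sketch, in which the matrix entries are hypothetical placeholders rather than the chip's calibrated values:

```python
import numpy as np

# hypothetical 3x3 mixing matrix: rows produce R, G, B from the responses of
# the top/middle/bottom stacked diodes (deeper junctions see redder light);
# the entries are placeholders, not the prototype's calibrated values
M = np.array([[-0.2,  0.1,  1.1],
              [ 0.1,  1.0, -0.3],
              [ 1.2, -0.4,  0.0]])

def pseudo_to_rgb(stack):
    """Map an (H, W, 3) stacked-diode image to approximate RGB in [0, 1]."""
    return np.clip(stack @ M.T, 0.0, 1.0)

frame = np.random.default_rng(1).random((22, 22, 3))  # 22 x 22 array, as on chip
rgb = pseudo_to_rgb(frame)
```

The same per-pixel linear map, with a different choice of weights, could emphasize the deepest-junction response to approximate an NIR channel instead.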

  3. Decentralized Control of Sound Radiation using a High-Authority/Low-Authority Control Strategy with Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    This paper describes a combined control strategy designed to reduce sound radiation from stiffened aircraft-style panels. The control architecture uses robust active damping in addition to high-authority linear quadratic Gaussian (LQG) control. Active damping is achieved using direct velocity feedback with triangularly shaped anisotropic actuators and point velocity sensors. While active damping is simple and robust, stability is guaranteed at the expense of performance. Therefore the approach is often referred to as low-authority control. In contrast, LQG control strategies can achieve substantial reductions in sound radiation. Unfortunately, the unmodeled interaction between neighboring control units can destabilize decentralized control systems. Numerical simulations show that combining active damping and decentralized LQG control can be beneficial. In particular, augmenting the in-bandwidth damping supplements the performance of the LQG control strategy and reduces the destabilizing interaction between neighboring control units.

  4. Automated vehicle detection in forward-looking infrared imagery.

    PubMed

    Der, Sandor; Chan, Alex; Nasrabadi, Nasser; Kwon, Heesung

    2004-01-10

    We describe an algorithm for the detection and clutter rejection of military vehicles in forward-looking infrared (FLIR) imagery. The detection algorithm is designed to be a prescreener that selects regions for further analysis and uses a spatial anomaly approach that looks for target-sized regions of the image that differ in texture, brightness, edge strength, or other spatial characteristics. The features are linearly combined to form a confidence image that is thresholded to find likely target locations. The clutter rejection portion uses target-specific information extracted from training samples to reduce the false alarms of the detector. The outputs of the clutter rejecter and detector are combined by a higher-level evidence integrator to improve performance over simple concatenation of the detector and clutter rejecter. The algorithm has been applied to a large number of FLIR imagery sets, and some of these results are presented here.
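
    The linear feature combination and thresholding step of the prescreener can be sketched as follows; the feature maps, weights, and threshold here are invented placeholders, not the algorithm's trained values:

```python
import numpy as np

def confidence_image(features, weights):
    """Linear combination of per-pixel feature maps into one confidence map."""
    return np.tensordot(weights, features, axes=1)

rng = np.random.default_rng(2)
features = rng.random((3, 64, 64))     # e.g. texture, brightness, edge strength
weights = np.array([0.5, 0.3, 0.2])    # illustrative weights, not trained ones
conf = confidence_image(features, weights)
candidates = np.argwhere(conf > 0.9)   # thresholded likely target locations
```

In the full system, only the pixel locations surviving the threshold would be passed to the clutter rejecter for target-specific analysis.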

  5. Plant cell wall characterization using scanning probe microscopy techniques

    PubMed Central

    Yarbrough, John M; Himmel, Michael E; Ding, Shi-You

    2009-01-01

    Lignocellulosic biomass is today considered a promising renewable resource for bioenergy production. A combined chemical and biological process is currently under consideration for the conversion of polysaccharides from plant cell wall materials, mainly cellulose and hemicelluloses, to simple sugars that can be fermented to biofuels. Native plant cellulose forms nanometer-scale microfibrils that are embedded in a polymeric network of hemicelluloses, pectins, and lignins; this explains, in part, the recalcitrance of biomass to deconstruction. The chemical and structural characteristics of these plant cell wall constituents remain largely unknown today. Scanning probe microscopy techniques, particularly atomic force microscopy and its application in characterizing plant cell wall structure, are reviewed here. We also further discuss future developments based on scanning probe microscopy techniques that combine linear and nonlinear optical techniques to characterize plant cell wall nanometer-scale structures, specifically apertureless near-field scanning optical microscopy and coherent anti-Stokes Raman scattering microscopy. PMID:19703302

  6. The evolution of contralateral control of the body by the brain: is it a protective mechanism?

    PubMed

    Whitehead, Lorne; Banihani, Saleh

    2014-01-01

    Contralateral control, the arrangement whereby most of the human motor and sensory fibres cross the midline in order to provide control for contralateral portions of the body, presents a puzzle from an evolutionary perspective. What caused such a counterintuitive and complex arrangement to become dominant? In this paper we offer a new perspective on this question by showing that in a complex interactive control system there could be a significant net survival advantage with contralateral control, associated with the effect of injuries of intermediate severity. In such cases an advantage could arise from a combination of non-linear system response combined with correlations between injuries on the same side of the head and body. We show that a simple mathematical model of these ideas emulates such an advantage. Based on this model, we conclude that effects of this kind are a plausible driving force for the evolution of contralateral control.

  7. Chemical association in simple models of molecular and ionic fluids. III. The cavity function

    NASA Astrophysics Data System (ADS)

    Zhou, Yaoqi; Stell, George

    1992-01-01

    Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a "linear" approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behavior involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, an equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.

  8. Iron oxide functionalized graphene oxide as an efficient sorbent for dispersive micro-solid phase extraction of sulfadiazine followed by spectrophotometric and mode-mismatched thermal lens spectrometric determination.

    PubMed

    Kazemi, Elahe; Dadfarnia, Shayessteh; Haji Shabani, Ali Mohammad; Abbasi, Amir; Rashidian Vaziri, Mohammad Reza; Behjat, Abbas

    2016-01-15

    A simple and rapid dispersive micro-solid phase extraction (DMSPE) combined with mode-mismatched thermal lens spectrometry as well as fiber optic linear array spectrophotometry was developed for the separation, extraction and determination of sulfadiazine. Graphene oxide was synthesized using the modified Hummers method and functionalized with iron oxide nanoparticles by means of a simple one-step chemical coprecipitation method. The synthesized iron oxide functionalized graphene oxide was utilized as an efficient sorbent in DMSPE of sulfadiazine. The retained analyte was eluted with 180 µL of a 6:4 mixture of methanol/acetic acid solution and determined spectrophotometrically based on the formation of an azo dye through coupling with thenoyltrifluoroacetone. Under the optimized conditions, with spectrophotometric detection and a sample volume of 100 mL, the method exhibited a linear dynamic range of 3–80 µg L⁻¹ with a detection limit of 0.82 µg L⁻¹, an enrichment factor of 200, and relative standard deviations of 2.6% and 4.3% (n=6) at the 150 µg L⁻¹ level of sulfadiazine for intra- and inter-day analyses, respectively. With thermal lens spectrometry and a sample volume of 10 mL, the method exhibited a linear dynamic range of 1–800 µg L⁻¹ with a detection limit of 0.34 µg L⁻¹ and relative standard deviations of 3.1% and 5.4% (n=6) at the 150 µg L⁻¹ level for intra- and inter-day analyses, respectively. The method was successfully applied to the determination of sulfadiazine in milk, honey and water samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Wave-induced hydraulic forces on submerged aquatic plants in shallow lakes.

    PubMed

    Schutten, J; Dainty, J; Davy, A J

    2004-03-01

    Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot-size descriptors (plan-form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0.2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1.5 and shoot size). The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan-form area performed similarly well as shoot-size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species.

  10. Indirect glyphosate detection based on ninhydrin reaction and surface-enhanced Raman scattering spectroscopy

    NASA Astrophysics Data System (ADS)

    Xu, Meng-Lei; Gao, Yu; Li, Yali; Li, Xueliang; Zhang, Huanjie; Han, Xiao Xia; Zhao, Bing; Su, Liang

    2018-05-01

    Glyphosate is one of the most commonly used, non-selective herbicides in agriculture; it may directly pollute the environment and threaten human health. A simple and effective approach to assessing its damage to the natural environment is thus quite necessary. However, traditional chromatography-based detection methods usually suffer from complex pretreatment procedures. Herein, we propose a simple and sensitive method for the determination of glyphosate by combining the ninhydrin reaction and surface-enhanced Raman scattering (SERS) spectroscopy. The product (a purple dye, PD) of the ninhydrin reaction is found to be SERS-active and to correlate directly with the glyphosate concentration. The limit of detection of the proposed method for glyphosate is as low as 1.43 × 10⁻⁸ mol·L⁻¹ with a relatively wide linear concentration range (1.0 × 10⁻⁷–1.0 × 10⁻⁴ mol·L⁻¹), which demonstrates its great potential for rapid, highly sensitive determination of glyphosate in practical applications for the safety assessment of food and the environment.
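
    Quantification in a scheme like this reduces to a linear calibration and a 3σ detection-limit estimate. A hedged sketch with invented calibration data spanning a range comparable to the one reported (the sensitivity and noise level are placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
# invented calibration standards inside the linear range (mol/L)
conc = np.array([1e-7, 5e-7, 1e-6, 5e-6, 1e-5, 5e-5, 1e-4])
true_slope = 2.0e5                       # hypothetical sensitivity (a.u. per M)
signal = true_slope * conc + 0.01 + rng.normal(0.0, 1e-3, conc.size)

slope, intercept = np.polyfit(conc, signal, 1)
resid = signal - (slope * conc + intercept)
sigma = resid.std(ddof=2)                # residual standard deviation
lod = 3.0 * sigma / slope                # common 3-sigma detection limit
```

Running the fit recovers the assumed sensitivity, and the 3σ/slope rule gives a detection limit well below the lowest calibration standard, mirroring how such figures of merit are usually reported.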

  11. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination, subjected to bending loads, to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.

  12. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).

  13. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
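
    The cost of ignoring censoring, which motivates the grouped method, shows up clearly in a small simulation. This is an illustration of the problem rather than the authors' grouped regression, and the QTL effect size and censoring scheme are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
genotype = rng.integers(0, 3, n)                            # 0/1/2 QTL alleles
onset = 50.0 + 5.0 * genotype + rng.normal(0.0, 10.0, n)    # true age at onset
censor = rng.uniform(40.0, 90.0, n)                         # follow-up ends here
observed = np.minimum(onset, censor)
event = onset <= censor                                     # True if uncensored

# naive linear regression that ignores censoring: the slope is attenuated
naive_slope = np.polyfit(genotype, observed, 1)[0]
# dropping censored individuals instead is also biased, just differently
events_slope = np.polyfit(genotype[event], observed[event], 1)[0]
```

Here the true per-allele effect is 5 years, but the naive fit on the censored observations recovers a substantially smaller slope; a method that accounts for censoring is needed to remove this attenuation.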

  14. Sparse principal component analysis in medical shape modeling

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
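
    The "simple thresholding of small loadings" baseline mentioned above can be sketched in a few lines of NumPy; the data, dimensions, and threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
# 100 samples, 20 variables; the first five share a common latent factor
z = rng.normal(size=(100, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(100, 5)),
               rng.normal(size=(100, 15))])
X -= X.mean(axis=0)                       # center before PCA

# standard PCA via SVD: rows of Vt are the (holistic) loading vectors
U, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = Vt[0]                          # first principal component

# sparse PCA by simple thresholding: zero small loadings, then renormalize
sparse = np.where(np.abs(loadings) > 0.2, loadings, 0.0)
sparse /= np.linalg.norm(sparse)
```

On this toy example the thresholded component isolates the five correlated variables, which is the kind of easily identifiable effect sought; dedicated SPCA algorithms build the sparsity into the optimization instead of applying it after the fact.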

  15. Single-trial classification of motor imagery differing in task complexity: a functional near-infrared spectroscopy study

    PubMed Central

    2011-01-01

    Background For brain computer interfaces (BCIs), which may be valuable in neurorehabilitation, brain signals derived from mental activation can be monitored by non-invasive methods, such as functional near-infrared spectroscopy (fNIRS). Single-trial classification is important for this purpose, and this was the aim of the presented study. In particular, we aimed to investigate a combined approach: 1) offline single-trial classification of brain signals derived from a novel wireless fNIRS instrument; 2) use of motor imagery (MI) as the mental task, thereby discriminating between MI signals in response to different task complexities, i.e. simple and complex MI tasks. Methods 12 subjects were asked to imagine either a simple finger-tapping task using their right thumb or a complex sequential finger-tapping task using all fingers of their right hand. fNIRS was recorded over secondary motor areas of the contralateral hemisphere. Using Fisher's linear discriminant analysis (FLDA) and cross validation, we selected for each subject a best-performing feature combination consisting of 1) one out of three channels, 2) an analysis time interval ranging from 5-15 s after stimulation onset and 3) up to four Δ[O2Hb] signal features (Δ[O2Hb] mean signal amplitudes, variance, skewness and kurtosis). Results The results of our single-trial classification showed that using this simple combination of channels, time intervals and up to four Δ[O2Hb] signal features, it was possible to discriminate single trials of MI tasks differing in complexity, i.e. simple versus complex tasks (inter-task paired t-test p ≤ 0.001), over secondary motor areas with an average classification accuracy of 81%. Conclusions Although the classification accuracies look promising, they are nevertheless subject to considerable subject-to-subject variability.
In the discussion we address each of these aspects, their limitations for future approaches in single-trial classification and their relevance for neurorehabilitation. PMID:21682906
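
    The classification pipeline described above (per-trial Δ[O2Hb] summary features fed to Fisher's linear discriminant) can be sketched as follows; the synthetic "trials" and effect sizes are invented for illustration:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(6)

def trial_features(signal):
    """Mean amplitude, variance, skewness and kurtosis of one O2Hb trace."""
    return np.array([signal.mean(), signal.var(), skew(signal), kurtosis(signal)])

# synthetic trials: the "complex" task evokes a larger mean amplitude
X0 = np.array([trial_features(0.5 + 0.1 * rng.normal(size=100)) for _ in range(40)])
X1 = np.array([trial_features(0.8 + 0.1 * rng.normal(size=100)) for _ in range(40)])

# Fisher's linear discriminant: w = Sw^{-1} (mu1 - mu0)
Sw = np.cov(X0.T) + np.cov(X1.T)
w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
threshold = w @ (X0.mean(axis=0) + X1.mean(axis=0)) / 2.0

accuracy = (np.sum(X0 @ w < threshold) + np.sum(X1 @ w > threshold)) / 80.0
```

In practice the projection would be trained and evaluated with cross validation, as in the study, rather than scored on the training trials as in this sketch.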

  16. New Stability-Indicating RP-HPLC Method for Determination of Diclofenac Potassium and Metaxalone from their Combined Dosage Form

    PubMed Central

    Panda, Sagar Suman; Patanaik, Debasis; Ravi Kumar, Bera V. V.

    2012-01-01

    A simple, precise and accurate isocratic RP-HPLC stability-indicating assay method has been developed to determine diclofenac potassium and metaxalone in their combined dosage forms. Isocratic separation was achieved on a Hibar-C18, Lichrosphere-100® (250 mm × 4.6 mm i.d., particle size 5 μm) column at room temperature. The mobile phase consisted of methanol:water (80:20, v/v) at a flow rate of 1.0 ml/min, the injection volume was 20 μl, and UV detection was carried out at 280 nm. The drug was subjected to acid and alkali hydrolysis, oxidation, photolysis and heat as stress conditions. The method was validated for specificity, linearity, precision, accuracy, robustness and system suitability. The method was linear in the drug concentration ranges of 2.5–30 μg/ml and 20–240 μg/ml for diclofenac potassium and metaxalone, respectively. The precision (RSD) of six samples was 0.83 and 0.93% for repeatability, and the intermediate precision (RSD) among six sample preparations was 1.63 and 0.49% for diclofenac potassium and metaxalone, respectively. The mean recoveries were between 100.99–102.58% and 99.97–100.01% for diclofenac potassium and metaxalone, respectively. The proposed method can be used successfully for routine analysis of the drug in bulk and combined pharmaceutical dosage forms. PMID:22396909

  17. Unequal-Arm Interferometry and Ranging in Space

    NASA Technical Reports Server (NTRS)

    Tinto, Massimo

    2005-01-01

    Space-borne interferometric gravitational wave detectors, sensitive in the low-frequency (millihertz) band, will fly in the next decade. In these detectors the spacecraft-to-spacecraft light-travel-times will necessarily be unequal, time-varying, and (due to aberration) have different time delays on up- and down-links. By using knowledge of the inter-spacecraft light-travel-times and their time evolution, it is possible to cancel, in post-processing, the otherwise dominant laser phase noise and obtain a variety of interferometric data combinations sensitive to gravitational radiation. This technique, which has been named Time-Delay Interferometry (TDI), can be implemented with constellations of three or more formation-flying spacecraft that coherently track each other. As an example application we consider the Laser Interferometer Space Antenna (LISA) mission and show that TDI combinations can be synthesized by properly time-shifting and linearly combining the phase measurements performed on board the three spacecraft. Since TDI exactly suppresses the laser noises when the delays coincide with the light-travel-times, we then show that TDI can also be used for estimating the time delays needed for its implementation. This is done by performing a post-processing non-linear minimization procedure, which provides an effective, powerful, and simple way of measuring the inter-spacecraft light-travel-times. This processing technique, named Time-Delay Interferometric Ranging (TDIR), is highly accurate in estimating the time delays and allows TDI to be implemented without the need for a dedicated ranging subsystem.
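
    The core TDI idea (time-shift and linearly combine phase measurements so that the laser noise cancels when the delays match the light-travel-times) can be demonstrated for a simplified unequal-arm Michelson with integer-sample delays. This toy model ignores aberration, secondary noises and time-varying arms:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
p = rng.normal(size=N)          # laser phase noise (the dominant noise source)
L1, L2 = 33, 57                 # unequal one-way arm delays, in samples

def delay(x, d):
    """Delay a time series by d samples (zero-padded at the start)."""
    out = np.zeros_like(x)
    out[d:] = x[:-d]
    return out

# round-trip Michelson phase measurements, each dominated by p
y1 = delay(p, 2 * L1) - p
y2 = delay(p, 2 * L2) - p

# TDI X combination: time-shift and linearly combine the two measurements
X = (y1 + delay(y2, 2 * L1)) - (y2 + delay(y1, 2 * L2))
residual = np.max(np.abs(X[2 * (L1 + L2):]))   # skip the start-up transient
```

Each raw measurement is swamped by laser noise, yet the combination is zero to numerical precision; mismatched delays would leave a residual, which is exactly what a TDIR-style minimization exploits to estimate the travel times.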

  18. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton corrections. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
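
    A minimal instance of the quadratic loss penalty trajectory, together with a first-order extrapolation in the penalty parameter in the spirit described above; the two-variable quadratic program is invented for illustration:

```python
import numpy as np

# minimize f(x) = x1^2 + 2*x2^2  subject to  c(x) = x1 + x2 - 1 = 0,
# via the quadratic loss penalty  f(x) + c(x)^2 / (2*mu)  with mu -> 0
a = np.array([1.0, 1.0])          # gradient of the linear constraint
H = np.diag([2.0, 4.0])           # Hessian of the objective

def penalized_minimizer(mu):
    """Exact minimizer of the penalized subproblem (it stays quadratic)."""
    return np.linalg.solve(H + np.outer(a, a) / mu, a / mu)

trajectory = [penalized_minimizer(mu) for mu in (1.0, 1e-1, 1e-2, 1e-4)]
x_star = np.array([2.0 / 3.0, 1.0 / 3.0])     # exact constrained minimizer

# first-order extrapolation in mu: cancels the O(mu) term of the trajectory,
# giving a much better estimate than either penalized solution alone
x_a, x_b = penalized_minimizer(1e-2), penalized_minimizer(1e-3)
extrapolated = (10.0 * x_b - x_a) / 9.0
```

The trajectory approaches the constrained minimizer only linearly in mu, while the extrapolated point is accurate to O(mu²); the higher-order extrapolations studied in the paper push this idea to higher orders.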

  19. Quantum mechanical/molecular mechanical/continuum style solvation model: linear response theory, variational treatment, and nuclear gradients.

    PubMed

    Li, Hui

    2009-11-14

    Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas, as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductor-like polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas-phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.

  20. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
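
    The fit underlying these plots is a least-squares line constrained through the origin, whose slope plays the role of the recovered regression coefficient. A minimal sketch with invented data (the adjusted coordinates of a real plot would come from the Cox model residuals, which are not reproduced here):

```python
import numpy as np

# Regression through the origin: slope = sum(x*y) / sum(x^2).
# x and y stand in for the adjusted covariate and adjusted response
# coordinates of the pictured points (synthetic data for illustration).
rng = np.random.default_rng(1)
x = rng.normal(size=200)                    # adjusted covariate coordinates
beta = 0.7                                  # "true" coefficient, assumed
y = beta * x + 0.1 * rng.normal(size=200)   # adjusted response coordinates

slope = np.dot(x, y) / np.dot(x, x)         # no-intercept least squares
resid = y - slope * x
se = np.sqrt(np.dot(resid, resid) / (len(x) - 1) / np.dot(x, x))
print(round(slope, 2), round(se, 4))        # slope lands close to 0.7
```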

  1. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  2. Injuries of the head from backface deformation of ballistic protective helmets under ballistic impact.

    PubMed

    Rafaels, Karin A; Cutcliffe, Hattie C; Salzar, Robert S; Davis, Martin; Boggess, Brian; Bush, Bryan; Harris, Robert; Rountree, Mark Steve; Sanderson, Ellory; Campman, Steven; Koch, Spencer; Dale Bass, Cameron R

    2015-01-01

    Modern ballistic helmets defeat penetrating bullets by energy transfer from the projectile to the helmet, producing helmet deformation. This deformation may cause severe injuries without completely perforating the helmet, termed "behind armor blunt trauma" (BABT). As helmets become lighter, the likelihood of larger helmet backface deformation under ballistic impact increases. To characterize the potential for BABT, seven postmortem human head/neck specimens wearing a ballistic protective helmet were exposed to nonperforating impact, using a 9 mm, full metal jacket, 124 grain bullet with velocities of 400-460 m/s. An increasing trend of injury severity was observed, ranging from simple linear fractures to combinations of linear and depressed fractures. Overall, the ability to identify skull fractures resulting from BABT can be used in forensic investigations. Our results demonstrate a high risk of skull fracture due to BABT and necessitate the prevention of BABT as a design factor in future generations of protective gear. © 2014 American Academy of Forensic Sciences.

  3. Estimating Causal Effects with Ancestral Graph Markov Models

    PubMed Central

    Malinsky, Daniel; Spirtes, Peter

    2017-01-01

    We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244

  4. Programmable growth of branched silicon nanowires using a focused ion beam.

    PubMed

    Jun, Kimin; Jacobson, Joseph M

    2010-08-11

    Although significant progress has been made in being able to spatially define the position of material layers in vapor-liquid-solid (VLS) grown nanowires, less work has been carried out in deterministically defining the positions of nanowire branching points to facilitate more complicated structures beyond simple 1D wires. Work to date has focused on the growth of randomly branched nanowire structures. Here we develop a means for programmably designating nanowire branching points by means of focused ion beam-defined VLS catalytic points. This technique is repeatable without losing fidelity, allowing multiple rounds of branching-point definition followed by branch growth, resulting in complex structures. The single-crystal nature of this approach allows us to describe the resulting structures with linear combinations of base vectors in three-dimensional (3D) space. Finally, by etching the resulting 3D-defined wire structures, branched nanotubes were fabricated with interconnected nanochannels inside. We believe that the techniques developed here should comprise a useful tool for extending linear VLS nanowire growth to generalized 3D wire structures.

  5. Slope efficiency over 30% single-frequency ytterbium-doped fiber laser based on Sagnac loop mirror filter.

    PubMed

    Yin, Mojuan; Huang, Shenghong; Lu, Baole; Chen, Haowei; Ren, Zhaoyu; Bai, Jintao

    2013-09-20

    A high-slope-efficiency single-frequency (SF) ytterbium-doped fiber laser, based on a Sagnac loop mirror filter (LMF), was demonstrated. It combined a simple linear cavity with a Sagnac LMF that acted as a narrow-bandwidth filter to select the longitudinal modes, and a polarization controller was introduced to suppress the spatial hole-burning effect in the linear cavity. The system could operate in a stable SF mode oscillating at 1064 nm with a maximum output power of 32 mW. The slope efficiency was found to be primarily dependent on the reflectivity of the fiber Bragg grating. The slope efficiency in multi-longitudinal-mode operation was higher than 45%, and the highest slope efficiency of the single longitudinal mode we achieved was 33.8%. The power stability and spectrum stability were <2% and <0.1%, respectively, and the signal-to-noise ratio measured was around 60 dB.

  6. Minimal model for a hydrodynamic fingering instability in microroller suspensions

    NASA Astrophysics Data System (ADS)

    Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul

    2017-11-01

    We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.

  7. Novel and general approach to linear filter design for contrast-to-noise ratio enhancement of magnetic resonance images with multiple interfering features in the scene

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.

    1992-04-01

    Maximizing the minimum absolute contrast-to-noise ratios (CNRs) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its applications to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement for the smallest absolute CNR is obtained.
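
    The Gram-Schmidt step at the heart of this kind of linear-combination filter can be sketched as follows (signature vectors are invented; in the paper they would be the per-image intensities of each tissue feature): orthogonalizing the filter weights against the interfering-feature signatures cancels the interferers while keeping a nonzero response to the desired feature.

```python
import numpy as np

# Hypothetical 4-image scene sequence: each vector lists one feature's
# intensity across the sequence.
d  = np.array([1.0, 2.0, 0.5, 1.5])     # desired-feature signature
i1 = np.array([1.0, 0.0, 1.0, 0.0])     # interfering signature 1
i2 = np.array([0.0, 1.0, 0.0, 1.0])     # interfering signature 2

def orthogonalize(v, basis):
    """Subtract from v its projection onto each (already orthogonal) basis vector."""
    for b in basis:
        v = v - (np.dot(v, b) / np.dot(b, b)) * b
    return v

# Gram-Schmidt: orthogonalize the interferers, then project them out of d.
b1 = i1
b2 = orthogonalize(i2, [b1])
w = orthogonalize(d, [b1, b2])          # linear-combination filter weights

print(np.dot(w, i1), np.dot(w, i2))     # both ~0: interferers suppressed
print(np.dot(w, d))                     # nonzero: desired feature retained
```

Maximizing the minimum absolute CNR additionally requires noise-aware scaling of these weights, which the paper derives analytically; the sketch shows only the orthogonalization that makes the interferers vanish.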

  8. A Simple Piece of Apparatus to Aid the Understanding of the Relationship between Angular Velocity and Linear Velocity

    ERIC Educational Resources Information Center

    Unsal, Yasin

    2011-01-01

    One of the subjects that is confusing and difficult for students to fully comprehend is the concept of angular velocity and linear velocity. It is the relationship between linear and angular velocity that students find difficult; most students understand linear motion in isolation. In this article, we detail the design, construction and…

  9. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and a least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the representation proposed allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is its combination of the flexibility of image-adaptive irregular meshes with the simple form of pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).

  10. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
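
    The two complementary checks described above can be sketched with invented assay values (not the paper's NGS data): a Bland-Altman bias with limits of agreement, and a simple linear regression whose intercept and slope expose constant and proportional error, respectively.

```python
import numpy as np

# Synthetic comparison: a new assay with both a constant error (+1.0) and a
# proportional error (factor 1.05) relative to the validated reference.
ref = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])   # validated assay values
new = 1.05 * ref + 1.0                                # new assay values

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = new - ref
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# Simple linear regression of new on reference:
# intercept = constant error, (slope - 1) = proportional error.
slope, intercept = np.polyfit(ref, new, 1)
print(round(bias, 3), round(intercept, 3), round(slope, 3))
```

Note that R2 for this fit would be exactly 1 despite both error types being present, which is precisely why the abstract argues R2 alone is insufficient.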

  11. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
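
    The random-breakage ("broken-stick") baseline that the RLC formalism generalizes can be sketched in a few lines (genome length and break count are illustrative, not taken from the V-79 data): breaks placed uniformly at random partition the chromosome into fragments whose mean size is the chromosome length divided by the number of pieces.

```python
import numpy as np

# Classical random-breakage model: n_breaks uniform break positions on a
# chromosome of length genome_mbp give n_breaks + 1 fragments.
rng = np.random.default_rng(4)
genome_mbp = 100.0
n_breaks = 2000
breaks = np.sort(rng.uniform(0, genome_mbp, n_breaks))
fragments = np.diff(np.concatenate([[0.0], breaks, [genome_mbp]]))

mean_frag = fragments.mean()
print(round(mean_frag, 3))   # exactly genome_mbp / (n_breaks + 1)
```

Clustered DSBs, as produced by high-LET tracks, deviate from this baseline by concentrating several breaks near one track crossing, which is what shifts the fragment-size distribution toward the RLC form described in the abstract.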

  12. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.

  13. Techniques for detumbling a disabled space base

    NASA Technical Reports Server (NTRS)

    Kaplan, M. H.

    1973-01-01

    Techniques and conceptual devices for carrying out detumbling operations are examined, and progress in the development of these concepts is discussed. Devices which reduce tumble to simple spin through active linear motion of a small mass are described, together with a Module for Automatic Dock and Detumble (MADD) that could perform an orbital transfer from the shuttle in order to track and dock at a preselected point on the distressed craft. Once docked, MADD could apply torques by firing thrustors to detumble the passive vehicle. Optimum combinations of mass-motion and external devices for various situations should be developed. The need for completely formulating the automatic control logic of MADD is also emphasized.

  14. A numerical study of the axisymmetric Couette-Taylor problem using a fast high-resolution second-order central scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupferman, R.

    The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.

  15. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  16. Thermodynamic signatures for the existence of Dirac electrons in ZrTe 5

    DOE PAGES

    Nair, Nityan L.; Dumitrescu, Philipp T.; Channa, Sanyum; ...

    2017-09-12

    We combine transport, magnetization, and torque magnetometry measurements to investigate the electronic structure of ZrTe 5 and its evolution with temperature. At fields beyond the quantum limit, we observe a magnetization reversal from paramagnetic to diamagnetic response, which is characteristic of a Dirac semi-metal. We also observe a strong non-linearity in the magnetization that suggests the presence of additional low-lying carriers from other low-energy bands. Finally, we observe a striking sensitivity of the magnetic reversal to temperature that is not readily explained by simple band-structure models, but may be connected to a temperature-dependent Lifshitz transition proposed to exist in this material.

  17. Generation of continuously rotating polarization by combining cross-polarizations and its application in surface structuring.

    PubMed

    Lam, Billy; Zhang, Jihua; Guo, Chunlei

    2017-08-01

    In this study, we develop a simple but highly effective technique that generates a continuously varying polarization within a laser beam. This is achieved by having orthogonal linear polarizations on each side of the beam. By simply focusing such a laser beam, we can attain a gradually and continuously changing polarization within the entire Rayleigh range due to diffraction. To demonstrate this polarization distribution, we apply this laser beam onto a metal surface and create a continuously rotating laser induced periodic surface structure pattern. This technique provides a very effective way to produce complex surface structures that may potentially find applications, such as polarization modulators and metasurfaces.

  18. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
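
    The model-selection step described above can be sketched with synthetic data (units, ranges, and coefficients below are invented): fit a simple regression of suspended-sediment concentration (SSC) on turbidity, then a multiple regression adding streamflow, and compare residual errors as a stand-in for the MSPE criterion.

```python
import numpy as np

# Synthetic calibration data set: SSC depends on both turbidity and flow.
rng = np.random.default_rng(2)
turb = rng.uniform(10, 500, 80)                         # turbidity (FNU)
flow = rng.uniform(1, 50, 80)                           # streamflow (m^3/s)
ssc = 2.0 * turb + 5.0 * flow + rng.normal(0, 20, 80)   # SSC (mg/L)

def fit(X, y):
    """Ordinary least squares with intercept; returns coefficients and RMSE."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((y - A @ coef) ** 2))
    return coef, rmse

_, rmse_simple = fit(turb.reshape(-1, 1), ssc)          # turbidity only
_, rmse_multi = fit(np.column_stack([turb, flow]), ssc) # turbidity + flow
print(rmse_simple > rmse_multi)   # adding streamflow reduces residual error
```

In practice the guidelines also require the streamflow term to be statistically significant, not merely error-reducing, before the multiple-regression model is adopted.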

  19. Using crosscorrelation techniques to determine the impulse response of linear systems

    NASA Technical Reports Server (NTRS)

    Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.

    1993-01-01

    A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
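
    The identity behind this method is that for a white-noise input of unit variance, the input-output crosscorrelation of a linear time-invariant system equals its impulse response. A minimal numerical sketch (the impulse response and sample count are illustrative, not from the paper's circuit):

```python
import numpy as np

# Probe an "unknown" LTI system with white noise and recover h from the
# input-output crosscorrelation R_xy(k) = E[x(n) y(n+k)] = sigma^2 * h(k).
rng = np.random.default_rng(3)
h = np.array([1.0, 0.6, 0.3, 0.1])          # impulse response to recover
x = rng.normal(size=200_000)                # white-noise probe, sigma^2 = 1
y = np.convolve(x, h)[: len(x)]             # system output

lags = 4
r_xy = np.array([np.mean(x[: len(x) - k] * y[k:]) for k in range(lags)])
print(np.round(r_xy, 2))                    # approximates h
```

The finite record length limits the accuracy of the estimate, which mirrors the measurement-time limitation discussed for the hardware implementation.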

  20. Oriented and ordered mesoporous ZrO{sub 2}/TiO{sub 2} fibers with well-organized linear and spring structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Luyi, E-mail: zhuly@sdu.edu.cn; Liu, Benxue; Qin, Weiwei, E-mail: jiuyuan.1001@163.com

    Graphical abstract: The ultra-stable order mesoporous ZrO{sub 2}/TiO{sub 2} fibers with well-organized linear and spring structure and large surface area under higher temperatures were prepared by a simple EISA process. - Highlights: • The ZrO{sub 2}/TiO{sub 2} fibers were prepared by EISA process combined with steam heat-treatment. • The mesoporous ZrO{sub 2}/TiO{sub 2} fibers have well-organized linear and spring structure. • The fibers were composed of oval rod nanocrystals of ZrTiO{sub 4}. - Abstract: The ultra-stable order mesoporous ZrO{sub 2}/TiO{sub 2} fibers with well-organized linear and spring structure and large surface areas under higher temperatures were prepared by a (simplemore » evaporation-induced assembly) EISA process. The preparation, microstructures and formation processes were characterized by Fourier transformation infrared (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and N{sub 2} adsorption–absorption measurements. The fibers take on pinstripe configuration which is very orderly along or perpendicular to the axial direction of the fibers. The diameters of the pinstripe are in the region of 200–400 nm and arranges regularly, which are composed of oval rod nanocrystals of ZrTiO{sub 4}.« less

  1. A simple-shear rheometer for linear viscoelastic characterization of vocal fold tissues at phonatory frequencies.

    PubMed

    Chan, Roger W; Rodriguez, Maritza L

    2008-08-01

    Previous studies reporting the linear viscoelastic shear properties of the human vocal fold cover or mucosa have been based on torsional rheometry, with measurements limited to low audio frequencies, up to around 80 Hz. This paper describes the design and validation of a custom-built, controlled-strain, linear, simple-shear rheometer system capable of direct empirical measurements of viscoelastic shear properties at phonatory frequencies. A tissue specimen was subjected to simple shear between two parallel, rigid acrylic plates, with a linear motor creating a translational sinusoidal displacement of the specimen via the upper plate, and the lower plate transmitting the harmonic shear force resulting from the viscoelastic response of the specimen. The displacement of the specimen was measured by a linear variable differential transformer whereas the shear force was detected by a piezoelectric transducer. The frequency response characteristics of these system components were assessed by vibration experiments with accelerometers. Measurements of the viscoelastic shear moduli (G' and G") of a standard ANSI S2.21 polyurethane material and those of human vocal fold cover specimens were made, along with estimation of the system signal and noise levels. Preliminary results showed that the rheometer can provide valid and reliable rheometric data of vocal fold lamina propria specimens at frequencies of up to around 250 Hz, well into the phonatory range.

  2. A simple attitude control of quadrotor helicopter based on Ziegler-Nichols rules for tuning PD parameters.

    PubMed

    He, ZeFang; Zhao, Long

    2014-01-01

    An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that the quadrotor tends to be unstable. This problem is caused by the narrow definition domain of the attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller, with PD parameters tuned by Ziegler-Nichols rules, acting on the quadrotor's decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the attitude controller proposed in this paper is highly robust, and its control effect is better than that of two other nonlinear controllers. The nonlinear parts of those two controllers are the same as that of the proposed controller; their linear parts are a PID (proportional-integral-derivative) controller with parameters tuned by Ziegler-Nichols rules and a PD controller with parameters tuned by GA (genetic algorithms), respectively. Moreover, the proposed attitude controller is simple and easy to implement.
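
    The classic closed-loop ("ultimate") Ziegler-Nichols table referenced above gives PD gains directly from the ultimate gain Ku and oscillation period Tu, found by raising a proportional gain until sustained oscillation. The rule for a PD controller is Kp = 0.8*Ku with derivative time Td = Tu/8, so Kd = Kp*Td. A minimal sketch (the Ku/Tu values are illustrative, not the paper's):

```python
# Ziegler-Nichols closed-loop tuning rule for a PD controller.
def ziegler_nichols_pd(ku, tu):
    """Return (Kp, Kd) from Kp = 0.8*Ku and Td = Tu/8."""
    kp = 0.8 * ku
    td = tu / 8.0
    return kp, kp * td

# Example: hypothetical ultimate gain and period for one attitude axis.
kp, kd = ziegler_nichols_pd(ku=10.0, tu=0.5)
print(kp, kd)   # 8.0 0.5
```

These gains then multiply the angle error and its derivative in the linear part of the controller, after feedback linearization has decoupled the attitude dynamics.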

  3. Improved Noncoherent UWB Receiver for Implantable Biomedical Devices.

    PubMed

    Nagaraj, Santosh; Rassam, Faris G

    2016-10-01

    The purpose of this paper is to describe a novel noncoherent receiver architecture to improve the error performance of impulse-radio ultrawideband (IR-UWB) in bioimplanted devices. IR-UWB receivers based on energy detection are popular in biomedical applications owing to the low implementation cost/complexity and the high data rates that UWB can potentially support. Implanted devices suffer from severe frequency-dependent attenuation due to human blood and tissues, while most receivers in the literature are designed based on commonly used indoor wireless channel models. We propose a novel receiver design that is based on judiciously combining the energies in different bands of the signal spectrum with a weighted linear combiner. We derive the optimum coefficients of the combiner. The receiver retains almost all of the advantages of a conventional noncoherent detector, but can also compensate for attenuation properties of blood/tissue. The receiver design can be adapted to different implantation depths by simply varying the combiner weights. The receiver can also be considered to be a simple form of equalizer for noncoherent reception. Our simulations show about 2-dB improvement over other commonly used receivers. This receiver design is significant in that it can enhance critical battery life of implanted transmitters.
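
    The weighted linear combining of sub-band energies can be illustrated with a small sketch (the band energies, SNR values, and the maximal-ratio-style weighting are assumptions for illustration; the paper derives its own optimum coefficients): bands that are strongly attenuated by tissue receive smaller weights before the energies are summed into a decision statistic.

```python
import numpy as np

# Weighted linear combination of per-band detected energies, with weights
# proportional to each band's assumed SNR after tissue attenuation.
band_energy = np.array([4.0, 2.5, 0.8])   # detected energies in three sub-bands
band_snr = np.array([10.0, 4.0, 0.5])     # assumed per-band SNR (illustrative)

w = band_snr / band_snr.sum()             # normalized combiner weights
decision_stat = np.dot(w, band_energy)    # statistic fed to the detector
print(round(decision_stat, 3))
```

Changing the implantation depth changes the per-band attenuation, and hence only the weight vector, which is why the abstract notes the receiver adapts by varying the combiner weights alone.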

  4. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  5. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
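
    Both statistical tools named in this abstract are straightforward to compute; a minimal numpy sketch (illustrative only, not the authors' code):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman statistics for paired measurements from two assays:
    mean bias and 95% limits of agreement (bias +/- 1.96*SD of the
    pairwise differences)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = y - x
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def simple_linear_regression(x, y):
    """Ordinary least squares fit y = a + b*x; the intercept a estimates
    constant error and the slope b proportional error between assays."""
    b, a = np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)
    return a, b
```

    Simple linear regression suits serial-dilution data as described in the abstract; for method-comparison data where both assays carry measurement error, Deming (errors-in-variables) regression is the appropriate variant.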

  6. How Darcy's equation is linked to the linear reservoir at catchment scale

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.

    2017-04-01

    In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.
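
    The linear reservoir invoked above reduces to two lines of algebra: storage proportional to discharge, S = k·Q, combined with the recession water balance dS/dt = -Q, gives exponential decay of discharge. A minimal sketch (generic textbook relation, not code from the paper):

```python
import numpy as np

def linear_reservoir_discharge(q0, k, t):
    """Recession of a linear reservoir S = k*Q with no recharge:
    dS/dt = -Q  =>  Q(t) = Q0 * exp(-t/k), with k the reservoir
    timescale of the active groundwater body."""
    return q0 * np.exp(-np.asarray(t, float) / k)
```

    After one timescale (t = k) the discharge has decayed by a factor e, which is the signature exponential recession seen in catchment hydrographs.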

  7. The non-linear response of a muscle in transverse compression: assessment of geometry influence using a finite element model.

    PubMed

    Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien

    2012-01-01

    Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multi-variate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to identify Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a bio-faithful biomechanical response.

  8. A Simple Numerical Procedure for the Simulation of "Lifelike" Linear-Sweep Voltammograms

    NASA Astrophysics Data System (ADS)

    Bozzini, Benedetto P.

    2000-01-01

    Practical linear-sweep voltammograms seldom resemble the theoretical ones shown in textbooks. This is because several phenomena (activation, mass transport, ohmic resistance) control the kinetics over different potential ranges scanned during the potential sweep. These effects are generally treated separately in the didactic literature, yet they have never been "assembled" in a way that allows the educational use of real experiments. This makes linear-sweep voltammetric experiments almost unusable in the teaching of physical chemistry. A simple approach to the classroom description of "lifelike" experimental results is proposed in this paper. Analytical expressions of linear sweep voltammograms are provided. The actual numerical evaluations can be carried out with a pocket calculator. Two typical examples are executed and comparison with experimental data is described. This approach to teaching electrode kinetics has proved an effective tool to provide students with an insight into the effects of electrochemical parameters and operating conditions.

  9. Valuation of financial models with non-linear state spaces

    NASA Astrophysics Data System (ADS)

    Webber, Nick

    2001-02-01

    A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.

  10. MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Lamey, M; Anderson, R

    Purpose: Detailed functioning of linear accelerator physics is well known. Less well developed is the basic understanding of how the adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to the practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators for teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of an electron beam was modelled using load line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varied linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator with conical shape, and the time-averaged dose rate at a depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing klystron pulse voltage increased the dose rate to a peak, which then decreased as the beam energy was further increased, owing to the fixed pass-through energy of the bending magnet. Increasing accelerator beam current leads to a higher dose per pulse; however, the energy of the electron beam decreases due to beam loading, so the dose rate eventually reaches a maximum and then decreases as beam current is further increased. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.

  11. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.

  12. A Linear Theory for Inflatable Plates of Arbitrary Shape

    NASA Technical Reports Server (NTRS)

    McComb, Harvey G., Jr.

    1961-01-01

    A linear small-deflection theory is developed for the elastic behavior of inflatable plates of which Airmat is an example. Included in the theory are the effects of a small linear taper in the depth of the plate. Solutions are presented for some simple problems in the lateral deflection and vibration of constant-depth rectangular inflatable plates.

  13. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  14. Cosmological N -body simulations with generic hot dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N -body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  15. Cosmological N-body simulations with generic hot dark matter

    NASA Astrophysics Data System (ADS)

    Brandbyge, Jacob; Hannestad, Steen

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  16. Tear glucose detection combining microfluidic thread based device, amperometric biosensor and microflow injection analysis.

    PubMed

    Agustini, Deonir; Bergamini, Márcio F; Marcolino-Junior, Luiz Humberto

    2017-12-15

    Tear glucose analysis is an important alternative for the indirect, simple and less invasive monitoring of blood glucose levels. However, the high cost and complex manufacturing process of tear glucose analyzers, combined with the need to exchange the sensor after each analysis in disposable tests, prevent widespread application of tears in glucose monitoring. Here, we present the integration of a biosensor made by the electropolymerization of poly(toluidine blue O) (PTB) and glucose oxidase (GOx) with an easily assembled electroanalytical microfluidic device based on cotton threads, low-cost materials and measurements by microflow injection analysis (µFIA) through passive pumping, for performing tear glucose analyses in a simple, rapid and inexpensive way. A high stability between analyses (RSD = 2.54%) and among different systems (RSD = 3.13%) was obtained for the determination of glucose, in addition to a wide linear range between 0.075 and 7.5 mmol L⁻¹ and a limit of detection of 22.2 µmol L⁻¹. The proposed method was efficiently employed in the determination of tear glucose in non-diabetic volunteers, showing a close correlation with their blood glucose levels, simplifying and reducing the costs of the analyses and making tear glucose monitoring more accessible to the population. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Development of orientation tuning in simple cells of primary visual cortex

    PubMed Central

    Moore, Bartlett D.

    2012-01-01

    Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631

  18. Development of HPTLC-UV absorption densitometry method for the analysis of alprazolam and sertraline in combination and its application in the evaluation of marketed preparations.

    PubMed

    Venkateswarlu, K; Venisetty, R K; Yellu, N R; Keshetty, S; Pai, M G

    2007-09-01

    A new simple, sensitive, and reproducible high-performance thin-layer chromatography method for the estimation of alprazolam and sertraline in combination is developed using silica gel plates with fluorescent indicators. The system is equipped with an automated sample applicator, and the detection was performed at 254 nm by using UV absorption densitometry. The mobile phase consists of carbon tetrachloride, methanol, acetone, and ammonia in the ratio 12:3:5:0.1. The retention factor values for alprazolam and sertraline are found to be 0.52 and 0.70, respectively. The limit of detection of alprazolam and sertraline in the mixture of given proportion is observed to be 0.05 microg/mL and 2.5 microg/mL and the limit of quantitation is 0.2 microg/mL and 10 microg/mL, respectively. The method has shown good linearity in the range of 0.2 microg/mL to 0.65 microg/mL for alprazolam (R² > 0.9953) and 10 microg/mL to 32.5 microg/mL for sertraline (R² > 0.9942). The intra- and inter-assay (n=5) variations in the linear range are less than 4% for alprazolam and 6% for sertraline. Three pharmaceutical products containing this combination are analyzed to test the applicability of the new method. The percentage of alprazolam and sertraline in the tablets studied ranges from 97.7% to 102.82% and 96.5% to 99.9%, respectively.

  19. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an on-going failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different splits between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assessed performance by evaluating R² values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided similar results to those of this new predictor. A slight discrepancy arises between the two loss functions investigated, and a slight difference also arises between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lower and higher rates is around 10 percent. The number of mutations retained in different learners also varies from 1 to 41. Conclusions. The more recent Super Learner methodology combining the predictions of many learners provided good performance on our small dataset.
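
    The weighted combination at the heart of the Super Learner can be sketched in a few lines of numpy. Real Super Learner implementations estimate the weights on cross-validated predictions and typically constrain them to be non-negative and sum to one; plain least squares is used below only to keep the sketch short, and is an assumption rather than the trial's actual pipeline:

```python
import numpy as np

def super_learner_weights(cv_preds, y):
    """Least-squares weights for combining base learners' (ideally
    cross-validated) predictions, given as columns of cv_preds, into a
    single combined predictor."""
    w, *_ = np.linalg.lstsq(np.asarray(cv_preds, float),
                            np.asarray(y, float), rcond=None)
    return w

def super_learner_predict(preds, w):
    """Combined prediction: weighted linear combination of learner outputs."""
    return np.asarray(preds, float) @ w
```

    The discrete Super Learner mentioned in the abstract is the special case where the whole weight is placed on the single best cross-validated learner.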

  20. Slip accumulation and lateral propagation of active normal faults in Afar

    NASA Astrophysics Data System (ADS)

    Manighetti, I.; King, G. C. P.; Gaudemer, Y.; Scholz, C. H.; Doubre, C.

    2001-01-01

    We investigate fault growth in Afar, where normal fault systems are known to be currently growing fast, most of them propagating to the northwest. Using digital elevation models, we have examined the cumulative slip distribution along 255 faults with lengths ranging from 0.3 to 60 km. Faults exhibiting the elliptical or "bell-shaped" slip profiles predicted by simple linear elastic fracture mechanics or elastic-plastic theories are rare. Most slip profiles are roughly linear for more than half of their length, with overall slopes always <0.035. For the dominant population of NW-striking faults and fault systems longer than 2 km, the slip profiles are asymmetric, with slip being maximum near the eastern ends of the profiles, where it drops abruptly to zero, whereas slip decreases roughly linearly and tapers in the direction of overall Aden rift propagation. At a more detailed level, most faults appear to be composed of distinct, shorter subfaults or segments, whose slip profiles, while different from one to the next, combine to produce the roughly linear overall slip decrease along the entire fault. On a larger scale, faults cluster into kinematically coupled systems, along which the slip on any individual fault or fault system, at whatever scale, complements that of its neighbors, so that the total slip of the whole system is roughly linearly related to its length, with an average slope again <0.035. We discuss the origin of these quasilinear, asymmetric profiles in terms of "initiation points" where slip starts, and "barriers" where fault propagation is arrested. In the absence of a barrier, slip apparently extends with a roughly linear profile, tapered in the direction of fault propagation.

  1. Consensus Algorithms for Networks of Systems with Second- and Higher-Order Dynamics

    NASA Astrophysics Data System (ADS)

    Fruhnert, Michael

    This thesis considers homogeneous networks of linear systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and that the systems are stabilizable. We show that, in continuous time, consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. For networks of continuous-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions for achieving consensus on networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback. For networks of discrete-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Schur. We apply this result to obtain necessary and sufficient conditions for achieving consensus on networks whose graph Laplacian matrix may have complex eigenvalues. We show that consensus can always be achieved for marginally stable systems and discretized systems. Simple conditions for consensus-achieving controllers are obtained when the Laplacian eigenvalues are all real. For networks of continuous-time time-variant higher-order systems, we show that uniform consensus can always be achieved if the systems are quadratically stabilizable. In this case, we provide a simple condition for obtaining a linear feedback control. For networks of discrete-time higher-order systems, we show that constant gains can be chosen such that consensus is achieved for a variety of network topologies.
First, we develop simple results for networks of time-invariant systems and networks of time-variant systems that are given in controllable canonical form. Second, we formulate the problem in terms of Linear Matrix Inequalities (LMIs). The condition found simplifies the design process and avoids the parallel solution of multiple LMIs. The result yields a modified Algebraic Riccati Equation (ARE) for which we present an equivalent LMI condition.
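
    The basic mechanism of Laplacian-based consensus can be illustrated with a first-order discrete-time example. The thesis treats second- and higher-order dynamics; this simplified sketch only shows how states driven by the graph Laplacian converge to agreement:

```python
import numpy as np

def consensus_step(x, L, eps):
    """One discrete-time consensus update x+ = x - eps*L@x, where L is the
    graph Laplacian. For a connected undirected graph and
    0 < eps < 1/max_degree, the states converge to the average of x(0)."""
    return x - eps * (L @ x)

# Path graph on three nodes: 1 - 2 - 3 (undirected, so it contains a
# spanning tree); max degree is 2, so eps = 0.25 is admissible.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
x = np.array([0.0, 3.0, 6.0])
for _ in range(200):
    x = consensus_step(x, L, eps=0.25)
```

    With eps below 1/(max degree), every nontrivial Laplacian mode is contractive, so all states converge to the average of the initial values (here 3).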

  2. Linear Legendrian curves in T(3)

    NASA Astrophysics Data System (ADS)

    Ghiggini, Paolo

    2006-05-01

    Using convex surfaces and Kanda's classification theorem, we classify Legendrian isotopy classes of Legendrian linear curves in all tight contact structures on T(3) . Some of the knot types considered in this paper provide new examples of non transversally simple knot types.

  3. Thin layer chromatography-densitometric determination of some non-sedating antihistamines in combination with pseudoephedrine or acetaminophen in synthetic mixtures and in pharmaceutical formulations.

    PubMed

    El-Kommos, Michael E; El-Gizawy, Samia M; Atia, Noha N; Hosny, Noha M

    2014-03-01

    The combination of certain non-sedating antihistamines (NSA) such as fexofenadine (FXD), ketotifen (KET) and loratadine (LOR) with pseudoephedrine (PSE) or acetaminophen (ACE) is widely used in the treatment of allergic rhinitis, conjunctivitis and chronic urticaria. A rapid, simple, selective and precise densitometric method was developed and validated for simultaneous estimation of six synthetic binary mixtures and their pharmaceutical dosage forms. The method employed thin layer chromatography aluminum plates precoated with silica gel G 60 F254 as the stationary phase. The mobile phases chosen for development gave compact bands for the mixtures FXD-PSE (I), KET-PSE (II), LOR-PSE (III), FXD-ACE (IV), KET-ACE (V) and LOR-ACE (VI) [retardation factor (Rf) values were (0.20, 0.32), (0.69, 0.34), (0.79, 0.13), (0.36, 0.70), (0.51, 0.30) and (0.76, 0.26), respectively]. Spectrodensitometric scanning integration was performed at 217, 218, 218, 233, 272 and 251 nm for mixtures I-VI, respectively. The linear regression data for the calibration plots showed an excellent linear relationship. The method was validated for precision, accuracy, robustness and recovery. Limits of detection and quantitation were calculated. Statistical analysis proved that the method is reproducible and selective for the simultaneous estimation of these binary mixtures. Copyright © 2013 John Wiley & Sons, Ltd.

  4. HESS Opinions: Linking Darcy's equation to the linear reservoir

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.

    2018-03-01

    In groundwater hydrology, two simple linear equations exist describing the relation between groundwater flow and the gradient driving it: Darcy's equation and the linear reservoir. Both equations are empirical and straightforward, but work at different scales: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they appear similar, it is not trivial to upscale Darcy's equation to the watershed scale without detailed knowledge of the structure or shape of the underlying aquifers. This paper shows that these two equations, combined by the water balance, are indeed identical provided there is equal resistance in space for water entering the subsurface network. This implies that groundwater systems make use of an efficient drainage network, a mostly invisible pattern that has evolved over geological timescales. This drainage network provides equally distributed resistance for water to access the system, connecting the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance. As a result, the timescale of the linear reservoir appears to be inversely proportional to Darcy's conductance, the proportionality being the product of the porosity and the resistance to entering the drainage network. The main question remaining is which physical law lies behind pattern formation in groundwater systems, evolving in a way that resistance to drainage is constant in space. But that is a fundamental question that is equally relevant for understanding the hydraulic properties of leaf veins in plants or of blood veins in animals.

  5. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled with two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1 : 1, 1 : 3 and 3 : 1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
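
    The regression step can be sketched as fitting the measured mass isotopomer distribution with a linear combination of an unlabelled basis distribution and a labelled one. This is a simplification: the actual MIDA procedure also estimates the precursor pool enrichment, on which the labelled basis itself depends; here the basis distributions are taken as given and the values are illustrative:

```python
import numpy as np

def fractional_synthesis(measured, unlabeled, labeled):
    """Fit measured ~= f_old*unlabeled + f_new*labeled by least squares;
    f_new estimates the fraction of newly synthesized (labelled) peptide."""
    A = np.column_stack([np.asarray(unlabeled, float),
                         np.asarray(labeled, float)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(measured, float), rcond=None)
    return coef  # [f_old, f_new]
```

    Because the model is linear in the unknown fractions, a spreadsheet or any linear-algebra routine suffices, which is the practical point the abstract makes.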

  6. Differential solvation of intrinsically disordered linkers drives the formation of spatially organized droplets in ternary systems of linear multivalent proteins

    NASA Astrophysics Data System (ADS)

    Harmon, Tyler S.; Holehouse, Alex S.; Pappu, Rohit V.

    2018-04-01

    Intracellular biomolecular condensates are membraneless organelles that encompass large numbers of multivalent protein and nucleic acid molecules. These bodies assemble via a combination of liquid–liquid phase separation and gelation. A majority of condensates include multiple components and show multilayered organization as opposed to being well-mixed unitary liquids. Here, we put forward a simple thermodynamic framework to describe the emergence of spatially organized droplets in multicomponent systems comprising linear multivalent polymers, also known as associative polymers. These polymers, which mimic proteins and/or RNA, have the architecture of domains or motifs known as stickers interspersed by flexible spacers known as linkers. Using a minimalist numerical model for a four-component system, we have identified features of linear multivalent molecules that are necessary and sufficient for generating spatially organized droplets. We show that differences in the sequence-specific effective solvation volumes of disordered linkers between interaction domains enable the formation of spatially organized droplets. Molecules with linkers that are preferentially solvated are driven to the interface with the bulk solvent, whereas molecules that have linkers with negligible effective solvation volumes form the cores of the core–shell architectures that emerge in the minimalist four-component systems. Our modeling has relevance for understanding the physical determinants of spatially organized membraneless organelles.

  7. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives in each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
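
    The ridge estimator that the BLUP is shown to coincide with has a simple closed form; a minimal numeric sketch is below. The paper's contribution, the mapping from GLMM variance components to the penalty `lam`, is not reproduced here, and the data are illustrative:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam*I)^{-1} X'y. Per the abstract, the BLUP
    of the random SNP coefficients in the GLMM coincides with this
    estimator, with lam determined by the variance-component ratio."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

    With lam = 0 this reduces to ordinary least squares; increasing lam shrinks the SNP coefficients toward zero, which is what stabilizes the fit under linkage disequilibrium.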

  8. Recruitment of local inhibitory networks by horizontal connections in layer 2/3 of ferret visual cortex.

    PubMed

    Tucker, Thomas R; Katz, Lawrence C

    2003-01-01

    To investigate how neurons in cortical layer 2/3 integrate horizontal inputs arising from widely distributed sites, we combined intracellular recording and voltage-sensitive dye imaging to visualize the spatiotemporal dynamics of neuronal activity evoked by electrical stimulation of multiple sites in visual cortex. Individual stimuli evoked characteristic patterns of optical activity, while stimuli delivered at multiple sites generated interacting patterns in the regions of overlap. We observed that neurons in overlapping regions received convergent horizontal activation that generated nonlinear responses due to the emergence of large inhibitory potentials. The results indicate that co-activation of multiple sets of horizontal connections recruits strong inhibition from local inhibitory networks, causing marked deviations from simple linear integration.

  9. Universal dispersion model for characterization of optical thin films over wide spectral range: Application to magnesium fluoride

    NASA Astrophysics Data System (ADS)

    Franta, Daniel; Nečas, David; Giglia, Angelo; Franta, Pavel; Ohlídal, Ivan

    2017-11-01

    Optical characterization of magnesium fluoride thin films is performed over a wide spectral range from the far infrared to the extreme ultraviolet (0.01-45 eV) utilizing the universal dispersion model. Two film defects, i.e. random roughness of the upper boundary and a defect transition layer at the lower boundary, are taken into account. An extension of the universal dispersion model is introduced in which the excitonic contributions are expressed as linear combinations of Gaussian and truncated Lorentzian terms. The spectral dependences of the optical constants are presented in graphical form and as the complete set of dispersion parameters, which allows tabulated optical constants to be generated with the required range and step using a simple utility in the newAD2 software package.
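
    The "linear combination of Gaussian and truncated Lorentzian terms" can be illustrated schematically. A minimal sketch with illustrative (not fitted) peak parameters and a simple zero-below-threshold truncation; the actual functional forms, truncation scheme, and parameter values are those of the universal dispersion model described in the paper:

```python
import math

def gaussian(E, A, E0, sigma):
    # Gaussian peak of amplitude A centered at E0 with width sigma
    return A * math.exp(-0.5 * ((E - E0) / sigma) ** 2)

def truncated_lorentzian(E, A, E0, gamma, Et):
    # Lorentzian peak set to zero below a truncation energy Et (hypothetical scheme)
    if E < Et:
        return 0.0
    return A * gamma ** 2 / ((E - E0) ** 2 + gamma ** 2)

def excitonic_term(E, cg=0.6, cl=0.4):
    # linear combination of the two profiles (hypothetical mixing weights)
    return (cg * gaussian(E, 1.0, 11.2, 0.5)
            + cl * truncated_lorentzian(E, 1.0, 11.2, 0.3, 10.5))

print(excitonic_term(11.2))   # at the shared peak energy both profiles equal 1
```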

  10. Ultrafast Single-Shot Optical Oscilloscope based on Time-to-Space Conversion due to Temporal and Spatial Walk-Off Effects in Nonlinear Mixing Crystal

    NASA Astrophysics Data System (ADS)

    Takagi, Yoshihiro; Yamada, Yoshifumi; Ishikawa, Kiyoshi; Shimizu, Seiji; Sakabe, Shuji

    2005-09-01

    A simple method for single-shot sub-picosecond optical pulse diagnostics has been demonstrated by imaging the time evolution of optical mixing onto the beam cross section of the sum-frequency wave as the interrogating pulse passes over the tested pulse in the mixing crystal, a result of the combined effects of the group-velocity difference and walk-off beam propagation. High linearity of the time-to-space projection follows because the process depends solely on the spatial uniformity of the refractive indices. A snapshot profile of an accidental coincidence between asynchronous pulses from separate mode-locked lasers was detected, demonstrating the single-shot capability.

  11. Design and fabrication of a hybrid maglev model employing PML and SML

    NASA Astrophysics Data System (ADS)

    Sun, R. X.; Zheng, J.; Zhan, L. J.; Huang, S. Y.; Li, H. T.; Deng, Z. G.

    2017-10-01

    A hybrid maglev model combining permanent magnet levitation (PML) and superconducting magnetic levitation (SML) was designed and fabricated to explore a heavy-load levitation system with the advantages of passive stability and a simple structure. In this system, the PML was designed to levitate the load, and the SML was introduced to guarantee stability. In order to realize different working gaps for the two maglev components, linear bearings were applied to connect the PML layer (for load) and the SML layer (for stability) of the hybrid maglev model. Experimental results indicate that the hybrid maglev model possesses the advantages of heavy-load ability and passive stability at the same time. This work presents a possible way to realize a heavy-load passive maglev concept.

  12. Directed Self-Assembly of Gradient Concentric Carbon Nanotube Rings

    NASA Astrophysics Data System (ADS)

    Hong, Suck Won; Jeong, Wonje; Ko, Hyunhyub; Tsukruk, Vladimir; Kessler, Michael; Lin, Zhiqun

    2008-03-01

    Hundreds of gradient concentric rings of a linear conjugated polymer (poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene], i.e., MEH-PPV) with remarkable regularity over large areas were produced by controlled, repetitive ``stick-slip'' motions of the contact line in a confined geometry consisting of a sphere on a flat substrate (i.e., sphere-on-flat geometry). Subsequently, the MEH-PPV rings were exploited as templates to direct the formation of gradient concentric rings of multiwalled carbon nanotubes (MWNTs) with controlled density. This method is simple, cost effective, and robust, combining two consecutive self-assembly processes: evaporation-induced self-assembly of polymers in a sphere-on-flat geometry, followed by directed self-assembly of MWNTs on the polymer-templated surfaces.

  13. A volume-of-fluid method for simulation of compressible axisymmetric multi-material flow

    NASA Astrophysics Data System (ADS)

    de Niem, D.; Kührt, E.; Motschmann, U.

    2007-02-01

    A two-dimensional Eulerian hydrodynamic method for the numerical simulation of inviscid compressible axisymmetric multi-material flow in external force fields, for the situation of pure fluids separated by macroscopic interfaces, is presented. The method combines an implicit Lagrangian step with an explicit Eulerian advection step. Individual materials obey separate energy equations, fulfill general equations of state, and may possess different temperatures. Material volume is tracked using a piecewise linear volume-of-fluid method. An overshoot-free, logically simple and economical material advection algorithm for cylinder coordinates is derived in an algebraic formulation. New aspects arising in the case of more than two materials, such as the material ordering strategy during transport, are presented. One- and two-dimensional numerical examples are given.

  14. Investigating the linearity assumption between lumber grade mix and yield using design of experiments (DOE)

    Treesearch

    Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas

    2004-01-01

    Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...

  15. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, specifically job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.

  16. Calibrating page sized Gafchromic EBT3 films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, W.; Maes, F.; Heide, U. A. van der

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T0) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. Only simple static fields were applied to the calibration film, and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread over a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. The accuracy of the dose response and of the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% at the two time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values on the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large field dosimetry with a limited number of page sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself, which allows the use of page sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤2% or ≤0.05 Gy.
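
    The two-state transmittance model lends itself to a compact sketch. The weight function w(D) = D/(D + D50) below is a hypothetical rational form chosen purely for illustration (as are T0, T∞, and D50); it is not the paper's fitted calibration function, and the lateral scan correction is omitted:

```python
# Sketch: transmittance T(D) as a linear combination of the monomer
# state T0 and the polymer state Tinf, weighted by a hypothetical
# rational function of dose.
def transmittance(dose, T0=0.9, Tinf=0.3, D50=2.0):
    w = dose / (dose + D50)          # fraction of film in the polymer state
    return (1 - w) * T0 + w * Tinf

def dose_from_transmittance(T, T0=0.9, Tinf=0.3, D50=2.0):
    # invert the model: a rational calibration function mapping T back to dose
    w = (T0 - T) / (T0 - Tinf)
    return D50 * w / (1 - w)

for d in (0.5, 2.0, 8.0):            # round-trip check at a few doses (Gy)
    assert abs(dose_from_transmittance(transmittance(d)) - d) < 1e-9
print("round-trip ok")
```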

  17. The influence of social capital towards the quality of community tourism services in Lake Toba Parapat North Sumatera

    NASA Astrophysics Data System (ADS)

    Revida, Erika; Yanti Siahaan, Asima; Purba, Sukarman

    2018-03-01

    The objective of the research was to analyze the influence of social capital on the quality of community tourism services in Lake Toba Parapat, North Sumatera. The method combined quantitative and qualitative research. A sample of 150 heads of families was taken from the community in the area around Lake Toba Parapat, North Sumatera, using simple random sampling. Data collection techniques included documentary studies, questionnaires, interviews and observations, while data analysis used Product Moment correlation and simple linear regression analysis. The results showed a positive and significant influence of social capital on the quality of community tourism services in Lake Toba Parapat, North Sumatera. This research recommends enhancing social capital (trust, norms and networks) and the quality of community tourism services (tangibles, reliability, responsiveness, assurance, and empathy) through continuous communication, information and education from families, formal and informal institutions, community leaders, religious figures and all communities in Lake Toba Parapat, North Sumatera.

  18. Using McStas for modelling complex optics, using simple building bricks

    NASA Astrophysics Data System (ADS)

    Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim

    2011-04-01

    The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow are intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all these parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested “components inside components”, or meta-components, allowing the functionality of several simple components to be combined to achieve more complex behaviour, e.g. four single mirror plates together defining a guide. We show here that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.

  19. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    Because of the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, by contrast, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the stress results of NCSE8 are nearly as accurate as those of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may become even more advantageous for fracture problems due to its better stress accuracy.

  20. Improving Students’ Science Process Skills through Simple Computer Simulations on Linear Motion Conceptions

    NASA Astrophysics Data System (ADS)

    Siahaan, P.; Suryani, A.; Kaniawati, I.; Suhendi, E.; Samsudin, A.

    2017-02-01

    The purpose of this research is to identify the development of students’ science process skills (SPS) on the linear motion concept by utilizing simple computer simulations. To simplify the learning process, the concept is divided into three sub-concepts: 1) the definition of motion, 2) uniform linear motion and 3) uniformly accelerated motion. This research was administered via a pre-experimental method with a one-group pretest-posttest design. The respondents were 23 seventh-grade students at a junior high school in Bandung City. The improvement of students’ science process skills is examined using normalized gain analysis of the pretest and posttest scores for all sub-concepts. The results show that students’ science process skills improved by 47% (moderate) on observation skill, 43% (moderate) on summarizing skill, 70% (high) on prediction skill, 44% (moderate) on communication skill and 49% (moderate) on classification skill. These results indicate that utilizing simple computer simulations in physics learning can improve overall science process skills at a moderate level.
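
    The normalized gain analysis used above follows Hake's formula, g = (post − pre)/(max − pre), with the conventional low/moderate/high bands. A minimal sketch with hypothetical scores (the 40%/82% pair is illustrative, not the study's data):

```python
def normalized_gain(pre, post, max_score=100):
    # Hake's normalized gain: fraction of the possible improvement achieved
    return (post - pre) / (max_score - pre)

def gain_level(g):
    # conventional bands: high >= 0.7, moderate >= 0.3, low otherwise
    return "high" if g >= 0.7 else "moderate" if g >= 0.3 else "low"

g = normalized_gain(40, 82)          # hypothetical pretest 40%, posttest 82%
print(round(g, 2), gain_level(g))    # 0.7 high
```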

  1. A Simple Attitude Control of Quadrotor Helicopter Based on Ziegler-Nichols Rules for Tuning PD Parameters

    PubMed Central

    He, ZeFang

    2014-01-01

    An attitude control strategy based on Ziegler-Nichols rules for tuning the PD (proportional-derivative) parameters of quadrotor helicopters is presented to address the tendency of quadrotors to become unstable, a problem caused by the narrow domain of the attitude angles. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller, with parameters tuned by Ziegler-Nichols rules, acting on the decoupled linear system obtained after feedback linearization; the nonlinear part is the feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the proposed attitude controller is highly robust and that its control effect is better than that of two other nonlinear controllers whose nonlinear parts are identical to the proposed controller's. The linear parts of those two controllers are, respectively, a PID (proportional-integral-derivative) controller with parameters tuned by Ziegler-Nichols rules and a PD controller with parameters tuned by GA (genetic algorithms). Moreover, the proposed attitude controller is simple and easy to implement. PMID:25614879
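
    The classic Ziegler-Nichols closed-loop rules for a PD controller set Kp = 0.8·Ku and derivative time Td = Tu/8, where Ku is the ultimate gain and Tu the oscillation period found by raising a proportional gain until sustained oscillation. A minimal sketch with hypothetical Ku and Tu values (the paper's quadrotor-specific tuning and feedback linearization are not reproduced here):

```python
def zn_pd(Ku, Tu):
    # Ziegler-Nichols closed-loop tuning for a PD controller:
    # Kp = 0.8 * Ku, derivative time Td = Tu / 8, so Kd = Kp * Td
    Kp = 0.8 * Ku
    Kd = Kp * (Tu / 8.0)
    return Kp, Kd

def pd_law(err, derr, Kp, Kd):
    # PD control acting on the attitude error of the linearized system
    return Kp * err + Kd * derr

Kp, Kd = zn_pd(Ku=5.0, Tu=0.8)   # hypothetical ultimate gain and period
print(Kp, Kd)
```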

  2. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.

  3. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but existing methods have two limitations. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often terminate at sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function and finds the best combination by a difference-of-convex-functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data are generated from a semiparametric generalized linear model, just as the smoothed AUC method (SAUC) does. Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
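
    The ramp idea can be shown with a toy sketch: the empirical AUC counts correctly ranked case/control pairs with a 0/1 indicator, and the ramp surrogate replaces that indicator with a clipped piecewise-linear function of the pairwise margin. This is a schematic surrogate only, not the paper's RAUC estimator, its kernel form, or its difference-of-convex optimizer; the data and width parameter eps are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
x_pos = rng.normal(loc=1.0, size=(200, 2))   # hypothetical cases
x_neg = rng.normal(loc=0.0, size=(200, 2))   # hypothetical controls

def empirical_auc(w):
    # fraction of case/control pairs ranked correctly by the score w'x
    d = (x_pos @ w)[:, None] - (x_neg @ w)[None, :]
    return float(np.mean(d > 0))

def ramp_auc(w, eps=1.0):
    # ramp surrogate: the 0/1 pair indicator becomes a clipped,
    # piecewise-linear (ramp) function of the pairwise score margin
    d = (x_pos @ w)[:, None] - (x_neg @ w)[None, :]
    return float(np.mean(np.clip(d / eps + 0.5, 0.0, 1.0)))

w = np.array([1.0, 1.0])
print(empirical_auc(w), ramp_auc(w))         # both well above 0.5
```

    Unlike the step indicator, the ramp is continuous in w, which is what makes optimization over marker combinations tractable.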

  4. Bariatric surgery trends: an 18-year report from the International Bariatric Surgery Registry.

    PubMed

    Samuel, Isaac; Mason, Edward E; Renquist, Kathleen E; Huang, Yu-Hui; Zimmerman, M Bridget; Jamal, Mohammad

    2006-11-01

    The epidemic of morbid obesity has increased the number of bariatric procedures performed. Trend analyses provide important information that may impact individual practices. Patient data from 137 surgeons, covering 41,860 patients from 1987 to 2004, were examined using the Cochran-Armitage trend test and a generalized linear model. Over the 18-year period, surgeon preference for combined restrictive-malabsorptive procedures increased from 33% to 94%, while simple gastric restriction decreased correspondingly (P < .0001). The number of surgeons per worksite doubled, and cases per surgeon increased 71%. Laparoscopic procedures increased to 24%. The percentage of males, mean operative age, and initial body mass index (BMI) increased significantly (P < .0001). Postoperative hospital stay decreased from 5.0 to 3.9 days (P < .0001). The most common procedure in 2004 was Roux-en-Y gastric bypass (RYGB) (59%). Bariatric surgery patients are now older and heavier, length of stay is shorter, and the laparoscopic approach is more frequent. From 1987 to 2004, the general trend shows a clear preference for combined restrictive-malabsorptive operations.

  5. Simultaneous HPTLC Determination of Rabeprazole and Itopride Hydrochloride From Their Combined Dosage Form.

    PubMed

    Suganthi, A; John, Sofiya; Ravi, T K

    2008-01-01

    A simple, precise, sensitive, rapid and reproducible HPTLC method for the simultaneous estimation of rabeprazole and itopride hydrochloride in tablets was developed and validated. The method involves separation of the components by TLC on a precoated silica gel G60F254 plate with a solvent system of n-butanol, toluene and ammonia (8.5:0.5:1 v/v/v); detection was carried out densitometrically using a UV detector at 288 nm in absorbance mode. This system was found to give compact spots for rabeprazole (Rf value of 0.23+/-0.02) and itopride hydrochloride (Rf value of 0.75+/-0.02). Linearity was found to be in the ranges of 40-200 ng/spot and 300-1500 ng/spot for rabeprazole and itopride hydrochloride, respectively. The limits of detection and quantification were 10 and 20 ng/spot for rabeprazole and 50 and 100 ng/spot for itopride hydrochloride, respectively. The method was found to be suitable for the routine analysis of the combined dosage form.

  6. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds irrespective of whether the coefficients are tested by a t statistic or a z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
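
    The procedure can be sketched in a few lines: rather than picking one weighting scheme, average the weight vectors produced by several schemes and combine the model forecasts with the averaged weights. The numbers below are hypothetical stand-ins for the paper's variance, simple, and standard-error model-averaging weights:

```python
import numpy as np

forecasts = np.array([2.1, 1.8, 2.4])        # three models' point forecasts

# Weight vectors from different weight-construction methods (hypothetical)
w_variance = np.array([0.50, 0.30, 0.20])    # e.g. variance model averaging
w_simple   = np.array([1/3, 1/3, 1/3])       # simple model averaging
w_stderr   = np.array([0.40, 0.35, 0.25])    # e.g. standard error model averaging

# Forecast weight averaging: average the weight vectors themselves,
# then combine the model forecasts with the averaged weights
w_avg = (w_variance + w_simple + w_stderr) / 3
combined = float(forecasts @ w_avg)

print(w_avg.round(4), round(combined, 4))
```

    Because each scheme's weights sum to one, the averaged weights do too, so the combined value is still a proper weighted forecast.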

  7. Global Patch Matching

    NASA Astrophysics Data System (ADS)

    Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.

    2017-09-01

    This paper introduces a novel global patch matching method that focuses on removing fronto-parallel bias and obtaining continuous smooth surfaces, under the assumption that the scenes covered by the stereos are piecewise continuous. First, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function consisting of a data term and a smoothness term is built on the patches. The data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term is built by combining connectivity constraints and coplanarity constraints. We rewrite the global energy function as a quadratic matrix function and use least squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereos of the Middlebury benchmark show that the proposed method can remove fronto-parallel bias effectively and produce continuous smooth surfaces.
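
    The "quadratic data term plus quadratic smoothness term solved by least squares" structure can be shown in a toy one-dimensional analogue. The patch values and smoothness weight below are hypothetical; the paper's actual terms are the Taylor-expanded correlation coefficients and the connectivity/coplanarity constraints:

```python
import numpy as np

# Toy 1-D analogue of the global energy: estimate per-patch values d_i
# from noisy observations (data term) while penalizing differences
# between neighbouring patches (smoothness term). Both terms are
# quadratic, so the minimizer comes from a single least-squares solve.
obs = np.array([1.0, 1.2, 3.1, 3.0, 2.9])    # hypothetical per-patch estimates
n = len(obs)
lam = 2.0                                    # smoothness weight (hypothetical)

A_data = np.eye(n)                           # data term: d_i should match obs_i
D = (np.eye(n, n, 1) - np.eye(n))[: n - 1]   # first-difference operator
A = np.vstack([A_data, lam * D])
b = np.concatenate([obs, np.zeros(n - 1)])

# minimizes ||d - obs||^2 + lam^2 * ||D d||^2
d, *_ = np.linalg.lstsq(A, b, rcond=None)
print(d.round(3))
```

    The smoothed estimate varies less between neighbours than the raw observations, which is the 1-D analogue of producing continuous smooth surfaces.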

  8. QuEChERS Purification Combined with Ultrahigh-Performance Liquid Chromatography Tandem Mass Spectrometry for Simultaneous Quantification of 25 Mycotoxins in Cereals.

    PubMed

    Sun, Juan; Li, Weixi; Zhang, Yan; Hu, Xuexu; Wu, Li; Wang, Bujun

    2016-12-15

    A method based on QuEChERS (quick, easy, cheap, effective, rugged, and safe) purification combined with ultrahigh-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) was optimized for the simultaneous quantification of 25 mycotoxins in cereals. Samples were extracted with a solution containing 80% acetonitrile and 0.1% formic acid, and purified with QuEChERS before being separated on a C18 column. Mass spectrometry was conducted using positive electrospray ionization (ESI+) and multiple reaction monitoring (MRM) modes. The method gave good linear relations, with regression coefficients ranging from 0.9950 to 0.9999. The detection limits ranged from 0.03 to 15.0 µg·kg⁻¹, and the average recovery at three different concentrations ranged from 60.2% to 115.8%, with relative standard deviations (RSD%) varying from 0.7% to 19.6% for the 25 mycotoxins. The method is simple, rapid, accurate, and an improvement over the methods published so far.

  9. Development of a novel mixed hemimicelles dispersive micro solid phase extraction using 1-hexadecyl-3-methylimidazolium bromide coated magnetic graphene for the separation and preconcentration of fluoxetine in different matrices before its determination by fiber optic linear array spectrophotometry and mode-mismatched thermal lens spectroscopy.

    PubMed

    Kazemi, Elahe; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh; Abbasi, Amir; Rashidian Vaziri, Mohammad Reza; Behjat, Abbas

    2016-01-28

    This study aims at developing a novel, sensitive, fast, simple and convenient method for the separation and preconcentration of trace amounts of fluoxetine before its spectrophotometric determination. The method is based on the combination of magnetic mixed hemimicelles solid phase extraction and dispersive micro solid phase extraction using 1-hexadecyl-3-methylimidazolium bromide coated magnetic graphene as a sorbent. The magnetic graphene was synthesized by a simple coprecipitation method and characterized by X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy and scanning electron microscopy (SEM). The retained analyte was eluted using a 100 μL mixture of methanol/acetic acid (9:1) and converted into a fluoxetine-β-cyclodextrin inclusion complex. The analyte was then quantified by fiber optic linear array spectrophotometry as well as mode-mismatched thermal lens spectroscopy (TLS). The factors affecting the separation, preconcentration and determination of fluoxetine were investigated and optimized. With a 50 mL sample and under optimized conditions using the spectrophotometry technique, the method exhibited a linear dynamic range of 0.4-60.0 μg L⁻¹, a detection limit of 0.21 μg L⁻¹, an enrichment factor of 167, and relative standard deviations of 2.1% and 3.8% (n = 6) at the 60 μg L⁻¹ level of fluoxetine for intra- and inter-day analyses, respectively. With thermal lens spectrometry and a sample volume of 10 mL, the method exhibited a linear dynamic range of 0.05-300 μg L⁻¹, a detection limit of 0.016 μg L⁻¹ and relative standard deviations of 3.8% and 5.6% (n = 6) at the 60 μg L⁻¹ level of fluoxetine for intra- and inter-day analyses, respectively. The method was successfully applied to determine fluoxetine in pharmaceutical formulation, human urine and environmental water samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms

    PubMed Central

    Baldwin, Alex S.; Baker, Daniel H.; Hess, Robert F.

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system’s input has on its output one can estimate the variance of this internal noise. By applying this simple “linear amplifier” model to the human visual system, one can entirely explain an observer’s detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system’s internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies. PMID:26953796
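
    The linear amplifier model underlying the equivalent noise method can be sketched numerically: squared contrast threshold rises linearly with external noise variance, and a straight-line fit recovers the internal noise variance and efficiency. The variance and efficiency values below are hypothetical, and real measured thresholds would replace the noiseless simulated ones:

```python
import numpy as np

# Linear amplifier model: ct^2 = (sigma_int^2 + sigma_ext^2) / eff
sigma_int2, eff = 0.04, 0.5                        # hypothetical ground truth
ext = np.array([0.0, 0.02, 0.05, 0.10, 0.20])      # external noise variances
ct2 = (sigma_int2 + ext) / eff                     # simulated squared thresholds

# Recover internal noise and efficiency from a straight-line fit:
# slope = 1/eff, intercept = sigma_int^2 / eff
slope, intercept = np.polyfit(ext, ct2, 1)
eff_hat = 1.0 / slope
sigma_int2_hat = intercept * eff_hat
print(round(sigma_int2_hat, 4), round(eff_hat, 4))
```

    The paper's point is precisely that this recovery is only valid if the model's assumptions (e.g. linearity) hold; a gain-control nonlinearity can masquerade as a change in the fitted internal noise.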

  11. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms.

    PubMed

    Baldwin, Alex S; Baker, Daniel H; Hess, Robert F

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system's input has on its output one can estimate the variance of this internal noise. By applying this simple "linear amplifier" model to the human visual system, one can entirely explain an observer's detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system's internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies.

  12. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  13. Simultaneous Determination of Withanolide A and Bacoside A in Spansules by High-Performance Thin-Layer Chromatography

    PubMed Central

    Shinde, P B; Aragade, P D; Agrawal, M R; Deokate, U A; Khadabadi, S S

    2011-01-01

    The objective of this work was to develop and validate a simple, rapid, precise, and accurate high-performance thin-layer chromatography method for the simultaneous determination of withanolide A and bacoside A in a combined dosage form. The stationary phase was silica gel G60F254. The mobile phase was a mixture of ethyl acetate:methanol:toluene:water (4:1:1:0.5 v/v/v/v). Detection of the spots was carried out at 320 nm using absorbance reflectance mode. The method was validated in terms of linearity, accuracy, precision and specificity. The calibration curve was linear between 200 and 800 ng/spot for withanolide A and between 50 and 350 ng/spot for bacoside A. The limit of detection and limit of quantification were 3.05 and 10.06 ng/spot for withanolide A, respectively, and 8.3 and 27.39 ng/spot for bacoside A. The proposed method can be successfully used to determine the drug content of the marketed formulation. PMID:22303073

  14. Simultaneous determination of withanolide a and bacoside a in spansules by high-performance thin-layer chromatography.

    PubMed

    Shinde, P B; Aragade, P D; Agrawal, M R; Deokate, U A; Khadabadi, S S

    2011-03-01

    The objective of this work was to develop and validate a simple, rapid, precise, and accurate high-performance thin-layer chromatography method for the simultaneous determination of withanolide A and bacoside A in a combined dosage form. The stationary phase was silica gel G60F254. The mobile phase was a mixture of ethyl acetate:methanol:toluene:water (4:1:1:0.5 v/v/v/v). Detection of the spots was carried out at 320 nm using absorbance reflectance mode. The method was validated in terms of linearity, accuracy, precision and specificity. The calibration curve was linear between 200 and 800 ng/spot for withanolide A and between 50 and 350 ng/spot for bacoside A. The limit of detection and limit of quantification were 3.05 and 10.06 ng/spot for withanolide A, respectively, and 8.3 and 27.39 ng/spot for bacoside A. The proposed method can be successfully used to determine the drug content of the marketed formulation.

  15. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form.

    PubMed

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S

    2011-07-01

    To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from the pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase, and no immiscibility issues were found. Densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO, with correlation coefficients r² = 0.998 and 0.999, respectively. The percentage recovery for both analytes was in the range of 98.7-101.2%. The proposed method was optimized and validated as per the ICH guidelines.

  16. Ultrahigh Molecular Weight Linear Block Copolymers: Rapid Access by Reversible-Deactivation Radical Polymerization and Self- Assembly into Large Domain Nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mapas, Jose Kenneth D.; Thomay, Tim; Cartwright, Alexander N.

    2016-05-05

    Block copolymer (BCP) derived periodic nanostructures with domain sizes larger than 150 nm present a versatile platform for the fabrication of photonic materials. So far, the access to such materials has been limited to highly synthetically involved protocols. Herein, we report a simple, “user-friendly” method for the preparation of ultrahigh molecular weight linear poly(solketal methacrylate-b-styrene) block copolymers by a combination of Cu-wire-mediated ATRP and RAFT polymerizations. The synthesized copolymers with molecular weights up to 1.6 million g/mol and moderate dispersities readily assemble into highly ordered cylindrical or lamella microstructures with domain sizes as large as 292 nm, as determined by ultra-small-angle x-ray scattering and scanning electron microscopy analyses. Solvent cast films of the synthesized block copolymers exhibit stop bands in the visible spectrum correlated to their domain spacings. The described method opens new avenues for facilitated fabrication and the advancement of fundamental understanding of BCP-derived photonic nanomaterials for a variety of applications.

  17. Cost-effectiveness analysis of the diarrhea alleviation through zinc and oral rehydration therapy (DAZT) program in rural Gujarat India: an application of the net-benefit regression framework.

    PubMed

    Shillcutt, Samuel D; LeFevre, Amnesty E; Fischer-Walker, Christa L; Taneja, Sunita; Black, Robert E; Mazumder, Sarmila

    2017-01-01

    This study evaluates the cost-effectiveness of the DAZT program for scaling up treatment of acute child diarrhea in Gujarat India using a net-benefit regression framework. Costs were calculated from societal and caregivers' perspectives, and effectiveness was assessed in terms of coverage of zinc and of both zinc and Oral Rehydration Salt (ORS). Regression models were tested as simple linear regression, with a specified set of covariates, and with a specified set of covariates plus interaction terms; linear regression with endogenous treatment effects was used as the reference case. The DAZT program was cost-effective with over 95% certainty above $5.50 and $7.50 per appropriately treated child in the unadjusted and adjusted models respectively, with specifications including interaction terms being cost-effective with 85-97% certainty. Findings from this study should be combined with other evidence when considering decisions to scale up programs such as the DAZT program to promote the use of ORS and zinc to treat child diarrhea.

  18. Imparting Motion to a Test Object Such as a Motor Vehicle in a Controlled Fashion

    NASA Technical Reports Server (NTRS)

    Southward, Stephen C. (Inventor); Reubush, Chandler (Inventor); Pittman, Bryan (Inventor); Roehrig, Kurt (Inventor); Gerard, Doug (Inventor)

    2014-01-01

    An apparatus imparts motion to a test object such as a motor vehicle in a controlled fashion. A base has mounted on it a linear electromagnetic motor having a first end and a second end, the first end being connected to the base. A pneumatic cylinder and piston combination has a first end and a second end, the first end connected to the base so that the pneumatic cylinder and piston combination is generally parallel with the linear electromagnetic motor. The second ends of the linear electromagnetic motor and the pneumatic cylinder and piston combination are commonly linked to a mount for the test object. A control system for the linear electromagnetic motor and pneumatic cylinder and piston combination drives the pneumatic cylinder and piston combination to support a substantial static load of the test object and the linear electromagnetic motor to impart controlled motion to the test object.

  19. New constraints on the 3D shear wave velocity structure of the upper mantle underneath Southern Scandinavia revealed from non-linear tomography

    NASA Astrophysics Data System (ADS)

    Wawerzinek, B.; Ritter, J. R. R.; Roy, C.

    2013-08-01

    We analyse travel times of shear waves, which were recorded at the MAGNUS network, to determine the 3D shear wave velocity (vS) structure underneath Southern Scandinavia. The travel time residuals are corrected for the known crustal structure of Southern Norway and weighted to account for data quality and pick uncertainties. The resulting residual pattern of subvertically incident waves is very uniform and simple. It shows delayed arrivals underneath Southern Norway compared to fast arrivals underneath the Oslo Graben and the Baltic Shield. The 3D upper mantle vS structure underneath the station network is determined by performing non-linear travel time tomography. As expected from the residual pattern, the resulting tomographic model shows a simple and continuous vS perturbation pattern: a negative vS anomaly is visible underneath Southern Norway relative to the Baltic Shield in the east, with a contrast of up to 4% vS and a sharp W-E dipping transition zone. Reconstruction tests reveal, besides vertical smearing, a good lateral reconstruction of the dipping vS transition zone and suggest that a deep-seated anomaly at 330-410 km depth is real and not an inversion artefact. The upper part of the reduced vS anomaly underneath Southern Norway (down to 250 km depth) might be due to an increase in lithospheric thickness from the Caledonian Southern Scandes in the west towards the Proterozoic Baltic Shield in Sweden in the east. The deeper-seated negative vS anomaly (330-410 km depth) could be caused by a temperature anomaly, possibly combined with effects due to fluids or hydrous minerals. The determined simple 3D vS structure underneath Southern Scandinavia indicates that mantle processes might influence and contribute to a Neogene uplift of Southern Norway.

  20. Continuous Quantitative Measurements on a Linear Air Track

    ERIC Educational Resources Information Center

    Vogel, Eric

    1973-01-01

    Describes the construction and operational procedures of a spark-timing apparatus which is designed to record the back and forth motion of one or two carts on linear air tracks. Applications to measurements of velocity, acceleration, simple harmonic motion, and collision problems are illustrated. (CC)

  1. Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions

    DTIC Science & Technology

    2007-09-01

    C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January) Experimental Modal Analysis, A Simple Non...variables under consideration. 3 Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a...THEORY The general problem statement for a non-linear constrained optimization problem is: To minimize f(x) (Objective Function) Subject to

  2. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
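
The iterative least-squares fitting that SOLVER performs can be mimicked in any language. The sketch below uses a simple derivative-free pattern search, which is an illustrative stand-in and not the algorithm SOLVER actually implements, to fit a Michaelis-Menten curve y = Vmax·x/(Km + x), a typical user-defined function in this setting:

```python
def sse(params, xs, ys):
    """Sum of squared errors for the model y = vmax * x / (km + x)."""
    vmax, km = params
    if any(km + x <= 0 for x in xs):   # keep the search away from singularities
        return float("inf")
    return sum((y - vmax * x / (km + x)) ** 2 for x, y in zip(xs, ys))

def compass_fit(xs, ys, start=(1.0, 1.0), step=1.0, tol=1e-7, max_iter=100000):
    """Derivative-free pattern search: probe +/- step on each parameter,
    keep any improving move, and halve the step when nothing improves."""
    best = list(start)
    best_sse = sse(best, xs, ys)
    for _ in range(max_iter):
        if step <= tol:
            break
        moved = False
        for i in range(len(best)):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                s = sse(trial, xs, ys)
                if s < best_sse:
                    best, best_sse, moved = trial, s, True
        if not moved:
            step /= 2.0
    return best

# Noise-free synthetic data generated from Vmax = 2, Km = 3 (illustrative).
xs = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
ys = [2.0 * x / (3.0 + x) for x in xs]
vmax_fit, km_fit = compass_fit(xs, ys)
```

As with SOLVER, the goodness of fit is driven to a minimum iteratively; the only user inputs are the model function, the data and a starting guess.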

  3. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because it is necessary for the designer to meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes which were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in the sonic boom disturbance and the importance of three-dimensional effects, which could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  4. An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System

    NASA Astrophysics Data System (ADS)

    Vincent, Alan

    1996-10-01

    All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable linear functions become unacceptable for the ring and some unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of uv light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the uv maximum moves to longer wavelengths, as found experimentally.
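
The promotion-energy calculation described above is easy to carry out numerically. For a particle on a ring, the levels are E_n = n²ħ²/(2mr²) with n = 0, ±1, ±2, …, each holding two electrons, so a six-electron ring fills n = 0, ±1 and the lowest promotion is from |n| = 1 to |n| = 2. The sketch below computes the corresponding absorption wavelength; the 1.39 Å radius is an assumed, benzene-like value:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
M_E = 9.1093837015e-31  # electron mass, kg

def ring_level(n, radius):
    """Energy of level n for a particle on a ring: E_n = n^2 hbar^2 / (2 m r^2)."""
    return n**2 * HBAR**2 / (2 * M_E * radius**2)

r = 1.39e-10  # assumed ring radius, roughly a C-C bond length, in metres
delta_e = ring_level(2, r) - ring_level(1, r)  # promotion from |n|=1 to |n|=2
wavelength = H * C / delta_e
print(f"predicted absorption maximum: {wavelength * 1e9:.0f} nm")
```

The result is on the order of 200 nm, the correct order of magnitude for the UV absorption of benzene (observed near 255 nm), as the text states.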

  5. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
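
The comparison between simple and multiple linear regression reported above can be illustrated with ordinary least squares. The sketch below uses toy data, not the WSN measurements; it fits both models by solving the normal equations and shows that when the response truly depends on two inputs, the multivariate fit drives the residual error to zero while the univariate one cannot:

```python
def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ols(columns, y):
    """Least-squares coefficients via the normal equations X'X b = X'y."""
    rows = list(zip(*columns))
    xtx = [[sum(xi[i] * xi[j] for xi in rows) for j in range(len(columns))]
           for i in range(len(columns))]
    xty = [sum(xi[i] * yv for xi, yv in zip(rows, y)) for i in range(len(columns))]
    return solve(xtx, xty)

# Toy data: y depends exactly on two sensor inputs a and b (illustrative).
a = [0.0, 1.0, 2.0, 3.0, 4.0]
b = [1.0, 0.0, 2.0, 1.0, 3.0]
y = [1 + 2 * ai + 3 * bi for ai, bi in zip(a, b)]
ones = [1.0] * len(y)

beta_multi = ols([ones, a, b], y)   # intercept, slope_a, slope_b
beta_simple = ols([ones, a], y)     # single-predictor fit

def residual_sse(cols, beta):
    return sum((yv - sum(c[i] * coef for c, coef in zip(cols, beta))) ** 2
               for i, yv in enumerate(y))

sse_multi = residual_sse([ones, a, b], beta_multi)
sse_simple = residual_sse([ones, a], beta_simple)
```

The multiple regression recovers the generating coefficients exactly, mirroring the paper's finding that the multivariate fit is the most accurate when several correlated inputs carry information about the target.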

  6. Phase properties of elastic waves in systems constituted of adsorbed diatomic molecules on the (001) surface of a simple cubic crystal

    NASA Astrophysics Data System (ADS)

    Deymier, P. A.; Runge, K.

    2018-03-01

    A Green's function-based numerical method is developed to calculate the phase of scattered elastic waves in a harmonic model of diatomic molecules adsorbed on the (001) surface of a simple cubic crystal. The phase properties of scattered waves depend on the configuration of the molecules. The configurations of adsorbed molecules on the crystal surface such as parallel chain-like arrays coupled via kinks are used to demonstrate not only linear but also non-linear dependency of the phase on the number of kinks along the chains. Non-linear behavior arises for scattered waves with frequencies in the vicinity of a diatomic molecule resonance. In the non-linear regime, the variation in phase with the number of kinks is formulated mathematically as unitary matrix operations leading to an analogy between phase-based elastic unitary operations and quantum gates. The advantage of elastic based unitary operations is that they are easily realizable physically and measurable.

  7. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    NASA Astrophysics Data System (ADS)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system that describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a method for improving the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as well as their sensitivity to surface warming. What our comparison suggests is that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
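
The maximum-power reasoning sketched above can be reproduced with a toy energy balance. In the sketch below, all numbers are illustrative and a linearized radiative exchange coefficient k_r stands in for full radiative transfer: the surface-atmosphere temperature difference is depleted by the convective flux J, the Carnot-like power is P = J·ΔT/T, and scanning J shows that power peaks near half the absorbed solar radiation:

```python
R_S = 240.0   # absorbed solar radiation at the surface, W m^-2 (illustrative)
K_R = 2.0     # linearized radiative exchange coefficient, W m^-2 K^-1 (assumed)
T_A = 288.0   # reference atmospheric temperature, K

def power(j):
    """Power extracted by the convective heat engine at flux j. The
    temperature difference shrinks as convection carries more heat away."""
    delta_t = (R_S - j) / K_R         # energy balance sets the gradient
    return j * delta_t / T_A          # Carnot-like efficiency ~ delta_t / T

# Scan the convective flux and locate the maximum-power point.
fluxes = [0.5 * i for i in range(int(R_S / 0.5) + 1)]
j_opt = max(fluxes, key=power)
```

The optimum sits at J = R_S/2: the maximum-power flux is set primarily by surface solar absorption, which is the core of the simple analytic expression described in the abstract.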

  8. Critical Analysis of Different Methods to Retrieve Atmosphere Humidity Profiles from GNSS Radio Occultation Observations

    NASA Astrophysics Data System (ADS)

    Vespe, Francesco; Benedetto, Catia

    2013-04-01

    The huge amount of GPS Radio Occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X etc. has greatly encouraged the search for new algorithms to extract humidity, temperature and pressure profiles of the atmosphere with ever greater precision. Concerning humidity profiles, in recent years two different approaches have been widely tested and applied: the "Simple" and the 1DVAR methods. Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component, and humidity is finally derived from the wet refractivity. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of the covariance matrix. The advantage of Simple methods is that they are not affected by bias due to the background models. We have proposed in the past the BPV approach to retrieve humidity, which can be classified among the Simple methods. The BPV approach works with dry atmospheric CIRA-Q models which depend on latitude, DoY and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a non-linear least-squares fashion, achieved by fitting RO-observed bending angles through the stratosphere. The BPV approach, like all other Simple methods, has the drawback of unphysically producing negative "humidity". We therefore propose to apply a modulated weighting of the fit residuals to minimize the effects of this inconvenience. After a proper tuning of the approach, we plan to present the results of the validation.

  9. A Practical Model for Forecasting New Freshman Enrollment during the Application Period.

    ERIC Educational Resources Information Center

    Paulsen, Michael B.

    1989-01-01

    A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)

  10. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields a straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is a little bit higher. However, also here a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling also a simple treatment of such cases.
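
For a concrete instance of the cyclic schemes discussed above, consider a three-state cycle 1→2→3→1 with forward and backward rate constants (the values below are illustrative). Rather than the arrow scheme itself, this sketch relaxes the master equation to steady state numerically; at steady state the net flux is identical across every transition of the cycle, which is a useful check on any flux equation derived by hand:

```python
# Rate constants k[(i, j)]: transition from state i to state j (illustrative).
k = {(0, 1): 2.0, (1, 2): 3.0, (2, 0): 4.0,   # forward around the cycle
     (1, 0): 1.0, (2, 1): 1.0, (0, 2): 0.5}   # backward

p = [1.0, 0.0, 0.0]  # initial occupation probabilities
dt = 0.01
for _ in range(20000):  # Euler relaxation of the master equation
    dp = [0.0, 0.0, 0.0]
    for (i, j), rate in k.items():
        flow = p[i] * rate * dt
        dp[i] -= flow
        dp[j] += flow
    p = [pi + dpi for pi, dpi in zip(p, dp)]

def net_flux(i, j):
    """Net probability flux from state i to state j at the current occupation."""
    return p[i] * k[(i, j)] - p[j] * k[(j, i)]

j_cycle = net_flux(0, 1)  # the steady-state cycle flux
```

Because the forward rate products dominate the backward ones here, the cycle flux comes out positive; the occupation probabilities p are exactly the quantities the arrow scheme delivers in closed form.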

  11. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields a straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is a little bit higher. However, also here a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling also a simple treatment of such cases. PMID:26646356

  12. Wave‐induced Hydraulic Forces on Submerged Aquatic Plants in Shallow Lakes

    PubMed Central

    SCHUTTEN, J.; DAINTY, J.; DAVY, A. J.

    2004-01-01

    • Background and Aims Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. • Methods Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot‐size descriptors (plan‐form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0·2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). • Key Results Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1·5 and shoot size). • Conclusions The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan‐form area performed similarly well as shoot‐size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species. PMID:14988098
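
The deep-water linear wave theory quantities underlying the model above are straightforward to compute. The sketch below uses generic textbook formulas, with the 1.5 exponent following the force predictor described in the abstract; the function names and parameter values are illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def vertical_orbital_velocity(wave_height, period, depth):
    """Amplitude of the vertical orbital velocity a distance `depth` below the
    surface, for a deep-water linear (Airy) wave of given height and period."""
    wavelength = G * period**2 / (2 * math.pi)  # deep-water dispersion relation
    k = 2 * math.pi / wavelength                # wavenumber
    return (math.pi * wave_height / period) * math.exp(-k * depth)

def force_predictor(wave_height, period, depth, shoot_area):
    """Size-scaled predictor of the hydraulic pulling force: proportional to
    the orbital velocity raised to the power 1.5, times shoot size (see text)."""
    w = vertical_orbital_velocity(wave_height, period, depth)
    return w**1.5 * shoot_area

# A 0.2 m, 1 s wave as used in the flume experiments (depths illustrative).
w_surface = vertical_orbital_velocity(0.2, 1.0, 0.0)
w_deep = vertical_orbital_velocity(0.2, 1.0, 0.5)
```

The exponential decay of orbital velocity with depth is what makes plant submergence depth such a strong control on the measured pulling forces.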

  13. A simple theory of motor protein kinetics and energetics. II.

    PubMed

    Qian, H

    2000-01-10

    A three-state stochastic model of motor protein [Qian, Biophys. Chem. 67 (1997) pp. 263-267] is further developed to illustrate the relationship between the external load on an individual motor protein in aqueous solution with various ATP concentrations and its steady-state velocity. A wide variety of dynamic motor behaviors is obtained from this simple model. For the particular case of free-load translocation being the most unfavorable step within the hydrolysis cycle, the load-velocity curve is quasi-linear, V/Vmax = (c^(F/Fmax) - c)/(1 - c), in contrast to the hyperbolic relationship proposed by A.V. Hill for macroscopic muscle. Significant deviation from the linearity is expected when the velocity is less than 10% of its maximal (free-load) value--a situation under which the processivity of motor diminishes and experimental observations are less certain. We then investigate the dependence of load-velocity curve on ATP (ADP) concentration. It is shown that the free load Vmax exhibits a Michaelis-Menten like behavior, and the isometric Fmax increases linearly with ln([ATP]/[ADP]). However, the quasi-linear region is independent of the ATP concentration, yielding an apparently ATP-independent maximal force below the true isometric force. Finally, the heat production as a function of ATP concentration and external load are calculated. In simple terms and solved with elementary algebra, the present model provides an integrated picture of biochemical kinetics and mechanical energetics of motor proteins.
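
The quasi-linear load-velocity relation quoted above, V/Vmax = (c^(F/Fmax) - c)/(1 - c), is simple to evaluate. A minimal sketch, with illustrative values for c and Fmax:

```python
def relative_velocity(force, f_max, c):
    """Quasi-linear load-velocity relation: V/Vmax = (c**(F/Fmax) - c) / (1 - c)."""
    return (c ** (force / f_max) - c) / (1.0 - c)

F_MAX, C = 5.0, 0.1  # illustrative isometric force and shape parameter
loads = [i * F_MAX / 10 for i in range(11)]
velocities = [relative_velocity(f, F_MAX, C) for f in loads]
```

At zero load the expression gives V/Vmax = 1, at the isometric force it gives 0, and in between the curve stays close to a straight line, which is the quasi-linearity the abstract describes.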

  14. Low-Velocity Impact Response of Sandwich Beams with Functionally Graded Core

    NASA Technical Reports Server (NTRS)

    Apetre, N. A.; Sankar, B. V.; Ambur, D. R.

    2006-01-01

    The problem of low-speed impact of a one-dimensional sandwich panel by a rigid cylindrical projectile is considered. The core of the sandwich panel is functionally graded such that the density, and hence its stiffness, vary through the thickness. The problem is a combination of static contact problem and dynamic response of the sandwich panel obtained via a simple nonlinear spring-mass model (quasi-static approximation). The variation of core Young's modulus is represented by a polynomial in the thickness coordinate, but the Poisson's ratio is kept constant. The two-dimensional elasticity equations for the plane sandwich structure are solved using a combination of Fourier series and Galerkin method. The contact problem is solved using the assumed contact stress distribution method. For the impact problem we used a simple dynamic model based on quasi-static behavior of the panel - the sandwich beam was modeled as a combination of two springs, a linear spring to account for the global deflection and a nonlinear spring to represent the local indentation effects. Results indicate that the contact stiffness of the beam with graded core increases, causing the contact stresses and other stress components in the vicinity of contact to increase. However, the values of maximum strains corresponding to the maximum impact load are reduced considerably due to grading of the core properties. For a better comparison, the thickness of the functionally graded cores was chosen such that the flexural stiffness was equal to that of a beam with homogeneous core. The results indicate that functionally graded cores can be used effectively to mitigate or completely prevent impact damage in sandwich composites.
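
The two-spring quasi-static idealization described above can be sketched numerically. In this toy version, all parameter values are illustrative and a Hertz-type law F = k_c·α^1.5 stands in for the nonlinear local indentation spring: the two springs act in series, so for a given total deflection the contact force is found by bisection, and the projectile motion is integrated until rebound begins:

```python
def contact_force(x, k_global, k_contact):
    """Contact force for a linear global spring (deflection F/k_global) in
    series with a Hertz-type local spring (indentation (F/k_contact)**(2/3)),
    solved for a given total deflection x by bisection."""
    if x <= 0.0:
        return 0.0
    lo, hi = 0.0, 1.0e6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid / k_global + (mid / k_contact) ** (2.0 / 3.0) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: 1 kg impactor striking at 1 m/s.
M, KG, KC, V0, DT = 1.0, 1.0e5, 1.0e8, 1.0, 1.0e-6

x, v, work, f_peak = 0.0, V0, 0.0, 0.0
while v > 0.0:               # integrate up to maximum indentation
    f = contact_force(x, KG, KC)
    f_peak = max(f_peak, f)
    v -= f / M * DT          # semi-implicit Euler update of velocity
    x += v * DT              # ... then position
    work += f * v * DT       # work stored elastically in the two springs
```

At maximum indentation the kinetic energy of the impactor has been converted almost entirely into elastic energy in the two springs, which the accumulated work integral confirms.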

  15. Stress analysis method for clearance-fit joints with bearing-bypass loads

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1989-01-01

    Within a multi-fastener joint, fastener holes may be subjected to the combined effects of bearing loads and loads that bypass the hole to be reacted elsewhere in the joint. The analysis of a joint subjected to such combined bearing and bypass loads is complicated by the usual clearance between the hole and the fastener. A simple analysis method for such clearance-fit joints subjected to bearing-bypass loading has been developed in the present study. It uses an inverse formulation with a linear elastic finite-element analysis. Conditions along the bolt-hole contact arc are specified by displacement constraint equations. The present method is simple to apply and can be implemented with most general purpose finite-element programs since it does not use complicated iterative-incremental procedures. The method was used to study the effects of bearing-bypass loading on bolt-hole contact angles and local stresses. In this study, a rigid, frictionless bolt was used with a plate having the properties of a quasi-isotropic graphite/epoxy laminate. Results showed that the contact angle as well as the peak stresses around the hole and their locations were strongly influenced by the ratio of bearing and bypass loads. For single contact, tension and compression bearing-bypass loading had opposite effects on the contact angle. For some compressive bearing-bypass loads, the hole tended to close on the fastener leading to dual contact. It was shown that dual contact reduces the stress concentration at the fastener and would, therefore, increase joint strength in compression. The results illustrate the general importance of accounting for bolt-hole clearance and contact to accurately compute local bolt-hole stresses for combined bearing and bypass loading.

  16. A methodology for physically based rockfall hazard assessment

    NASA Astrophysics Data System (ADS)

    Crosta, G. B.; Agliardi, F.

    Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at regional and local scales, both along linear features and within exposed areas. An objective approach based on three-dimensional matrixes providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrixes has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.

  17. Combination of dynamic Bayesian network classifiers for the recognition of degraded characters

    NASA Astrophysics Data System (ADS)

    Likforman-Sulem, Laurence; Sigelle, Marc

    2009-01-01

    We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows, respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers (independent, coupled, or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.
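
    The decision-level fusion described above can be sketched as a weighted sum of per-class scores. A minimal illustration, assuming each classifier emits a score per class label (the labels, scores and weights below are invented for the example, not taken from the paper):

```python
def combine_scores(score_dicts, weights):
    """Linearly combine per-class scores from several classifiers."""
    assert len(score_dicts) == len(weights)
    combined = {}
    for scores, w in zip(score_dicts, weights):
        for label, s in scores.items():
            combined[label] = combined.get(label, 0.0) + w * s
    return combined

def decide(score_dicts, weights):
    """Pick the class with the highest combined score."""
    combined = combine_scores(score_dicts, weights)
    return max(combined, key=combined.get)

# e.g. a vertical HMM and a horizontal HMM scoring the same character image
vertical = {"a": 0.6, "o": 0.4}
horizontal = {"a": 0.3, "o": 0.7}
print(decide([vertical, horizontal], weights=[0.7, 0.3]))  # -> a
```

    The weights would normally be tuned on a validation set; here they simply favor the vertical classifier.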

  18. A simple approach to lifetime learning in genetic programming-based symbolic regression.

    PubMed

    Azad, Raja Muhammad Atif; Ryan, Conor

    2014-01-01

    Genetic programming (GP) coarsely models natural evolution to evolve computer programs. Unlike in nature, where individuals can often improve their fitness through lifetime experience, the fitness of GP individuals generally does not change during their lifetime, and there is usually no opportunity to pass on acquired knowledge. This paper introduces the Chameleon system to address this discrepancy and augment GP with lifetime learning by adding a simple local search that operates by tuning the internal nodes of individuals. Although not the first attempt to combine local search with GP, its simplicity means that it is easy to understand and cheap to implement. A simple cache is added which leverages the local search to reduce the tuning cost to a small fraction of the expected cost. We provide a theoretical upper limit on the maximum tuning expense and show that this limit grows very conservatively as the average tree size of the population increases. We show that Chameleon uses available genetic material more efficiently by exploring more actively than standard GP. Not only does Chameleon outperform standard GP (on both training and test data) on a number of symbolic regression problems, it does so while producing smaller individuals, and it works harmoniously with two other well-known extensions to GP, namely linear scaling and a diversity-promoting tournament selection method.
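
    The lifetime-learning idea, a local search over an individual's numeric constants with a cache to avoid re-evaluating fitness, can be sketched on a toy model. This is an illustrative hill-climb over two constants, not the Chameleon implementation; all names and values are hypothetical:

```python
# target relationship for the toy symbolic-regression fitness: y = 3x + 1
data = [(x, 3.0 * x + 1.0) for x in range(10)]

cache = {}
def fitness(a, b):
    """Sum of squared errors of the model a*x + b, memoized in a cache."""
    key = (round(a, 6), round(b, 6))
    if key not in cache:              # only pay for unseen constant pairs
        cache[key] = sum((a * x + b - y) ** 2 for x, y in data)
    return cache[key]

def tune(a, b, step=0.5, iters=200):
    """Hill-climb the two 'internal node' constants a and b."""
    for _ in range(iters):
        best = (a, b)
        for da in (-step, 0.0, step):
            for db in (-step, 0.0, step):
                if fitness(a + da, b + db) < fitness(*best):
                    best = (a + da, b + db)
        if best == (a, b):
            step /= 2.0               # no neighbor improved: refine locally
        a, b = best
    return a, b

a, b = tune(0.0, 0.0)
```

    The cache dict plays the role of Chameleon's tuning cache: repeated neighbor evaluations during the climb hit the memo instead of re-running the fitness computation.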

  19. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicate that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  20. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  1. Transportable Maps Software. Volume I.

    DTIC Science & Technology

    1982-07-01

    being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the...flow is either simple sequential , simple conditional (the equivalent of ’if-then-else’), simple iteration (’DO-loop’), or the non-linear recursion...input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records

  2. Development and Validation of Chemometric Spectrophotometric Methods for Simultaneous Determination of Simvastatin and Nicotinic Acid in Binary Combinations.

    PubMed

    Alahmad, Shoeb; Elfatatry, Hamed M; Mabrouk, Mokhtar M; Hammad, Sherin F; Mansour, Fotouh R

    2018-01-01

    The development and introduction of combined therapies present a challenge for analysis, due to severe overlap of the components' UV spectra in spectroscopy or the need for long, tedious, and costly separation techniques in chromatography. Quality control laboratories have to develop and validate suitable analytical procedures to assay such multi-component preparations. New spectrophotometric methods for the simultaneous determination of simvastatin (SIM) and nicotinic acid (NIA) in binary combinations were developed. These methods are based on chemometric treatment of the data; the applied techniques are multivariate methods including classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS). In these techniques, the concentration data matrix was prepared using synthetic mixtures containing SIM and NIA dissolved in ethanol. The absorbance data matrix corresponding to the concentration data matrix was obtained by measuring the absorbance at 12 wavelengths in the range 216-240 nm at 2 nm intervals in the zero-order spectrum. The spectrophotometric procedures do not require any separation step. The accuracy, precision and linearity ranges of the methods were determined and validated by analyzing synthetic mixtures containing the studied drugs. The methods were developed for the simultaneous determination of simvastatin and nicotinic acid in their synthetic binary mixtures and in their mixtures with possible excipients present in the tablet dosage form. The validation was performed successfully. The developed methods have been shown to be accurate, linear, precise, and simple, and can be used routinely for the determination of the dosage form.
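
    Of the three multivariate techniques mentioned, classical least squares (CLS) is simple enough to sketch directly: under Beer's law the mixture spectrum is a linear combination of the pure-component spectra, and the concentrations follow from the normal equations. The spectra and concentrations below are synthetic illustrative numbers, not data from the study:

```python
def solve2(m, v):
    """Solve a 2x2 linear system m @ x = v by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def cls_predict(k1, k2, absorbance):
    """Least-squares concentrations given pure-component spectra k1, k2."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    # normal equations: [k1.k1 k1.k2; k2.k1 k2.k2] c = [k1.a; k2.a]
    m = [[dot(k1, k1), dot(k1, k2)], [dot(k2, k1), dot(k2, k2)]]
    v = [dot(k1, absorbance), dot(k2, absorbance)]
    return solve2(m, v)

k_sim = [0.9, 0.7, 0.4, 0.2]   # pure-component spectrum 1 (illustrative)
k_nia = [0.1, 0.3, 0.6, 0.8]   # pure-component spectrum 2 (illustrative)
mix = [0.5 * a + 2.0 * b for a, b in zip(k_sim, k_nia)]  # true c = (0.5, 2.0)
c1, c2 = cls_predict(k_sim, k_nia, mix)
```

    With noiseless, linearly independent spectra the recovered concentrations are exact; PCR and PLS extend the same idea to collinear, noisy spectra.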

  3. Dispersive liquid-liquid microextraction followed by high-performance liquid chromatography-ultraviolet detection to determination of opium alkaloids in human plasma.

    PubMed

    Ahmadi-Jouibari, Toraj; Fattahi, Nazir; Shamsipur, Mojtaba; Pirsaheb, Meghdad

    2013-11-01

    A novel, simple, rapid and sensitive dispersive liquid-liquid microextraction method based on the solidification of a floating organic drop (DLLME-SFO) combined with high-performance liquid chromatography-ultraviolet detection (HPLC-UV) was used to determine opium alkaloids in human plasma. During the extraction procedure, plasma protein was precipitated using a mixture of zinc sulfate solution and acetonitrile. Several parameters affecting extraction were studied and optimized. Under the optimum conditions (extraction solvent: 30.0 μl 1-undecanol; disperser solvent: 470 μl acetone; pH: 9; salt addition: 1% (w/v) NaCl; and extraction time: 0.5 min), calibration curves are linear in the range of 1.5-1000 μg/L and limits of detection (LODs) are in the range of 0.5-5 μg/L. The relative standard deviations (RSDs) for 100 μg/L of morphine and codeine, 10.0 μg/L of papaverine and 20.0 μg/L of noscapine in diluted human plasma are in the range of 4.3-7.4% (n=5). Finally, the method was successfully applied to the determination of opium alkaloids in actual human plasma samples. The relative recoveries of plasma samples spiked with alkaloids are 88-110.5%. The obtained results show that DLLME-SFO combined with HPLC-UV is a fast and simple method for the determination of opium alkaloids in human plasma.
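
    The reported calibration linearity and LODs follow the usual pattern of fitting a straight calibration line and converting the residual scatter into a detection limit, here sketched with the common 3.3·s/slope convention; the data points are synthetic, not from this work:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

conc = [1.5, 10.0, 100.0, 500.0, 1000.0]   # μg/L (illustrative standards)
area = [0.16, 1.02, 9.98, 50.1, 99.9]      # detector response (synthetic)
slope, intercept = linfit(conc, area)
resid = [y - (slope * x + intercept) for x, y in zip(conc, area)]
s = math.sqrt(sum(r * r for r in resid) / (len(conc) - 2))
lod = 3.3 * s / slope                      # detection limit estimate, μg/L
```

    Other LOD conventions (signal-to-noise, blank-based) would change the constant but not the structure of the calculation.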

  4. Environmental controls on alpine cirque size

    NASA Astrophysics Data System (ADS)

    Delmas, Magali; Gunnell, Yanni; Calvet, Marc

    2014-02-01

    Pleistocene alpine cirques are emblematic landforms of mountain scenery, yet their deceptively simple template conceals complex controlling variables. This comparative study presents a new database of 1071 cirques, the largest of its kind, located in the French eastern Pyrenees. It is embedded in a review of previous work on cirque morphometry and thus provides a perspective on a global scale. First-order cirque attributes of length, width, and amplitude were measured, and their power as predictors of climatic and lithological variables and as proxies for the duration of glacier activity was tested using ANOVA, simple and multiple linear regression, and their various post-hoc tests. Conventional variables such as cirque aspect, floor elevation, and exposure with respect to regional precipitation-bearing weather systems are shown to present some consistency in spatial patterns determined by solar radiation, the morning-afternoon effect, and wind-blown snow accumulation in the lee of ridgetops. This confirms in greater detail the previously encountered links between landforms and climate. A special focus on the influence of bedrock lithology, a previously neglected nonclimatic variable, highlights the potential for spurious relations in the use of cirque size as a proxy of past environmental conditions. Cirques are showcased as complex landforms resulting from the combination of many climatic and nonclimatic variables that remain difficult to rank by order of importance. Apart from a few statistically weak trends, several combinations of different factors in different proportions are shown to produce similar morphometric outcomes, suggesting a case of equifinality in landform development.

  5. Simple, explicitly time-dependent, and regular solutions of the linearized vacuum Einstein equations in Bondi-Sachs coordinates

    NASA Astrophysics Data System (ADS)

    Mädler, Thomas

    2013-05-01

    Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.

  6. Polarized radiance distribution measurements of skylight. I. System description and characterization.

    PubMed

    Voss, K J; Liu, Y

    1997-08-20

    A new system to measure the natural skylight polarized radiance distribution has been developed. The system is based on a fish-eye lens, a CCD camera system, and a filter changer. With this system, sequences of images can be combined to determine the linear polarization components of the incident light field. Calibration steps to determine the system's polarization characteristics are described. Comparisons of the radiance measurements of this system and a simple pointing radiometer were made in the field and agreed within 10% for measurements at 560 and 670 nm and 25% at 860 nm. Polarization tests were done in the laboratory. The accuracy of the intensity measurements is estimated to be 10%, while the accuracy of measurements of elements of the Mueller matrix is estimated to be 2%.
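
    The reduction from a sequence of polarizer images to linear polarization components can be illustrated with the textbook three-angle Stokes formulas; the actual instrument calibration described in the paper is more involved. The intensities below are for an idealized, fully polarized input:

```python
import math

def linear_stokes(i0, i45, i90):
    """Linear Stokes parameters from intensities behind a polarizer
    oriented at 0, 45 and 90 degrees (standard textbook reduction)."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    return s0, s1, s2

def degree_of_linear_polarization(i0, i45, i90):
    s0, s1, s2 = linear_stokes(i0, i45, i90)
    return math.hypot(s1, s2) / s0

# fully polarized light along 0 degrees: I(theta) = cos^2(theta)
dolp = degree_of_linear_polarization(1.0, 0.5, 0.0)
```

    Applied per pixel across the fish-eye images, the same arithmetic yields polarization maps over the whole sky dome.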

  7. High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation

    NASA Astrophysics Data System (ADS)

    Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.

    2017-04-01

    In this work we present an FCT-like Maximum-Principle Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider up to 5th order spaces in two and three dimensions and 23rd order spaces in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.

  8. On simulating flow with multiple time scales using a method of averages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L.G.

    1997-12-31

    The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather, he combines low-order and high-order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
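
    The low-order/high-order pairing in a predictor-corrector framework can be illustrated with the simplest classical example, an Euler predictor followed by a trapezoidal corrector (Heun's method). This stands in for the idea only; it is not the author's averaging scheme:

```python
def heun(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 in n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        pred = y + h * f(t, y)                         # low-order predictor
        y = y + 0.5 * h * (f(t, y) + f(t + h, pred))   # high-order corrector
        t += h
    return y

# dy/dt = -y with y(0) = 1; exact solution at t = 1 is e^{-1}
approx = heun(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```

    The cheap predictor supplies the end-of-step estimate that makes the higher-order corrector explicit, the same division of labor the abstract describes.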

  9. Autonomous propulsion of nanorods trapped in an acoustic field

    NASA Astrophysics Data System (ADS)

    Sader, John; Collis, Jesse; Chakraborty, Debadi

    2017-11-01

    Recent measurements demonstrate that nanorods trapped in acoustic fields generate autonomous propulsion, with their direction and speed controlled by both the particle's shape and density distribution. In this talk, we investigate the physical mechanisms underlying this combined density/shape induced phenomenon by developing a simple yet rigorous mathematical framework for arbitrary axisymmetric particles. This only requires solution of the (linear) unsteady Stokes equations. Geometric and density asymmetries in the particle generate axial jets that can produce motion in either direction. Strikingly, the propulsion direction is found to reverse with increasing frequency, an effect that is yet to be reported experimentally. The general theory and mechanism described here enable the a priori design and fabrication of nano-motors in fluid for transport of small-scale payloads and robotic applications.

  10. Particle-in-a-box model of one-dimensional excitons in conjugated polymers

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas G.; Johansen, Per M.; Pedersen, Henrik C.

    2000-04-01

    A simple two-particle model of excitons in conjugated polymers is proposed as an alternative to usual highly computationally demanding quantum chemical methods. In the two-particle model, the exciton is described as an electron-hole pair interacting via Coulomb forces and confined to the polymer backbone by rigid walls. Furthermore, by integrating out the transverse part, the two-particle equation is reduced to one-dimensional form. It is demonstrated how essentially exact solutions are obtained in the cases of short and long conjugation length, respectively. From a linear combination of these cases an approximate solution for the general case is obtained. As an application of the model the influence of a static electric field on the electron-hole overlap integral and exciton energy is considered.

  11. Cylinders out of a top hat: counts-in-cells for projected densities

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields analytical predictions, accurate to a few per cent, for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders, and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  12. Single gate p-n junctions in graphene-ferroelectric devices

    NASA Astrophysics Data System (ADS)

    Hinnefeld, J. Henry; Xu, Ruijuan; Rogers, Steven; Pandya, Shishir; Shim, Moonsub; Martin, Lane W.; Mason, Nadya

    2016-05-01

    Graphene's linear dispersion relation and the attendant implications for bipolar electronics applications have motivated a range of experimental efforts aimed at producing p-n junctions in graphene. Here we report electrical transport measurements of graphene p-n junctions formed via simple modifications to a PbZr0.2Ti0.8O3 substrate, combined with a self-assembled layer of ambient environmental dopants. We show that the substrate configuration controls the local doping region, and that the p-n junction behavior can be controlled with a single gate. Finally, we show that the ferroelectric substrate induces a hysteresis in the environmental doping which can be utilized to activate and deactivate the doping, yielding an "on-demand" p-n junction in graphene controlled by a single, universal backgate.

  13. Remote secure observing for the Faulkes Telescopes

    NASA Astrophysics Data System (ADS)

    Smith, Robert J.; Steele, Iain A.; Marchant, Jonathan M.; Fraser, Stephen N.; Mucke-Herzberg, Dorothea

    2004-09-01

    Since the Faulkes Telescopes are to be used by a wide variety of audiences, both a powerful engineering-level interface and simple graphical interfaces exist, giving complete remote and robotic control of the telescope over the internet. Security is extremely important to protect the health of both humans and equipment. Data integrity must also be carefully guarded for images being delivered directly into the classroom. The adopted network architecture is described along with the variety of security and intrusion detection software. We use a combination of SSL, proxies, IPSec, and both Linux iptables and Cisco IOS firewalls to ensure only authenticated and safe commands are sent to the telescopes. With an eye to a possible future global network of robotic telescopes, the system implemented is capable of scaling linearly to any moderate (of order ten) number of telescopes.

  14. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

    In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
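
    An Eyring-type stress dependence of the local relaxation times can be sketched in one common form, tau(sigma) = tau0 · x/sinh(x) with x = sigma·V/(kB·T), so that stress accelerates relaxation. The parameter values below are illustrative, not those of the paper:

```python
import math

def eyring_tau(sigma, tau0, activation_volume, kT):
    """Stress-dependent relaxation time, tau0 * x / sinh(x),
    with x = sigma * activation_volume / kT (a common Eyring form)."""
    x = sigma * activation_volume / kT
    if x == 0.0:
        return tau0                 # x / sinh(x) -> 1 as x -> 0
    return tau0 * x / math.sinh(x)

# illustrative values: kT in joules, activation volume in m^3
kT = 4.1e-21
v = 5e-27
tau_rest = eyring_tau(0.0, 100.0, v, kT)     # zero-stress relaxation time
tau_loaded = eyring_tau(2e6, 100.0, v, kT)   # shorter under 2 MPa
```

    Applying such a law locally to a broad distribution of tau0 values is what couples the mechanical fields to the dynamical heterogeneities in models of this kind.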

  15. Development of spatially diverse and complex dune-field patterns: Gran Desierto Dune Field, Sonora, Mexico

    USGS Publications Warehouse

    Beveridge, C.; Kocurek, G.; Ewing, R.C.; Lancaster, N.; Morthekai, P.; Singhvi, A.K.; Mahan, S.A.

    2006-01-01

    The pattern of dunes within the Gran Desierto of Sonora, Mexico, is both spatially diverse and complex. Identification of the pattern components from remote-sensing images, combined with statistical analysis of their measured parameters, demonstrates that the composite pattern consists of separate populations of simple dune patterns. Age-bracketing by optically stimulated luminescence (OSL) indicates that the simple patterns represent relatively short-lived aeolian constructional events since ~25 ka. The simple dune patterns consist of: (i) late Pleistocene relict linear dunes; (ii) degraded crescentic dunes formed at ~12 ka; (iii) early Holocene western crescentic dunes; (iv) eastern crescentic dunes emplaced at ~7 ka; and (v) star dunes formed during the last 3 ka. Recognition of the simple patterns and their ages allows for the geomorphic backstripping of the composite pattern. Palaeowind reconstructions, based upon the rule of gross bedform-normal transport, are largely in agreement with regional proxy data. The sediment state over time for the Gran Desierto is one in which the sediment supply for aeolian constructional events is derived from previously stored sediment (Ancestral Colorado River sediment), and contemporaneous influx from the lower Colorado River valley and coastal influx from the Bahia del Adair inlet. Aeolian constructional events are triggered by climatic shifts to greater aridity, changes in the wind regime, and the development of a sediment supply. The rate of geomorphic change within the Gran Desierto is significantly greater than the rate of subsidence and burial of the accumulation surface upon which it rests. © 2006 The Authors. Journal compilation © 2006 International Association of Sedimentologists.

  16. Monitoring and evaluating the quality consistency of Compound Bismuth Aluminate tablets by a simple quantified ratio fingerprint method combined with simultaneous determination of five compounds and correlated with antioxidant activities.

    PubMed

    Liu, Yingchun; Liu, Zhongbo; Sun, Guoxiang; Wang, Yan; Ling, Junhong; Gao, Jiayue; Huang, Jiahao

    2015-01-01

    A combination method of multi-wavelength fingerprinting and multi-component quantification by high performance liquid chromatography (HPLC) coupled with diode array detector (DAD) was developed and validated to monitor and evaluate the quality consistency of herbal medicines (HM) in the classical preparation Compound Bismuth Aluminate tablets (CBAT). The validation results demonstrated that our method met the requirements of fingerprint analysis and quantification analysis with suitable linearity, precision, accuracy, limits of detection (LOD) and limits of quantification (LOQ). In the fingerprint assessments, rather than using conventional qualitative "Similarity" as a criterion, the simple quantified ratio fingerprint method (SQRFM) was recommended, which has an important quantified fingerprint advantage over the "Similarity" approach. SQRFM qualitatively and quantitatively offers the scientific criteria for traditional Chinese medicines (TCM)/HM quality pyramid and warning gate in terms of three parameters. In order to combine the comprehensive characterization of multi-wavelength fingerprints, an integrated fingerprint assessment strategy based on information entropy was set up involving a super-information characteristic digitized parameter of fingerprints, which reveals the total entropy value and absolute information amount about the fingerprints and, thus, offers an excellent method for fingerprint integration. The correlation results between quantified fingerprints and quantitative determination of 5 marker compounds, including glycyrrhizic acid (GLY), liquiritin (LQ), isoliquiritigenin (ILG), isoliquiritin (ILQ) and isoliquiritin apioside (ILA), indicated that multi-component quantification could be replaced by quantified fingerprints. The Fenton reaction was employed to determine the antioxidant activities of CBAT samples in vitro, and they were correlated with HPLC fingerprint components using the partial least squares regression (PLSR) method. 
In summary, the method of multi-wavelength fingerprints combined with antioxidant activities has been proved to be a feasible and scientific procedure for monitoring and evaluating the quality consistency of CBAT.

  17. A simple derivation for amplitude and time period of charged particles in an electrostatic bathtub potential

    NASA Astrophysics Data System (ADS)

    Prathap Reddy, K.

    2016-11-01

    An ‘electrostatic bathtub potential’ is defined and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis appears like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; these stored ions make oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear ‘k1|x|’ potential makes the simple harmonic potential curve steeper with a sharper turn at the origin, while the introduction of a finite-length zero potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
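
    The bathtub construction, a flat central region joined to walls with harmonic and linear terms, is easy to reproduce numerically. The sketch below integrates the motion with velocity Verlet and measures the period from zero crossings; all parameters are illustrative, and for k1 = 0 and flat-region half-width a the period reduces to the analytic 4a/v0 + 2π·sqrt(m/k):

```python
import math

def accel(x, m=1.0, k=1.0, k1=0.0, a=1.0):
    """Acceleration in a 'bathtub' potential: zero force for |x| <= a,
    harmonic (k) plus linear (k1) restoring walls outside."""
    if abs(x) <= a:
        return 0.0
    s = math.copysign(1.0, x)
    d = abs(x) - a
    return -s * (k * d + k1) / m

def period(E, m=1.0, k=1.0, k1=0.0, a=1.0, dt=1e-4):
    """Oscillation period at energy E via velocity-Verlet integration:
    start at x = 0 moving right; the period is the time until the next
    rightward crossing of x = 0."""
    x = 0.0
    v = math.sqrt(2.0 * E / m)
    ax = accel(x, m, k, k1, a)
    t = 0.0
    prev_x = x
    while True:
        x += v * dt + 0.5 * ax * dt * dt
        a_new = accel(x, m, k, k1, a)
        v += 0.5 * (ax + a_new) * dt
        ax = a_new
        t += dt
        if prev_x < 0.0 <= x and v > 0.0:
            return t
        prev_x = x

T = period(0.5)  # E = 0.5 gives v0 = 1 for m = 1
```

    With the defaults (k1 = 0, a = 1, v0 = 1) the analytic period is 4 + 2π ≈ 10.28, which the simulation reproduces; a nonzero k1 sharpens the walls and shortens the period.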

  18. Linearization of the Bradford protein assay.

    PubMed

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
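
    The ratiometric linearization amounts to fitting A590/A450 against protein concentration with a straight line. The numbers below are synthetic, constructed to be exactly linear, purely to illustrate the fit:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

protein_ug = [0.05, 0.5, 1.0, 2.0, 5.0]          # synthetic standards
a450 = [0.50, 0.48, 0.45, 0.40, 0.30]            # synthetic absorbances
# a590 built so the ratio is exactly 0.1 + 0.45 * protein
a590 = [(0.1 + 0.45 * p) * b for p, b in zip(protein_ug, a450)]

ratio = [x / y for x, y in zip(a590, a450)]      # the linearized signal
slope, intercept = linfit(protein_ug, ratio)
```

    In practice the A590-only signal would curve at both ends of this range, while the ratio stays on the fitted line; an unknown sample's concentration is then (ratio - intercept) / slope.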

  19. Quantum monodromy and quantum phase transitions in floppy molecules

    NASA Astrophysics Data System (ADS)

    Larese, Danielle

    2012-10-01

    A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other "floppy" (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analyzing the spectroscopic signatures of ground state QPT, excited state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with tri- and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH3NCO and GeH3NCO. Extraction of potential functions is completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.

  20. Ball-morph: definition, implementation, and comparative evaluation.

    PubMed

    Whited, Brian; Rossignac, Jaroslaw Jarek

    2011-06-01

    We define b-compatibility for planar curves and propose three ball morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves with the same angle, which is a right angle for the circular and parabolic. We provide simple constructions for these ball-morphs and compare them to each other and other simple morphs (linear-interpolation, closest-projection, curvature-interpolation, Laplace-blending, and heat-propagation) using six cost measures (travel-distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph has consistently the shortest travel-distance and the circular ball-morph has the least amount of distortion.
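
    The linear ball-morph's vertex trajectories are straightforward once a correspondence is fixed: each matched vertex pair is interpolated along a straight segment. Here paired lists stand in for the ball-map correspondence, which the paper computes automatically:

```python
def linear_morph(src, dst, t):
    """Intermediate curve at time t in [0, 1]: each vertex travels on a
    straight segment between its matched positions on src and dst."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(src, dst)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
wide = [(0, 0), (2, 0), (2, 1), (0, 1)]
mid = linear_morph(square, wide, 0.5)
```

    The circular and parabolic variants replace the straight segment with an arc or a parabola through the same endpoints, which is what lets those morphs meet both curves at right angles.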

  1. A cooperation and competition based simple cell receptive field model and study of feed-forward linear and nonlinear contributions to orientation selectivity.

    PubMed

    Bhaumik, Basabi; Mathur, Mona

    2003-01-01

    We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN, and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of hwhh (half width at half the height of the maximum response) in simulated cortical cells is 58 degrees when only the linear excitatory contribution from the LGN is considered. We observe a mean improvement of 22.8 degrees in the tuning response due to nonlinear spiking mechanisms that include the effects of threshold voltage and the synaptic scaling factor.

  2. A new standing-wave-type linear ultrasonic motor based on in-plane modes.

    PubMed

    Shi, Yunlai; Zhao, Chunsheng

    2011-05-01

    This paper presents a new standing-wave-type linear ultrasonic motor that combines the first longitudinal and the second bending modes. Two piezoelectric plates combined with a thin metal plate form the stator. A distinctive feature of the stator is its isosceles triangular section, which amplifies the stator's horizontal displacement in the perpendicular direction when the stator operates in the first longitudinal mode. The influence of the base angle θ of the triangular section on the amplitude of the driving foot was analyzed numerically. Four prototype stators with different angles θ were fabricated, and experimental investigation of these stators validated the numerical simulation. The overall dimensions of the prototype stators are no more than 40 mm (length) × 20 mm (width) × 5 mm (thickness). Driven by an AC signal at 53.3 kHz, the prototype motor using the stator with a 20° base angle achieved a no-load speed of 98 mm/s and a maximal thrust of 3.2 N. The effective elliptical motion trajectory of the stator's contact point is achieved by the isosceles triangular section using only two PZTs, which makes the motor low-cost to fabricate, simple in structure, and easy to miniaturize. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. Gynecomastia: glandular-liposculpture through a single transaxillary one hole incision.

    PubMed

    Lee, Yung Ki; Lee, Jun Hee; Kang, Sang Yoon

    2018-04-01

    Gynecomastia is characterized by the benign proliferation of breast tissue in men. Herein, we present a new method for the treatment of gynecomastia, using ultrasound-assisted liposuction with both conventional and reverse-cutting edge tip cannulas in combination with a pull-through lipectomy technique with pituitary forceps through a single transaxillary incision. Thirty patients were treated with this technique at the author's institution from January 2010 to January 2015. Ten patients had been treated before January 2010 with conventional surgical excision of the glandular/fibrous breast tissue combined with liposuction through a periareolar incision. Medical records, clinical photographs, and linear analog scale scores were analyzed to compare surgical results and complications. Patients rated their own cosmetic outcomes on a linear analog scale; the overall mean score indicated a good or high level of satisfaction. There were no instances of skin necrosis, hematoma, infection, or scar contracture; however, one case each of seroma and nipple inversion did occur. Operative time was reduced overall with the new technique, since it is relatively simple and straightforward. According to the evaluation by four independent researchers, patients treated with this new technique showed statistically significant improvements in scar and nipple-areolar complex (NAC) deformity compared to those treated with the conventional method. Glandular liposculpture through a single transaxillary incision is an efficient and safe technique that can provide aesthetically satisfying and consistent results.

  4. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes clearly demonstrate the reliability of this method compared to one based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
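    The quadratic objectives mentioned in the abstract can be evaluated without diagonalizing the system matrix, which is what makes such methods robust for defective systems. As an illustrative sketch (not the report's Padé-based algorithm), the infinite-horizon cost J = ∫ xᵀQx dt for a stable system ẋ = Ax can be obtained from a Lyapunov equation, which stays well posed even when A is defective; the example matrices below are made up:

    ```python
    import numpy as np

    def quadratic_cost(A, Q, x0):
        """Evaluate J = integral of x'Qx dt for xdot = A x, x(0) = x0.

        Solves the Lyapunov equation A'P + PA = -Q by Kronecker vectorisation,
        so no eigendecomposition of A is needed; this remains well defined for
        defective (non-diagonalisable) A.
        """
        n = A.shape[0]
        I = np.eye(n)
        M = np.kron(I, A.T) + np.kron(A.T, I)
        P = np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)
        return float(x0 @ P @ x0), P

    # Defective example: a Jordan block with repeated eigenvalue -1.
    A = np.array([[-1.0, 1.0], [0.0, -1.0]])
    Q = np.eye(2)
    J, P = quadratic_cost(A, Q, np.array([1.0, 0.0]))
    ```

    For this Jordan block the solution can be checked by hand: P = [[1/2, 1/4], [1/4, 3/4]], so J = 1/2 for the given initial condition.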

  5. Linear thermal circulator based on Coriolis forces.

    PubMed

    Li, Huanan; Kottos, Tsampikos

    2015-02-01

    We show that the presence of a Coriolis force in a rotating linear lattice imposes a nonreciprocal propagation of the phononic heat carriers. Using this effect we propose the concept of Coriolis linear thermal circulator which can control the circulation of a heat current. A simple model of three coupled harmonic masses on a rotating platform permits us to demonstrate giant circulating rectification effects for moderate values of the angular velocities of the platform.

  6. The Evolution of El Nino-Precipitation Relationships from Satellites and Gauges

    NASA Technical Reports Server (NTRS)

    Curtis, Scott; Adler, Robert F.; Starr, David OC (Technical Monitor)

    2002-01-01

    This study uses a twenty-three-year (1979-2001) satellite-gauge merged community data set to further describe the relationship between the El Nino Southern Oscillation (ENSO) and precipitation. The globally complete precipitation fields reveal coherent bands of anomalies that extend from the tropics to the polar regions. ENSO-precipitation relationships were also analyzed during the six strongest El Ninos from 1979 to 2001. Seasons of evolution (pre-onset, onset, peak, decay, and post-decay) were identified based on the strength of the El Nino. Two simple and independent models, a first-order harmonic and a linear trend, were then fit to the monthly time series of normalized precipitation anomalies for each grid block. The sinusoidal model represents a three-phase evolution of precipitation, either dry-wet-dry or wet-dry-wet, and is highly correlated with the evolution of sea surface temperatures in the equatorial Pacific. The linear model represents a two-phase evolution of precipitation, either dry-wet or wet-dry. Together, these models account for over 50% of the precipitation variability over half the globe during El Nino. Most regions, especially away from the Equator, favor the linear model. The areas showing the largest trend from dry to wet are southeastern Australia, the eastern Indian Ocean, southern Japan, and off the coast of Peru. The northern tropical Pacific and Southeast Asia show the opposite trend.
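    The two competing models described here, a first-order harmonic and a linear trend, can both be fit by ordinary least squares and compared by variance explained. A minimal sketch with a synthetic 24-month anomaly series; the period, coefficients, and variable names are illustrative, not from the study:

    ```python
    import numpy as np

    def fit_r2(X, y):
        """Least-squares fit of y on design matrix X; return the R^2 of the fit."""
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

    def harmonic_vs_linear(t, y, period):
        """Compare a first-order harmonic and a linear model on one grid-point series."""
        w = 2 * np.pi / period
        X_harm = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        X_lin = np.column_stack([t, np.ones_like(t)])
        return {"harmonic": fit_r2(X_harm, y), "linear": fit_r2(X_lin, y)}

    # Synthetic anomaly series with a dry-to-wet trend (should favour the linear model).
    t = np.arange(24.0)
    y = 0.08 * t - 1.0 + 0.1 * np.sin(0.7 * t)
    r2 = harmonic_vs_linear(t, y, period=24.0)
    ```

    Applied per grid block, the model with the higher R² would classify that location as two-phase (linear) or three-phase (harmonic).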

  7. Using color histograms and SPA-LDA to classify bacteria.

    PubMed

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). Color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data and statistically evaluated with different multivariate classifiers: Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA), and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA). The bacterial strains were cultivated in a nutritive blood agar base layer for 24 h following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best classification result was obtained with RGB and SPA-LDA, which reached 94% and 100% classification accuracy in the training and test sets, respectively. This result is extremely positive for routine clinical analyses, because it avoids phenotypic identification of the causative organism by Gram staining, culture, and biochemical tests. The proposed method therefore offers a simpler, faster, and lower-cost alternative for bacterial identification.
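    The pipeline of turning images into color-histogram features and then classifying them can be sketched as follows. For brevity this uses a nearest-centroid classifier as a stand-in for the SPA-LDA step, and synthetic reddish/greenish patches rather than real colony images; all names, sizes, and parameters are illustrative:

    ```python
    import numpy as np

    def rgb_histogram(img, bins=8):
        """Concatenate per-channel histograms of an HxWx3 uint8 image into one feature vector."""
        feats = []
        for c in range(3):
            h, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
            feats.append(h / h.sum())  # normalise so image size does not matter
        return np.concatenate(feats)

    class NearestCentroid:
        """Minimal stand-in for the LDA step: assign to the closest class-mean histogram."""
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.means_ = np.array([X[y == k].mean(axis=0) for k in self.classes_])
            return self
        def predict(self, X):
            d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
            return self.classes_[np.argmin(d, axis=1)]

    # Two synthetic "colony" classes: reddish vs greenish 16x16 patches.
    rng = np.random.default_rng(0)
    def patch(mean):
        return np.clip(rng.normal(mean, 20, size=(16, 16, 3)), 0, 255).astype(np.uint8)

    X = np.array([rgb_histogram(patch(m)) for m in ([200, 60, 60],) * 10 + ([60, 200, 60],) * 10])
    y = np.array([0] * 10 + [1] * 10)
    clf = NearestCentroid().fit(X, y)
    acc = (clf.predict(X) == y).mean()
    ```

    With 8 bins per channel the feature vector has 24 entries; the SPA step in the paper would additionally select an informative subset of those variables before the discriminant analysis.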

  8. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique and results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. MLR is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are given on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring together with nonlinear parameters in the same optimization problem. The GA-NR optimizer combines the GA method with the Newton-Raphson (NR) method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
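    The separable structure that GA-MLR exploits, a global optimizer handling the nonlinear decay times while the amplitudes come from linear regression, can be illustrated on a biexponential decay. This sketch substitutes a plain random search for the genetic algorithm, and the data, seed, and parameter ranges are made up:

    ```python
    import numpy as np

    def linear_amplitudes(t, y, taus):
        """MLR step: for fixed decay times, solve the linear amplitudes in closed form."""
        X = np.exp(-t[:, None] / np.asarray(taus)[None, :])
        amps, *_ = np.linalg.lstsq(X, y, rcond=None)
        return amps, np.sum((y - X @ amps) ** 2)

    def fit_biexponential(t, y, n_iter=2000, seed=1):
        """Random search over (tau1, tau2) stands in for the GA; amplitudes come from MLR."""
        rng = np.random.default_rng(seed)
        best = (None, None, np.inf)
        for _ in range(n_iter):
            taus = np.sort(rng.uniform(0.1, 20.0, size=2))
            amps, sse = linear_amplitudes(t, y, taus)
            if sse < best[2]:
                best = (taus, amps, sse)
        return best

    # Noiseless synthetic decay: 3*exp(-t/1) + 1*exp(-t/5)
    t = np.linspace(0, 10, 200)
    y = 3.0 * np.exp(-t / 1.0) + 1.0 * np.exp(-t / 5.0)
    taus, amps, sse = fit_biexponential(t, y)
    ```

    Because only the two decay times are searched while the amplitudes are solved exactly at each step, the search space is half the size of the naive four-parameter problem, which is the acceleration the abstract describes.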

  9. Black carbon cookstove emissions: A field assessment of 19 stove/fuel combinations

    NASA Astrophysics Data System (ADS)

    Garland, Charity; Delapena, Samantha; Prasad, Rajendra; L'Orange, Christian; Alexander, Donee; Johnson, Michael

    2017-11-01

    Black carbon (BC) emissions from household cookstoves burning solid fuel produce approximately 25% of total anthropogenic BC emissions. The short atmospheric lifetime of BC means that reducing BC emissions would yield a faster climate response than mitigating CO2 and other long-lived greenhouse gases. This study presents the results of optical BC measurements from two new cookstove emissions field assessments and 17 archived cookstove datasets. BC was determined from the attenuation of 880 nm light, which is strongly absorbed by BC and linearly related to loading between 1 and 125 attenuation units. A relationship was experimentally determined correlating BC mass deposition on quartz filters determined via thermal optical analysis (TOA) and on PTFE and quartz filters using transmissometry, yielding an attenuation cross-section (σATN) for each filter medium. σATN relates TOA measurements to optical measurements on PTFE and quartz (σATN(PTFE) = 13.7 cm² μg⁻¹, R² = 0.87; σATN(Quartz) = 15.6 cm² μg⁻¹, R² = 0.87). Using these filter-specific σATN values, optical measurements of archived filters were used to determine BC emission factors and the fraction of particulate matter (PM) emitted as black carbon (BC/PM). The 19 stoves measured fell into five classes: simple wood, rocket, advanced biomass, simple charcoal, and advanced charcoal. Advanced biomass stoves include forced- and natural-draft gasifiers that burn wood or biomass pellets. Of these classes, the simple wood and rocket stoves showed the highest median BC emission factors, ranging from 0.051 to 0.14 g MJ⁻¹. The lowest BC emission factors were seen in charcoal stoves, consistent with the generally low PM emission factors observed during charcoal combustion, ranging from 0.0084 to 0.014 g MJ⁻¹. The advanced biomass stoves generally showed improved BC emission factors compared to simple wood and rocket stoves, ranging from 0.0031 to 0.071 g MJ⁻¹. BC/PM ratios were highest for the advanced and rocket stoves. Potential relative climate impacts were estimated by converting aerosol emissions to CO2-equivalent, and suggest that some advanced stove/fuel combinations could provide substantial climate benefits.
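    The chain from filter attenuation to a BC emission factor is simple arithmetic once σATN is known. A sketch with hypothetical filter and fuel numbers; the function names, example values, and the assumption that σATN is expressed in units consistent with the attenuation scale and deposit area are all illustrative:

    ```python
    def bc_mass_ug(atn, filter_area_cm2, sigma_atn):
        """Black-carbon mass (micrograms) deposited on a filter.

        atn             -- attenuation units, ATN-style optical measurement
        filter_area_cm2 -- exposed filter deposit area (cm^2)
        sigma_atn       -- attenuation cross-section for the filter medium,
                           assumed here to be in units giving micrograms directly
        """
        return atn * filter_area_cm2 / sigma_atn

    def bc_emission_factor(atn, filter_area_cm2, sigma_atn, fuel_energy_mj):
        """BC emission factor in g/MJ of fuel energy (whole-sample case)."""
        mass_g = bc_mass_ug(atn, filter_area_cm2, sigma_atn) * 1e-6
        return mass_g / fuel_energy_mj

    # Hypothetical test: ATN of 50 on an 8 cm^2 PTFE deposit, 15 MJ of fuel burned.
    mass_ug = bc_mass_ug(atn=50.0, filter_area_cm2=8.0, sigma_atn=13.7)
    ef_g_per_mj = bc_emission_factor(50.0, 8.0, 13.7, fuel_energy_mj=15.0)
    ```

    A real assessment would also scale by the sampled fraction of the exhaust stream and by measured fuel consumption, which this sketch omits.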

  10. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. 
However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.

  11. Primate empathy: three factors and their combinations for empathy-related phenomena.

    PubMed

    Yamamoto, Shinya

    2017-05-01

    Empathy as a research topic is receiving increasing attention, although there is some confusion about the definition of empathy across different fields. Frans de Waal (de Waal FBM. Putting the altruism back into altruism: the evolution of empathy. Annu Rev Psychol 2008, 59:279-300. doi:10.1146/annurev.psych.59.103006.093625) used empathy as an umbrella term and proposed a comprehensive model for the evolution of empathy with some of its basic elements in nonhuman animals. In de Waal's model, empathy consists of several layers distinguished by the cognitive levels they require; the perception-action mechanism plays the core role in connecting self and others. Human-like empathy, such as perspective-taking, then develops in the outer layers with increasing cognitive sophistication, leading to prosocial acts such as targeted helping. I agree that animals demonstrate many empathy-related phenomena; however, the species differences and the level of cognitive sophistication of these phenomena might be interpreted differently than this simple linearly developing model suggests. Our recent studies with chimpanzees showed that their perspective-taking ability does not necessarily lead to proactive helping behavior. Herein, as a springboard for further studies, I reorganize the empathy-related phenomena by proposing a combination model in place of the linear development model. This combination model comprises three organizing factors: matching with others, understanding of others, and prosociality. With these three factors and their combinations, most empathy-related phenomena can be categorized and mapped to an appropriate context; this may be a good first step toward discussing the evolution of empathy in relation to neural connections in human and nonhuman animal brains. I propose further comparative studies, especially from the viewpoint of Homo-Pan (chimpanzee and bonobo) comparison. WIREs Cogn Sci 2017, 8:e1431. doi: 10.1002/wcs.1431. © 2016 Wiley Periodicals, Inc.

  12. "Math in a Can": Teaching Mathematics and Engineering Design

    ERIC Educational Resources Information Center

    Narode, Ronald B.

    2011-01-01

    Using an apparently simple problem, "Design a cylindrical can that will hold a liter of milk," this paper demonstrates how engineering design may facilitate the teaching of the following ideas to secondary students: linear and non-linear relationships; basic geometry of circles, rectangles, and cylinders; unit measures of area and volume;…

  13. The Multifaceted Variable Approach: Selection of Method in Solving Simple Linear Equations

    ERIC Educational Resources Information Center

    Tahir, Salma; Cavanagh, Michael

    2010-01-01

    This paper presents a comparison of the solution strategies used by two groups of Year 8 students as they solved linear equations. The experimental group studied algebra following a multifaceted variable approach, while the comparison group used a traditional approach. Students in the experimental group employed different solution strategies,…

  14. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    ERIC Educational Resources Information Center

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
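    The regression step can be reproduced outside Excel with any least-squares routine: the band energies are linear in (v + 1/2) and (v + 1/2)², so one multiple linear regression recovers the vibrational constants. A sketch using numpy in place of LINEST, with synthetic levels generated from made-up constants rather than real iodine data:

    ```python
    import numpy as np

    def vibronic_constants(v, energy_cm):
        """Multiple linear regression of E(v) = T0 + we*(v+1/2) - wexe*(v+1/2)^2.

        Returns (T0, we, wexe) in the same units as energy_cm; a numpy stand-in
        for the Excel LINEST step described in the abstract.
        """
        s = v + 0.5
        X = np.column_stack([np.ones_like(s), s, -s**2])
        coef, *_ = np.linalg.lstsq(X, energy_cm, rcond=None)
        return tuple(coef)

    # Synthetic band positions from assumed constants:
    # T0 = 15000 cm^-1, we = 125 cm^-1, wexe = 0.75 cm^-1 (illustrative values)
    v = np.arange(10.0)
    E = 15000.0 + 125.0 * (v + 0.5) - 0.75 * (v + 0.5) ** 2
    T0, we, wexe = vibronic_constants(v, E)
    ```

    Fitting the ground- and excited-state progressions separately, as in the paper, yields one set of constants per electronic state.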

  15. Fitting program for linear regressions according to Mahon (1996)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trappitsch, Reto G.

    2018-01-09

    This program takes the user's input data and fits a linear regression using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method propagates measurement errors correctly. The software facilitates proper fitting of measurements through a simple interface.
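    Mahon's prescription handles uncertainties and correlations in both coordinates. As a simplified point of reference only, the special case with uncertainties in y alone reduces to ordinary weighted least squares, sketched below; this is not the full Mahon (1996) algorithm, and the data are made up:

    ```python
    import numpy as np

    def weighted_linefit(x, y, sy):
        """Weighted least-squares line y = a + b*x with 1-sigma y-uncertainties sy.

        Special case of the Mahon/York formalism in which x errors are zero and
        points are uncorrelated; the full prescription also propagates x errors
        and error correlations into the slope/intercept uncertainties.
        """
        w = 1.0 / np.asarray(sy) ** 2
        W = w.sum()
        xbar = (w * x).sum() / W
        ybar = (w * y).sum() / W
        b = (w * (x - xbar) * (y - ybar)).sum() / (w * (x - xbar) ** 2).sum()
        a = ybar - b * xbar
        sb = np.sqrt(1.0 / (w * (x - xbar) ** 2).sum())   # slope uncertainty
        sa = np.sqrt(1.0 / W + xbar ** 2 * sb ** 2)       # intercept uncertainty
        return a, b, sa, sb

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 7.0, 9.1])
    a, b, sa, sb = weighted_linefit(x, y, sy=np.full(5, 0.1))
    ```

    With equal weights this reduces to ordinary least squares, giving a slope of 2.01 and an intercept of 1.04 for the example data.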

  16. Testing hypotheses for differences between linear regression lines

    Treesearch

    Stanley J. Zarnoch

    2009-01-01

    Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
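    One of the hypotheses discussed, coincidence of two regression lines, can be tested with the full-versus-reduced-model F statistic: the reduced model fits one common line, the full model fits a separate intercept and slope per group. A sketch with made-up data; the helper names and the small deterministic perturbations are illustrative:

    ```python
    import numpy as np

    def sse(X, y):
        """Residual sum of squares of the least-squares fit of y on X."""
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ coef
        return float(r @ r)

    def coincidence_f(x1, y1, x2, y2):
        """F statistic for H0: both groups share one regression line."""
        x = np.concatenate([x1, x2]); y = np.concatenate([y1, y2])
        g = np.concatenate([np.zeros_like(x1), np.ones_like(x2)])  # group indicator
        X_red = np.column_stack([np.ones_like(x), x])              # common line (2 params)
        X_full = np.column_stack([np.ones_like(x), x, g, g * x])   # separate lines (4 params)
        sse_f, sse_r = sse(X_full, y), sse(X_red, y)
        df_f = len(x) - 4
        return ((sse_r - sse_f) / 2) / (sse_f / df_f), (2, df_f)

    x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    pert = np.array([0.01, -0.02, 0.0, 0.02, -0.01])   # small deterministic "noise"
    y1 = 1.0 + 2.0 * x1 + pert
    x2 = x1.copy()
    F_diff, df = coincidence_f(x1, y1, x2, 3.0 + 0.5 * x2 + pert)  # clearly different lines
    F_same, _ = coincidence_f(x1, y1, x2, 1.0 + 2.0 * x2 + pert)   # same underlying line
    ```

    The statistic is compared against an F distribution with (2, n - 4) degrees of freedom; the other hypotheses in the paper (equal slopes only, equal intercepts only) follow the same full/reduced pattern with different reduced models.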

  17. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  18. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis, yielding efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals, and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  19. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE PAGES

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-06-23

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis, yielding efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals, and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  20. Linear Optical Quantum Metrology with Single Photons: Exploiting Spontaneously Generated Entanglement to Beat the Shot-Noise Limit

    NASA Astrophysics Data System (ADS)

    Motes, Keith R.; Olson, Jonathan P.; Rabeaux, Evan J.; Dowling, Jonathan P.; Olson, S. Jay; Rohde, Peter P.

    2015-05-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place—typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer—fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection—is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

  1. Linear optical quantum metrology with single photons: exploiting spontaneously generated entanglement to beat the shot-noise limit.

    PubMed

    Motes, Keith R; Olson, Jonathan P; Rabeaux, Evan J; Dowling, Jonathan P; Olson, S Jay; Rohde, Peter P

    2015-05-01

    Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place, typically requiring either very strong nonlinearities or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer, fed with only uncorrelated single-photon inputs and coupled with simple, single-mode, disjoint photodetection, is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.

  2. Effective Surfactants Blend Concentration Determination for O/W Emulsion Stabilization by Two Nonionic Surfactants by Simple Linear Regression.

    PubMed

    Hassan, A K

    2015-01-01

    In this work, O/W emulsion sets were prepared using different concentrations of two nonionic surfactants, Tween 80 (HLB = 15.0) and Span 80 (HLB = 4.3), in a fixed proportion of 0.55:0.45, respectively. The HLB value of the surfactant blend was thereby fixed at 10.185. The surfactant blend concentration ranged from 3% to 19%. For each O/W emulsion set, the conductivity was measured at room temperature (25±2°) and at 40, 50, 60, 70, and 80°. Applying simple linear regression (least-squares) analysis to the temperature-conductivity data determines the effective surfactant blend concentration required for preparing the most stable O/W emulsion. These results were confirmed by centrifugation testing of physical stability and by phase inversion temperature range measurements. The results indicated that the relation representing the most stable O/W emulsion has the strongest direct linear relationship between temperature and conductivity, and that this relationship is linear up to 80°. This work shows that the most stable O/W emulsion is identified by finding the maximum R² value when simple linear regression is applied to the temperature-conductivity data up to 80°; the true maximum slope is given by the equation with the maximum R² value. Because conditions would change in a more complex formulation, the method for determining the effective surfactant blend concentration was verified by applying it to a more complex formulation of 2% O/W miconazole nitrate cream, and the results indicate its reproducibility.
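    The selection step, fitting a simple linear regression per blend concentration and keeping the one with the highest R², is easy to sketch. The temperature-conductivity numbers below are hypothetical, not the paper's measurements:

    ```python
    import numpy as np

    def r_squared(x, y):
        """R^2 of the simple least-squares line y = a + b*x."""
        b, a = np.polyfit(x, y, 1)
        resid = y - (a + b * x)
        return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    def most_stable_blend(temps, conductivity_by_conc):
        """Pick the blend concentration whose T-conductivity relation is most linear."""
        scores = {c: r_squared(temps, k) for c, k in conductivity_by_conc.items()}
        return max(scores, key=scores.get), scores

    # Hypothetical conductivity (uS/cm) vs temperature for three blend concentrations.
    T = np.array([25.0, 40.0, 50.0, 60.0, 70.0, 80.0])
    data = {
        "3%":  np.array([110.0, 150.0, 170.0, 205.0, 220.0, 260.0]),  # some scatter
        "11%": np.array([100.0, 145.0, 175.0, 205.0, 235.0, 265.0]),  # exactly linear
        "19%": np.array([120.0, 160.0, 165.0, 230.0, 215.0, 280.0]),  # noisier
    }
    best, scores = most_stable_blend(T, data)
    ```

    Under the paper's criterion, the concentration with the maximum R² (here the synthetic "11%" series) would be taken as the effective blend concentration.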

  3. The Canopy Conductance of a Humid Grassland

    NASA Astrophysics Data System (ADS)

    Lu, C. T.; Hsieh, C. I.

    2015-12-01

    The Penman-Monteith equation is widely used for estimating latent heat flux. The key parameter for implementing this equation is the canopy conductance (gc). Recent research (Blanken and Black, 2004) showed that gc can be well parameterized by a linear function of An/(D0·X0c), where An is the net assimilation, D0 the leaf-level saturation deficit, and X0c the CO2 mole fraction. In this study, we applied the same idea to estimate gc for a humid grassland. The study site was located in County Cork, southwest Ireland (51°59′N, 8°46′W), where perennial ryegrass (Lolium perenne L.) is the dominant grass species. An eddy covariance system measured the latent heat flux above the grassland, and the measured gc was calculated by rearranging the Penman-Monteith equation with the measured latent heat flux. Our data showed that gc decreased as the vapor pressure deficit and temperature increased, and increased with net radiation. We therefore found that the best parameterization of gc was a linear function of the product of the vapor pressure deficit, temperature, and net radiation. We then used the gc estimated by this linear function to predict the latent heat flux with the Penman-Monteith equation and compared the predictions with those obtained using a fixed gc. Our analysis showed that this simple linear function for gc improves the latent heat flux predictions (R² increased from 0.48 to 0.66).
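    The workflow described, regressing gc on the product of vapor pressure deficit, temperature, and net radiation, then feeding the fitted gc back into the Penman-Monteith equation, can be sketched as follows. All numbers, coefficients, and the simplified big-leaf form of the equation are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    def fit_gc_model(vpd, temp, rnet, gc_obs):
        """Least-squares fit of gc = a + b * (VPD * T * Rn), the product form above."""
        p = vpd * temp * rnet
        b, a = np.polyfit(p, gc_obs, 1)
        resid = gc_obs - (a + b * p)
        r2 = 1.0 - resid @ resid / ((gc_obs - gc_obs.mean()) @ (gc_obs - gc_obs.mean()))
        return a, b, r2

    def penman_monteith(rn, g, vpd_kpa, ga, gc, delta, gamma=0.066, rho_cp=1.2 * 1013.0):
        """Latent heat flux (W m^-2) from a simplified big-leaf Penman-Monteith form."""
        num = delta * (rn - g) + rho_cp * vpd_kpa * ga
        den = delta + gamma * (1.0 + ga / gc)
        return num / den

    # Hypothetical half-hourly samples (kPa, degC, W m^-2) and a synthetic gc series
    # that is exactly linear in the product, so the fit should recover the slope.
    vpd = np.array([0.5, 1.0, 1.5, 2.0])
    temp = np.array([15.0, 20.0, 25.0, 30.0])
    rnet = np.array([300.0, 400.0, 500.0, 600.0])
    gc_obs = 0.025 - 5e-7 * vpd * temp * rnet
    a, b, r2 = fit_gc_model(vpd, temp, rnet, gc_obs)
    le = penman_monteith(rn=500.0, g=50.0, vpd_kpa=1.5, ga=0.05, gc=0.015, delta=0.145)
    ```

    In practice the measured gc from the rearranged Penman-Monteith equation would stand in for the synthetic `gc_obs`, and the fitted line would be validated against an independent period of flux data.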

  4. A nested observation and model approach to non linear groundwater surface water interactions.

    NASA Astrophysics Data System (ADS)

    van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.

    2009-04-01

    Surface water quality measurements in The Netherlands are scattered in time and space; water quality status, its variations, and its trends are therefore difficult to determine. In order to reach the water quality goals of the European Water Framework Directive, we need to improve our understanding of the dynamics of surface water quality and the processes that affect it. In heavily drained lowland catchments, groundwater influences the discharge towards the surface water network in many complex ways; in particular, a strongly seasonal contracting and expanding system of discharging ditches and streams affects discharge and solute transport. At a tube-drained field site, the tube drain flux and the combined flux of all other flow routes toward a 45 m stretch of surface water were measured for a year. Groundwater levels at various locations in the field and the discharge at two nested catchment scales were also monitored. The distinctive response of individual flow routes to rainfall events at the field site allowed us to separate the discharge at a 4 ha catchment and at a 6 km² catchment into flow route contributions. The results of this nested experimental setup, combined with the results of a distributed hydrological model, have led to the formulation of a process model approach that focuses on the spatial variability of discharge generation driven by temporal and spatial variations in groundwater levels. The main idea of this approach is that discharge is not generated by catchment-average storages or groundwater heads, but mainly by point-scale extremes, i.e., extremely low permeability, extremely high groundwater heads, or extremely low surface elevations, all leading to catchment discharge. We focused on describing the spatial extremes in point-scale storages, and this led to a simple and measurable expression that governs the nonlinear groundwater-surface water interaction.
We will present the analysis of the field site data to demonstrate the potential of nested-scale, high-frequency observations. The distributed hydrological model results will be used to show transient catchment-scale relations between groundwater levels and discharges. These analyses lead to a simple expression that can describe catchment-scale groundwater-surface water interactions.

  5. Linear modeling of the soil-water partition coefficient normalized to organic carbon content by reversed-phase thin-layer chromatography.

    PubMed

    Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka

    2016-08-05

    The soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler, yet equally accurate, method than the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl- (RP18) and cyano- (CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shape and size and varying ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations as well as Principal Component Regression (PCR) and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimates of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v); these ranked far better than the officially recommended HPLC method, which ranked in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. 
A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18 in combination with methanol-water mixtures is the key to better modeling of logKOC, through a significant diminishing of the dipolar and proton-accepting influence of the mobile phase as well as an enhanced excess molar refractivity of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Extending the range of real time density matrix renormalization group simulations

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

    We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states | ψ 〉 and operators A in the evaluation of 〈A〉ψ(t) = 〈 ψ | A(t) | ψ 〉 . This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics 〈A〉ρ(t) =Tr [ ρA(t) ] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the Python programming language as an elegant option for beginners to set up a DMRG code.
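
    The combined Schrödinger/Heisenberg evaluation can be checked with exact numerics on a small random Hamiltonian (a toy 6-level system, not an MPS): evolving the state by t/2 and the operator by the remaining t/2 reproduces 〈ψ|A(t)|ψ〉, which is what lets a simulation reach twice the time for a given entanglement budget.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    d = 6
    H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (H + H.conj().T) / 2                      # random Hermitian "Hamiltonian"
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)

    def U(t):
        """Propagator e^{-iHt} via eigendecomposition of H."""
        w, V = np.linalg.eigh(H)
        return (V * np.exp(-1j * w * t)) @ V.conj().T

    t = 3.0
    # Direct evaluation: <psi| e^{iHt} A e^{-iHt} |psi>.
    direct = psi.conj() @ U(-t) @ A @ U(t) @ psi
    # Split evaluation: state evolved by t/2, operator evolved (Heisenberg) by t/2.
    phi = U(t / 2) @ psi
    split = phi.conj() @ (U(-t / 2) @ A @ U(t / 2)) @ phi
    print(np.allclose(direct, split))   # True
    ```
    
    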

  7. Scaling laws and fluctuations in the statistics of word frequencies

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Altmann, Eduardo G.

    2014-11-01

    In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
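
    The Poisson-usage/Zipf-frequency picture can be illustrated with a toy simulation (the lexicon size, Zipf exponent, and text lengths are our choices, not the paper's fitted parameters): the vocabulary of a length-N text grows sublinearly with N, as Heaps' law states.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    W = 50_000                            # assumed lexicon size
    ranks = np.arange(1, W + 1)
    freqs = 1.0 / ranks                   # Zipf's law with exponent 1
    freqs /= freqs.sum()

    def vocabulary(n_words):
        """Draw each word's count as Poisson and count the distinct words used."""
        counts = rng.poisson(n_words * freqs)
        return np.count_nonzero(counts)

    mean_vocab = {}
    for n in (1_000, 10_000, 100_000):
        mean_vocab[n] = np.mean([vocabulary(n) for _ in range(20)])
        print(n, mean_vocab[n])
    ```

    Each tenfold increase in text length multiplies the mean vocabulary by well under ten, i.e., sublinear growth.
    
    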

  8. Design of a dynamic sensor inspired by bat ears

    NASA Astrophysics Data System (ADS)

    Müller, Rolf; Pannala, Mittu; Reddy, O. Praveen K.; Meymand, Sajjad Z.

    2012-09-01

    In bats, the outer ear shapes act as beamforming baffles that create a spatial sensitivity pattern for the reception of the biosonar signals. Whereas technical receivers for wave-based signals usually have rigid geometries, the outer ears of some bat species, such as horseshoe bats, can undergo non-rigid deformations as a result of muscular actuation. It is hypothesized that these deformations provide the animals with a mechanism to adapt their spatial hearing sensitivity on short, sub-second time scales. This biological approach could be of interest to engineering as an inspiration for the design of beamforming devices that combine flexibility with parsimonious implementation. To explore this possibility, a biomimetic dynamic baffle was designed with a simple overall geometry based on an average bat ear. This shape was augmented with three biomimetic local shape features: a ridge on its exposed surface as well as a flap and an incision along its rim. Dynamic non-rigid deformations of the shape were accomplished through a simple mechanism based on linear actuation applied at a single point. Despite its simplicity, the prototype device was able to qualitatively reproduce the dynamic functional characteristics that have been predicted for its biological paragon.

  9. Simplified adaptive control of an orbiting flexible spacecraft

    NASA Astrophysics Data System (ADS)

    Maganti, Ganesh B.; Singh, Sahjendra N.

    2007-10-01

    The paper presents the design of a new simple adaptive system for the rotational maneuver and vibration suppression of an orbiting spacecraft with flexible appendages. A moment generating device located on the central rigid body of the spacecraft is used for the attitude control. It is assumed that the system parameters are unknown and the truncated model of the spacecraft has finite but arbitrary dimension. In addition, only the pitch angle and its derivative are measured, and elastic modes are not available for feedback. The control output variable is chosen as a linear combination of the pitch angle and the pitch rate. Exploiting the hyper minimum phase nature of the spacecraft, a simple adaptive control law is derived for the pitch angle control and elastic mode stabilization. The adaptation rule requires only four adjustable parameters, and the structure of the control system does not depend on the order of the truncated spacecraft model. For the synthesis of the control system, the measured output error and the states of a third-order command generator are used. Simulation results are presented which show that in the closed-loop system adaptive output regulation is accomplished in spite of large parameter uncertainties and disturbance input.

  10. A simple solar radiation index for wildlife habitat studies

    USGS Publications Warehouse

    Keating, Kim A.; Gogan, Peter J.; Vore, John N.; Irby, Lynn R.

    2007-01-01

    Solar radiation is a potentially important covariate in many wildlife habitat studies, but it is typically addressed only indirectly, using problematic surrogates like aspect or hillshade. We devised a simple solar radiation index (SRI) that combines readily available information about aspect, slope, and latitude. Our SRI is proportional to the amount of extraterrestrial solar radiation theoretically striking an arbitrarily oriented surface during the hour surrounding solar noon on the equinox. Because it derives from first geometric principles and is linearly distributed, SRI offers clear advantages over aspect-based surrogates. The SRI also is superior to hillshade, which we found to be sometimes imprecise and ill-behaved. To illustrate application of our SRI, we assessed niche separation among 3 ungulate species along a single environmental axis, solar radiation, on the northern Yellowstone winter range. We detected no difference between the niches occupied by bighorn sheep (Ovis canadensis) and elk (Cervus elaphus; P = 0.104), but found that mule deer (Odocoileus hemionus) tended to use areas receiving more solar radiation than either of the other species (P < 0.001). Overall, our SRI provides a useful metric that can reduce noise, improve interpretability, and increase parsimony in wildlife habitat models containing a solar radiation component.
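
    From the stated geometry (extraterrestrial radiation striking an arbitrarily oriented surface at solar noon on the equinox, when the solar zenith angle equals the latitude and the sun is due south in the northern hemisphere), such an index is proportional to the cosine of the solar incidence angle on the slope. The sketch below is our reconstruction from those first principles and is not necessarily the exact published SRI.

    ```python
    import math

    def sri(latitude_deg, slope_deg, aspect_deg):
        """Cosine of the solar incidence angle at solar noon on the equinox:
        solar zenith angle = latitude, solar azimuth = 180 deg (south).
        Aspect is the downslope direction in degrees clockwise from north."""
        lat = math.radians(latitude_deg)
        slope = math.radians(slope_deg)
        aspect = math.radians(aspect_deg)
        return (math.cos(lat) * math.cos(slope)
                + math.sin(lat) * math.sin(slope) * math.cos(math.pi - aspect))

    # South-facing vs north-facing 30-degree slopes at 45 degrees N, plus flat ground.
    print(sri(45, 30, 180), sri(45, 30, 0), sri(45, 0, 0))
    ```

    A south-facing 30° slope at 45°N scores cos 15°, a north-facing one cos 75°, and flat ground cos 45°, matching intuition about relative insolation.
    
    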

  11. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
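
    The features named in the abstract (adaptive correlation integral, entropy, phase coherence, power spectral coherence ratio) are more elaborate than anything shown here; the sketch below substitutes a plain band-power ratio and a fixed threshold just to illustrate the detect-by-thresholding idea on synthetic EEG-like traces. The band limits, threshold, and signals are all invented.

    ```python
    import numpy as np

    def band_power_ratio(window, fs, band=(2.0, 8.0)):
        """Fraction of spectral power inside a (hypothetical) seizure band."""
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(window - window.mean())) ** 2
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return psd[in_band].sum() / psd.sum()

    def is_seizure(window, fs, threshold=0.6):
        """Flag a window whose band-power ratio exceeds a fixed threshold."""
        return band_power_ratio(window, fs) > threshold

    fs = 256                                  # Hz, assumed EEG sampling rate
    t = np.arange(2 * fs) / fs
    rng = np.random.default_rng(2)
    background = rng.normal(0, 1, t.size)                # broadband background EEG
    ictal = 5 * np.sin(2 * np.pi * 4 * t) + background   # strong 4 Hz rhythm
    print(is_seizure(background, fs), is_seizure(ictal, fs))
    ```
    
    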

  12. Combinations of Aromatic and Aliphatic Radiolysis.

    PubMed

    LaVerne, Jay A; Dowling-Medley, Jennifer

    2015-10-08

    The production of H(2) in the radiolysis of benzene, methylbenzene (toluene), ethylbenzene, butylbenzene, and hexylbenzene with γ-rays, 2-10 MeV protons, 5-20 MeV helium ions, and 10-30 MeV carbon ions is used as a probe of overall radiation sensitivity and to determine the relative contributions of aromatic and aliphatic entities in mixed hydrocarbons. Adding an aliphatic side chain of progressively one to six carbons to benzene increases the H(2) yield with γ-rays, but the yield seems to reach a plateau far below that found for a simple aliphatic such as cyclohexane. There is a large increase in H(2) with LET (linear energy transfer) for all of the substituted benzenes, which indicates that the main process for H(2) formation is a second-order process dominated by the aromatic entity. The addition of a small amount of benzene to cyclohexane can lower the H(2) yield below the value expected from a simple mixture law. A 50:50 volume mixture of benzene and cyclohexane has essentially the same H(2) yield as cyclohexylbenzene over a wide variation in LET, suggesting that intermolecular energy transfer is as efficient as intramolecular energy transfer.

  13. Making chaotic behavior in a damped linear harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Konishi, Keiji

    2001-06-01

    The present Letter proposes a simple control method which induces chaotic behavior in a damped linear harmonic oscillator. The method is a modification of the scheme proposed by Wang and Chen (IEEE CAS-I 47 (2000) 410), which presents an anti-control method for creating chaotic behavior in discrete-time linear systems. We provide a systematic procedure for designing the parameters and sampling period of a feedback controller. Furthermore, we show that our method works well in numerical simulations.

  14. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
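
    Multiple linear regression reduces to a least-squares solve on a design matrix whose columns are the predictors plus an intercept. A minimal sketch on hypothetical data (the predictor names, coefficients, and noise level are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    age = rng.uniform(30, 70, n)              # hypothetical predictor 1
    bmi = rng.uniform(18, 35, n)              # hypothetical predictor 2
    # Hypothetical outcome: y = 90 + 0.5*age + 1.2*bmi + noise.
    y = 90 + 0.5 * age + 1.2 * bmi + rng.normal(0, 3, n)

    X = np.column_stack([np.ones(n), age, bmi])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)   # should be close to the true coefficients [90, 0.5, 1.2]
    ```
    
    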

  15. Chicken barn climate and hazardous volatile compounds control using simple linear regression and PID

    NASA Astrophysics Data System (ADS)

    Abdullah, A. H.; Bakar, M. A. A.; Shukor, S. A. A.; Saad, F. S. A.; Kamis, M. S.; Mustafa, M. H.; Khalid, N. S.

    2016-07-01

    The hazardous volatile compounds from chicken manure in a chicken barn are potentially a health threat to the farm animals and workers. Ammonia (NH3) and hydrogen sulphide (H2S) produced in the chicken barn are influenced by climate changes. An electronic nose (e-nose) is used for sampling the barn's air, temperature and humidity data. Simple linear regression is used to identify the correlations between temperature and humidity, humidity and ammonia, and ammonia and hydrogen sulphide. MATLAB Simulink software was used for the sample data analysis with a PID controller. Results show that a PID controller tuned with the Ziegler-Nichols technique can improve the control of the climate in the chicken barn.
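
    A textbook discrete PID loop of the kind the abstract describes can be sketched as follows; the gains, setpoint, and toy barn-temperature plant are illustrative inventions, not the paper's Ziegler-Nichols-tuned values.

    ```python
    class PID:
        """Textbook discrete PID controller; the gains here are illustrative."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, setpoint, measurement):
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    # Toy first-order "barn temperature" plant driven by the controller.
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
    temp = 35.0                      # deg C, too hot for the barn
    for _ in range(200):
        u = pid.step(28.0, temp)     # ventilation effort toward a 28 deg C setpoint
        temp += (u - 0.2 * (temp - 25.0)) * 0.05   # crude plant dynamics
    print(round(temp, 2))
    ```

    With integral action, the loop settles at the setpoint despite the plant's constant heat load.
    
    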

  16. Impulse measurement using an Arduino

    NASA Astrophysics Data System (ADS)

    Espindola, P. R.; Cena, C. R.; Alves, D. C. B.; Bozano, D. F.; Goncalves, A. M. B.

    2018-05-01

    In this paper, we propose a simple experimental apparatus that can measure the force variation over time to study the impulse-momentum theorem. In this proposal, a body attached to a rubber string falls freely from rest until the string stretches and changes the body's linear momentum. During that process, the force due to the tension in the rubber string is measured with a load cell read by an Arduino board. We check the instrumental results against the basic concept of impulse, finding the area under the force-versus-time curve and comparing it with the linear momentum variation estimated from software analysis. The apparatus is presented as a simple and low-cost alternative for mechanics laboratories.
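
    The core computation (the area under the measured force-time curve equals the change in linear momentum) can be sketched with a synthetic load-cell trace; the half-sine pulse shape and sampling rate are invented, not the authors' data.

    ```python
    import numpy as np

    fs = 1000.0                              # Hz, assumed sampling rate
    t = np.arange(0, 0.2, 1 / fs)
    force = 40.0 * np.sin(np.pi * t / 0.2)   # N, synthetic half-sine pulse, peak 40 N

    # Trapezoidal area under F(t) = impulse delivered to the body (N*s).
    impulse = float(np.sum(force[1:] + force[:-1]) * 0.5 / fs)

    # Analytic area of a half-sine: 2 * peak * duration / pi.
    exact = 2 * 40.0 * 0.2 / np.pi
    print(round(impulse, 3), round(exact, 3))
    ```

    In the experiment this trapezoidal sum over the load-cell samples would be compared with the momentum change m*Δv of the falling body.
    
    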

  17. A simple white noise analysis of neuronal light responses.

    PubMed

    Chichilnisky, E J

    2001-05-01

    A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
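
    The classical spike-triggered average at the heart of such white-noise analysis can be sketched on a simulated cell; the linear-nonlinear model neuron and every parameter below are invented for illustration. For Gaussian white noise, the spike-triggered average recovers the cell's linear filter up to a scale factor even through a rectifying output nonlinearity.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, taps = 20_000, 15
    stim = rng.normal(0, 1, n)                       # white-noise stimulus
    true_filter = np.exp(-np.arange(taps) / 4.0) * np.sin(np.arange(taps) / 2.0)

    # Model neuron: linear filter -> rectification -> Poisson spiking.
    drive = np.convolve(stim, true_filter)[:n]
    rate = np.maximum(drive, 0)
    spikes = rng.poisson(0.5 * rate)                 # spike counts per time bin

    # Spike-triggered average: mean stimulus segment preceding each spike.
    sta = np.zeros(taps)
    for i in range(taps, n):
        sta += spikes[i] * stim[i - taps + 1 : i + 1][::-1]
    sta /= spikes[taps:].sum()

    # Up to scale, the STA recovers the linear filter.
    corr = np.corrcoef(sta, true_filter)[0, 1]
    print(f"correlation with true filter: {corr:.2f}")
    ```
    
    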

  18. Introducing Stochastic Simulation of Chemical Reactions Using the Gillespie Algorithm and MATLAB: Revisited and Augmented

    ERIC Educational Resources Information Center

    Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.

    2008-01-01

    The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…
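
    Gillespie's algorithm for the simple reversible first-order reaction A ⇌ B can be written in a dozen lines of standard-library code; the rate constants and initial population below are illustrative choices, not values from the cited article.

    ```python
    import random

    def gillespie_reversible(n_a, n_b, k1, k2, t_end, seed=0):
        """Gillespie SSA for the first-order reversible reaction A <-> B."""
        rng = random.Random(seed)
        t = 0.0
        while True:
            a1, a2 = k1 * n_a, k2 * n_b          # propensities of A->B and B->A
            a0 = a1 + a2
            if a0 == 0:
                break
            t += rng.expovariate(a0)             # exponential time to next reaction
            if t > t_end:
                break
            if rng.random() * a0 < a1:
                n_a, n_b = n_a - 1, n_b + 1      # A -> B fired
            else:
                n_a, n_b = n_a + 1, n_b - 1      # B -> A fired
        return n_a, n_b

    n_a, n_b = gillespie_reversible(1000, 0, k1=1.0, k2=1.0, t_end=20.0)
    print(n_a, n_b)   # fluctuates around the deterministic equilibrium of 500/500
    ```

    With k1 = k2, the trajectory relaxes to, and then fluctuates around, the deterministic equilibrium of 500 molecules of each species.
    
    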

  19. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  20. An Anharmonic Solution to the Equation of Motion for the Simple Pendulum

    ERIC Educational Resources Information Center

    Johannessen, Kim

    2011-01-01

    An anharmonic solution to the differential equation describing the oscillations of a simple pendulum at large angles is discussed. The solution is expressed in terms of functions not involving the Jacobi elliptic functions. In the derivation, a sinusoidal expression, including a linear and a Fourier sine series in the argument, has been applied.…

  1. The Double-Well Potential in Quantum Mechanics: A Simple, Numerically Exact Formulation

    ERIC Educational Resources Information Center

    Jelic, V.; Marsiglio, F.

    2012-01-01

    The double-well potential is arguably one of the most important potentials in quantum mechanics, because the solution contains the notion of a state as a linear superposition of "classical" states, a concept which has become very important in quantum information theory. It is therefore desirable to have solutions to simple double-well potentials…

  2. A Simple and Effective Protein Folding Activity Suitable for Large Lectures

    ERIC Educational Resources Information Center

    White, Brian

    2006-01-01

    This article describes a simple and inexpensive hands-on simulation of protein folding suitable for use in large lecture classes. This activity uses a minimum of parts, tools, and skill to simulate some of the fundamental principles of protein folding. The major concepts targeted are that proteins begin as linear polypeptides and fold to…

  3. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    DTIC Science & Technology

    2016-02-01

    optimum low sidelobes are demonstrated in several examples. Index Terms — Array signal processing, beams, linear algebra, phased arrays, shaped ... represented by a linear combination of low sidelobe beamformers with no failed elements, 's, in a neighborhood around under the constraint that the linear ... would expect that linear combinations of them in a neighborhood around would also have low sidelobes. The algorithms in this paper exploit this

  4. Simultaneous extraction, identification and quantification of phenolic compounds in Eclipta prostrata using microwave-assisted extraction combined with HPLC-DAD-ESI-MS/MS.

    PubMed

    Fang, Xinsheng; Wang, Jianhua; Hao, Jifu; Li, Xueke; Guo, Ning

    2015-12-01

    A simple and rapid method was developed using microwave-assisted extraction (MAE) combined with HPLC-DAD-ESI-MS/MS for the simultaneous extraction, identification, and quantification of phenolic compounds in Eclipta prostrata, a common herb and vegetable in China. The optimized MAE parameters were: 50% ethanol as solvent, microwave power 400 W, temperature 70 °C, liquid/solid ratio 30 mL/g and extraction time 2 min. Compared to conventional extraction methods, the optimized MAE avoids degradation of the phenolic compounds while obtaining the highest yields of all components faster and with less consumption of solvent and energy. Six phenolic acids, six flavonoid glycosides and one coumarin were identified for the first time. The phenolic compounds were quantified by HPLC-DAD with good linearity, precision, and accuracy. The extract obtained by MAE showed significant antioxidant activity. The proposed method provides a valuable and green analytical methodology for the investigation of phenolic components in natural plants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Design criteria for synthetic riboswitches acting on transcription

    PubMed Central

    Wachsmuth, Manja; Domin, Gesine; Lorenz, Ronny; Serfling, Robert; Findeiß, Sven; Stadler, Peter F; Mörl, Mario

    2015-01-01

    Riboswitches are RNA-based regulators of gene expression composed of a ligand-sensing aptamer domain followed by an overlapping expression platform. The regulation occurs at either the level of transcription (by formation of terminator or antiterminator structures) or translation (by presentation or sequestering of the ribosomal binding site). Due to their modular composition, these elements can be manipulated by combining different aptamers and expression platforms and therefore represent useful tools to regulate gene expression in synthetic biology. Using computationally designed theophylline-dependent riboswitches, we show that two parameters, terminator hairpin stability and folding traps, have a major impact on the functionality of the designed constructs. These have to be considered very carefully during the design phase. Furthermore, a combination of several copies of individual riboswitches leads to a much improved activation ratio between induced and uninduced gene activity and to a linear dose-dependent increase in reporter gene expression. Such serial arrangements of synthetic riboswitches closely resemble their natural counterparts and may form the basis for simple quantitative read-out systems for the detection of specific target molecules in the cell. PMID:25826571

  6. G-quadruplex DNA biosensor for sensitive visible detection of genetically modified food.

    PubMed

    Jiang, Xiaohua; Zhang, Huimin; Wu, Jun; Yang, Xiang; Shao, Jingwei; Lu, Yujing; Qiu, Bin; Lin, Zhenyu; Chen, Guonan

    2014-10-01

    In this paper, a novel label-free G-quadruplex DNAzyme sensor is proposed for the colorimetric identification of GMOs, using the CaMV 35S promoter sequence as the target. The binary probes fold into a G-quadruplex structure in the presence of the target DNA (DNA-T) and then combine with hemin to form a DNAzyme resembling horseradish peroxidase. The detection system consists of two G-rich probes in a 2:2 split mode, using the absorbance and color of ABTS(2-) as the signal reporter. Upon addition of a target sequence, both probes hybridize with the target and their G-rich sequences combine to form a G-quadruplex DNAzyme, which catalyzes the reaction of ABTS(2-) with H2O2. The linear range is from 0.05 to 0.5 μM and the detection limit is 5 nM. These results demonstrate that the proposed G-quadruplex DNAzyme method could be used as a simple, sensitive and cost-effective approach for GMO assays. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Dynamical screening of the van der Waals interaction between graphene layers.

    PubMed

    Dappe, Y J; Bolcatto, P G; Ortega, J; Flores, F

    2012-10-24

    The interaction between graphene layers is analyzed combining local orbital DFT and second order perturbation theory. For this purpose we use the linear combination of atomic orbitals-orbital occupancy (LCAO-OO) formalism, that allows us to separate the interaction energy as the sum of a weak chemical interaction between graphene layers plus the van der Waals interaction (Dappe et al 2006 Phys. Rev. B 74 205434). In this work, the weak chemical interaction is calculated by means of corrected-LDA calculations using an atomic-like sp(3)d(5) basis set. The van der Waals interaction is calculated by means of second order perturbation theory using an atom-atom interaction approximation and the atomic-like-orbital occupancies. We also analyze the effect of dynamical screening in the van der Waals interaction using a simple model. We find that this dynamical screening reduces by 40% the van der Waals interaction. Taking this effect into account, we obtain a graphene-graphene interaction energy of 70 ± 5 meV/atom in reasonable agreement with the experimental evidence.

  8. Dual line CW fiber laser module based on FBG combination

    NASA Astrophysics Data System (ADS)

    Dobashi, Kazuma; Hoshi, Masayuki; Hirohashi, Junji; Makio, Satoshi

    2018-02-01

    We developed a dual-line fiber laser module based on an FBG combination. The proposed configuration has several advantages: it is compact, simple, and inexpensive. The laser was composed of a pump LD (40 W), two HR FBGs for 1053 nm and 1058 nm, Yb-doped fiber, two OC FBGs for 1053 nm and 1058 nm, and a delivery fiber. All fibers were polarization-maintaining single-mode fibers with an approximately 6 micron core. All FBGs were mounted on holders with TECs, and their temperatures were controlled independently. The center wavelengths of the HR and OC FBGs are temperature dependent, with shifts of approximately 7 nm/°C for all integrated FBGs. By adjusting the temperature, it is possible to realize the resonant condition for only 1053 nm or only 1058 nm. Based on this configuration, we demonstrated a dual-line CW fiber laser module. The module was compact, with a size of 200 mm × 150 mm × 23 mm. By adjusting the FBG temperatures, we obtained an output power of more than 10 W at 1053 nm and at 1058 nm with linear polarization.

  9. Cardiovascular risk factors and cognitive function in adults 30-59 years of age (NHANES III).

    PubMed

    Pavlik, Valory N; Hyman, David J; Doody, Rachelle

    2005-01-01

    In the Third National Health and Nutrition Examination Survey (NHANES III), three measures of cognitive function [Simple Reaction Time Test (SRTT), Symbol Digit Substitution Test (SDST), and Serial Digit Learning Test (SDLT)] were administered to a half-sample of 3,385 adult men and nonpregnant women 30-59 years of age with no history of stroke. We used multiple linear regression analysis to determine whether there was an independent association between performance on each cognitive function measure and defined hypertension (HTN) alone, type 2 diabetes mellitus (DM) alone, and coexistent HTN and DM after adjustment for demographic and socioeconomic variables and selected health behaviors. After adjustment for the sociodemographic variables, the combination of HTN + DM, but not HTN alone or DM alone, was significantly associated with worse performance on the SRTT (p = 0.031) and the SDST (p = 0.011). A similar pattern was observed for SDLT performance, but the relationship did not reach statistical significance (p = 0.101). We conclude that HTN in combination with DM is associated with detectable cognitive decrements in persons under age 60.

  10. Development of dense glass-ceramic from recycled soda-lime-silicate glass and fly ash for tiling

    NASA Astrophysics Data System (ADS)

    Mustaffar, Mohd Idham; Mahmud, Mohamad Haniza; Hassan, Mahadi Abu

    2017-12-01

    Dense glass-ceramics were prepared by a sinter-crystallization process from a combination of soda-lime-silicate glass waste and fly ash. Bentonite clay, acting as a binder, was also added to the prepared formulation. The powder mixture of soda-lime glass, fly ash and bentonite clay was compacted using a uniaxial hydraulic press and sintered at six temperatures: 750, 800, 850, 900, 950 and 1000 °C. The heating rate and sintering time were set at 5 °C/min and 30 minutes, respectively. The results revealed that modulus of rupture (MOR), density and linear shrinkage increased from 750 to 800 °C but decreased from 800 to 1000 °C, while water absorption showed the opposite trend. The glass-ceramic sintered at 800 °C was found to have the best combination of physical-mechanical properties and has the potential to be applied in the construction industry, particularly as floor and wall tiles, because of its simple manufacturing process at low temperature.

  11. Graphene oxide-based dispersive solid-phase extraction combined with in situ derivatization and gas chromatography-mass spectrometry for the determination of acidic pharmaceuticals in water.

    PubMed

    Naing, Nyi Nyi; Li, Sam Fong Yau; Lee, Hian Kee

    2015-12-24

    A fast and low-cost sample preparation method, graphene-based dispersive solid-phase extraction combined with gas chromatography-mass spectrometric (GC-MS) analysis, was developed. The procedure involves an initial extraction with a water-immiscible organic solvent, followed by a rapid clean-up using amine-functionalized reduced graphene oxide as sorbent. Simple and fast one-step in situ derivatization using trimethylphenylammonium hydroxide was subsequently applied to the acidic pharmaceuticals serving as model analytes (ibuprofen, gemfibrozil, naproxen, ketoprofen and diclofenac) before GC-MS analysis. Parameters affecting the derivatization and extraction efficiency, such as the volume of derivatization agent, the desorption solvent, pH and ionic strength, were investigated. Under the optimum conditions, the method demonstrated good limits of detection ranging from 1 to 16 ng L(-1), linearity (from 0.01 to 50 and 0.05 to 50 μg L(-1), depending on the analyte) and satisfactory repeatability of extraction (relative standard deviations below 13%, n=3). Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Simultaneous HPTLC Determination of Rabeprazole and Itopride Hydrochloride From Their Combined Dosage Form

    PubMed Central

    Suganthi, A.; John, Sofiya; Ravi, T. K.

    2008-01-01

    A simple, precise, sensitive, rapid and reproducible HPTLC method for the simultaneous estimation of rabeprazole and itopride hydrochloride in tablets was developed and validated. The method involves separation of the components by TLC on a precoated silica gel G60F254 plate with a solvent system of n-butanol, toluene and ammonia (8.5:0.5:1 v/v/v); detection was carried out densitometrically using a UV detector at 288 nm in absorbance mode. This system was found to give compact spots for rabeprazole (Rf value of 0.23±0.02) and for itopride hydrochloride (Rf value of 0.75±0.02). Linearity was found to be in the range of 40-200 ng/spot and 300-1500 ng/spot for rabeprazole and itopride hydrochloride, respectively. The limits of detection and quantification were 10 and 20 ng/spot for rabeprazole and 50 and 100 ng/spot for itopride hydrochloride, respectively. The method was found to be suitable for the routine analysis of the combined dosage form. PMID:20046748

  13. QuEChERS Purification Combined with Ultrahigh-Performance Liquid Chromatography Tandem Mass Spectrometry for Simultaneous Quantification of 25 Mycotoxins in Cereals

    PubMed Central

    Sun, Juan; Li, Weixi; Zhang, Yan; Hu, Xuexu; Wu, Li; Wang, Bujun

    2016-01-01

    A method based on the QuEChERS (quick, easy, cheap, effective, rugged, and safe) purification combined with ultrahigh performance liquid chromatography tandem mass spectrometry (UPLC–MS/MS), was optimized for the simultaneous quantification of 25 mycotoxins in cereals. Samples were extracted with a solution containing 80% acetonitrile and 0.1% formic acid, and purified with QuEChERS before being separated by a C18 column. The mass spectrometry was conducted by using positive electrospray ionization (ESI+) and multiple reaction monitoring (MRM) models. The method gave good linear relations with regression coefficients ranging from 0.9950 to 0.9999. The detection limits ranged from 0.03 to 15.0 µg·kg−1, and the average recovery at three different concentrations ranged from 60.2% to 115.8%, with relative standard deviations (RSD%) varying from 0.7% to 19.6% for the 25 mycotoxins. The method is simple, rapid, accurate, and an improvement compared with the existing methods published so far. PMID:27983693

  14. A validated densitometric method for analysis of atorvastatin calcium and metoprolol tartarate as bulk drugs and in combined capsule dosage forms.

    PubMed

    Patole, Sm; Khodke, As; Potale, Lv; Damle, Mc

    2011-01-01

    A simple, accurate and precise high-performance thin-layer chromatographic method has been developed for the simultaneous estimation of Atorvastatin Calcium and Metoprolol Tartarate from a capsule dosage form. The method employed silica gel 60F254s precoated plates as the stationary phase and a mixture of chloroform:methanol:glacial acetic acid (dil.) (9:1.5:0.2 v/v/v) as the mobile phase. Densitometric scanning was performed at 220 nm using a Camag TLC Scanner 3. The method was linear in the concentration range of 500 to 2500 ng/spot for both drugs, with correlation coefficients of 0.984 for Atorvastatin Calcium and 0.995 for Metoprolol Tartarate. The retention factor was 0.45 ± 0.04 for Atorvastatin Calcium and 0.25 ± 0.02 for Metoprolol Tartarate. The method was validated as per ICH (International Conference on Harmonisation) guidelines, proving its utility for the estimation of Atorvastatin Calcium and Metoprolol Tartarate in the combined dosage form.

  15. A comparative study of smart spectrophotometric methods for simultaneous determination of a skeletal muscle relaxant and an analgesic in combined dosage form

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Mohamed, Dalia

    2015-04-01

    Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug paracetamol (PARA) and the skeletal muscle relaxant dantrolene sodium (DANT). Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS) and mean centering (MC). The other three utilize the isoabsorptive point, either at zero order, namely absorbance ratio (AR) and absorbance subtraction (AS), or at the ratio spectrum, namely amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined dosage form. Standard deviation values were less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric methods. The comparison showed no significant difference between the proposed and reported methods in either accuracy or precision.

  16. Inference with minimal Gibbs free energy in information field theory.

    PubMed

    Ensslin, Torsten A; Weig, Cornelius

    2010-11-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as the Gaussian approximation to the full posterior probability that has maximal cross information with it. We derive optimized estimators for three applications to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and a point spread function, as needed for gamma-ray astronomy and for cosmography using photometric galaxy redshifts; (ii) inference of a Gaussian signal with unknown spectrum; and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally, we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
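
    The core variational idea can be written compactly. The notation below is generic (an information Hamiltonian H, a Gaussian surrogate with mean m and covariance D) and is a sketch of the principle rather than the paper's exact formulation:

```latex
% Generic sketch of the minimal Gibbs free energy principle.
% Posterior written via an information Hamiltonian: P(s|d) \propto e^{-H[s]}.
% Gaussian surrogate: \mathcal{G}(s - m, D) with mean m, covariance D.
\begin{align}
  G[m, D] &= \langle H[s] \rangle_{\mathcal{G}(s-m,\,D)} - T\, S_{\mathcal{G}},
  \qquad
  S_{\mathcal{G}} = -\int \mathcal{D}s\; \mathcal{G} \ln \mathcal{G},\\
  (m^{*}, D^{*}) &= \operatorname*{arg\,min}_{m,\,D} \; G[m, D].
\end{align}
% At T = 1, minimizing G over the Gaussian family is equivalent to
% minimizing the Kullback--Leibler divergence D_{KL}(\mathcal{G}\,\|\,P),
% i.e. choosing the Gaussian with maximal cross information with the posterior.
```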

  17. Sustainable Engineering and Improved Recycling of PET for High-Value Applications: Transforming Linear PET to Lightly Branched PET with a Novel, Scalable Process

    NASA Astrophysics Data System (ADS)

    Pierre, Cynthia; Torkelson, John

    2009-03-01

    A major challenge for the most effective recycling of poly(ethylene terephthalate) concerns the fact that initial melt processing of PET into a product leads to substantial degradation of molecular weight. Thus, recycled PET has insufficient melt viscosity for reuse in high-value applications such as melt-blowing of PET bottles. Academic and industrial research has tried to remedy this situation by synthesis and use of "chain extenders" that can lead to branched PET (with higher melt viscosity than the linear recycled PET) via condensation reactions with functional groups on the PET. Here we show that simple processing of PET via solid-state shear pulverization (SSSP) leads to enhanced PET melt viscosity without the need for chemical additives. We hypothesize that this branching results from low levels of chain scission accompanying SSSP, leading to formation of polymeric radicals that participate in chain transfer and combination reactions with other PET chains and thereby to in situ branch formation. The pulverized PET exhibits vastly enhanced crystallization kinetics, eliminating the need to employ cold crystallization to achieve maximum PET crystallinity. Results of SSSP processing of PET will be compared to results obtained with poly(butylene terephthalate).

  18. Evolutionary dynamics of general group interactions in structured populations

    NASA Astrophysics Data System (ADS)

    Li, Aming; Broom, Mark; Du, Jinming; Wang, Long

    2016-02-01

    The evolution of populations is influenced by many factors, and the simple classical models have been developed in a number of important ways. Both population structure and multiplayer interactions have been shown to significantly affect the evolution of important properties, such as the level of cooperation or of aggressive behavior. Here we combine these two key factors and develop the evolutionary dynamics of general group interactions in structured populations represented by regular graphs. The traditional linear and threshold public goods games are adopted as models to address the dynamics. We show that for linear group interactions, population structure can favor the evolution of cooperation compared to the well-mixed case, and we see that the more neighbors there are, the harder it is for cooperators to persist in structured populations. We further show that threshold group interactions could lead to the emergence of cooperation even in well-mixed populations. Here population structure sometimes inhibits cooperation for the threshold public goods game, where depending on the benefit to cost ratio, the outcomes are bistability or a monomorphic population of defectors or cooperators. Our results suggest, counterintuitively, that structured populations are not always beneficial for the evolution of cooperation for nonlinear group interactions.
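
    The incentive structure distinguishing the two game types can be made concrete with a minimal payoff sketch. Group size, multiplier, benefit and threshold values below are illustrative choices, not values from the article.

```python
# Focal-player payoffs in public goods games, contrasting the linear and
# threshold variants. All parameter values are illustrative.

def linear_payoffs(k_others, n, r, c):
    """(cooperate, defect) payoffs when k_others of the n-1 co-players
    cooperate: each contribution c is multiplied by r and shared by all n."""
    cooperate = r * c * (k_others + 1) / n - c
    defect = r * c * k_others / n
    return cooperate, defect

def threshold_payoffs(k_others, n, b, c, T):
    """Threshold variant: a fixed benefit b accrues to every group member
    only if at least T members (focal player included) cooperate."""
    cooperate = (b if k_others + 1 >= T else 0.0) - c
    defect = b if k_others >= T else 0.0
    return cooperate, defect
```

    In the linear game with r < n, defection dominates for every number of cooperating co-players, whereas in the threshold game a pivotal player (k_others = T - 1) is strictly better off cooperating whenever b > c, which is why thresholds can sustain cooperation even in well-mixed populations.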

  19. Determination of perfluorinated compounds in fish fillet homogenates: method validation and application to fillet homogenates from the Mississippi River.

    PubMed

    Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K

    2011-01-10

    We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure with stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
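
    The analytical-solution core of such a tool can be sketched in a few lines: for a first-order scheme the concentration vector obeys dc/dt = Kc, which eigendecomposition of the rate matrix solves in closed form. The three-state scheme and rate constants below are invented for illustration (the article's implementation uses MatLab functions; NumPy stands in here).

```python
import numpy as np

# First-order scheme A <-> B -> C with hypothetical rate constants.
k_ab, k_ba, k_bc = 2.0, 1.0, 0.5

# Rate matrix K for dc/dt = K c; columns sum to zero (mass conservation).
K = np.array([
    [-k_ab,  k_ba,          0.0],   # dA/dt
    [ k_ab, -(k_ba + k_bc), 0.0],   # dB/dt
    [ 0.0,   k_bc,          0.0],   # dC/dt
])

c0 = np.array([1.0, 0.0, 0.0])      # start with pure A

def concentrations(t):
    """Analytical solution c(t) = V exp(L t) V^{-1} c0 via eigendecomposition."""
    L, V = np.linalg.eig(K)
    return (V @ np.diag(np.exp(L * t)) @ np.linalg.inv(V) @ c0).real
```

    Because the solution is analytical, sweeping time points or rate constants (as a fitting loop must) costs only matrix products, with no repeated numerical integration.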

  1. Metacarpal geometry changes during Thoroughbred race training are compatible with sagittal-plane cantilever bending.

    PubMed

    Merritt, J S; Davies, H M S

    2010-11-01

    Bending of the equine metacarpal bones during locomotion is poorly understood. Cantilever bending, in particular, may influence the loading of the metacarpal bones and surrounding structures in unique ways. We hypothesised that increased amounts of sagittal-plane cantilever bending may govern changes to the shape of the metacarpal bones of Thoroughbred racehorses during training. We hypothesised that this type of bending would require a linear change to occur in the combined second moment of area of the bones for sagittal-plane bending (I) during race training. Six Thoroughbred racehorses were used, all of which had completed at least 4 years of race training at a commercial stable. The approximate change in I that had occurred during race training was computed from radiographic measurements at the start and end of training using a simple model of bone shape. A significant (P < 0.001), approximately linear pattern of change in I was observed in each horse, with the maximum change occurring proximally and the minimum change occurring distally. The pattern of change in I was compatible with the hypothesis that sagittal-plane cantilever bending governed changes to the shape of the metacarpal bones during race training. © 2010 EVJ Ltd.

  2. Simultaneous Determination of Ofloxacin and Flavoxate Hydrochloride by Absorption Ratio and Second Derivative UV Spectrophotometry

    PubMed Central

    Attimarad, Mahesh

    2010-01-01

    The objective of this study was to develop simple, precise, accurate and sensitive UV spectrophotometric methods for the simultaneous determination of ofloxacin (OFX) and flavoxate HCl (FLX) in pharmaceutical formulations. The first method is based on the absorption ratio method, forming a Q-absorbance equation at 289 nm (λmax of OFX) and 322.4 nm (isoabsorptive point); the linearity range was found to be 1 to 30 μg/ml for both FLX and OFX. In the second method, second-derivative absorption at 311.4 nm for OFX (zero crossing for FLX) and at 246.2 nm for FLX (zero crossing for OFX) was used for the determination of the drugs, and the linearity range was found to be 2 to 30 μg/ml for OFX and 2 to 75 μg/ml for FLX. The accuracy and precision of the methods were determined and validated statistically. Both methods showed good reproducibility and recovery with %RSD less than 1.5%. Both methods were found to be rapid, specific, precise and accurate and can be successfully applied for the routine analysis of OFX and FLX in combined dosage form. PMID:24826003

  3. IETI – Isogeometric Tearing and Interconnecting

    PubMed Central

    Kleiss, Stefan K.; Pechstein, Clemens; Jüttler, Bert; Tomar, Satyendra

    2012-01-01

    Finite Element Tearing and Interconnecting (FETI) methods are a powerful approach to designing solvers for large-scale problems in computational mechanics. The numerical simulation problem is subdivided into a number of independent sub-problems, which are then coupled in appropriate ways. NURBS- (Non-Uniform Rational B-spline) based isogeometric analysis (IGA) applied to complex geometries requires to represent the computational domain as a collection of several NURBS geometries. Since there is a natural decomposition of the computational domain into several subdomains, NURBS-based IGA is particularly well suited for using FETI methods. This paper proposes the new IsogEometric Tearing and Interconnecting (IETI) method, which combines the advanced solver design of FETI with the exact geometry representation of IGA. We describe the IETI framework for two classes of simple model problems (Poisson and linearized elasticity) and discuss the coupling of the subdomains along interfaces (both for matching interfaces and for interfaces with T-joints, i.e. hanging nodes). Special attention is paid to the construction of a suitable preconditioner for the iterative linear solver used for the interface problem. We report several computational experiments to demonstrate the performance of the proposed IETI method. PMID:24511167
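
    The "tearing and interconnecting" coupling at the heart of FETI-type methods can be illustrated on a one-dimensional Poisson problem: assemble each subdomain independently, then enforce interface continuity with a Lagrange multiplier. Plain linear finite elements stand in for NURBS here, and all sizes are illustrative.

```python
import numpy as np

# Two-subdomain solve of -u'' = 1 on (0,1), u(0) = u(1) = 0, split at x = 0.5.
m = 8                      # elements per subdomain
h = 0.5 / m

# Subdomain stiffness/load: outer end fixed, interface end free; unknowns
# ordered from the fixed end toward the interface. By symmetry both
# subdomains get the same matrices in this 1-D setting.
K_sub = (1.0 / h) * (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
K_sub[-1, -1] = 1.0 / h    # interface node touches only one element
f_sub = np.full(m, h)
f_sub[-1] = h / 2.0        # half load at the interface node

# Signed jump operators: B1 u1 + B2 u2 = 0 ties the two interface copies.
B1 = np.zeros((1, m)); B1[0, -1] = 1.0
B2 = np.zeros((1, m)); B2[0, -1] = -1.0

# Saddle-point system [[K1, 0, B1^T], [0, K2, B2^T], [B1, B2, 0]].
Z = np.zeros((m, m))
S = np.block([[K_sub, Z, B1.T],
              [Z, K_sub, B2.T],
              [B1, B2, np.zeros((1, 1))]])
rhs = np.concatenate([f_sub, f_sub, [0.0]])
sol = np.linalg.solve(S, rhs)
u1, u2 = sol[:m], sol[m:2 * m]   # u2 is ordered from x = 1 inward
```

    In a production FETI/IETI solver the multiplier (interface) problem is solved iteratively with a preconditioner rather than by one direct saddle-point solve; the sketch only shows the coupling structure.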

  4. Wave scattering from random sets of closely spaced objects through linear embedding via Green's operators

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; de Hon, B. P.; Tijhuis, A. G.

    2011-08-01

    In this paper we present the application of linear embedding via Green's operators (LEGO) to the solution of the electromagnetic scattering from clusters of arbitrary (both conducting and penetrable) bodies randomly placed in a homogeneous background medium. In the LEGO method the objects are enclosed within simple-shaped bricks described in turn via scattering operators of equivalent surface current densities. Such operators have to be computed only once for a given frequency, and hence they can be re-used to perform the study of many distributions comprising the same objects located in different positions. The surface integral equations of LEGO are solved via the Moments Method combined with Adaptive Cross Approximation (to save memory) and Arnoldi basis functions (to compress the system). By means of purposefully selected numerical experiments we discuss the time requirements with respect to the geometry of a given distribution. Besides, we derive an approximate relationship between the (near-field) accuracy of the computed solution and the number of Arnoldi basis functions used to obtain it. This result endows LEGO with a handy practical criterion for both estimating the error and keeping it in check.

  5. Optimization of dispersive liquid-phase microextraction based on solidified floating organic drop combined with high-performance liquid chromatography for the analysis of glucocorticoid residues in food.

    PubMed

    Huang, Yuan; Zheng, Zhiqun; Huang, Liying; Yao, Hong; Wu, Xiao Shan; Li, Shaoguang; Lin, Dandan

    2017-05-10

    A rapid, simple, cost-effective dispersive liquid-phase microextraction based on solidified floating organic drop (SFOD-LPME) was developed in this study. Along with high-performance liquid chromatography, we used the developed approach to determine and enrich trace amounts of four glucocorticoids, namely prednisone, betamethasone, dexamethasone, and cortisone acetate, in animal-derived food. We also investigated and optimized several important parameters that influence the extraction efficiency of SFOD-LPME, including the extractant species, volumes of extraction and dispersant solvents, sodium chloride addition, sample pH, extraction time and temperature, and stirring rate. Under optimum experimental conditions, the calibration graph exhibited linearity over the range of 1.2-200.0 ng/ml for the four analytes, with good linearity (r²: 0.9990-0.9999). The enrichment factor was 142-276, and the detection limits were 0.39-0.46 ng/ml (0.078-0.23 μg/kg). The method was successfully applied to the analysis of real food samples, and good spiked recoveries of 81.5%-114.3% were obtained. Copyright © 2017. Published by Elsevier B.V.

  6. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form

    PubMed Central

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A.; Raut, Rahul P.; Choudhari, Vishnu P.; Kuchekar, Bhanudas S.

    2011-01-01

    Aim: To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. Materials and Methods: The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. Results: The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60–360 ng/band for LOR and 30–180 ng/band for THIO with correlation coefficients r2 = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7–101.2 %. Conclusion: The proposed method was optimized and validated as per the ICH guidelines. PMID:23781452

  7. The influence of and the identification of nonlinearity in flexible structures

    NASA Technical Reports Server (NTRS)

    Zavodney, Lawrence D.

    1988-01-01

    Several models were built at NASA Langley and used to demonstrate the following nonlinear behavior: internal resonance in a free response, principal parametric resonance and subcritical instability in a cantilever beam-lumped mass structure, combination resonance in a parametrically excited flexible beam, autoparametric interaction in a two-degree-of-freedom system, instability of the linear solution, saturation of the excited mode, subharmonic bifurcation, and chaotic responses. A video tape documenting these phenomena was made. An attempt to identify a simple structure consisting of two lightweight beams and two lumped masses using the Eigensystem Realization Algorithm showed the inherent difficulty of using a linear-based theory to identify a particular nonlinearity. Preliminary results show the technique requires novel interpretation, and hence may not be useful for structural modes that are coupled by a quadratic nonlinearity. A literature survey was also completed on recent work in parametrically excited nonlinear systems. In summary, nonlinear systems may possess unique behaviors that require nonlinear identification techniques based on an understanding of how nonlinearity affects the dynamic response of structures. In this way, the unique behaviors of nonlinear systems may be properly identified. Moreover, more accurate quantifiable estimates can be made once the qualitative model has been determined.

  8. Neuromorphic computing with nanoscale spintronic oscillators

    PubMed Central

    Torrejon, Jacob; Riou, Mathieu; Araujo, Flavio Abreu; Tsunegi, Sumito; Khalsa, Guru; Querlioz, Damien; Bortolotti, Paolo; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Stiles, M. D.; Grollier, Julie

    2017-01-01

    Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information [1]. Taking inspiration from this behavior to realize high-density, low-power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators. Indeed, a simple estimation indicates that, in order to fit a hundred million oscillators organized in a two-dimensional array inside a chip the size of a thumb, their lateral dimensions must be smaller than one micrometer. However, despite multiple theoretical proposals [2-5] and several candidates such as memristive [6] or superconducting [7] oscillators, there is no proof of concept today of neuromorphic computing with nano-oscillators. Indeed, nanoscale devices tend to be noisy and to lack the stability required to process data in a reliable way. Here, we show experimentally that a nanoscale spintronic oscillator [8,9] can achieve spoken-digit recognition with accuracies similar to state-of-the-art neural networks. We pinpoint the regime of magnetization dynamics leading to the highest performance. These results, combined with the exceptional ability of these spintronic oscillators to interact together, their long lifetime, and low energy consumption, open the path to fast, parallel, on-chip computation based on networks of oscillators. PMID:28748930

  9. Efficient multidimensional regularization for Volterra series estimation

    NASA Astrophysics Data System (ADS)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
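
    The regularization idea carries over from the linear case, which is the easiest to sketch: a first-order (FIR) model estimated with a smoothness/decay kernel prior. The system, kernel hyperparameters and noise level below are invented, and this is only the linear special case, not the paper's full Volterra estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samples = 30, 200
g_true = 0.8 ** np.arange(n_taps)          # decaying true impulse response

u = rng.standard_normal(n_samples)
# Regression matrix of lagged inputs (one column per tap).
Phi = np.column_stack([np.concatenate([np.zeros(k), u[:n_samples - k]])
                       for k in range(n_taps)])
y = Phi @ g_true + 0.1 * rng.standard_normal(n_samples)

# DC ("diagonal/correlated") kernel: prior covariance encoding smooth,
# exponentially decaying coefficients (hyperparameters chosen by hand).
lam, rho = 0.8, 0.9
idx = np.arange(n_taps)
P = (lam ** ((idx[:, None] + idx[None, :]) / 2.0)
     * rho ** np.abs(idx[:, None] - idx[None, :]))

# Regularized (MAP) estimate: g = P Phi^T (Phi P Phi^T + sigma^2 I)^{-1} y.
sigma2 = 0.01
g_hat = P @ Phi.T @ np.linalg.solve(
    Phi @ P @ Phi.T + sigma2 * np.eye(n_samples), y)
```

    For higher-order Volterra kernels the same construction applies with multidimensional regressors and a multidimensional prior covariance, which is where the memory and gradient-based issues addressed in the paper arise.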

  10. Reduction of chemical formulas from the isotopic peak distributions of high-resolution mass spectra.

    PubMed

    Roussis, Stilianos G; Proulx, Richard

    2003-03-15

    A method has been developed for the reduction of the chemical formulas of compounds in complex mixtures from the isotopic peak distributions of high-resolution mass spectra. The method is based on the principle that the observed isotopic peak distribution of a mixture of compounds is a linear combination of the isotopic peak distributions of the individual compounds in the mixture. All possible chemical formulas that meet specific criteria (e.g., type and number of atoms in structure, limits of unsaturation, etc.) are enumerated, and theoretical isotopic peak distributions are generated for each formula. The relative amount of each formula is obtained from the accurately measured isotopic peak distribution and the calculated isotopic peak distributions of all candidate formulas. The formulas of compounds in simple spectra, where peak components are fully resolved, are rapidly determined by direct comparison of the calculated and experimental isotopic peak distributions. The singular value decomposition linear algebra method is used to determine the contributions of compounds in complex spectra containing unresolved peak components. The principles of the approach and typical application examples are presented. The method is most useful for the characterization of complex spectra containing partially resolved peaks and structures with multiisotopic elements.
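
    The core linear-algebra step is easy to sketch: stack the theoretical isotopic distributions of the candidate formulas as columns of a matrix and solve the least-squares problem for their contributions by singular value decomposition. The distributions below are made up for illustration, not real formula patterns.

```python
import numpy as np

# Hypothetical normalized isotopic peak distributions (M, M+1, M+2, M+3)
# for three candidate formulas; each column is one theoretical pattern.
A = np.array([
    [0.90, 0.75, 0.60],
    [0.08, 0.20, 0.25],
    [0.02, 0.04, 0.12],
    [0.00, 0.01, 0.03],
])

# True (unknown) relative amounts of each formula in the mixture.
x_true = np.array([0.5, 0.3, 0.2])

# Observed mixture distribution: a linear combination of the candidate
# patterns plus a little measurement noise.
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.standard_normal(4)

# SVD-based least squares recovers the contributions (np.linalg.lstsq
# is SVD-based internally).
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    With fully resolved peaks this reduces to direct pattern comparison; the SVD route matters for unresolved, overlapping peak components, exactly as the abstract describes.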

  11. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use leave-one-pair-out cross-validation method, instead of re-substitution method, which is overoptimistic and hence might lead to wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
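
    A minimal sketch of combining two markers linearly: Fisher's linear discriminant direction is one classical choice of combination (used here for brevity instead of the stepwise search the article proposes), scored with the empirical (Mann-Whitney) AUC. The synthetic marker distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def auc(scores, labels):
    """Empirical AUC via the Mann-Whitney U statistic (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Two synthetic biomarkers, each only moderately discriminative alone.
n = 500
labels = np.repeat([0, 1], n)
x1 = np.concatenate([rng.normal(0, 1, n), rng.normal(0.7, 1, n)])
x2 = np.concatenate([rng.normal(0, 1, n), rng.normal(0.7, 1, n)])

# Fisher's linear discriminant direction: w = S^{-1} (mu1 - mu0).
X = np.column_stack([x1, x2])
mu_diff = X[labels == 1].mean(0) - X[labels == 0].mean(0)
S = np.cov(X[labels == 0].T) + np.cov(X[labels == 1].T)
w = np.linalg.solve(S, mu_diff)

auc_combo = auc(X @ w, labels)
```

    Note this evaluates the combination on the same data used to fit it (re-substitution); as the article stresses, an honest estimate of future performance needs cross-validation such as leave-one-pair-out.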

  12. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model incorporating non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the simple cell's physiological structure by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  13. Simple Expressions for the Design of Linear Tapers in Overmoded Corrugated Waveguides

    DOE PAGES

    Schaub, S. C.; Shapiro, M. A.; Temkin, R. J.

    2015-08-16

    In this paper, simple analytical formulae are presented for the design of linear tapers with very low mode conversion loss in overmoded corrugated waveguides, for tapers from waveguide radius a2 down to a1 (a1 < a2), with the optimal taper length scaling as a1a2/λ. Here, λ is the wavelength of the radiation. The fractional loss of the HE11 mode in an optimized taper is 0.0293(a2 − a1)^4/(a1^2 a2^2). These formulae are accurate when a2 ≲ 2a1. Slightly more complex formulae, accurate for a2 ≤ 4a1, are also presented in this paper. The loss in an overmoded corrugated linear taper is less than 1% when a2 ≤ 2.12a1 and less than 0.1% when a2 ≤ 1.53a1. The present analytic results have been benchmarked against a rigorous mode matching code and have been found to be very accurate. The results for linear tapers are compared with the analogous expressions for parabolic tapers. Parabolic tapers may provide lower loss, but linear tapers with moderate values of a2/a1 may be attractive because of their simplicity of fabrication.
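
    The quoted loss expression is simple enough to check directly. The sketch below assumes the form stated in the abstract, 0.0293(a2 − a1)^4/(a1^2 a2^2), and verifies the quoted 1% and 0.1% thresholds numerically; it is an illustration, not the paper's design code.

```python
def he11_taper_loss(a1, a2):
    # Fractional HE11 mode-conversion loss of an optimized linear taper
    # from radius a2 down to a1 (a1 < a2), per the abstract's expression.
    return 0.0293 * (a2 - a1) ** 4 / (a1 ** 2 * a2 ** 2)
```

    Evaluating at a2 = 1.53 a1 gives a loss just under 0.1%, and at a2 = 2.12 a1 a loss of about 1%, consistent with the thresholds quoted above.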

  14. A Vernacular for Linear Latent Growth Models

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; Choi, Jaehwa

    2006-01-01

    In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…

  15. The Development of a High Speed Exponential Function Generator for Linearization of Microwave Voltage Controlled Oscillators.

    DTIC Science & Technology

    1985-10-01

    characteristic of a p-n junction to provide exponential linearization in a simple, thermally stable, wideband circuit. ABSTRACT (translated from French): … exponential (frequency/voltage) characteristic found in many oscillators. This wide-bandwidth circuit uses the characteristic …

  16. The Pendulum: A Paradigm for the Linear Oscillator

    ERIC Educational Resources Information Center

    Newburgh, Ronald

    2004-01-01

    The simple pendulum is a model for the linear oscillator. The usual mathematical treatment of the problem begins with a differential equation that one solves with the techniques of the differential calculus, a formal process that tends to obscure the physics. In this paper we begin with a kinematic description of the motion obtained by experiment…

  17. The relationship between psychological distress and baseline sports-related concussion testing.

    PubMed

    Bailey, Christopher M; Samples, Hillary L; Broshek, Donna K; Freeman, Jason R; Barth, Jeffrey T

    2010-07-01

    This study examined the effect of psychological distress on neurocognitive performance measured during baseline concussion testing. Archival data were utilized to examine correlations between personality testing and computerized baseline concussion testing. Significantly correlated personality measures were entered into linear regression analyses, predicting baseline concussion testing performance. Suicidal ideation was examined categorically. Athletes underwent testing and screening at a university athletic training facility. Participants included 47 collegiate football players 17 to 19 years old, the majority of whom were in their first year of college. Participants were administered the Concussion Resolution Index (CRI), an internet-based neurocognitive test designed to monitor and manage both at-risk and concussed athletes. Participants took the Personality Assessment Inventory (PAI), a self-administered inventory designed to measure clinical syndromes, treatment considerations, and interpersonal style. Scales and subscales from the PAI were utilized to determine the influence psychological distress had on the CRI indices: simple reaction time, complex reaction time, and processing speed. Analyses revealed several significant correlations between aspects of somatic concern, depression, anxiety, substance abuse, and suicidal ideation on the one hand and CRI performance on the other, each with at least a moderate effect. When entered into a linear regression, the block of combined psychological symptoms accounted for a significant amount of variance in baseline CRI performance, with moderate to large effects (r = 0.23-0.30). When examined categorically, participants with suicidal ideation showed significantly slower simple reaction time and complex reaction time, with a similar trend on processing speed.
Given the possibility of obscured concussion deficits after injury, implications for premature return to play, and the need to target psychological distress outright, these findings heighten the clinical importance of screening for psychological distress during baseline and post-injury concussion evaluations.

  18. Stoichiometric determination of moisture in edible oils by Mid-FTIR spectroscopy.

    PubMed

    van de Voort, F R; Tavassoli-Kafrani, M H; Curtis, J M

    2016-04-28

    A simple and accurate method for the determination of moisture in edible oils by differential FTIR spectroscopy has been devised based on the stoichiometric reaction of the moisture in oil with toluenesulfonyl isocyanate (TSI) to produce CO2. Calibration standards were devised by gravimetrically spiking dry dioxane with water, followed by the addition of neat TSI and examination of the differential spectra relative to the dry dioxane. In the method, CO2 peak area changes are measured at 2335 cm(-1) and were shown to be related to the amount of moisture added, with any CO2 arising from residual moisture in the dry dioxane ratioed out. CO2 volatility issues were determined to be minimal, with the overall SD of dioxane calibrations being ∼18 ppm over a range of 0-1000 ppm. Gravimetrically blended dry and water-saturated oils analysed in a similar manner produced linear CO2 responses with SD's of <15 ppm on average. One set of dry-wet blends was analysed in duplicate by FTIR and by two independent laboratories using coulometric Karl Fischer (KF) procedures. All three methods produced highly linear moisture relationships with SD's of 7, 16 and 28 ppm, respectively, over a range of 200-1500 ppm. Although the absolute moisture values obtained by each method did not exactly coincide, each tracked the expected moisture changes proportionately. The FTIR TSI-H2O method provides a simple and accurate instrumental means of determining moisture in oils, rivaling the accuracy and specificity of standard KF procedures, and has the potential to be automated. It could also be applied to other hydrophobic matrices and possibly evolve into a more generalized method if combined with polar aprotic solvent extraction. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. One-step extraction and quantitation of toxic alcohols and ethylene glycol in plasma by capillary gas chromatography (GC) with flame ionization detection (FID).

    PubMed

    Orton, Dennis J; Boyd, Jessica M; Affleck, Darlene; Duce, Donna; Walsh, Warren; Seiden-Long, Isolde

    2016-01-01

    Clinical analysis of volatile alcohols (i.e. methanol, ethanol, isopropanol, and the metabolite acetone) and ethylene glycol (EG) generally employs separate gas chromatography (GC) methods for analysis. Here, a method for combined analysis of volatile alcohols and EG is described. Volatile alcohols and EG were extracted with 2:1 (v:v) acetonitrile containing the internal standards (IS) 1,2-butanediol (for EG) and n-propanol (for alcohols). Samples were analyzed on an Agilent 6890 GC FID. The method was evaluated for precision, accuracy, reproducibility, linearity, selectivity and limit of quantitation (LOQ), followed by correlation to existing GC methods using patient samples, Bio-Rad QC, and in-house prepared QC material. Inter-day precision was from 6.5-11.3% CV, and linearity was verified from 0.6 mmol/L up to 150 mmol/L for each analyte. The method showed good recovery (~100%) and the LOQ was calculated to be between 0.25 and 0.44 mmol/L. Patient correlation against current GC methods showed good agreement (slopes from 1.03-1.12, and y-intercepts from 0 to 0.85 mmol/L; R(2)>0.98; N=35). Carryover was negligible for volatile alcohols in the measuring range, and of the potential interferences tested, only toluene and 1,3-propanediol interfered. The method was able to resolve 2,3-butanediol, diethylene glycol, and propylene glycol in addition to the peaks quantified. Here we describe a simple procedure for simultaneous analysis of EG and volatile alcohols that comes at low cost and with a simple liquid-liquid extraction, requiring no derivatization to obtain adequate sensitivity for clinical specimens. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  20. Trends in Global Vegetation Activity and Climatic Drivers Indicate a Decoupled Response to Climate Change.

    PubMed

    Schut, Antonius G T; Ivits, Eva; Conijn, Jacob G; Ten Brink, Ben; Fensholt, Rasmus

    2015-01-01

    Detailed understanding of a possible decoupling between climatic drivers of plant productivity and the response of ecosystem vegetation is required. We compared trends in six NDVI metrics (1982-2010) derived from the GIMMS3g dataset with modelled biomass productivity and assessed uncertainty in trend estimates. Annual total biomass weight (TBW) was calculated with the LINPAC model. Trends were determined using a simple linear regression, a Theil-Sen median slope and a piecewise regression (PWR) with two segments. Values of NDVI metrics were related to Net Primary Production (MODIS-NPP) and TBW per biome and land-use type. The simple linear and Theil-Sen trends did not differ much, whereas PWR increased the fraction of explained variation, depending on the NDVI metric considered. A positive trend in TBW indicating more favorable climatic conditions was found for 24% of pixels on land, and a negative trend for 5%. A decoupled trend, indicating positive TBW trends and monotonic negative or segmented and negative NDVI trends, was observed for 17-36% of all productive areas depending on the NDVI metric used. For only 1-2% of all pixels in productive areas, a diverging and greening trend was found despite a strong negative trend in TBW. The choice of NDVI metric used strongly affected outcomes on regional scales, and differences in the fraction of explained variation in MODIS-NPP between biomes were large; a combination of NDVI metrics is recommended for global studies. We have found an increasing difference between trends in climatic drivers and observed NDVI for large parts of the globe. Our findings suggest that future scenarios must consider impacts of constraints on plant growth, such as extremes in weather and nutrient availability, to predict changes in NPP and CO2 sequestration capacity.
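
    Two of the three trend estimators used above are easy to sketch. The code below is an illustration with synthetic data, not the study's pipeline: an ordinary least-squares slope and a Theil-Sen slope (the median of all pairwise slopes), which is robust to the anomalous observations that can distort simple linear trends.

```python
import numpy as np
from itertools import combinations

def ols_slope(x, y):
    # Simple linear regression slope (least squares)
    return float(np.polyfit(x, y, 1)[0])

def theil_sen_slope(x, y):
    # Median of all pairwise slopes: robust to outliers
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)]
    return float(np.median(slopes))
```

    On clean linear data the two estimators agree; a single anomalous year shifts the OLS slope substantially while leaving the Theil-Sen slope unchanged, which is why the two can be compared as a robustness check.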

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubrovsky, V. G.; Topovsky, A. V.

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by the sums of solutions u^(n) and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schroedinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k1) + … + u^(km), 1 ≤ k1 < k2 < … < km ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  2. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines needs a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, but more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
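
    The two-point prediction step can be sketched as ordinary linear interpolation: the retention times of the two reference substances on a standard column and on the new column define a line, and every other compound's tR is mapped through it. This is an illustrative sketch of that step only, not the published procedure.

```python
def lctrs_predict(t_std, refs_std, refs_new):
    # Linear calibration using two reference substances: fit the line
    # through (standard tR, new-column tR) for the two references, then
    # map any other compound's standard tR onto the new column.
    (s1, s2), (n1, n2) = refs_std, refs_new
    slope = (n2 - n1) / (s2 - s1)
    intercept = n1 - slope * s1
    return slope * t_std + intercept
```

    For example, if references eluting at 5 and 15 min on the standard column elute at 6 and 18 min on the new column, a compound at 10 min is predicted at 12 min.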

  3. Joint effect of unlinked genotypes: application to type 2 diabetes in the EPIC-Potsdam case-cohort study.

    PubMed

    Knüppel, Sven; Meidtner, Karina; Arregui, Maria; Holzhütter, Hermann-Georg; Boeing, Heiner

    2015-07-01

    Analyzing multiple single nucleotide polymorphisms (SNPs) is a promising approach to finding genetic effects beyond single-locus associations. We proposed the use of multilocus stepwise regression (MSR) to screen for allele combinations as a method to model joint effects, and compared the results with the often used genetic risk score (GRS), conventional stepwise selection, and the shrinkage method LASSO. In contrast to MSR, the GRS, conventional stepwise selection, and LASSO model each genotype by the risk allele doses. We reanalyzed 20 unlinked SNPs related to type 2 diabetes (T2D) in the EPIC-Potsdam case-cohort study (760 cases, 2193 noncases). No SNP-SNP interactions and no nonlinear effects were found. Two SNP combinations selected by MSR (Nagelkerke's R² = 0.050 and 0.048) included eight SNPs with a mean allele combination frequency of 2%. GRS and stepwise selection selected nearly the same SNP combinations, consisting of 12 and 13 SNPs (Nagelkerke's R² ranged from 0.020 to 0.029). LASSO showed similar results. The MSR method showed the best model fit as measured by Nagelkerke's R², suggesting that further improvement may render this method a useful tool in genetic research. However, our comparison suggests that the GRS is a simple way to model genetic effects, since it does not consider linkage, SNP-SNP interactions, or non-linear effects. © 2015 John Wiley & Sons Ltd/University College London.
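
    The GRS baseline mentioned above is just a (possibly weighted) sum of risk-allele doses per individual. The sketch below is a generic illustration of that idea, not the study's code; the optional weights (e.g. published per-SNP effect sizes) are an assumption for illustration.

```python
import numpy as np

def genetic_risk_score(dose_matrix, weights=None):
    # Rows = individuals, columns = SNPs; entries are risk-allele doses
    # (0, 1 or 2). Unweighted GRS sums the doses; weighted GRS scales
    # each SNP, e.g. by its published effect size.
    dose_matrix = np.asarray(dose_matrix, dtype=float)
    if weights is None:
        weights = np.ones(dose_matrix.shape[1])
    return dose_matrix @ np.asarray(weights, dtype=float)
```

    Because the score is linear in the doses, it cannot by construction capture SNP-SNP interactions or non-linear dose effects, which is exactly the limitation the abstract notes.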

  4. Tree-Structured Digital Organisms Model

    NASA Astrophysics Data System (ADS)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. In our model, a life process is a combination of various functions, much as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and that it can simulate evolutionary computation through the mutual interaction of functions. We verified by simulation that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.

  5. Single gate p-n junctions in graphene-ferroelectric devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinnefeld, J. Henry; Mason, Nadya, E-mail: nadya@illinois.edu; Xu, Ruijuan

    Graphene's linear dispersion relation and the attendant implications for bipolar electronics applications have motivated a range of experimental efforts aimed at producing p-n junctions in graphene. Here we report electrical transport measurements of graphene p-n junctions formed via simple modifications to a PbZr0.2Ti0.8O3 substrate, combined with a self-assembled layer of ambient environmental dopants. We show that the substrate configuration controls the local doping region, and that the p-n junction behavior can be controlled with a single gate. Finally, we show that the ferroelectric substrate induces a hysteresis in the environmental doping which can be utilized to activate and deactivate the doping, yielding an "on-demand" p-n junction in graphene controlled by a single, universal backgate.

  6. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
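
    The estimation step described above, combining the model forecast with the observation in an optimal fashion using their error covariances, is the standard Kalman filter update. The scalar sketch below illustrates that blend generically; it is not the paper's multivariate wave-amplitude filter.

```python
def kalman_update(x_forecast, P_forecast, z_obs, R_obs):
    # Optimal blend of model forecast and observation, weighted by their
    # respective error covariances (scalar case for clarity).
    K = P_forecast / (P_forecast + R_obs)          # Kalman gain
    x_post = x_forecast + K * (z_obs - x_forecast) # updated estimate
    P_post = (1.0 - K) * P_forecast                # posterior error covariance
    return x_post, P_post
```

    When forecast and observation errors are equal, the update is the simple average and the posterior variance is halved; a more confident forecast (small P) pulls the estimate toward the model, and vice versa.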

  7. Rapid Design of Gravity Assist Trajectories

    NASA Technical Reports Server (NTRS)

    Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.

    1991-01-01

    Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyze such trajectories are discussed. So-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting and analytic approximations with a Newton-method targeter is demonstrated to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.
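
    The Newton-method targeter idea, adjusting a control variable until a propagated quantity hits its goal, can be sketched in one dimension. This is a generic illustration of the targeting concept (with a toy function standing in for trajectory propagation), not the Swingby implementation.

```python
def newton_target(f, x0, goal, tol=1e-10, h=1e-6, max_iter=50):
    # Adjust a single control variable x until f(x) reaches the target,
    # using finite-difference Newton steps (the 1-D 'targeter' idea).
    x = x0
    for _ in range(max_iter):
        miss = f(x) - goal
        if abs(miss) < tol:
            break
        slope = (f(x + h) - f(x - h)) / (2.0 * h)  # numerical sensitivity
        x -= miss / slope
    return x
```

    In mission design, f would be a trajectory propagation mapping, say, a burn magnitude to a flyby altitude; the same loop, generalized to several controls and goals, is the core of a targeting tool.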

  8. Nonlinear oscillation of a rigid body over high- Tc superconductors supported by electro-magnetic forces

    NASA Astrophysics Data System (ADS)

    Sugiura, T.; Ogawa, S.; Ura, H.

    2005-10-01

    The characteristics of high-Tc superconducting levitation systems are noncontact support and stable levitation without active control. They can be applied to supporting mechanisms in machines such as linear drives and magnetically levitated trains. But the small damping due to noncontact support and the nonlinearity of the magnetic force can easily cause complicated nonlinear dynamics. This research deals with nonlinear oscillation of a rigid bar supported at both ends by electro-magnetic forces between superconductors and permanent magnets, as a simple model of the above applications. Deriving the equation of motion, we discuss the effect of nonlinearity in the magnetic force on the dynamics of the levitated body: the occurrence of combination resonance in the asymmetrical system. Numerical analyses and experiments were also carried out, and their results confirmed the above theoretical prediction.

  9. SAW based systems for mobile communications satellites

    NASA Technical Reports Server (NTRS)

    Peach, R. C.; Miller, N.; Lee, M.

    1993-01-01

    Modern mobile communications satellites, such as INMARSAT 3, EMS, and ARTEMIS, use advanced onboard processing to make efficient use of the available L-band spectrum. In all of these cases, high performance surface acoustic wave (SAW) devices are used. SAW filters can provide high selectivity (100-200 kHz transition widths), combined with flat amplitude and linear phase characteristics; their simple construction and radiation hardness also makes them especially suitable for space applications. An overview of the architectures used in the above systems, describing the technologies employed, and the use of bandwidth switchable SAW filtering (BSSF) is given. The tradeoffs to be considered when specifying a SAW based system are analyzed, using both theoretical and experimental data. Empirical rules for estimating SAW filter performance are given. Achievable performance is illustrated using data from the INMARSAT 3 engineering model (EM) processors.

  10. Topology optimized and 3D printed polymer-bonded permanent magnets for a predefined external field

    NASA Astrophysics Data System (ADS)

    Huber, C.; Abert, C.; Bruckner, F.; Pfaff, C.; Kriwet, J.; Groenefeld, M.; Teliban, I.; Vogler, C.; Suess, D.

    2017-08-01

    Topology optimization offers great opportunities to design permanent magnetic systems that have specific external field characteristics. Additive manufacturing of polymer-bonded magnets with an end-user 3D printer can be used to manufacture permanent magnets with structures that had been difficult or impossible to manufacture previously. This work combines these two powerful methods to design and manufacture permanent magnetic systems with specific properties. The topology optimization framework is simple, fast, and accurate. It can also be used for the reverse engineering of permanent magnets in order to find the topology from field measurements. Furthermore, a magnetic system that generates a linear external field above the magnet is presented. With a volume constraint, the amount of magnetic material can be minimized without losing performance. Simulations and measurements of the printed systems show very good agreement.

  11. Phenomenology of the SU(3)_c⊗ SU(3)_L⊗ U(1)_X model with right-handed neutrinos

    NASA Astrophysics Data System (ADS)

    Gutiérrez, D. A.; Ponce, W. A.; Sánchez, L. A.

    2006-05-01

    A phenomenological analysis of the three-family model based on the local gauge group SU(3)_c⊗ SU(3)_L⊗ U(1)_X with right-handed neutrinos is carried out. Instead of using the minimal scalar sector able to break the symmetry in a proper way, we introduce an alternative set of four Higgs scalar triplets, which combined with an anomaly-free discrete symmetry, produces a quark mass spectrum without hierarchies in the Yukawa coupling constants. We also embed the structure into a simple gauge group and show some conditions for achieving a low energy gauge coupling unification, avoiding possible conflict with proton decay bounds. By using experimental results from the CERN-LEP, SLAC linear collider, and atomic parity violation data, we update constraints on several parameters of the model.

  12. Phenomenology of the SU(3)c⊗SU(3)L⊗U(1)X model with exotic charged leptons

    NASA Astrophysics Data System (ADS)

    Salazar, Juan C.; Ponce, William A.; Gutiérrez, Diego A.

    2007-04-01

    A phenomenological analysis of the three-family model based on the local gauge group SU(3)c⊗SU(3)L⊗U(1)X with exotic charged leptons is carried out. Instead of using the minimal scalar sector able to break the symmetry in a proper way, we introduce an alternative set of four Higgs scalar triplets, which, combined with an anomaly-free discrete symmetry, produces a quark and charged lepton mass spectrum without hierarchies in the Yukawa coupling constants. We also embed the structure into a simple gauge group and show some conditions to achieve a low energy gauge coupling unification, avoiding possible conflict with proton decay bounds. By using experimental results from the CERN-LEP, SLAC linear collider, and atomic parity violation data, we update constraints on several parameters of the model.

  13. Strain-induced phase transition and electron spin-polarization in graphene spirals

    PubMed Central

    Zhang, Xiaoming; Zhao, Mingwen

    2014-01-01

    Spin-polarized triangular graphene nanoflakes (t-GNFs) serve as ideal building blocks for the long-desired ferromagnetic graphene superlattices, but they are always assembled into planar structures, which reduces their mechanical properties. Here, by joining t-GNFs in a spiral way, we propose one-dimensional graphene spirals (GSs) with superior mechanical properties and tunable electronic structures. We demonstrate theoretically the unique features of electron motion in the spiral lattice by means of first-principles calculations combined with a simple Hubbard model. Within the linear elastic deformation range, the GSs are nonmagnetic metals. When the axial tensile strain exceeds an ultimate strain, however, they convert to magnetic semiconductors with stable ferromagnetic ordering along the edges. Such strain-induced phase transition and tunable electron spin-polarization revealed in the GSs open a new avenue for spintronics devices. PMID:25027550

  14. Strain-induced phase transition and electron spin-polarization in graphene spirals.

    PubMed

    Zhang, Xiaoming; Zhao, Mingwen

    2014-07-16

    Spin-polarized triangular graphene nanoflakes (t-GNFs) serve as ideal building blocks for the long-desired ferromagnetic graphene superlattices, but they are always assembled into planar structures, which reduces their mechanical properties. Here, by joining t-GNFs in a spiral way, we propose one-dimensional graphene spirals (GSs) with superior mechanical properties and tunable electronic structures. We demonstrate theoretically the unique features of electron motion in the spiral lattice by means of first-principles calculations combined with a simple Hubbard model. Within the linear elastic deformation range, the GSs are nonmagnetic metals. When the axial tensile strain exceeds an ultimate strain, however, they convert to magnetic semiconductors with stable ferromagnetic ordering along the edges. Such strain-induced phase transition and tunable electron spin-polarization revealed in the GSs open a new avenue for spintronics devices.

  15. Contact area of rough spheres: Large scale simulations and simple scaling laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastewka, Lars, E-mail: lars.pastewka@kit.edu; Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218; Robbins, Mark O., E-mail: mr@pha.jhu.edu

    2016-05-30

    We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.
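
    The smooth-sphere (Hertzian) limit mentioned above has a textbook closed form: contact radius a = (3FR/4E*)^(1/3), so contact area scales as F^(2/3) rather than linearly with load. The sketch below evaluates that standard formula; it is the classical limit the paper compares against, not the paper's combined rough-sphere expression.

```python
import math

def hertz_contact_area(force, radius, e_star):
    # Hertz theory for a smooth sphere on a flat: contact radius
    # a = (3 F R / 4 E*)^(1/3), contact area A = pi a^2 ~ F^(2/3).
    a = (3.0 * force * radius / (4.0 * e_star)) ** (1.0 / 3.0)
    return math.pi * a * a
```

    The sublinear F^(2/3) scaling is what distinguishes the Hertzian regime at high loads from the linear area-load regime observed at intermediate loads for rough contacts.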

  16. Analytical and experimental investigations of human spine flexure.

    NASA Technical Reports Server (NTRS)

    Moffatt, C. A.; Advani, S. H.; Lin, C.-J.

    1971-01-01

    The authors report on experiments to measure the resistance of fresh human spines to flexion in the upper lumbar and lower thoracic regions, evaluating the results with a combination of strength-of-materials theory and shear effects and comparing them with data reported by other authors. The test results indicate that the thoraco-lumbar spine behaves approximately as a linear elastic beam, without relaxation effects. The authors formulate a simple continuum dynamic model of the spine simulating aircraft ejection and solve the resulting boundary value problem to illustrate the importance of the flexural mode. The selected model, of constant cross-section, is a sinusoidally curved elastic beam with an end mass, subjected to a Heaviside axial acceleration at the other end. The paper presents transient response results for the spinal model's axial and bending displacements and axial force.

  17. n-Iterative Exponential Forgetting Factor for EEG Signals Parameter Estimation

    PubMed Central

    Palma Orozco, Rosaura

    2018-01-01

    Electroencephalogram (EEG) signals are of interest because of their relationship with physiological activities, allowing a description of motion, speaking, or thinking. Important research has been devoted to exploiting EEG with classification or prediction algorithms based on parameters that help to describe the signal behavior. Thus, great importance should be attached to feature extraction, which is complicated for the Parameter Estimation (PE)-System Identification (SI) process; when based on an average approximation, nonstationary characteristics appear. For PE, a comparison of three iterative-recursive uses of the Exponential Forgetting Factor (EFF), combined with a linear function, to identify a synthetic stochastic signal is presented. The form with the best results, as judged by the functional error, is applied to approximate an EEG signal in a simple classification example, showing the effectiveness of our proposal. PMID:29568310

  18. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
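
    The crisp quadratic-programming core underlying such models has a well-known closed form in the simplest case: minimizing portfolio variance w'Σw subject only to the budget constraint sum(w) = 1 gives w = Σ⁻¹1 / (1'Σ⁻¹1). The sketch below shows that crisp core only; the paper's contribution (fuzzy parameters, multi-objective linearization) is not reproduced here.

```python
import numpy as np

def min_variance_weights(cov):
    # Closed-form minimizer of w' @ cov @ w subject to sum(w) == 1
    # (crisp minimum-variance portfolio; no short-sale or return constraints).
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # cov^{-1} @ 1 without explicit inversion
    return w / w.sum()
```

    With two uncorrelated assets of variance 1 and 2, the weights come out 2/3 and 1/3: less variance attracts proportionally more weight.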

  19. On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs

    NASA Technical Reports Server (NTRS)

    Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.

  20. A Maple package for computing Gröbner bases for linear recurrence relations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Robertz, Daniel

    2006-04-01

    A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.

  1. Reliability Analysis of the Gradual Degradation of Semiconductor Devices.

    DTIC Science & Technology

    1983-07-20

    under the heading of linear models or linear statistical models.3,4 We have not used this material in this report. Assuming catastrophic failure when...assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF...failure times (unit 1, T1; unit 2, T2; ...; unit n, Tn) are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation

  2. A Simple Equation to Predict a Subscore's Value

    ERIC Educational Resources Information Center

    Feinberg, Richard A.; Wainer, Howard

    2014-01-01

    Subscores are often used to indicate test-takers' relative strengths and weaknesses and so help focus remediation. But a subscore is not worth reporting if it is too unreliable to believe or if it contains no information that is not already contained in the total score. It is possible, through the use of a simple linear equation provided in…

  3. Short relaxation times but long transient times in both simple and complex reaction networks

    PubMed Central

    Henry, Adrien; Martin, Olivier C.

    2016-01-01

    When relaxation towards an equilibrium or steady state is exponential at large times, one usually considers that the associated relaxation time τ, i.e. the inverse of the decay rate, is the longest characteristic time in the system. However, that need not be true, other times such as the lifetime of an infinitesimal perturbation can be much longer. In the present work, we demonstrate that this paradoxical property can arise even in quite simple systems such as a linear chain of reactions obeying mass action (MA) kinetics. By mathematical analysis of simple reaction networks, we pin-point the reason why the standard relaxation time does not provide relevant information on the potentially long transient times of typical infinitesimal perturbations. Overall, we consider four characteristic times and study their behaviour in both simple linear chains and in more complex reaction networks taken from the publicly available database ‘Biomodels’. In all these systems, whether involving MA rates, Michaelis–Menten reversible kinetics, or phenomenological laws for reaction rates, we find that the characteristic times corresponding to lifetimes of tracers and of concentration perturbations can be significantly longer than τ. PMID:27411726

  4. Activities of Antibiotic Combinations against Resistant Strains of Pseudomonas aeruginosa in a Model of Infected THP-1 Monocytes

    PubMed Central

    Buyck, Julien M.

    2014-01-01

    Antibiotic combinations are often used for treating Pseudomonas aeruginosa infections, but their efficacy toward intracellular bacteria has not been investigated so far. We have studied combinations of representatives of the main antipseudomonal classes (ciprofloxacin, meropenem, tobramycin, and colistin) against intracellular P. aeruginosa in a model of THP-1 monocytes in comparison with bacteria growing in broth, using the reference strain PAO1 and two clinical isolates (resistant to ciprofloxacin and meropenem, respectively). Interaction between drugs was assessed by checkerboard titration (extracellular model only), by kill curves, and by using the fractional maximal effect (FME) method, which allows studying the effects of combinations when dose-effect relationships are not linear. For drugs used alone, simple sigmoidal functions could be fitted to all concentration-effect relationships (extracellular and intracellular bacteria), with static concentrations close to (ciprofloxacin, colistin, and meropenem) or slightly higher than (tobramycin) the MIC and with maximal efficacy reaching the limit of detection in broth but only a 1 to 1.5 (colistin, meropenem, and tobramycin) to 2 to 3 (ciprofloxacin) log10 CFU decrease intracellularly. Extracellularly, all combinations proved additive by checkerboard titration but synergistic using the FME method and more bactericidal in kill curve assays. Intracellularly, all combinations proved only additive based on both FME and kill curve assays. Thus, although combinations appeared to modestly improve antibiotic activity against intracellular P. aeruginosa, they do not allow eradication of these persistent forms of infections. Combinations including ciprofloxacin were the most active (even against the ciprofloxacin-resistant strain), which is probably related to the fact that this drug was the most effective alone intracellularly. PMID:25348528

  5. [Clinical study of cervical spondylotic radiculopathy treated with massage therapy combined with Magnetic sticking therapy at the auricular points and the cost comparison].

    PubMed

    Wang, Saina; Sheng, Feng; Pan, Yunhua; Xu, Feng; Wang, Zhichao; Cheng, Lei

    2015-08-01

    To compare the clinical efficacy on cervical spondylotic radiculopathy between the combined therapy of massage and magnetic sticking at the auricular points and simple massage therapy, and to conduct a health economics evaluation. Seventy-two patients with cervical spondylotic radiculopathy were randomized into a combined therapy group and a simple massage group, 36 cases in each one. Finally, 35 cases and 34 cases met the inclusion criteria in the corresponding groups. In the combined therapy group, massage therapy and magnetic sticking therapy at the auricular points were combined in the treatment. Massage therapy was mainly applied to Fengchi (GB 20), Jianjing (GB 21), Jianwaishu (SI 14), Jianyu (LI 15) and Quchi (LI 11). The main auricular points for magnetic sticking pressure were Jingzhui (AH13), Gan (On12), Shen (CO10), Shenmen (TF4) and Pizhixia (AT4). In the simple massage group, simple massage therapy alone was given; the massage parts and methods were the same as those in the combined therapy group. The treatment was given once every two days, three times a week, for 4 weeks in total. The cervical spondylosis effect scale and the simplified McGill pain questionnaire were adopted to observe the improvements in the clinical symptoms, clinical examination, daily life movement, superficial muscular pain in the neck and the health economics cost in the patients of the two groups. The effect was evaluated in the two groups. The effective rate and the clinical curative rate in the combined therapy group were better than those in the control group [100.0% (35/35) vs 85.3% (29/34), 42.9% (15/35) vs 17.6% (6/34), both P<0.05]. The scores of the spontaneous symptoms, clinical examination, daily life movement and superficial muscular pain in the neck were improved apparently after treatment as compared with those before treatment in the patients of the two groups (all P<0.001). The improvements in the spontaneous symptoms, total clinical examination scores and superficial muscular pain in the neck were more significant in the combined therapy group as compared with the simple massage group (P<0.05, P<0.01, P<0.001). The cost per unit effect in the combined therapy group was lower than that in the simple massage group (P<0.05). Compared with simple massage therapy, massage therapy combined with magnetic sticking therapy at the auricular points achieves a better effect and a lower cost in health economics.

  6. A simple formula for predicting claw volume of cattle.

    PubMed

    Scott, T D; Naylor, J M; Greenough, P R

    1999-11-01

    The object of this study was to develop a simple method for accurately calculating the volume of bovine claws under field conditions. The digits of 30 slaughterhouse beef cattle were examined and the following four linear measurements taken from each pair of claws: (1) the length of the dorsal surface of the claw (Toe); (2) the length of the coronary band (CorBand); (3) the length of the bearing surface (Base); and (4) the height of the claw at the abaxial groove (AbaxGr). Measurements of claw volume using a simple hydrometer were highly repeatable (r² = 0.999) and could be calculated from linear measurements using the formula: Claw Volume (cm³) = (17.192 x Base) + (7.467 x AbaxGr) + (45.270 x CorBand) - 798.5. This formula was found to be accurate (r² = 0.88) when compared to volume data derived from a hydrometer displacement procedure. The front claws occupied 54% of the total volume compared to 46% for the hind claws. Copyright 1999 Harcourt Publishers Ltd.
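
    The published regression can be transliterated into a one-line helper (an illustrative transcription; the function and argument names are ours, and the measurements are assumed to be in centimetres):

    ```python
    def claw_volume(base, abax_gr, cor_band):
        """Bovine claw volume (cm^3) from three linear measurements (cm):
        bearing-surface length, abaxial-groove height, coronary-band length."""
        return 17.192 * base + 7.467 * abax_gr + 45.270 * cor_band - 798.5
    ```

    Note that the Toe measurement does not appear in the final formula; only the other three measurements carry regression coefficients.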

  7. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing the acceleration and velocity of a structure from measured strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computations of velocity. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
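
    The abstract's two numerical ingredients can be sketched in isolation: a central-difference velocity estimate (without the paper's linear autoregressive smoothing, which is not specified here) and the simple-harmonic-motion acceleration assumption a = -ω²x. Function names and endpoint handling are our own choices:

    ```python
    import numpy as np

    def central_difference_velocity(x, dt):
        """Velocity from a sampled deflection history via central differences
        (second-order accurate, no phase shift); one-sided at the endpoints."""
        x = np.asarray(x, dtype=float)
        v = np.empty_like(x)
        v[1:-1] = (x[2:] - x[:-2]) / (2.0 * dt)
        v[0] = (x[1] - x[0]) / dt
        v[-1] = (x[-1] - x[-2]) / dt
        return v

    def shm_acceleration(x, omega):
        """Acceleration under the simple-harmonic-motion assumption: a = -ω²x."""
        return -(omega ** 2) * np.asarray(x, dtype=float)
    ```

    Unlike a backward difference, the central difference is symmetric about each sample, which is why it avoids the phase shift mentioned in the abstract.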

  8. Evolution of finite-amplitude localized vortices in planar homogeneous shear flows

    NASA Astrophysics Data System (ADS)

    Karp, Michael; Shukhman, Ilia G.; Cohen, Jacob

    2017-02-01

    An analytical-based method is utilized to follow the evolution of localized, initially Gaussian disturbances in flows with homogeneous shear, in which the base velocity components are at most linear functions of the coordinates, including hyperbolic, elliptic, and simple shear. Coherent structures, including counter-rotating vortex pairs (CVPs) and hairpin vortices, are formed for the cases where the streamlines of the base flow are open (hyperbolic and simple shear). For hyperbolic base flows, the dominance of shear over rotation leads to elongation of the localized disturbance along the outlet asymptote and formation of CVPs. For simple shear, CVPs are formed from linear and nonlinear disturbances, whereas hairpins are observed only for highly nonlinear disturbances. For elliptic base flows, CVPs, hairpins, and vortex loops form initially; however, they do not last and break into various vortical structures that spread in the spanwise direction. The effect of the disturbance's initial amplitude and orientation is examined and the optimal orientation achieving maximal growth is identified.

  9. Nonlinear analysis of saccade speed fluctuations during combined action and perception tasks

    PubMed Central

    Stan, C.; Astefanoaei, C.; Pretegiani, E.; Optican, L.; Creanga, D.; Rufa, A.; Cristescu, C.P.

    2014-01-01

    Background: Saccades are rapid eye movements used to gather information about a scene which requires both action and perception. These are usually studied separately, so that how perception influences action is not well understood. In a dual task, where the subject looks at a target and reports a decision, subtle changes in the saccades might be caused by action-perception interactions. Studying saccades might provide insight into how brain pathways for action and for perception interact. New method: We applied two complementary methods, multifractal detrended fluctuation analysis and Lempel-Ziv complexity index to eye peak speed recorded in two experiments, a pure action task and a combined action-perception task. Results: Multifractality strength is significantly different in the two experiments, showing smaller values for dual decision task saccades compared to simple-task saccades. The normalized Lempel-Ziv complexity index behaves similarly i.e. is significantly smaller in the decision saccade task than in the simple task. Comparison with existing methods: Compared to the usual statistical and linear approaches, these analyses emphasize the character of the dynamics involved in the fluctuations and offer a sensitive tool for quantitative evaluation of the multifractal features and of the complexity measure in the saccades peak speeds when different brain circuits are involved. Conclusion: Our results prove that the peak speed fluctuations have multifractal characteristics with lower magnitude for the multifractality strength and for the complexity index when two neural pathways are simultaneously activated, demonstrating the nonlinear interaction in the brain pathways for action and perception. PMID:24854830

  10. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis

    NASA Astrophysics Data System (ADS)

    Gallego, Sandra F.; Højlund, Kurt; Ejsing, Christer S.

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MSALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  11. Ultrasonication followed by single-drop microextraction combined with GC/MS for rapid determination of organochlorine pesticides from fish.

    PubMed

    Shrivas, Kamlesh; Wu, Hui-Fen

    2008-02-01

    A novel, rapid and simple sample pretreatment technique termed ultrasonication followed by single-drop microextraction (U-SDME) has been developed and combined with GC/MS for the determination of organochlorine pesticides (OCPs) in fish. In the present work, the lengthy procedures generally used in conventional methods, such as Soxhlet extraction, supercritical fluid extraction, pressurized liquid extraction and microwave-assisted solvent extraction, for the extraction of OCPs from fish tissues are minimized by the use of two simple extraction procedures. First, OCPs from fish were extracted into organic solvent with ultrasonication and then preconcentrated by single-drop microextraction (SDME). Extraction parameters of ultrasonication and SDME were optimized in a spiked sample solution in order to obtain efficient extraction of OCPs from fish tissues. The calibration curves for OCPs were linear between 10-1000 ng/g, with correlation coefficients in the range 0.990-0.994. The recoveries obtained in blank fish tissues ranged from 82.1 to 95.3%. The LOD and RSD for the determination of OCPs in fish were 0.5 ng/g and 9.4-10.0%, respectively. The proposed method was applied to the determination of the bioconcentration factor in fish after exposure to different concentrations of OCPs in cultured water. The present method avoids the co-extraction of lipids, long extraction steps (>12 h) and large amounts of organic solvent for the separation of OCPs. The main advantages of the present method are its rapidity, selectivity, sensitivity and low cost for the determination of OCPs in fish.

  12. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
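
    Of the neighboring techniques evaluated, inverse distance weighting is the simplest; a minimal 2-D sketch follows (the function name, the power parameter default, and the epsilon guard against zero distances are our choices, not from the study):

    ```python
    import numpy as np

    def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
        """Inverse distance weighting: each query point receives a weighted
        mean of the observations, with weights 1 / distance**power."""
        # pairwise distances, shape (n_query, n_obs)
        d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)
        return (w @ z_obs) / w.sum(axis=1)
    ```

    Because the weights blow up as a query point approaches a station, IDW honors the observations almost exactly, which is consistent with the study's finding that it performs best at high station density and for local events.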

  13. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis.

    PubMed

    Gallego, Sandra F; Højlund, Kurt; Ejsing, Christer S

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MSALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  14. Evaluation of Teaching Signals for Motor Control in the Cerebellum during Real-World Robot Application.

    PubMed

    Pinzon Morales, Ruben Dario; Hirata, Yutaka

    2016-12-20

    Motor learning in the cerebellum is believed to entail plastic changes at synapses between parallel fibers and Purkinje cells, induced by the teaching signal conveyed in the climbing fiber (CF) input. Despite the abundant research on the cerebellum, the nature of this signal is still a matter of debate. Two types of movement error information have been proposed to be plausible teaching signals: sensory error (SE) and motor command error (ME); however, their plausibility has not been tested in the real world. Here, we conducted a comparison of different types of CF teaching signals in real-world engineering applications by using a realistic neuronal network model of the cerebellum. We employed a direct current motor (simple task) and a two-wheeled balancing robot (difficult task). We demonstrate that SE, ME or a linear combination of the two is sufficient to yield comparable performance in a simple task. When the task is more difficult, although SE slightly outperformed ME, these types of error information are all able to adequately control the robot. We categorize granular cells according to their inputs and the error signal revealing that different granule cells are preferably engaged for SE, ME or their combination. Thus, unlike previous theoretical and simulation studies that support either SE or ME, it is demonstrated for the first time in a real-world engineering application that both SE and ME are adequate as the CF teaching signal in a realistic computational cerebellar model, even when the control task is as difficult as stabilizing a two-wheeled balancing robot.

  15. Evaluation of Teaching Signals for Motor Control in the Cerebellum during Real-World Robot Application

    PubMed Central

    Pinzon Morales, Ruben Dario; Hirata, Yutaka

    2016-01-01

    Motor learning in the cerebellum is believed to entail plastic changes at synapses between parallel fibers and Purkinje cells, induced by the teaching signal conveyed in the climbing fiber (CF) input. Despite the abundant research on the cerebellum, the nature of this signal is still a matter of debate. Two types of movement error information have been proposed to be plausible teaching signals: sensory error (SE) and motor command error (ME); however, their plausibility has not been tested in the real world. Here, we conducted a comparison of different types of CF teaching signals in real-world engineering applications by using a realistic neuronal network model of the cerebellum. We employed a direct current motor (simple task) and a two-wheeled balancing robot (difficult task). We demonstrate that SE, ME or a linear combination of the two is sufficient to yield comparable performance in a simple task. When the task is more difficult, although SE slightly outperformed ME, these types of error information are all able to adequately control the robot. We categorize granular cells according to their inputs and the error signal revealing that different granule cells are preferably engaged for SE, ME or their combination. Thus, unlike previous theoretical and simulation studies that support either SE or ME, it is demonstrated for the first time in a real-world engineering application that both SE and ME are adequate as the CF teaching signal in a realistic computational cerebellar model, even when the control task is as difficult as stabilizing a two-wheeled balancing robot. PMID:27999381

  16. Development of a Nonlinear Probability of Collision Tool for the Earth Observing System

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2006-01-01

    The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. 
The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure 1 shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observation System constellation will be presented.
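
    The paper's nonlinear algorithm is not reproduced here, but the linear-case quantity it generalizes, the probability mass of a Gaussian position error inside the hard-body disk in the 2-D encounter plane, can be estimated by Monte Carlo and checked against the exact solution for a spherically symmetric covariance with zero miss distance, P = 1 - exp(-R²/2σ²). Function name and sampling defaults are our own:

    ```python
    import numpy as np

    def collision_probability_mc(miss, cov, hard_body_radius, n=200_000, seed=0):
        """Monte Carlo estimate of the 2-D encounter-plane collision probability:
        the fraction of Gaussian position samples inside the hard-body disk."""
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(miss, cov, size=n)
        return np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)
    ```

    Sampling sidesteps the limits-of-integration problem entirely, which makes it a convenient cross-check for both the cylindrical linear geometry and the stacked-cylinder nonlinear geometry described above.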

  17. The effects of buoyancy on shear-induced melt bands in a compacting porous medium

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2009-03-01

    It has recently been shown [Holtzman, B., Groebner, N., Zimmerman, M., Ginsberg, S., Kohlstedt, D., 2003. Stress-driven melt segregation in partially molten rocks. Geochem. Geophys. Geosyst. 4, Art. No. 8607; Holtzman, B.K., Kohlstedt, D.L., 2007. Stress-driven melt segregation and strain partitioning in partially molten rocks: effects of stress and strain. J. Petrol. 48, 2379-2406] that when partially molten rock is subjected to simple shear, bands of high and low porosity are formed at a particular angle to the direction of instantaneous maximum extension. These have been modeled numerically and it has been speculated that high porosity bands may form an interconnected network with a bulk, effective permeability that is enhanced in a direction parallel to the bands. As a result, the bands may act to focus mantle melt towards the axis of mid-ocean ridges [Katz, R.F., Spiegelman, M., Holtzman, B., 2006. The dynamics of melt and shear localization in partially molten aggregates. Nature 442, 676-679]. In this contribution, we examine the combined effects of buoyancy and matrix shear on a deforming porous layer. The linear theory of Spiegelman [Spiegelman, M., 1993. Flow in deformable porous media. Part 1. Simple analysis. J. Fluid Mech. 247, 17-38; Spiegelman, M., 2003. Linear analysis of melt band formation by simple shear. Geochem. Geophys. Geosyst. 4, doi:10.1029/2002GC000499, Article 8615] and Katz et al. [Katz, R.F., Spiegelman, M., Holtzman, B., 2006. The dynamics of melt and shear localization in partially molten aggregates. Nature 442, 676-679] is generalized to include both the effects of buoyancy and matrix shear on a deformable porous layer with strain-rate dependent rheology. The predictions of linear theory are compared with the early time evolution of our 2D numerical model and they are found to be in excellent agreement. For conditions similar to the upper mantle, buoyancy forces can be similar to or much greater than matrix shear-induced forces. 
The results of the numerical model indicate that bands form when buoyancy forces are large and that these can significantly alter the direction of the flow of liquid away from vertical. The bands form at angles similar to the angle of maximum instantaneous growth rate. Consequently, for strongly strain-rate dependent rheology, there may be two sets of bands formed that are symmetric about the direction of maximum compressive stress in the background mantle flow. This second set of bands would reduce the efficiency with which melt bands would focus melts towards the ridge axis.

  18. Multiscale morphological filtering for analysis of noisy and complex images

    NASA Astrophysics Data System (ADS)

    Kher, A.; Mitra, S.

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost-effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. 
The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.

  19. Multiscale Morphological Filtering for Analysis of Noisy and Complex Images

    NASA Technical Reports Server (NTRS)

    Kher, A.; Mitra, S.

    1993-01-01

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost-effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. 
The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
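The row-wise (one-dimensional) structuring-element filtering described above can be illustrated with a minimal grayscale opening, i.e. an erosion followed by a dilation along each row. This is only a hedged numpy sketch: the element length, edge padding, and function names are assumptions for illustration, not the filters developed in the study.

```python
import numpy as np

def erode_1d(row, size=3):
    """Grayscale erosion of a 1D signal with a flat line element (edge-padded)."""
    pad = size // 2
    padded = np.pad(row, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, size)
    return windows.min(axis=1)

def dilate_1d(row, size=3):
    """Grayscale dilation of a 1D signal with a flat line element (edge-padded)."""
    pad = size // 2
    padded = np.pad(row, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, size)
    return windows.max(axis=1)

def open_rows(img, size=3):
    """Opening (erosion, then dilation) with a horizontal 1D element, row by row."""
    eroded = np.apply_along_axis(erode_1d, 1, np.asarray(img, dtype=float), size)
    return np.apply_along_axis(dilate_1d, 1, eroded, size)
```

An opening with a length-3 horizontal element removes single-pixel speckle while leaving horizontal structures at least as wide as the element intact; taking the maximum of such openings over several orientations (rows, columns, diagonals) approximates the four-orientation scheme described in the abstract.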

  20. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; O'Brien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations will be necessary for a study of many load combinations. The results were compared with mixed mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving for the three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the expression derived. This superposition technique was only applicable if the structure exhibits a linear load/deflection behavior. Consequently, a second technique was derived which was applicable in the case of nonlinear load/deformation behavior. The technique involved calculating six unknown parameters from a set of six simultaneous linear equations with data from six nonlinear analyses to determine the energy release rates. This procedure was not time efficient, and hence, less appealing. A third procedure was developed to calculate mixed mode energy release rates as a function of delamination lengths. This procedure required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was extended in three separate linear models of the local area in the vicinity of the delamination subjected to unit loads to obtain the distribution of G with delamination lengths. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.

  1. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
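The distinction the record emphasizes can be made concrete: a linear combination of independent normal random variables is itself normal, while a finite mixture of normal densities generally is not, so their quantiles differ. A small illustrative sketch (the bisection routine and all parameter values are assumptions for illustration):

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_quantile(p, weights, mus, sigmas, lo=-50.0, hi=50.0, tol=1e-10):
    """Quantile of a finite normal mixture, found by bisection on its CDF."""
    def cdf(x):
        return sum(w * norm_cdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the equal-weight mixture of N(0,1) and N(4,1), the median is exactly 2 by symmetry, but its lower quantiles sit far below those of the normal random variable 0.5*X1 + 0.5*X2 ~ N(2, sqrt(0.5)), which is the corresponding linear combination of the random variables.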

  2. Horizontal Running Mattress Suture Modified with Intermittent Simple Loops

    PubMed Central

    Chacon, Anna H; Shiman, Michael I; Strozier, Narissa; Zaiac, Martin N

    2013-01-01

    Using the combination of a horizontal running mattress suture with intermittent loops achieves both good eversion with the horizontal running mattress plus the ease of removal of the simple loops. This combination technique also avoids the characteristic railroad track marks that result from prolonged non-absorbable suture retention. The unique feature of our technique is the incorporation of one simple running suture after every two runs of the horizontal running mattress suture. To demonstrate its utility, we used the suturing technique on several patients and analyzed the cosmetic outcome with post-operative photographs in comparison to other suturing techniques. In summary, the combination of running horizontal mattress suture with simple intermittent loops demonstrates functional and cosmetic benefits that can be readily taught, comprehended, and employed, leading to desirable aesthetic results and wound edge eversion. PMID:23723610

  3. Using complexity metrics with R-R intervals and BPM heart rate measures.

    PubMed

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics-fractal (DFA) and recurrence (RQA) analyses-reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: While R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics.

  4. Using complexity metrics with R-R intervals and BPM heart rate measures

    PubMed Central

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics—fractal (DFA) and recurrence (RQA) analyses—reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: While R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, “oversampled” BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics. PMID:23964244
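One plausible construction of the "oversampled" BPM series discussed above is to place each beat's instantaneous rate at its beat time and linearly interpolate onto a uniform grid; the sampling rate, interpolation scheme, and function name below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def oversampled_bpm(rr_intervals_s, fs=4.0):
    """Convert R-R intervals (seconds) to an evenly sampled BPM series.

    Each beat's instantaneous rate 60/RR is placed at its beat time and
    linearly interpolated onto a uniform grid with sampling rate fs (Hz).
    """
    rr = np.asarray(rr_intervals_s, dtype=float)
    beat_times = np.cumsum(rr)          # time of each beat
    inst_bpm = 60.0 / rr                # instantaneous rate at each beat
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    return t, np.interp(t, beat_times, inst_bpm)
```

The resulting evenly sampled series is what methods such as DFA or RQA would then be applied to.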

  5. An Algebraic Method for Exploring Quantum Monodromy and Quantum Phase Transitions in Non-Rigid Molecules

    NASA Astrophysics Data System (ADS)

    Larese, D.; Iachello, F.

    2011-06-01

    A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other "floppy" (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analysing the spectroscopic signatures of ground state QPT, excited state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with triatomic and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH_3NCO and GeH_3NCO. Extraction of potential functions is completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.

  6. Relation of the external mechanical stress to the properties of piezoelectric materials for energy harvesting

    NASA Astrophysics Data System (ADS)

    Jeong, Soon-Jong; Kim, Min-Soo; Lee, Dae-Su; Song, Jae-Sung; Cho, Kyung-Ho

    2013-12-01

    We investigated the piezoelectric properties and the generation of voltage and power under mechanical compressive loads for three types of piezoelectric ceramics: 0.2Pb(Mg1/3Nb2/3)O3-0.8Pb(Zr0.475Ti0.525)O3 (soft-PZT), 0.1Pb(Mg1/3Sb2/3)O3-0.9Pb(Zr0.475Ti0.525)O3 (hard-PZT), and [0.675Pb(Mg1/3Nb2/3)O3-0.35PbTiO3]+5 wt% BaTiO3 (textured-PMNT). The piezoelectric d33 coefficients of all specimens increased with increasing compressive load. The generated voltage and power showed a linear and a quadratic relation, respectively, to the applied stress. These results were larger than those calculated using the simple piezoelectric equation due to the non-linear characteristics of the ceramics, so they were evaluated with a simple model based on a non-linear relation.
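The reported scaling, voltage linear in stress and power quadratic in stress, can be sketched with the simplest possible relations. The coefficient k_v and the resistive-load model below are hypothetical stand-ins, and the abstract itself notes that real ceramics deviate from this linear behavior.

```python
def generated_voltage(stress, k_v):
    """Open-circuit voltage, assumed linear in the applied stress: V = k_v * sigma."""
    return k_v * stress

def generated_power(stress, k_v, r_load):
    """Power dissipated in a resistive load: P = V^2 / R, hence quadratic in stress."""
    v = generated_voltage(stress, k_v)
    return v * v / r_load
```

Doubling the stress doubles the voltage but quadruples the power, which is the linear/quadratic pairing the measurements showed before the non-linear corrections.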

  7. Flexible Modes Control Using Sliding Mode Observers: Application to Ares I

    NASA Technical Reports Server (NTRS)

    Shtessel, Yuri B.; Hall, Charles E.; Baev, Simon; Orr, Jeb S.

    2010-01-01

    The launch vehicle dynamics affected by bending and sloshing modes are considered. Attitude measurement data that are corrupted by flexible modes could yield instability of the vehicle dynamics. Flexible body and sloshing modes are reconstructed by sliding mode observers. The resultant estimates are used to remove the undesirable dynamics from the measurements, and the direct effects of sloshing and bending modes on the launch vehicle are compensated by means of a controller that is designed without taking the bending and sloshing modes into account. A linearized mathematical model of Ares I launch vehicle was derived based on FRACTAL, a linear model developed by NASA/MSFC. The compensated vehicle dynamics with a simple PID controller were studied for the launch vehicle model that included two bending modes, two slosh modes and actuator dynamics. A simulation study demonstrated stable and accurate performance of the flight control system with the augmented simple PID controller without the use of traditional linear bending filters.

  8. Linear and nonlinear mechanical properties of a series of epoxy resins

    NASA Technical Reports Server (NTRS)

    Curliss, D. B.; Caruthers, J. M.

    1987-01-01

    The linear viscoelastic properties have been measured for a series of bisphenol-A-based epoxy resins cured with the diamine DDS. The linear viscoelastic master curves were constructed via time-temperature superposition of frequency dependent G-prime and G-double-prime isotherms. The G-double-prime master curves exhibited two sub-Tg transitions. Superposition of isotherms in the glass-to-rubber transition (i.e., alpha) and the beta transition at -60 C was achieved by simple horizontal shifts in the log frequency axis; however, in the region between alpha and beta, superposition could not be effected by simple horizontal shifts along the log frequency axis. The different temperature dependency of the alpha and beta relaxation mechanisms causes a complex response of G-double-prime in the so called alpha-prime region. A novel numerical procedure has been developed to extract the complete relaxation spectra and its temperature dependence from the G-prime and G-double-prime isothermal data in the alpha-prime region.
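Time-temperature superposition by horizontal shifts along the log-frequency axis, as used above to build the master curves, can be sketched as a brute-force search for the shift that minimizes the mismatch between an isotherm and a reference curve over their overlap. The function name, shift grid, and synthetic curves are assumptions for illustration.

```python
import numpy as np

def find_horizontal_shift(logf_ref, logG_ref, logf_iso, logG_iso,
                          shifts=np.linspace(-5.0, 5.0, 2001)):
    """Log-frequency shift that best superposes an isotherm onto a reference
    curve, by least-squares mismatch over the overlapping region."""
    best_shift, best_err = 0.0, np.inf
    for s in shifts:
        shifted = logf_iso + s
        lo = max(logf_ref.min(), shifted.min())
        hi = min(logf_ref.max(), shifted.max())
        if hi <= lo:
            continue                      # no overlap at this shift
        grid = np.linspace(lo, hi, 50)
        err = np.mean((np.interp(grid, logf_ref, logG_ref)
                       - np.interp(grid, shifted, logG_iso)) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

The abstract's point is that a single horizontal shift suffices in the alpha and beta regions but fails in between, where the two relaxation mechanisms have different temperature dependences; there, no value of the shift drives the mismatch to zero.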

  9. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
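A hedged numerical illustration of the special case stated above: fit a main-terms Poisson working model by iteratively reweighted least squares (IRLS) to randomized-trial data generated from a model containing a treatment-covariate interaction that the working model omits; the treatment coefficient still tracks the marginal log rate ratio. All data-generating values below are hypothetical.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Poisson regression MLE (log link) via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                                # IRLS weights for the log link
        z = X @ beta + (y - mu) / mu          # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
n = 50_000
t = rng.integers(0, 2, n)                     # randomized binary treatment
z = rng.normal(size=n)                        # baseline covariate
# True rates include a treatment-covariate interaction the working model omits.
y = rng.poisson(np.exp(0.5 + 0.3 * t + 0.4 * z + 0.3 * t * z))
X = np.column_stack([np.ones(n), t, z])       # misspecified main-terms model
beta = poisson_irls(X, y)
# With Z ~ N(0,1), the true marginal log rate ratio is
# 0.3 + 0.7**2 / 2 - 0.4**2 / 2 = 0.465; beta[1] should land close to it.
```

Despite the omitted interaction, the fitted treatment coefficient approximates the marginal log rate ratio, which is the log-linear ANCOVA analog the abstract describes.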

  10. Exploring Renner-Teller induced quenching in the reaction H(²S)+NH(a¹Δ): a combined experimental and theoretical study.

    PubMed

    Adam, L; Hack, W; McBane, G C; Zhu, H; Qu, Z-W; Schinke, R

    2007-01-21

    Experimental rate coefficients for the removal of NH(a¹Δ) and ND(a¹Δ) in collisions with H and D atoms are presented; all four isotope combinations are considered: NH+H, NH+D, ND+H, and ND+D. The experiments were performed in a quasistatic laser-flash photolysis/laser-induced fluorescence system at low pressures. NH(a¹Δ) and ND(a¹Δ) were generated by photolysis of HN₃ and DN₃, respectively. The total removal rate coefficients at room temperature are in the range of (3-5)×10¹³ cm³ mol⁻¹ s⁻¹. For two isotope combinations, NH+H and NH+D, quenching rate coefficients for the production of NH(X³Σ⁻) or ND(X³Σ⁻) were also determined; they are in the range of 1×10¹³ cm³ mol⁻¹ s⁻¹. The quenching rate coefficients directly reflect the strength of the Renner-Teller coupling between the ²A″ and ²A′ electronic states near linearity and so can be used to test theoretical models for describing this nonadiabatic process. The title reaction was modeled with a simple surface-hopping approach including a single parameter, which was adjusted to reproduce the quenching rate for NH+H; the same parameter value was used for all isotope combinations. The agreement with the measured total removal rate is good for all but one isotope combination. However, the quenching rates for the NH+D combination are only in fair (factor of 2) agreement with the corresponding measured data.

  11. On the derivation of linear irreversible thermodynamics for classical fluids

    PubMed Central

    Theodosopulu, M.; Grecos, A.; Prigogine, I.

    1978-01-01

    We consider the microscopic derivation of the linearized hydrodynamic equations for an arbitrary simple fluid. Our discussion is based on the concept of hydrodynamical modes, and use is made of the ideas and methods of the theory of subdynamics. We also show that this analysis leads to the Gibbs relation for the entropy of the system. PMID:16592516

  12. Implementing dense linear algebra algorithms using multitasking on the CRAY X-MP-4 (or approaching the gigaflop)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Hewitt, T.

    1985-08-01

    This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition, 718 MFLOPS for a matrix of order 1000.
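For reference, the serial textbook kernel being multitasked in these experiments is plain LU factorization; a minimal numpy version with partial pivoting (a sketch of the standard algorithm, not the CRAY implementation) looks like:

```python
import numpy as np

def lu_decompose(A):
    """LU decomposition with partial pivoting: returns P, L, U with P @ A = L @ U."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot: largest entry in column k
        if p != k:
            A[[k, p]] = A[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]       # multiplier
            A[i, k:] -= L[i, k] * A[k, k:]    # eliminate below the pivot
    return P, L, np.triu(A)
```

The independent column updates inside the elimination loop are what the note's small-grain multitasking distributes across processors.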

  13. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.

  14. Assessing the cumulative effects of linear recreation routes on wildlife habitats on the Okanogan and Wenatchee National Forests.

    Treesearch

    William L. Gaines; Peter H. Singleton; Roger C. Ross

    2003-01-01

    We conducted a literature review to document the effects of linear recreation routes on focal wildlife species. We identified a variety of interactions between focal species and roads, motorized trails, and nonmotorized trails. We used the available science to develop simple geographic information system-based models to evaluate the cumulative effects of recreational...

  15. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    NASA Astrophysics Data System (ADS)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. 
However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
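The comparison metrics used above (RMSE and mean bias error), together with a multiple-linear-regression baseline of the kind that underperformed, can be sketched in a few lines of numpy; the helper names are assumptions.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observations and predictions."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def mean_bias_error(obs, pred):
    """Mean bias error: positive means the model over-predicts on average."""
    return float(np.mean(np.asarray(pred) - np.asarray(obs)))

def fit_mlr(X, y):
    """Multiple linear regression via least squares; returns a predict function."""
    Xa = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ coef
```

A purely linear map of the six input parameters cannot capture the non-linear SWC response to irrigation noted above, which is consistent with MLR ranking last in the comparison.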

  16. Nonlinear multiplicative dendritic integration in neuron and network models

    PubMed Central

    Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si

    2013-01-01

    Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can heavily depend on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
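A minimal sketch of a multiplicative shunting rule of the kind described, assuming each excitatory input is divisively gated by the inhibition on its path to the soma; the exact functional form and the gain k are illustrative assumptions, not the authors' derivation.

```python
import numpy as np

def shunting_response(exc, inh, k=1.0):
    """Somatic drive under a multiplicative shunting rule.

    Each excitatory input exc[j] is divided by (1 + k * inh[j]), where inh[j]
    is the inhibition on that input's path to the soma, and the gated inputs
    are then summed. With inh = 0 this reduces to plain linear summation.
    """
    exc = np.asarray(exc, dtype=float)
    inh = np.asarray(inh, dtype=float)
    return float(np.sum(exc / (1.0 + k * inh)))
```

The divisive (rather than subtractive) form is what makes the interaction multiplicative: inhibition scales down only the excitatory inputs routed through it.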

  17. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
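The two existing approaches the abstract contrasts can be stated compactly: classical calibration inverts the regression of y on x, while inverse regression fits x on y directly. A small numpy sketch (the helper names are assumptions):

```python
import numpy as np

def classical_estimate(x_cal, y_cal, y_new):
    """Classical calibration: regress y on x, then invert x_hat = (y_new - a) / b."""
    b, a = np.polyfit(x_cal, y_cal, 1)        # slope b, intercept a
    return (np.asarray(y_new) - a) / b

def inverse_estimate(x_cal, y_cal, y_new):
    """Inverse calibration: regress x directly on y and predict at y_new."""
    b, a = np.polyfit(y_cal, x_cal, 1)
    return b * np.asarray(y_new) + a
```

On noiseless linear data the two coincide; with errors in the observed measurements they generally do not, which is the gap the proposed reversed inverse regression and its error-propagation-based analysis aim to close.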

  18. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
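The centralized building block of such a design, the steady-state LQR feedback gain for a linear time-invariant model, can be sketched via Riccati iteration. The double-integrator node model and unit weights below are hypothetical stand-ins for the fixed-reference orbit dynamics, not the actual spacecraft model.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, n_iter=500):
    """Steady-state discrete-time LQR gain via backward Riccati iteration (u = -K x)."""
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical node: a discretized double integrator (position, velocity).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)
R = np.array([[1.0]])
K = dlqr_gain(A, B, Q, R)
```

In the decentralized framework described above, each node runs such a computation locally and the estimates are reconciled over the fully connected network rather than at a central controller.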

  19. A distributed lag approach to fitting non-linear dose-response models in particulate matter air pollution time series investigations.

    PubMed

    Roberts, Steven; Martin, Michael A

    2007-06-01

    The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single-day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model will be shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model is formed. When fitted with a change-point value of 60 µg/m³, the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increase in mortality for PM concentrations of 25 and 75 µg/m³ were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
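The combined model's design matrix can be sketched as distributed lags of PM, each paired with a change-point (hinge) basis at 60 µg/m³, the value mentioned above; the column layout, lag count, and function name are illustrative assumptions.

```python
import numpy as np

def dlm_changepoint_design(pm, max_lag=3, changepoint=60.0):
    """Design matrix combining distributed lags with a change-point basis.

    For each lag l in 0..max_lag, two columns are built: the lagged PM series
    PM_{t-l} and the hinge max(PM_{t-l} - changepoint, 0), which lets the
    fitted slope change above the change-point. Rows with missing lags are
    dropped.
    """
    pm = np.asarray(pm, dtype=float)
    n = len(pm)
    cols = []
    for lag in range(max_lag + 1):
        lagged = np.full(n, np.nan)
        lagged[lag:] = pm[: n - lag]
        cols.append(lagged)
        cols.append(np.maximum(lagged - changepoint, 0.0))
    return np.column_stack(cols)[max_lag:]
```

Regressing mortality on such a matrix estimates a piecewise-linear dose-response at every lag simultaneously, which is the piecemeal-fix combination the abstract argues for.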

  20. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
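One simple way to form such a combined model, in the spirit of the post-model calibration described above, is to regress observed yields on the two models' predictions; the least-squares combination can do no worse in-sample than either input model. A hedged numpy sketch with synthetic data (not the SSM or statistical model themselves):

```python
import numpy as np

def combine_predictions(pred_a, pred_b, y):
    """Fit y on two models' predictions (with intercept) by least squares.

    Because each raw prediction is itself in the feasible set (weights 0/1/0),
    the combined in-sample fit is at least as good as either input model.
    """
    X = np.column_stack([np.ones(len(y)), pred_a, pred_b])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ w, w
```

Out-of-sample gains, as in the paper's field comparison, additionally require the two models' errors to be partly complementary, e.g. one capturing precipitation timing and the other extreme heat.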

  1. Supervised linear dimensionality reduction with robust margins for object recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to get a good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.
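    One plausible reading of a median-based margin, sketched on toy data (this is an illustrative interpretation of "Median hit"/"Median miss", not the authors' exact criterion): a sample's margin is its median distance to other-class points minus its median distance to same-class points, so a single mislabeled neighbor cannot dominate the way it would in a nearest-neighbor (min-distance) margin:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy labeled data: two Gaussian classes in 2-D, plus one label outlier.
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    y[0] = 1  # a mislabeled point, to motivate the median's robustness

    def median_margin(X, y, i):
        """Margin of sample i: median distance to other-class points ("Median miss")
        minus median distance to same-class points ("Median hit")."""
        d = np.linalg.norm(X - X[i], axis=1)
        same = d[(y == y[i]) & (np.arange(len(y)) != i)]
        other = d[y != y[i]]
        return float(np.median(other) - np.median(same))

    margins = np.array([median_margin(X, y, i) for i in range(len(y))])
    # A well-separated, correctly labeled point gets a positive margin;
    # the mislabeled point gets a strongly negative one.
    print(float(margins[5]), float(margins[0]))
    ```

    An embedding method would then maximize the sum of such margins over projection directions; the robustness comes entirely from replacing extremes with medians.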

  2. Linear stability analysis of collective neutrino oscillations without spurious modes

    NASA Astrophysics Data System (ADS)

    Morinaga, Taiki; Yamada, Shoichi

    2018-01-01

    Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.

  3. SNDR enhancement in noisy sinusoidal signals by non-linear processing elements

    NASA Astrophysics Data System (ADS)

    Martorell, Ferran; McDonnell, Mark D.; Abbott, Derek; Rubio, Antonio

    2007-06-01

    We investigate the possibility of building linear amplifiers capable of enhancing the Signal-to-Noise and Distortion Ratio (SNDR) of sinusoidal input signals using simple non-linear elements. Other works have proven that it is possible to enhance the Signal-to-Noise Ratio (SNR) by using limiters. In this work we study a soft limiter non-linear element with and without hysteresis. We show that the SNDR of sinusoidal signals can be enhanced by 0.94 dB using a wideband soft limiter and up to 9.68 dB using a wideband soft limiter with hysteresis. These results indicate that linear amplifiers could be constructed using non-linear circuits with hysteresis. This paper presents mathematical descriptions for the non-linear elements using statistical parameters. Using these models, the input-output SNDR enhancement is obtained by optimizing the non-linear transfer function parameters to maximize the output SNDR.
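    A minimal sketch of the two non-linear elements, applied to a noisy sinusoid (all gains, limits, and the crude hysteresis rule are illustrative assumptions; the paper's statistical models and SNDR optimization are not reproduced here):

    ```python
    import numpy as np

    def soft_limiter(x, gain=4.0, limit=1.0):
        """Memoryless soft limiter: linear region with saturation at +/-limit."""
        return np.clip(gain * x, -limit, limit)

    def soft_limiter_hysteresis(x, gain=4.0, limit=1.0, width=0.1):
        """Crude Schmitt-trigger-like variant: the transfer curve shifts by
        +/-width depending on whether the input is rising or falling."""
        out = np.empty_like(x)
        prev = x[0]
        for i, xi in enumerate(x):
            offset = -width if xi >= prev else width
            out[i] = np.clip(gain * (xi + offset), -limit, limit)
            prev = xi
        return out

    # Noisy sinusoidal input signal
    t = np.linspace(0, 1, 1000, endpoint=False)
    rng = np.random.default_rng(3)
    sig = 0.3 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.05, t.size)
    y1 = soft_limiter(sig)
    y2 = soft_limiter_hysteresis(sig)
    print(float(y1.max()), float(y2.max()))
    ```

    Measuring SNDR before and after such elements, and optimizing gain, limit, and hysteresis width, is the step the paper carries out analytically with statistical models.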

  4. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
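    The core idea, linearize a nonlinear objective at the current design and solve the resulting linear program over move-limit (box) constraints, then shrink the box as a continuation step, can be sketched on a toy objective. This is a generic illustration, not the paper's eigenvalue-sensitivity formulation; over a pure box, the LP optimum sits at the corner given by the gradient signs, so no LP solver is needed:

    ```python
    import numpy as np

    # Nonlinear objective (illustrative): f(x) = (x0 - 1)^2 + (x1 + 2)^2
    def f(x):
        return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

    def grad_f(x):
        return np.array([2 * (x[0] - 1.0), 2 * (x[1] + 2.0)])

    # Sequential linear optimization with box "move limits": at each iteration
    # the objective is replaced by its first-order expansion, and the linear
    # program over the move-limit box is solved in closed form (its optimum is
    # the corner selected by the gradient signs). Shrinking move limits play
    # the role of the continuation procedure.
    def slp(x0, step=2.0, shrink=0.8, iters=80):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            g = grad_f(x)
            x = x - step * np.sign(g)   # box corner minimizing the linearized objective
            step *= shrink
        return x

    x_opt = slp([5.0, 5.0])
    print(np.round(x_opt, 3))
    ```

    In the paper the linearized subproblem also carries many inequality constraints on the design vector, which is where a real linear-programming solver earns its keep.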

  5. Non-linear, connectivity and threshold-dominated runoff-generation controls DOC and heavy metal export in a small peat catchment

    NASA Astrophysics Data System (ADS)

    Birkel, Christian; Broder, Tanja; Biester, Harald

    2017-04-01

    Peat soils act as important carbon sinks, but they also release large amounts of dissolved organic carbon (DOC) to the aquatic system. The DOC export is strongly tied to the export of soluble heavy metals. The accumulation of potentially toxic substances due to anthropogenic activities, and their natural export from peat soils to the aquatic system, is an important health and environmental issue. However, limited knowledge exists as to how much of these substances is mobilized, how they are mobilized in terms of flow pathways, and under which hydrometeorological conditions. In this study, we report on a combined experimental and modelling effort to provide greater process understanding from a small, lead (Pb) and arsenic (As) contaminated upland peat catchment in northwestern Germany. We developed a minimally parameterized, but process-based, coupled hydrology-biogeochemistry model applied to simulate detailed hydrometric and biogeochemical data. The model was based on an initial data mining analysis, in combination with regression relationships of discharge, DOC and element export. We assessed the internal model DOC-processing based on stream-DOC hysteresis patterns and 3-hourly time step groundwater level and soil DOC data (not used for calibration, as an independent model test) for two consecutive summer periods in 2013 and 2014. We found that Pb and As mobilization can be efficiently predicted from DOC transport alone, but Pb showed a significant non-linear relationship with DOC, while As was linearly related to DOC. The relatively parsimonious model (nine calibrated parameters in total) showed the importance of non-linear and rapid near-surface runoff-generation mechanisms that caused around 60% of simulated DOC load. The total load was high even though these pathways were only activated during storm events, on average 30% of the monitoring time - as also shown by the experimental data.
Overall, the drier period 2013 resulted in increased nonlinearity, but exported less DOC (115 kg C ha-1 yr-1 ± 11 kg C ha-1 yr-1) compared to the equivalent but wetter period in 2014 (189 kg C ha-1 yr-1 ± 38 kg C ha-1 yr-1). The exceedance of a critical water table threshold (-10 cm) triggered a rapid near-surface runoff response with associated higher DOC transport connecting all available DOC pools, and with subsequent dilution. We conclude that the combination of detailed experimental work with relatively simple, coupled hydrology-biogeochemistry models allowed not only the model to be internally constrained, but also provided important insight into how DOC and tightly coupled heavy metals are mobilized.

  6. A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B., E-mail: friedman11@llnl.gov; Lawrence Livermore National Laboratory, Livermore, California 94550; Carter, T. A.

    2015-01-15

    Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. We define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. We test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.

  7. A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B.; Carter, T. A.

    2015-01-15

    Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. Here, we define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. Also, we test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.

  8. Differential Dynamic Engagement within 24 SH3 Domain: Peptide Complexes Revealed by Co-Linear Chemical Shift Perturbation Analysis

    PubMed Central

    Stollar, Elliott J.; Lin, Hong; Davidson, Alan R.; Forman-Kay, Julie D.

    2012-01-01

    There is increasing evidence for the functional importance of multiple dynamically populated states within single proteins. However, peptide binding by protein-protein interaction domains, such as the SH3 domain, has generally been considered to involve the full engagement of peptide to the binding surface with minimal dynamics, and simple methods to determine dynamics at the binding surface for multiple related complexes have not been described. We have used NMR spectroscopy combined with isothermal titration calorimetry to comprehensively examine the extent of engagement to the yeast Abp1p SH3 domain for 24 different peptides. Over one quarter of the domain residues display co-linear chemical shift perturbation (CCSP) behavior, in which the position of a given chemical shift in a complex is co-linear with the same chemical shift in the other complexes, providing evidence that each complex exists as a unique dynamic rapidly inter-converting ensemble. The extent to which the specificity-determining sub-surface of the AbpSH3 domain is engaged, as judged by CCSP analysis, correlates with structural and thermodynamic measurements as well as with functional data, revealing the basis for significant structural and functional diversity amongst the related complexes. Thus, CCSP analysis can distinguish peptide complexes that may appear identical in terms of general structure and percent peptide occupancy but have significant local binding differences across the interface, affecting their ability to transmit conformational change across the domain and resulting in functional differences. PMID:23251481

  9. Design and analysis of linear cascade DNA hybridization chain reactions using DNA hairpins

    NASA Astrophysics Data System (ADS)

    Bui, Hieu; Garg, Sudhanshu; Miao, Vincent; Song, Tianqi; Mokhtar, Reem; Reif, John

    2017-01-01

    DNA self-assembly has been employed non-conventionally to construct nanoscale structures and dynamic nanoscale machines. The technique of hybridization chain reactions by triggered self-assembly has been shown to form various interesting nanoscale structures ranging from simple linear DNA oligomers to dendritic DNA structures. Inspired by earlier triggered self-assembly works, we present a system for controlled self-assembly of linear cascade DNA hybridization chain reactions using nine distinct DNA hairpins. NUPACK is employed to assist in designing DNA sequences and Matlab has been used to simulate DNA hairpin interactions. Gel electrophoresis and ensemble fluorescence reaction kinetics data indicate strong evidence of linear cascade DNA hybridization chain reactions. The half-completion time of the proposed linear cascade reactions shows a linear dependence on the number of hairpins.
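    The linear dependence of half-completion time on cascade length can be illustrated with a toy kinetic model (an assumption for illustration, not the paper's Matlab simulation): treat the cascade as n sequential first-order conversion steps with equal rate constants and integrate with a simple Euler scheme:

    ```python
    import numpy as np

    # Toy kinetic sketch: n sequential first-order steps, each converting the
    # previous species into the next. The time for the final product to reach
    # 50% grows roughly linearly with cascade length, consistent with the
    # reported dependence on the number of hairpins.
    def half_time(n_steps, k=1.0, dt=0.01, t_max=200.0):
        conc = np.zeros(n_steps + 1)
        conc[0] = 1.0                     # trigger strand fully present at t = 0
        t = 0.0
        while conc[-1] < 0.5 and t < t_max:
            flux = k * conc[:-1] * dt     # first-order transfer between stages
            conc[:-1] -= flux
            conc[1:] += flux
            t += dt
        return t

    times = [half_time(n) for n in (2, 4, 6, 8)]
    print([round(x, 2) for x in times])
    ```

    Each extra pair of stages adds a near-constant increment to the half-completion time, the signature of a linear cascade.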

  10. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

    The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding a 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (˜10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and conditional velocity PDF subject to a given over/underdensity that are of interest to understand the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which have smaller relative errors than their sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f) or it can be combined with the galaxy density PDF to extract bias parameters.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, E.

    Application of simple adaptive control (SAC) theory to the design of guidance and control systems for winged re-entry vehicles has been proven successful. To apply SAC to these non-linear and non-stationary systems, it needs to be Almost Strictly Passive (ASP), which is an extension of the Almost Strictly Positive Real (ASPR) condition for linear, time-invariant systems. To fulfill the ASP condition, the controlled, non-linear system has to be minimum-phase (i.e., the zero dynamics is stable), and there is a specific condition for the product of output and input matrix. Earlier studies indicate that even the linearised system is not ASPR. The two problems at hand are: 1) the system is non-minimum phase when flying with zero bank angle, and 2) whenever there is hybrid control, e.g., yaw control is established by combined reaction and aerodynamic control for the major part of flight, the second ASPR condition cannot be met. In this paper we look at both issues, the former related to the guidance system and the latter to the attitude-control system. It is concluded that whenever the nominal bank angle is zero, the passivity conditions can never be met, and guidance should be based on nominal commands and a redefinition of those whenever the error becomes too large. For the remaining part of the trajectory, the passivity conditions are marginally met, but it is proposed to add feedforward compensators to alleviate these conditions. The issue of hybrid control is avoided by redefining the controls with total control moments and adding a so-called control allocator. Deriving the passivity conditions for rotational motion, and evaluating these conditions along the trajectory shows that the (non-linear) winged entry vehicle is ASP. The sufficient conditions to apply SAC for attitude control are thus met.

  12. A simple finite element method for linear hyperbolic problems

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-09-14

    Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrary shapes of polygons/polyhedra. An error estimate is established. Extensive numerical examples demonstrate the robustness and flexibility of the method.

  14. Simple, Fast, and Sensitive Method for Quantification of Tellurite in Culture Media▿

    PubMed Central

    Molina, Roberto C.; Burra, Radhika; Pérez-Donoso, José M.; Elías, Alex O.; Muñoz, Claudia; Montes, Rebecca A.; Chasteen, Thomas G.; Vásquez, Claudio C.

    2010-01-01

    A fast, simple, and reliable chemical method for tellurite quantification is described. The procedure is based on the NaBH4-mediated reduction of TeO32− followed by the spectrophotometric determination of elemental tellurium in solution. The method is highly reproducible, is stable at different pH values, and exhibits linearity over a broad range of tellurite concentrations. PMID:20525868

  15. Combinatorial structures to modeling simple games and applications

    NASA Astrophysics Data System (ADS)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  16. Normalization of cell responses in cat striate cortex

    NASA Technical Reports Server (NTRS)

    Heeger, D. J.

    1992-01-01

    Simple cells in the striate cortex have been depicted as half-wave-rectified linear operators. Complex cells have been depicted as energy mechanisms, constructed from the squared sum of the outputs of quadrature pairs of linear operators. However, the linear/energy model falls short of a complete explanation of striate cell responses. In this paper, a modified version of the linear/energy model is presented in which striate cells mutually inhibit one another, effectively normalizing their responses with respect to stimulus contrast. This paper reviews experimental measurements of striate cell responses, and shows that the new model explains a significantly larger body of physiological data.
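    The core mechanism can be sketched in a few lines (an illustrative reduction, not Heeger's exact parameterization): half-wave-rectified linear responses are squared and divided by the pooled energy of all cells plus a semi-saturation constant, so doubling stimulus contrast barely changes high-contrast responses:

    ```python
    import numpy as np

    # Sketch of divisive normalization: each "simple cell" response is a
    # half-wave-rectified linear filter output; squared responses are divided
    # by the pooled energy of all cells plus a semi-saturation constant
    # sigma^2, producing contrast saturation.
    def normalized_responses(linear_outputs, sigma=0.1):
        rectified = np.maximum(linear_outputs, 0.0)   # half-wave rectification
        energy = rectified ** 2
        pool = energy.sum() + sigma ** 2              # normalization pool
        return energy / pool

    # Doubling contrast doubles the linear outputs, but at high contrast the
    # normalized responses barely change (mutual inhibition caps them).
    base = np.array([0.5, -0.2, 1.0, 0.1])
    low = normalized_responses(1.0 * base)
    high = normalized_responses(2.0 * base)
    print(np.round(low, 3), np.round(high, 3))
    ```

    The division by the pool is the model's stand-in for mutual inhibition among striate cells.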

  17. Simplified large African carnivore density estimators from track indices.

    PubMed

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J

    2016-01-01

    The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept need not pass through zero, though it may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. 
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate them, and data to test for a non-linear relationship between track indices and true density at low densities.
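    The two competing fits are easy to compare directly. The sketch below uses synthetic data built around the paper's reported 3.26 slope (the data, noise level, and AIC bookkeeping are illustrative assumptions): the through-origin slope has the closed form Σxy/Σx², while the intercept model is an ordinary least-squares fit with one extra parameter:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic track-index data (illustrative): track density proportional to
    # carnivore density, echoing the observed = 3.26 x density relationship.
    density = rng.uniform(0.3, 3.0, 30)          # carnivores / 100 km^2
    tracks = 3.26 * density + rng.normal(0, 0.3, 30)

    def fit_with_intercept(x, y):
        X = np.column_stack([x, np.ones_like(x)])
        (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
        return alpha, beta

    def fit_through_origin(x, y):
        # Closed-form slope for regression through the origin.
        return float(np.sum(x * y) / np.sum(x * x))

    def aic(y, yhat, k):
        # Gaussian-likelihood AIC up to a constant; k counts free parameters.
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        return float(n * np.log(rss / n) + 2 * k)

    alpha, beta = fit_with_intercept(density, tracks)
    slope0 = fit_through_origin(density, tracks)
    aic_int = aic(tracks, alpha * density + beta, k=3)   # slope, intercept, variance
    aic_orig = aic(tracks, slope0 * density, k=2)        # slope, variance
    print(round(slope0, 2), round(aic_int - aic_orig, 2))
    ```

    When the true intercept is zero, the through-origin model recovers the slope with one fewer parameter, which is exactly what the AIC comparison in the paper rewards.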

  18. A buoyant tornado-probe concept incorporating an inverted lifting device. [and balloon combination

    NASA Technical Reports Server (NTRS)

    Grant, F. C.

    1973-01-01

    Addition of an inverted lifting device to a simple balloon probe is shown to make possible low-altitude entry to tornado cores with easier launch conditions than for the simple balloon probe. Balloon-lifter combinations are particularly suitable for penetration of tornadoes with average to strong circulation, but tornadoes of less than average circulation which are inaccessible to simple balloon probes become accessible. The increased launch radius which is needed for access to tornadoes over a wide range of circulation results in entry times of about 3 minutes. For a simple balloon probe the uninflated balloon must be first dropped on, or near, the track of the tornado from a safe distance. The increase in typical launch radius from about 0.75 kilometer to slightly over 1.0 kilometer with a balloon-lifter combination suggests that a direct air launch may be feasible.

  19. Predicting haemodynamic networks using electrophysiology: The role of non-linear and cross-frequency interactions

    PubMed Central

    Tewarie, P.; Bright, M.G.; Hillebrand, A.; Robson, S.E.; Gascoyne, L.E.; Morris, P.G.; Meier, J.; Van Mieghem, P.; Brookes, M.J.

    2016-01-01

    Understanding the electrophysiological basis of resting state networks (RSNs) in the human brain is a critical step towards elucidating how inter-areal connectivity supports healthy brain function. In recent years, the relationship between RSNs (typically measured using haemodynamic signals) and electrophysiology has been explored using functional Magnetic Resonance Imaging (fMRI) and magnetoencephalography (MEG). Significant progress has been made, with similar spatial structure observable in both modalities. However, there is a pressing need to understand this relationship beyond simple visual similarity of RSN patterns. Here, we introduce a mathematical model to predict fMRI-based RSNs using MEG. Our unique model, based upon a multivariate Taylor series, incorporates both phase and amplitude based MEG connectivity metrics, as well as linear and non-linear interactions within and between neural oscillations measured in multiple frequency bands. We show that including non-linear interactions, multiple frequency bands and cross-frequency terms significantly improves fMRI network prediction. This shows that fMRI connectivity is not only the result of direct electrophysiological connections, but is also driven by the overlap of connectivity profiles between separate regions. Our results indicate that a complete understanding of the electrophysiological basis of RSNs goes beyond simple frequency-specific analysis, and further exploration of non-linear and cross-frequency interactions will shed new light on distributed network connectivity, and its perturbation in pathology. PMID:26827811

  20. A velocity command stepper motor for CSI application

    NASA Technical Reports Server (NTRS)

    Sulla, Jeffrey L.; Juang, Jer-Nan; Horta, Lucas G.

    1991-01-01

    The application of linear force actuators for vibration suppression of flexible structures has received much attention in recent years. A linear force actuator consists of a movable mass that is restrained such that its motion is linear. By application of a force to the mass, an equal and opposite reaction force can be applied to a structure. The use of an industrial linear stepper motor as a reaction mass actuator is described. With the linear stepper motor mounted on a simple test beam and the NASA Mini-Mast, output feedback of acceleration or displacement are used to augment the structural damping of the test articles. Significant increases in damping were obtained for both the test beam and the Mini-Mast.

  1. Numerical computation of linear instability of detonations

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry; Kasimov, Aslan

    2017-11-01

    We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
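    The post-processing step, turning computed time series into complex growth rates and frequencies via Dynamic Mode Decomposition, can be sketched on synthetic data (the snapshot matrix below is a made-up growing oscillation, not output of the linearized detonation solver):

    ```python
    import numpy as np

    # Minimal DMD sketch: recover the complex exponent of a growing oscillation
    # from a sequence of snapshots, the step used to build dispersion relations.
    dt = 0.05
    t = np.arange(0, 10, dt)
    growth, omega = 0.3, 2.0
    # Two-component synthetic "state": a growing oscillation and its quadrature.
    X_full = np.vstack([
        np.exp(growth * t) * np.cos(omega * t),
        np.exp(growth * t) * np.sin(omega * t),
    ])

    X0, X1 = X_full[:, :-1], X_full[:, 1:]        # snapshot pairs
    A = X1 @ np.linalg.pinv(X0)                   # best-fit linear propagator
    eigvals = np.linalg.eigvals(A)
    cont = np.log(eigvals) / dt                   # continuous-time exponents

    est_growth = float(cont.real.max())
    est_omega = float(np.abs(cont.imag).max())
    print(round(est_growth, 3), round(est_omega, 3))
    ```

    The real parts of the recovered exponents are the linear growth rates and the imaginary parts the oscillation frequencies that populate a dispersion relation.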

  2. Iteration with Spreadsheets.

    ERIC Educational Resources Information Center

    Smith, Michael

    1990-01-01

    Presents several examples of the iteration method using computer spreadsheets. Examples included are simple iterative sequences and the solution of equations using the Newton-Raphson formula, linear interpolation, and interval bisection. (YP)
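    Two of the iterative schemes mentioned, written as plain Python loops rather than spreadsheet cells (the target equation f(x) = x² − 2 is an illustrative choice):

    ```python
    # Newton-Raphson and interval bisection applied to f(x) = x^2 - 2,
    # whose positive root is sqrt(2).
    def f(x):
        return x * x - 2.0

    def newton_raphson(x, steps=20):
        for _ in range(steps):
            x = x - f(x) / (2.0 * x)   # x_{n+1} = x_n - f(x_n)/f'(x_n)
        return x

    def bisection(lo, hi, steps=50):
        for _ in range(steps):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:    # root lies in the half where f changes sign
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    root_newton = newton_raphson(1.0)
    root_bisect = bisection(1.0, 2.0)
    print(root_newton, root_bisect)
    ```

    In a spreadsheet, each loop iteration is simply one row whose formula references the row above, which is the pedagogical point of the article.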

  3. Broadband Venetian-Blind Polarizer With Dual Vanes

    NASA Technical Reports Server (NTRS)

    Conroy, Bruce L.; Hoppe, Daniel J.

    1995-01-01

    Improved venetian-blind polarizer features optimized tandem, two-layer vane configuration reducing undesired reflections and deformation of radiation pattern below those of prior single-layer vane configuration. Consists of number of thin, parallel metal strips placed in path of propagating radio-frequency beam. Offers simple way to convert polarization from linear to circular or from circular to linear. Particularly useful for beam-wave-guide applications.

  4. A Family of Ellipse Methods for Solving Non-Linear Equations

    ERIC Educational Resources Information Center

    Gupta, K. C.; Kanwar, V.; Kumar, Sanjeev

    2009-01-01

    This note presents a method for the numerical approximation of simple zeros of a non-linear equation in one variable. In order to do so, the method uses an ellipse rather than a tangent approach. The main advantage of our method is that it does not fail even if the derivative of the function is either zero or very small in the vicinity of the…

  5. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  6. CAG - computer-aid-georeferencing, or rapid sharing, restructuring and presentation of environmental data using remote-server georeferencing for the GE clients. Educational and scientific implications.

    NASA Astrophysics Data System (ADS)

    Hronusov, V. V.

    2006-12-01

    We suggest a method of using external public servers for rearranging, restructuring and rapidly sharing environmental data for quick presentation in numerous GE clients. The method enables a new approach to the presentation (publication) of data, mostly static, stored in the public domain (e.g., Blue Marble, Visible Earth, etc.): freely accessible spreadsheets are published that contain enough information and links to the data. Because most large depositories of environmental-monitoring data use a rather simple net address system and a simple hierarchy, mostly based on the date and type of the data, it is possible to construct the http-based link to the file that contains the data. Publication of new data on the server is recorded simply by entering a new address into a cell in the spreadsheet. At the moment we use the EditGrid (www.editgrid.com) system as the spreadsheet platform. The KML code is generated from XML data via XSLT procedures. Since the EditGrid environment supports "fetch" and similar commands, it is possible to create "smart-adaptive" KML generation on the fly based on data streams from RSS and XML sources. Previous GIS-based methods could combine high-definition data from various sources, but large-scale comparisons of dynamic processes have usually been out of reach of the technology. The suggested method allows an unlimited number of GE clients to view, review and compare dynamic and static processes from previously un-combinable sources, and on unprecedented scales. The ease of automated or computer-assisted georeferencing has already led to the translation of about 3000 raster public-domain imagery, point and linear data sources into the GE language. In addition, the suggested method allows a user to create rapid animations to demonstrate dynamic processes; such products are in high demand in education, meteorology, volcanology and potentially in a number of industries. In general, the new approach, which we have tested on numerous projects, saves time and effort in creating huge amounts of georeferenced data of various kinds, and thus provides an excellent tool for education and science.

  7. Relationship between the Arctic oscillation and surface air temperature in multi-decadal time-scale

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroshi L.; Tamura, Mina

    2016-09-01

    In this study, a simple energy balance model (EBM) was integrated in time, with a hypothetical long-term variability in ice-albedo feedback mimicking the observed multi-decadal temperature variability. A natural variability was superimposed on a linear warming trend due to the increasing radiative forcing of CO2. The result demonstrates that the natural variability and the background linear trend can offset each other to produce the warming hiatus for some period. It is also stressed that the rapid warming during 1970-2000 can be explained by the superposition of the natural variability and the background linear trend, at least within the simple model. The key process of the fluctuating planetary albedo on the multi-decadal time scale is investigated using the JRA-55 reanalysis data. It is found that the planetary albedo increased for 1958-1970, decreased for 1970-2000, and increased for 2000-2012, as expected from the simple EBM experiments. The multi-decadal variability in the planetary albedo is compared with the time series of the AO mode and Barents Sea mode of surface air temperature. It is shown that the recent AO-negative pattern, with a warm Arctic and cold mid-latitudes, is in good agreement with the planetary albedo change, which shows a negative anomaly in high latitudes and a positive anomaly in mid-latitudes. Moreover, the Barents Sea mode, with a warm Barents Sea and cold mid-latitudes, shows long-term variability similar to the planetary albedo change. Although further studies are needed, the natural variability of both the AO mode and the Barents Sea mode indicates a possible link to the planetary albedo, as suggested by the simple EBM, causing the warming hiatus in recent years.
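    The superposition argument can be illustrated with a toy calculation; every number below is hypothetical and not fitted to the paper's EBM. A sinusoidal "natural variability" peaking around 2000 on top of a linear trend yields rapid warming over 1980-1990 and a much smaller, hiatus-like warming over 2000-2010:

```python
import math

def temperature_anomaly(year, trend=0.02, amp=0.15, period=60.0, phase=1985.0):
    """Toy superposition: linear CO2-driven warming plus a sinusoidal
    multi-decadal natural variability (all parameters illustrative)."""
    return (trend * (year - 1958)
            + amp * math.sin(2 * math.pi * (year - phase) / period))

# Decadal warming on the upswing vs. just past the natural peak
rate_1985 = temperature_anomaly(1990) - temperature_anomaly(1980)  # rapid warming
rate_2005 = temperature_anomaly(2010) - temperature_anomaly(2000)  # hiatus-like
```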

  8. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing.

  9. Liquid chromatography determination of 10-hydroxycamptothecin in human serum by a column-switching system containing a pre-column with restricted access media and its application to a clinical pharmacokinetic study.

    PubMed

    Ma, Jun; Jia, Zheng-Ping; Zhang, Qiang; Fan, Jun-Jie; Jiang, Ning-Xi; Wang, Rong; Xie, Hua; Wang, Juan

    2003-10-25

    A simple, rapid and sensitive column-switching HPLC method is described for the analysis of 10-hydroxycamptothecin (HCPT) in human serum. A pre-column containing restricted access media (RAM) is used for sample clean-up and trace enrichment and is combined with a C18 column for the final separation. The analysis time is 8 min. HCPT is monitored with a fluorescence detector, with excitation and emission wavelengths of 385 and 539 nm, respectively. The response is linear over the range 1-1000 ng/ml with a correlation coefficient of 0.998, and the limit of quantification is 0.1 ng/ml. The intra-day and inter-day variations are less than 5%. The procedure has been applied to a pharmacokinetic study of HCPT in clinical patients, and the pharmacokinetic parameters of a one-compartment model are calculated.
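    Quantification against a linear calibration curve of this kind follows the usual least-squares recipe: fit response against concentration over the linear range, then invert the fitted line for unknown samples. A sketch with invented peak-area data (not the paper's measurements):

```python
def fit_line(xs, ys):
    """Ordinary least-squares calibration line: response = slope*conc + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical fluorescence peak-area responses over a 1-1000 ng/ml range
conc = [1, 10, 50, 100, 500, 1000]               # ng/ml
area = [2.1, 20.4, 99.8, 201.0, 1002.5, 1998.0]  # arbitrary units
slope, intercept = fit_line(conc, area)

def quantify(peak_area):
    """Back-calculate serum concentration from a measured peak area."""
    return (peak_area - intercept) / slope
```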

  10. Amperometric sensor for ethanol based on one-step electropolymerization of thionine-carbon nanofiber nanocomposite containing alcohol oxidase.

    PubMed

    Wu, Lina; McIntosh, Mike; Zhang, Xueji; Ju, Huangxian

    2007-12-15

    Thionine interacts strongly with carbon nanofiber (CNF) and was used for the non-covalent functionalization of CNF to prepare a stable, well-dispersed thionine-CNF nanocomposite. With a simple one-step electrochemical polymerization of the thionine-CNF nanocomposite and alcohol oxidase (AOD), a stable poly(thionine)-CNF/AOD biocomposite film was formed on the electrode surface. Based on the excellent catalytic activity of the biocomposite film toward the reduction of dissolved oxygen, a sensitive ethanol biosensor was developed. The biosensor could monitor ethanol from 2.0 to 252 microM with a detection limit of 1.7 microM. It displayed a rapid response and an expanded linear response range, as well as excellent reproducibility and stability. The combination of the catalytic activity of CNF with the promising features of the biocomposite, prepared by a one-step technique without manual steps, favored the sensitive determination of ethanol with improved analytical capabilities.

  11. Multiple abiotic stimuli are integrated in the regulation of rice gene expression under field conditions.

    PubMed

    Plessis, Anne; Hafemeister, Christoph; Wilkins, Olivia; Gonzaga, Zennia Jean; Meyer, Rachel Sarah; Pires, Inês; Müller, Christian; Septiningsih, Endang M; Bonneau, Richard; Purugganan, Michael

    2015-11-26

    Plants rely on transcriptional dynamics to respond to multiple climatic fluctuations and contexts in nature. We analyzed the genome-wide gene expression patterns of rice (Oryza sativa) growing in rainfed and irrigated fields during two distinct tropical seasons and determined simple linear models that relate transcriptomic variation to climatic fluctuations. These models combine multiple environmental parameters to account for the expression patterns of co-expressed gene clusters in the field. We examined the similarity of our environmental models between tropical and temperate field conditions, using previously published data. We found that field type and macroclimate had broad impacts on transcriptional responses to environmental fluctuations, especially for genes involved in photosynthesis and development. Nevertheless, variation in solar radiation and temperature at the timescale of hours had reproducible effects across environmental contexts. These results provide a basis for broad-based predictive modeling of plant gene expression in the field.

  12. Strehl ratio: a tool for optimizing optical nulls and singularities.

    PubMed

    Hénault, François

    2015-07-01

    In this paper a set of radial and azimuthal phase functions are reviewed that have a null Strehl ratio, which is equivalent to generating a central extinction in the image plane of an optical system. The study is conducted in the framework of Fraunhofer scalar diffraction, and is oriented toward practical cases where optical nulls or singularities are produced by deformable mirrors or phase plates. The identified solutions reveal unexpected links with the zeros of type-J Bessel functions of integer order. They include linear azimuthal phase ramps giving birth to an optical vortex, azimuthally modulated phase functions, and circular phase gratings (CPGs). It is found in particular that the CPG radiometric efficiency could be significantly improved by the null Strehl ratio condition. Simple design rules for rescaling and combining the different phase functions are also defined. Finally, the described analytical solutions could also serve as starting points for an automated searching software tool.

  13. Rapid Characterization and Identification of Flavonoids in Radix Astragali by Ultra-High-Pressure Liquid Chromatography Coupled with Linear Ion Trap-Orbitrap Mass Spectrometry.

    PubMed

    Zhang, Jing; Xu, Xiao-Jie; Xu, Wen; Huang, Juan; Zhu, Da-yuan; Qiu, Xiao-Hui

    2015-07-01

    A simple and effective method was established for the separation and characterization of flavonoid constituents in Radix Astragali (RA) by combining ultra-high-pressure liquid chromatography with LTQ-Orbitrap tandem mass spectrometry (u-HPLC-LTQ-Orbitrap-MS(n)). For the three major structural types of flavonoids, the proposed fragmentation pathways and major diagnostic fragment ions of isoflavones, pterocarpans and isoflavans were investigated to trace isoflavonoid derivatives in crude plant extracts. Based on this systematic identification strategy, 48 constituents were rapidly detected and characterized or tentatively identified, many of which are reported in RA for the first time. The u-HPLC-LTQ-Orbitrap-MS(n) platform proved to be an effective tool for rapid qualitative analysis of secondary metabolites from natural resources.

  14. Determination of Tryptamines and β-Carbolines in Ayahuasca Beverage Consumed During Brazilian Religious Ceremonies.

    PubMed

    Santos, Mônica Cardoso; Navickiene, Sandro; Gaujac, Alain

    2017-05-01

    Ayahuasca is a potent hallucinogenic beverage prepared from Banisteriopsis caapi in combination with other psychoactive plants. N,N-dimethyltryptamine, tryptamine, harmine, harmaline, harmalol, and tetrahydroharmine were quantified in ayahuasca samples using a simple and low-cost method based on SPE and LC with UV diode-array detection. The experimental variables that affect the SPE method, such as type of solid phase and nature of solvent, were optimized. The method showed good linearity (r > 0.9902) and repeatability (RSD < 0.8%) for alkaloid compounds, with an LOD of 0.12 mg/L. The proposed method was used to analyze 20 samples from an ayahuasca cooking process from a religious group located in the municipality of Fortaleza, state of Ceará, Brazil. The results showed that concentrations of the target compounds ranged from 0.3 to 36.7 g/L for these samples.

  15. Machine learning for cardiac ultrasound time series data

    NASA Astrophysics Data System (ADS)

    Yuan, Baichuan; Chitturi, Sathya R.; Iyer, Geoffrey; Li, Nuoyu; Xu, Xiaochuan; Zhan, Ruohan; Llerena, Rafael; Yen, Jesse T.; Bertozzi, Andrea L.

    2017-03-01

    We consider the problem of identifying frames in a cardiac ultrasound video associated with left ventricular chamber end-systolic (ES, contraction) and end-diastolic (ED, expansion) phases of the cardiac cycle. Our procedure involves a simple application of non-negative matrix factorization (NMF) to a series of frames of a video from a single patient. Rank-2 NMF is performed to compute two end-members. The end members are shown to be close representations of the actual heart morphology at the end of each phase of the heart function. Moreover, the entire time series can be represented as a linear combination of these two end-member states thus providing a very low dimensional representation of the time dynamics of the heart. Unlike previous work, our methods do not require any electrocardiogram (ECG) information in order to select the end-diastolic frame. Results are presented for a data set of 99 patients including both healthy and diseased examples.
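    The claim that each frame is approximately a linear combination of the two end-member states can be illustrated by projecting a frame onto two fixed end-members via the 2×2 normal equations. This is only a sketch of the low-dimensional encoding, not the rank-2 NMF used in the paper (NMF additionally enforces nonnegativity and learns the end-members); the vectors below are invented:

```python
def two_endmember_weights(frame, e1, e2):
    """Least-squares weights (a, b) with frame ~ a*e1 + b*e2,
    obtained by solving the 2x2 normal equations."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    g11, g12, g22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(frame, e1), dot(frame, e2)
    det = g11 * g22 - g12 * g12
    a = (b1 * g22 - b2 * g12) / det
    b = (b2 * g11 - b1 * g12) / det
    return a, b

# Toy "end-systolic" and "end-diastolic" end-members; a frame halfway between
es = [1.0, 0.2, 0.1, 0.9]
ed = [0.1, 0.9, 1.0, 0.2]
frame = [0.5 * x + 0.5 * y for x, y in zip(es, ed)]
a, b = two_endmember_weights(frame, es, ed)
```

    Tracking (a, b) over the video then gives the one-dimensional phase signal from which ES and ED frames can be picked out.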

  16. Graphene-multiwall carbon nanotube-gold nanocluster composites modified electrode for the simultaneous determination of ascorbic acid, dopamine, and uric acid.

    PubMed

    Liu, Xiaofang; Wei, Shaping; Chen, Shihong; Yuan, Dehua; Zhang, Wen

    2014-08-01

    In this paper, graphene-multiwall carbon nanotube-gold nanocluster (GP-MWCNT-AuNC) composites were synthesized and used as a modifier to fabricate a sensor for simultaneous detection of ascorbic acid (AA), dopamine (DA), and uric acid (UA). The electrochemical behavior of the sensor was investigated by electrochemical impedance spectroscopy (EIS), cyclic voltammetry (CV) and differential pulse voltammetry (DPV) techniques. The combination of GP, MWCNTs, and AuNCs endowed the electrode with a large surface area, good catalytic activity, and high selectivity and sensitivity. The linear response ranges for simultaneous detection of AA, DA, and UA were 120-1,701, 2-213, and 0.7-88.3 μM, respectively, and the detection limits were 40, 0.67, and 0.23 μM (S/N = 3), respectively. The proposed method offers promise for simple, rapid, selective, and cost-effective analysis of small biomolecules.

  17. Coronal heating by stochastic magnetic pumping

    NASA Technical Reports Server (NTRS)

    Sturrock, P. A.; Uchida, Y.

    1980-01-01

    Recent observational data cast serious doubt on the widely held view that the Sun's corona is heated by traveling waves (acoustic or magnetohydrodynamic). It is proposed instead that the energy responsible for heating the corona comes from the free energy of the coronal magnetic field, which is built up by motion of the 'feet' of magnetic field lines in the photosphere. Stochastic motion of the feet of magnetic field lines leads, on average, to a linear increase of magnetic free energy with time. This rate of energy input is calculated for a simple model of a single thin flux tube. The model appears to agree well with observational data if the magnetic flux originates in small regions of high magnetic field strength. On combining this energy input with estimates of energy loss by radiation and of energy redistribution by thermal conduction, we obtain scaling laws for density and temperature in terms of length and coronal magnetic field strength.

  18. Low Density Solvent-Based Dispersive Liquid-Liquid Microextraction for the Determination of Synthetic Antioxidants in Beverages by High-Performance Liquid Chromatography

    PubMed Central

    Çabuk, Hasan; Köktürk, Mustafa

    2013-01-01

    A simple and efficient method was established for the determination of synthetic antioxidants in beverages by using dispersive liquid-liquid microextraction combined with high-performance liquid chromatography with ultraviolet detection. Butylated hydroxy toluene, butylated hydroxy anisole, and tert-butylhydroquinone were the antioxidants evaluated. Experimental parameters including extraction solvent, dispersive solvent, pH of sample solution, salt concentration, and extraction time were optimized. Under optimal conditions, the extraction recoveries ranged from 53 to 96%. Good linearity was observed by the square of correlation coefficients ranging from 0.9975 to 0.9997. The relative standard deviations ranged from 1.0 to 5.2% for all of the analytes. Limits of detection ranged from 0.85 to 2.73 ng mL−1. The method was successfully applied for determination of synthetic antioxidants in undiluted beverage samples with satisfactory recoveries. PMID:23853535

  19. On the volume-dependence of the index of refraction from the viewpoint of the complex dielectric function and the Kramers-Kronig relation.

    PubMed

    Rocquefelte, Xavier; Jobic, Stéphane; Whangbo, Myung-Hwan

    2006-02-16

    How the indices of refraction n(ω) of insulating solids are affected by the volume dilution of an optical entity and by the mixing of different, noninteracting simple solid components was examined on the basis of the dielectric function ε₁(ω) + iε₂(ω). For closely related insulating solids with an identical composition and formula-unit volume V, the relation [ε₁(ω) − 1]V = constant was found by combining the relation ε₂(ω)V = constant with the Kramers-Kronig relation. This becomes [n²(ω) − 1]V = constant for the index of refraction n(ω) determined for incident light with energy less than the band gap (i.e., ħω < E_g). For a narrow range of change in the formula-unit volume, the latter relation is well approximated by a linear relation between n and 1/V.
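    The scaling rule [n² − 1]V = constant gives a one-line predictor for how the below-gap index of refraction responds to a volume change. A sketch with hypothetical numbers (the index and volumes below are invented, not taken from the paper):

```python
import math

def refractive_index_after_dilation(n1, v1, v2):
    """Apply [n^2 - 1] * V = constant: predict the index after the
    formula-unit volume changes from v1 to v2 (below-gap light assumed)."""
    return math.sqrt(1 + (n1 ** 2 - 1) * v1 / v2)

# Hypothetical solid with n = 2.0 at V = 100 (arbitrary volume units),
# after a 5% volume expansion:
n2 = refractive_index_after_dilation(2.0, 100.0, 105.0)
```

    As expected, expanding the lattice dilutes the polarizable entities and lowers the index slightly.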

  20. Mutation-selection equilibrium in games with multiple strategies.

    PubMed

    Antal, Tibor; Traulsen, Arne; Ohtsuki, Hisashi; Tarnita, Corina E; Nowak, Martin A

    2009-06-21

    In evolutionary games the fitness of individuals is not constant but depends on the relative abundance of the various strategies in the population. Here we study general games among n strategies in populations of large but finite size. We explore stochastic evolutionary dynamics under weak selection, but for any mutation rate. We analyze the frequency-dependent Moran process in well-mixed populations, but almost identical results are found for the Wright-Fisher and Pairwise Comparison processes. Surprisingly simple conditions specify whether a strategy is more abundant on average than 1/n, or than another strategy, in the mutation-selection equilibrium. We find one condition that holds for low mutation rates and another that holds for high mutation rates. A linear combination of these two conditions holds for any mutation rate. Our results allow a complete characterization of n×n games in the limit of weak selection.
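    A commonly quoted form of the two conditions, for a payoff matrix (a_ij), is L_k = (1/n) Σᵢ (a_kk + a_ki − a_ik − a_ii) > 0 for low mutation and H_k = (1/n²) Σᵢⱼ (a_kj − a_ij) > 0 for high mutation, with a linear combination of L_k and H_k governing intermediate mutation rates. Treating those formulas as an assumption rather than a quotation of the paper, they are simple to evaluate (the 3-strategy payoff matrix below is invented):

```python
def low_mutation_condition(payoff, k):
    """L_k = (1/n) * sum_i (a_kk + a_ki - a_ik - a_ii); under this
    reading, strategy k is favoured for low mutation when L_k > 0."""
    n = len(payoff)
    return sum(payoff[k][k] + payoff[k][i] - payoff[i][k] - payoff[i][i]
               for i in range(n)) / n

def high_mutation_condition(payoff, k):
    """H_k = (1/n^2) * sum_{i,j} (a_kj - a_ij); favoured for high
    mutation when H_k > 0."""
    n = len(payoff)
    return sum(payoff[k][j] - payoff[i][j]
               for i in range(n) for j in range(n)) / n ** 2

# Hypothetical 3-strategy game
game = [[3, 0, 0],
        [1, 1, 1],
        [0, 0, 2]]
L0 = low_mutation_condition(game, 0)
H0 = high_mutation_condition(game, 0)
```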
