Traveling-wave piezoelectric linear motor part II: experiment and performance evaluation.
Ting, Yung; Li, Chun-Chung; Chen, Liang-Chiang; Yang, Chieh-Min
2007-04-01
This article continues the discussion of a traveling-wave piezoelectric linear motor. Part I dealt with the design and analysis of the stator of the motor. In this part, the discussion focuses on the structure and modeling of the contact layer and the carriage. In addition, the performance analysis and evaluation of the linear motor are also addressed. The traveling wave is created by the stator, which is constructed from a series of bimorph actuators arranged in a line and connected to form a meander-line structure. Analytical and experimental results of the performance are presented and shown to be in close agreement. Power losses due to friction and transmission are studied and found to be significant. Compared with other types of linear motors, the motor in this study is capable of supporting heavier loads and provides a larger thrust force.
Mathematical Methods in Wave Propagation: Part 2--Non-Linear Wave Front Analysis
ERIC Educational Resources Information Center
Jeffrey, Alan
1971-01-01
The paper presents applications and methods of analysis for non-linear hyperbolic partial differential equations. The paper is concluded by an account of wave front analysis as applied to the piston problem of gas dynamics. (JG)
Relative Velocity as a Metric for Probability of Collision Calculations
NASA Technical Reports Server (NTRS)
Frigm, Ryan Clayton; Rohrbaugh, Dave
2008-01-01
Collision risk assessment metrics, such as the probability of collision calculation, are based largely on assumptions about the interaction of two objects during their close approach. Specifically, the approach to probabilistic risk assessment can be performed more easily if the relative trajectories of the two close approach objects are assumed to be linear during the encounter. It is shown in this analysis that one factor in determining linearity is the relative velocity of the two encountering bodies, in that the assumption of linearity breaks down at low relative approach velocities. The first part of this analysis is the determination of the relative velocity threshold below which the assumption of linearity becomes invalid. The second part is a statistical study of conjunction interactions between representative asset spacecraft and the associated debris field environment to determine the likelihood of encountering a low relative velocity close approach. This analysis is performed for both the LEO and GEO orbit regimes. Both parts comment on the resulting effects to collision risk assessment operations.
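A minimal sketch of the first step described above: compute the relative velocity magnitude at the time of closest approach from two Cartesian state vectors and compare it against a linearity threshold. The state values and the threshold are hypothetical placeholders, not values from the study.

```python
import numpy as np

def relative_velocity(v1, v2):
    """Magnitude of the relative velocity between two objects (km/s)."""
    return np.linalg.norm(np.asarray(v1) - np.asarray(v2))

# Hypothetical inertial velocity vectors at the time of closest approach (km/s).
v_asset  = np.array([7.35, 0.92, 0.10])
v_debris = np.array([7.30, 0.95, 0.12])

v_rel = relative_velocity(v_asset, v_debris)

# Hypothetical linearity threshold; the analysis determines the actual value.
THRESHOLD_KM_S = 0.01
if v_rel < THRESHOLD_KM_S:
    print(f"v_rel = {v_rel:.4f} km/s: linear relative-motion assumption may be invalid")
else:
    print(f"v_rel = {v_rel:.4f} km/s: linear relative-motion assumption reasonable")
```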
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
1980-06-01
sufficient. Dropping the time lag terms, the equations for Xu, Xx', and X reduce to linear algebraic equations. Hence in the quasistatic case the...quasistatic variables now are not described by differential equations but rather by linear algebraic equations. The solution for x0 then is simply...matrices for two-bladed rotor... 7. LINEAR SYSTEM ANALYSIS; 7.1 State Variable Form; 7.2 Constant Coefficient System; 7.2.1 Eigen-analysis
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
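A minimal sketch of the least-squares mechanics reviewed above, using a small synthetic dataset (the numbers are illustrative, not taken from the article):

```python
import numpy as np

# Illustrative data: single predictor x, single outcome y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])

# Method of least squares: slope and intercept that minimize the
# sum of squared residuals.
x_bar, y_bar = x.mean(), y.mean()
slope = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
intercept = y_bar - slope * x_bar

# Coefficient of determination (R^2) of the fitted line.
residuals = y - (intercept + slope * x)
r_squared = 1.0 - np.sum(residuals ** 2) / np.sum((y - y_bar) ** 2)

print(f"y = {intercept:.3f} + {slope:.3f} x,  R^2 = {r_squared:.3f}")
```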
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-09-01
Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using Solidworks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software, and the results of linear and non-linear analyses were compared. With linear properties, the stress distribution over the PDL for intrusive and lingual root torque movements was within the range of optimal stress values proposed by Lee, but exceeded the force levels given by Proffit as optimal for orthodontic tooth movement. When the same force load was applied in the non-linear analysis, the stresses were higher than in the linear analysis and were beyond the optimal stress range proposed by Lee for both intrusive and lingual root torque. To obtain the same stresses as in the linear analysis, iterations were performed with non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.
NASA Astrophysics Data System (ADS)
Wu, Bofeng; Huang, Chao-Guang
2018-04-01
The 1/r expansion in the distance to the source is applied to linearized f(R) gravity, and its multipole expansion in the radiation field with irreducible Cartesian tensors is presented. Then, the energy, momentum, and angular momentum in the gravitational waves are provided for linearized f(R) gravity. All of these results have two parts, which are associated with the tensor part and the scalar part in the multipole expansion of linearized f(R) gravity, respectively. The former is the same as that in General Relativity, and the latter, as the correction to the result in General Relativity, is caused by the massive scalar degree of freedom and plays an important role in distinguishing General Relativity and f(R) gravity.
Physics and control of wall turbulence for drag reduction.
Kim, John
2011-04-13
Turbulence physics responsible for high skin-friction drag in turbulent boundary layers is first reviewed. A self-sustaining process of near-wall turbulence structures is then discussed from the perspective of controlling this process for the purpose of skin-friction drag reduction. After recognizing that key parts of this self-sustaining process are linear, a linear systems approach to boundary-layer control is discussed. It is shown that singular-value decomposition analysis of the linear system allows us to examine different approaches to boundary-layer control without carrying out the expensive nonlinear simulations. Results from the linear analysis are consistent with those observed in full nonlinear simulations, thus demonstrating the validity of the linear analysis. Finally, fundamental performance limit expected of optimal control input is discussed.
NASA Technical Reports Server (NTRS)
Groom, N. J.; Woolley, C. T.; Joshi, S. M.
1981-01-01
A linear analysis and the results of a nonlinear simulation of a magnetic bearing suspension system which uses permanent magnet flux biasing are presented. The magnetic bearing suspension is part of a 4068 N-m-s (3000 lb-ft-sec) laboratory model annular momentum control device (AMCD). The simulation includes rigid body rim dynamics, linear and nonlinear axial actuators, linear radial actuators, axial and radial rim warp, and power supply and power driver current limits.
ERIC Educational Resources Information Center
Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.
2009-01-01
This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…
Wavelet analysis of birefringence images of myocardium tissue
NASA Astrophysics Data System (ADS)
Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Kushnerik, L.; Soltys, I. V.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
The paper consists of two parts. The first part presents the theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments that characterize the distributions of the amplitudes of the wavelet coefficients of the MMI at different scales of scanning are determined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease. Objective criteria for differentiating the cause of death are defined.
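A minimal sketch of the second step described above: decompose a 1-D birefringence profile with a Haar wavelet transform and compute the first four statistical moments of the coefficient amplitudes at each scale. The signal is synthetic and the moments are the usual mean, variance, skewness and kurtosis; the paper's exact estimators may differ.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                  # truncate to even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def statistic_moments(x):
    """Mean, variance, skewness and kurtosis of the coefficient amplitudes."""
    x = np.abs(x)
    m, v = x.mean(), x.var()
    skew = ((x - m) ** 3).mean() / v ** 1.5
    kurt = ((x - m) ** 4).mean() / v ** 2
    return m, v, skew, kurt

# Synthetic stand-in for a scanned linear-birefringence profile.
rng = np.random.default_rng(0)
profile = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.2 * rng.standard_normal(512)

approx = profile
for scale in range(1, 5):                     # four scales of scanning
    approx, detail = haar_dwt(approx)
    print(f"scale {scale}: moments of wavelet amplitudes =",
          [round(m, 4) for m in statistic_moments(detail)])
```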
A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1980-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.
NASA Technical Reports Server (NTRS)
Huang, L. C. P.; Cook, R. A.
1973-01-01
Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.
Direct use of linear time-domain aerodynamics in aeroservoelastic analysis: Aerodynamic model
NASA Technical Reports Server (NTRS)
Woods, J. A.; Gilbert, Michael G.
1990-01-01
The work presented here is the first part of a continuing effort to expand existing capabilities in aeroelasticity by developing the methodology which is necessary to utilize unsteady time-domain aerodynamics directly in aeroservoelastic design and analysis. The ultimate objective is to define a fully integrated state-space model of an aeroelastic vehicle's aerodynamics, structure and controls which may be used to efficiently determine the vehicle's aeroservoelastic stability. Here, the current status of developing a state-space model for linear or near-linear time-domain indicial aerodynamic forces is presented.
Guerrini, A M; Ascenzioni, F; Tribioli, C; Donini, P
1985-01-01
Linear plasmids were constructed by adding telomeres prepared from Tetrahymena pyriformis rDNA to a circular hybrid Escherichia coli-yeast vector and transforming Saccharomyces cerevisiae. The parental vector contained the entire 2 mu yeast circle and the LEU gene from S. cerevisiae. Three transformed clones were shown to contain linear plasmids which were characterized by restriction analysis and shown to be rearranged versions of the desired linear plasmids. The plasmids obtained were imperfect palindromes: part of the parental vector was present in duplicated form, part as unique sequences and part was absent. The sequences that had been lost included a large portion of the 2 mu circle. The telomeres were approximately 450 bp longer than those of T. pyriformis. DNA prepared from transformed S. cerevisiae clones was used to transform Schizosaccharomyces pombe. The transformed S. pombe clones contained linear plasmids identical in structure to their linear parents in S. cerevisiae. No structural re-arrangements or integration into S. pombe was observed. Little or no telomere growth had occurred after transfer from S. cerevisiae to S. pombe. A model is proposed to explain the genesis of the plasmids. Images Fig. 1. Fig. 2. Fig. 4. PMID:3896773
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of main importance to speed up the computational procedures. Such a reduction can be performed for the part of the structure which behaves linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
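A minimal sketch of the direct-integration part of this approach: the constant-average-acceleration Newmark scheme applied to a reduced linear system M a + C v + K u = f(t). The 2-DOF matrices and load are illustrative placeholders, not the Craig-Bampton model of the telescope and its solar arrays.

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration of M a + C v + K u = f(t)."""
    u, v = np.array(u0, float), np.array(v0, float)
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [u.copy()]
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (f(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)

# Illustrative 2-DOF reduced model with a harmonic load on the second DOF.
M = np.diag([1.0, 1.0])
C = 0.02 * np.diag([1.0, 1.0])
K = np.array([[4.0, -2.0], [-2.0, 2.0]])
force = lambda t: np.array([0.0, np.sin(2.0 * t)])

resp = newmark(M, C, K, force, u0=[0.0, 0.0], v0=[0.0, 0.0], dt=0.01, n_steps=2000)
print("peak displacement of DOF 2:", resp[:, 1].max())
```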
Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.
Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A
2010-01-10
As the bandwidth and linearity of frequency modulated continuous wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed as to their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.
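For reference, the standard linear-FMCW relations that underlie this accuracy discussion (a textbook sketch, not the paper's full dispersion-corrected model): a chirp of optical bandwidth B gives range resolution ΔR, and a target at range R produces a beat frequency f_b set by the chirp rate κ.

```latex
\Delta R = \frac{c}{2B}, \qquad
f_b = \frac{2R\kappa}{c}, \quad \kappa = \frac{B}{T_{\mathrm{chirp}}},
\qquad \Rightarrow \qquad R = \frac{c\, f_b}{2\kappa}.
```

From the last relation, a fractional error in the chirp rate κ produces, to first order, a comparable fractional error in the inferred range, which is why chirp linearization is central to the accuracy budget.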
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1976-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.
Structural Analysis Using NX Nastran 9.0
NASA Technical Reports Server (NTRS)
Rolewicz, Benjamin M.
2014-01-01
NX Nastran is a powerful Finite Element Analysis (FEA) software package used to solve linear and non-linear models for structural and thermal systems. The software, which consists of both a solver and a user interface, breaks analysis down into four files, each of which is important to the end results of the analysis. The software offers capabilities for a variety of types of analysis and also contains a respectable modeling program. Over the course of ten weeks, I was trained to apply NX Nastran effectively to structural analysis and refinement for parts of two missions at NASA's Kennedy Space Center, the Restore mission and the Orion mission.
Exploration for fractured petroleum reservoirs using radar/Landsat merge combinations
NASA Technical Reports Server (NTRS)
Macdonald, H.; Waite, W.; Borengasser, M.; Tolman, D.; Elachi, C.
1981-01-01
Since fractures are commonly propagated upward and reflected at the earth's surface as subtle linears, detection of these surface features is extremely important in many phases of petroleum exploration and development. To document the usefulness of microwave analysis for petroleum exploration, the Arkansas part of the Arkoma basin is selected as a prime test site. The research plan involves comparing the aircraft microwave imagery and Landsat imagery in an area where significant subsurface borehole geophysical data are available. In the northern Arkoma basin, a positive correlation between the number of linears in a given area and production from cherty carbonate strata is found. In the southern part of the basin, little relationship is discernible between surface structure and gas production, and no correlation is found between gas productivity and linear proximity or linear density as determined from remote sensor data.
NASA Technical Reports Server (NTRS)
Fertis, D. G.; Simon, A. L.
1981-01-01
The requisite methodology to solve linear and nonlinear problems associated with the static and dynamic analysis of rotating machinery, their static and dynamic behavior, and the interaction between the rotating and nonrotating parts of an engine is developed. Linear and nonlinear structural engine problems are investigated by developing solution strategies and interactive computational methods whereby the man and computer can communicate directly in making analysis decisions. Representative examples include modifying structural models, changing material parameters, selecting analysis options, and coupling with interactive graphical display for pre- and postprocessing capability.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.; Coleman, R. G.
1976-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This user's manual contains a description of the system, an explanation of its usage, the input definition, and example output.
Mechanisms for the elevation structure of a giant telescope
NASA Astrophysics Data System (ADS)
Hu, Shouwei; Song, Xiaoli; Zhang, Hui
2018-06-01
This paper describes an innovative mechanism based on hydrostatic pads and linear motors for the elevation structure of next-generation extremely large telescopes. Both hydrostatic pads and linear motors are integrated on the frame that includes a kinematical joint, such that the upper part is properly positioned with respect to the elevation runner tracks, while the lower part is connected to the azimuth structure. Potential deflections of the elevation runner bearings at the radial pad locations are absorbed by this flexible kinematic connection and not transmitted to the linear motors and hydrostatic pads. Extensive simulations using finite-element analysis are carried out to verify that the auxiliary whiffletree hydraulic design of the mechanism is sufficient to satisfy the assigned optical length variation errors.
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
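A minimal sketch of the modelling step mentioned above, multiple linear regression (MLR) of an extraction response on experimental variables, using scikit-learn on synthetic data; the variable names and coefficients are placeholders, not the review's datasets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical predictors: extractant concentration (M), pH, shaking time (h).
X = rng.uniform([0.1, 3.0, 0.5], [1.0, 8.0, 24.0], size=(60, 3))
# Hypothetical response: extracted element fraction, with experimental noise.
y = 0.4 * X[:, 0] - 0.03 * X[:, 1] + 0.01 * X[:, 2] + 0.05 * rng.standard_normal(60)

mlr = LinearRegression().fit(X, y)
print("coefficients:", mlr.coef_, "intercept:", mlr.intercept_)
print("R^2 on training data:", mlr.score(X, y))
```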
Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis
NASA Technical Reports Server (NTRS)
Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.
2004-01-01
This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.
LINEAR LATTICE AND TRAJECTORY RECONSTRUCTION AND CORRECTION AT FAST LINEAR ACCELERATOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, A.; Edstrom, D.; Halavanau, A.
2017-07-16
The low energy part of the FAST linear accelerator based on 1.3 GHz superconducting RF cavities was successfully commissioned [1]. During commissioning, beam-based, model-dependent methods were used to correct the linear lattice and trajectory. The lattice correction algorithm is based on analysis of beam shapes from profile monitors and of trajectory responses to dipole correctors. Trajectory responses to field gradient variations in quadrupoles and phase variations in superconducting RF cavities were used to correct bunch offsets in quadrupoles and accelerating cavities relative to their magnetic axes. Details of the methods used and experimental results are presented.
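A minimal sketch of the trajectory-correction idea described above: measured position-monitor offsets x are related to dipole-corrector kicks θ through a response matrix R (x ≈ R θ), and correcting kicks are obtained from a least-squares pseudo-inverse. The matrix and readings below are synthetic placeholders, not FAST data.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bpms, n_correctors = 12, 6
R = rng.normal(size=(n_bpms, n_correctors))        # hypothetical response matrix (mm/mrad)
x_measured = rng.normal(scale=0.5, size=n_bpms)    # hypothetical BPM readings (mm)

# Least-squares corrector settings that cancel the measured trajectory.
theta, *_ = np.linalg.lstsq(R, -x_measured, rcond=None)

x_residual = x_measured + R @ theta
print("rms trajectory before:", np.sqrt(np.mean(x_measured ** 2)))
print("rms trajectory after :", np.sqrt(np.mean(x_residual ** 2)))
```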
Photon Limited Images and Their Restoration
1976-03-01
arises from noise inherent in the detected image data. In the first part of this report a model is developed which can be used to mathematically and...statistically describe an image detected at low light levels. This model serves to clarify some basic properties of photon noise, and provides a basis...for the analysis of image restoration. In the second part the problem of linear least-square restoration of imagery limited by photon noise is
NASA Technical Reports Server (NTRS)
Balbus, Steven A.; Hawley, John F.
1991-01-01
A broad class of astronomical accretion disks is presently shown to be dynamically unstable to axisymmetric disturbances in the presence of a weak magnetic field, an insight with consequently broad applicability to gaseous, differentially-rotating systems. In the first part of this work, a linear analysis is presented of the instability, which is local and extremely powerful; the maximum growth rate, which is of the order of the angular rotation velocity, is independent of the strength of the magnetic field. Fluid motions associated with the instability directly generate both poloidal and toroidal field components. In the second part of this investigation, the scaling relation between the instability's wavenumber and the Alfven velocity is demonstrated, and the independence of the maximum growth rate from magnetic field strength is confirmed.
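For context, the local axisymmetric instability criterion and maximum growth rate that this kind of linear analysis yields, in their commonly quoted textbook form (the paper should be consulted for the full dispersion relation):

```latex
\text{instability:}\quad \frac{d\Omega^{2}}{d\ln R} < 0,
\qquad
\omega_{\max} = \tfrac{1}{2}\left|\frac{d\Omega}{d\ln R}\right|
= \tfrac{3}{4}\,\Omega \ \ \text{for a Keplerian disk},
```

with the fastest-growing wavenumber satisfying k v_A of order Ω, consistent with the wavenumber-Alfvén-velocity scaling noted in the abstract.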
Yadav, Manuj; Cabrera, Densil; Kenny, Dianna T
2015-09-01
Messa di voce (MDV) is a singing exercise that involves sustaining a single pitch with a linear change in loudness from silence to maximum intensity (the crescendo part) and back to silence again (the decrescendo part), with time symmetry between the two parts. Previous studies have used the sound pressure level (SPL, in decibels) of a singer's voice to measure loudness and so assess the linearity of each part, an approach that has limitations because loudness and SPL are not linearly related. This article studies the loudness envelope shapes of MDVs, comparing the SPL approach with approaches that are more closely related to human loudness perception. The MDVs were performed by a cohort of tertiary singing students, recorded six times (once per semester) over a period of 3 years. The loudness envelopes were derived for a typical audience listening position, and for listening to one's own singing, using three models: SPL, a Stevens' power law-based model, and a computational loudness model. The effects on the envelope shape due to room acoustics (an important effect) and vibrato (minimal effect) were also considered. The results showed that the SPL model yielded a lower proportion of linear crescendi and decrescendi compared with the other models. The Stevens' power law-based model provided results similar to the more complicated computational loudness model. Longitudinally, there was no consistent trend in the shape of the MDV loudness envelope for the cohort, although some individual singers exhibited improvements in linearity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
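A minimal sketch contrasting the SPL-based and Stevens' power-law-based views of a crescendo: a ramp that is linear in sound pressure level is not linear in power-law loudness (proportional to intensity raised to roughly 0.3). The exponent and the level range below are standard textbook values, not those used in the study.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)            # normalized time over the crescendo
spl = 50.0 + 40.0 * t                     # hypothetical SPL ramp, 50 -> 90 dB (linear in dB)

intensity = 10.0 ** (spl / 10.0)          # relative sound intensity
loudness = intensity ** 0.3               # Stevens' power law (exponent ~0.3 for loudness)
loudness /= loudness.max()                # normalize for shape comparison

def nonlinearity(env):
    """Maximum deviation of an envelope from a straight line between its endpoints."""
    line = np.linspace(env[0], env[-1], len(env))
    return np.max(np.abs(env - line)) / (env[-1] - env[0])

print("deviation from linearity, SPL envelope     :", nonlinearity(spl))
print("deviation from linearity, loudness envelope:", nonlinearity(loudness))
```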
Mathematics and statistics research progress report, period ending June 30, 1983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beauchamp, J. J.; Denson, M. V.; Heath, M. T.
1983-08-01
This report is the twenty-sixth in the series of progress reports of Mathematics and Statistics Research of the Computer Sciences organization, Union Carbide Corporation Nuclear Division. Part A records research progress in analysis of large data sets, applied analysis, biometrics research, computational statistics, materials science applications, numerical linear algebra, and risk analysis. Collaboration and consulting with others throughout the Oak Ridge Department of Energy complex are recorded in Part B. Included are sections on biological sciences, energy, engineering, environmental sciences, health and safety, and safeguards. Part C summarizes the various educational activities in which the staff was engaged. Part D lists the presentations of research results, and Part E records the staff's other professional activities during the report period.
Factor Scores, Structure and Communality Coefficients: A Primer
ERIC Educational Resources Information Center
Odum, Mary
2011-01-01
(Purpose) The purpose of this paper is to present an easy-to-understand primer on three important concepts of factor analysis: Factor scores, structure coefficients, and communality coefficients. Given that statistical analyses are a part of a global general linear model (GLM), and utilize weights as an integral part of analyses (Thompson, 2006;…
3D Mueller-matrix mapping of biological optically anisotropic networks
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Ushenko, V. O.; Bodnar, G. B.; Zhytaryuk, V. G.; Prydiy, O. G.; Koval, G.; Lukashevich, I.; Vanchuliak, O.
2018-01-01
The paper consists of two parts. The first part presents the theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments that characterize the distributions of the amplitudes of the wavelet coefficients of the MMI at different scales of scanning are determined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease. Objective criteria for differentiating the cause of death are defined.
NASA Astrophysics Data System (ADS)
Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Ushenko, V. O.; Besaha, R. N.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
The paper consists of two parts. The first part presents the theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments that characterize the distributions of the amplitudes of the wavelet coefficients of the MMI at different scales of scanning are determined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease. Objective criteria for differentiating the cause of death are defined.
NASA Astrophysics Data System (ADS)
Bona, J. L.; Chen, M.; Saut, J.-C.
2004-05-01
In part I of this work (Bona J L, Chen M and Saut J-C 2002 Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media I: Derivation and the linear theory J. Nonlinear Sci. 12 283-318), a four-parameter family of Boussinesq systems was derived to describe the propagation of surface water waves. Similar systems are expected to arise in other physical settings where the dominant aspects of propagation are a balance between the nonlinear effects of convection and the linear effects of frequency dispersion. In addition to deriving these systems, we determined in part I exactly which of them are linearly well posed in various natural function classes. It was argued that linear well-posedness is a natural necessary requirement for the possible physical relevance of the model in question. In this paper, it is shown that the first-order correct models that are linearly well posed are in fact locally nonlinearly well posed. Moreover, in certain specific cases, global well-posedness is established for physically relevant initial data. In part I, higher-order correct models were also derived. A preliminary analysis of a promising subclass of these models shows them to be well posed.
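For orientation, the four-parameter (abcd) family of Boussinesq systems referred to above is commonly written in the form below, where η is the surface elevation and u the horizontal velocity at a chosen depth; this is a sketch quoted from the general literature on these systems, with the usual constraint on the parameters.

```latex
\begin{aligned}
\eta_t + u_x + (u\eta)_x + a\,u_{xxx} - b\,\eta_{xxt} &= 0,\\
u_t + \eta_x + u\,u_x + c\,\eta_{xxx} - d\,u_{xxt} &= 0,
\end{aligned}
\qquad a + b + c + d = \tfrac{1}{3}.
```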
Acquah, Gifty E.; Via, Brian K.; Billor, Nedret; Fasina, Oladiran O.; Eckhardt, Lori G.
2016-01-01
As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource will play a vital role. The feedstock can however be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system will be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, were classified into three plant part components: clean wood, wood and bark and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was however needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that, the statistically different amount of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability of forest biomass so that the appropriate online adjustments to parameters can be made in time to ensure process optimization and product quality. PMID:27618901
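A minimal sketch of the classification pipeline described above: spectra reduced by principal component analysis, then linear discriminant analysis with five-fold cross-validation, using scikit-learn on synthetic spectra. The array shapes, number of components, and class labels are placeholders for the NIR/FTIR data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-in: 90 spectra x 200 wavenumbers, three plant-part classes.
n_per_class, n_features = 30, 200
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_class, n_features))
               for shift in (0.0, 0.5, 1.0)])
y = np.repeat(["clean wood", "wood and bark", "slash"], n_per_class)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print("five-fold cross-validated accuracy:", scores.mean())
```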
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
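A minimal sketch of the procedure described above: fit y = A exp(-k t) to decay-type data by (i) obtaining nominal estimates from a linear fit to ln y, then (ii) iterating a Taylor-series (Gauss-Newton) correction until the parameter update satisfies a predetermined criterion. The data and tolerance are illustrative.

```python
import numpy as np

# Decay-type data (illustrative).
t = np.linspace(0.0, 5.0, 25)
rng = np.random.default_rng(4)
y = 3.0 * np.exp(-0.8 * t) + 0.02 * rng.standard_normal(t.size)

# Step 1: linear curve fit of ln(y) against t gives nominal estimates of A and k.
mask = y > 0
slope, intercept = np.polyfit(t[mask], np.log(y[mask]), 1)
A, k = np.exp(intercept), -slope

# Step 2: Gauss-Newton iterations (Taylor-series linearization of the model).
for _ in range(20):
    model = A * np.exp(-k * t)
    residual = y - model
    J = np.column_stack([np.exp(-k * t),            # d(model)/dA
                         -A * t * np.exp(-k * t)])  # d(model)/dk
    delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
    A, k = A + delta[0], k + delta[1]
    if np.linalg.norm(delta) < 1e-10:               # predetermined criterion
        break

print(f"fitted model: y = {A:.4f} * exp(-{k:.4f} t)")
```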
Linear regression analysis: part 14 of a series on evaluation of scientific publications.
Schneider, Astrid; Hommel, Gerhard; Blettner, Maria
2010-11-01
Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
On the numerical treatment of nonlinear source terms in reaction-convection equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustions and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers much of the nonlinear phenomena which linearized analysis is not capable of predicting in a model reaction-convection equation.
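The model problem referred to above can be written, in a commonly used scalar form, as a convection equation with a (possibly stiff) nonlinear source term; the stability of a steady-state numerical solution then depends on how S(u) is treated (explicitly, implicitly, or in linearized point-implicit form). A sketch of the generic form, not necessarily the paper's exact equation:

```latex
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = S(u),
\qquad
\text{linearized about a steady state } \bar{u}:\quad
\frac{\partial u'}{\partial t} + a\,\frac{\partial u'}{\partial x}
   = \left.\frac{dS}{du}\right|_{\bar{u}} u'.
```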
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
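For the two analysis types mentioned above, the standard semi-analytical sensitivity expressions follow from differentiating the governing equations with respect to a design parameter p; these are textbook forms reusing the already-factored stiffness matrix, not necessarily the report's exact formulation (φ is taken mass-normalized, φᵀMφ = 1).

```latex
K\,u = f \;\Longrightarrow\;
K\,\frac{\partial u}{\partial p}
   = \frac{\partial f}{\partial p} - \frac{\partial K}{\partial p}\,u,
\qquad
(K - \lambda M)\,\phi = 0 \;\Longrightarrow\;
\frac{\partial \lambda}{\partial p}
   = \phi^{T}\!\left(\frac{\partial K}{\partial p} - \lambda\,\frac{\partial M}{\partial p}\right)\phi.
```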
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2003-01-01
During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing in large part to NASA program support such as the National Aerospace Plane (NASP), High-Speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on issues such as receptivity and the linear and nonlinear evolution of instability waves have broadened our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or the linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rise. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, nonlinear breakdown simulations, and control of stationary crossflow instability in supersonic swept-wing boundary layers.
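For orientation, the N-factor correlation used at the low-fidelity level amounts to integrating the local spatial growth rate of the most amplified instability wave along the surface and correlating transition with a threshold value; the threshold near 9-10 quoted below is a commonly used figure for quiet conditions, not a LASTRAC-specific value.

```latex
N(x) = \max_{\omega,\beta} \int_{x_0}^{x} -\alpha_i(\xi;\omega,\beta)\, d\xi,
\qquad \text{transition when } N \gtrsim N_{\mathrm{tr}} \approx 9\text{--}10 .
```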
Nonlinear versus Ordinary Adaptive Control of Continuous Stirred-Tank Reactor
Dostal, Petr
2015-01-01
Most systems in industry exhibit nonlinear behavior, and controlling such processes with conventional fixed-parameter approaches can lead to suboptimal or unstable control results. Adaptive control is one way to cope with the nonlinearity of the system. This contribution compares classic adaptive control and its modification with a Wiener system. This configuration divides the nonlinear controller into a dynamic linear part and a static nonlinear part. The dynamic linear part is constructed using polynomial synthesis together with the pole-placement method and spectral factorization. The static nonlinear part uses a static analysis of the controlled plant to introduce a mathematical nonlinear description of the relation between the controlled output and the change of the control input. The proposed controller is tested by simulations on a mathematical model of a continuous stirred-tank reactor with cooling in the jacket, a typical nonlinear system. PMID:26346878
An analysis of hypercritical states in elastic and inelastic systems
NASA Astrophysics Data System (ADS)
Kowalczk, Maciej
The author raises a wide range of problems whose common characteristic is the analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author then discusses the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The theoretical solutions are supplemented with numerical solutions of non-linear problems for rod systems and a problem of the plastic disintegration of a notched rectangular plastic plate.
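A minimal sketch of the continuation idea analyzed in the second part: pseudo-arclength continuation of a scalar equilibrium equation F(u, λ) = 0 past a fold, where natural (load-controlled) continuation would fail. The test function F, starting point, and step size are illustrative; the article's rank-based control-parameter selection is not reproduced here.

```python
import numpy as np

def F(u, lam):                       # illustrative equilibrium equation with folds
    return u**3 - u + lam

def Fu(u, lam):
    return 3.0 * u**2 - 1.0

def Flam(u, lam):
    return 1.0

# Starting point on the branch (F = 0 there) and initial direction (du, dlam).
u, lam = -1.5, -(-1.5)**3 + (-1.5)
du, dlam = 1.0, 0.0
ds = 0.05                            # arclength step

for step in range(120):
    # Branch tangent: orthogonal to (Fu, Flam), normalized, consistently oriented.
    t = np.array([-Flam(u, lam), Fu(u, lam)])
    t /= np.linalg.norm(t)
    if t @ np.array([du, dlam]) < 0:
        t = -t
    du, dlam = t

    # Predictor along the tangent, then Newton corrector on the augmented
    # system {F = 0, arclength constraint = 0}.
    u1, lam1 = u + ds * du, lam + ds * dlam
    for _ in range(20):
        G = np.array([F(u1, lam1),
                      (u1 - u) * du + (lam1 - lam) * dlam - ds])
        J = np.array([[Fu(u1, lam1), Flam(u1, lam1)],
                      [du,           dlam]])
        d = np.linalg.solve(J, -G)
        u1, lam1 = u1 + d[0], lam1 + d[1]
        if np.linalg.norm(d) < 1e-12:
            break
    u, lam = u1, lam1

print("final point on branch:", u, lam, "residual:", F(u, lam))
```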
Analysis of separation test for automatic brake adjuster based on linear radon transformation
NASA Astrophysics Data System (ADS)
Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi
2015-01-01
The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability to reject noise and interference because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient-maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Addressing the question of effective models to measure change and the change process, the author suggests that linear structural equation systems may be viewed as steady state outcomes of continuous-change models and have rich sociological grounding. Two interpretations of the…
Simulation of Thermal Signature of Tires and Tracks
2012-08-01
the body-ply is a linear elastic material. To facilitate the analysis, the tire was divided into Tread and Sidewall by the dash line as shown in...only one element is assigned through the thickness of the tire. Therefore, the thickness of the element is the same as the thickness of the tire...to the whole part of the 3D full tire in the thermal analysis. The average strain energy density for each part (tread or sidewall) in the cross
Fundamentals of digital filtering with applications in geophysical prospecting for oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesko, A.
This book is a comprehensive work bringing together the important mathematical foundations and computing techniques for numerical filtering methods. The first two parts of the book introduce the techniques, fundamental theory and applications, while the third part treats specific applications in geophysical prospecting. Discussion is limited to linear filters, but takes in related fields such as correlational and spectral analysis.
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
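A minimal sketch of three of the drift estimators listed above, applied to simulated daily frequency data containing a linear drift plus deliberately non-white (random-walk) noise. With such correlated residuals the estimators remain unbiased but their scatter differs, which is the point made about over-optimistic confidence intervals. The noise levels and drift value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_days, true_drift = 200, 1e-13          # fractional frequency drift per day

def simulate_frequency():
    """Daily fractional frequency: linear drift plus random-walk FM noise."""
    t = np.arange(n_days)
    rw = np.cumsum(1e-13 * rng.standard_normal(n_days))
    return t, true_drift * t + rw

def estimates(t, y):
    d_linear = np.polyfit(t, y, 1)[0]            # (a) regress frequency on a linear
    d_meandiff = np.mean(np.diff(y))             # (b) mean of first difference of frequency
    phase = np.cumsum(y)                         # (c) regress phase on a quadratic
    d_quad = 2.0 * np.polyfit(t, phase, 2)[0]
    return d_linear, d_meandiff, d_quad

results = np.array([estimates(*simulate_frequency()) for _ in range(500)])
for name, col in zip(["freq-linear", "mean first diff", "phase-quadratic"], results.T):
    print(f"{name:16s} mean = {col.mean():.2e}  scatter = {col.std():.2e}")
```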
Singularity perturbed zero dynamics of nonlinear systems
NASA Technical Reports Server (NTRS)
Isidori, A.; Sastry, S. S.; Kokotovic, P. V.; Byrnes, C. I.
1992-01-01
Stability properties of zero dynamics are among the crucial input-output properties of both linear and nonlinear systems. Unstable, or 'nonminimum phase', zero dynamics are a major obstacle to input-output linearization and high-gain designs. An analysis of the effects of regular perturbations in system equations on zero dynamics shows that whenever a perturbation decreases the system's relative degree, it manifests itself as a singular perturbation of zero dynamics. Conditions are given under which the zero dynamics evolve in two timescales characteristic of a standard singular perturbation form that allows a separate analysis of slow and fast parts of the zero dynamics.
Graph-based normalization and whitening for non-linear data analysis.
Aaron, Catherine
2006-01-01
In this paper we construct a graph-based normalization algorithm for non-linear data analysis. The principle of this algorithm is to get a spherical average neighborhood with unit radius. First we present a class of global dispersion measures used for "global normalization"; we then adapt these measures using a weighted graph to build a local normalization called "graph-based" normalization. Then we give details of the graph-based normalization algorithm and illustrate some results. In the second part we present a graph-based whitening algorithm built by analogy between the "global" and the "local" problem.
NASA Technical Reports Server (NTRS)
Hoppin, R. A. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Analysis of SL-3, S-190A, and S-190B color frames indicates two sets of linears obliquely cutting across the east-west trending Owl Creek-Bridger uplifts. A northwest set of faults and folds has been mapped previously but the imagery indicates some changes and addition of detail can be made. A less pronounced east-northeast set of linear alignments (drainage segments, lithologic contacts, possible faults) extends into the southeast part of the Big Horn Basin.
Analysis of the faster-than-Nyquist optimal linear multicarrier system
NASA Astrophysics Data System (ADS)
Marquet, Alexandre; Siclet, Cyrille; Roque, Damien
2017-02-01
Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and direct feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.
Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P
2018-05-10
This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole of site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions.Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and transformations. The days since admission factor will be used to examine non-linear growth trajectories to determine if they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals. The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
On the dynamics of viscous masonry beams
NASA Astrophysics Data System (ADS)
Lucchesi, M.; Pintucchi, B.; Šilhavý, M.; Zani, N.
2015-05-01
In this paper, we consider the longitudinal and transversal vibrations of masonry beams and arches. The basic motivation is the seismic vulnerability analysis of masonry structures that can be modeled as one-dimensional elements. The Euler-Bernoulli hypothesis is employed for the system of forces in the beam. The axial force and the bending moment are assumed to consist of elastic and viscous parts. The elastic part is described by the no-tension material, i.e., a material with no resistance to tension, which accounts for the cases of limitless as well as bounded compressive strength. The adaptation of this material to beams has been developed in Orlandi (Analisi non lineare di strutture ad arco in muratura. Thesis, 1999) and Zani (Eur J Mech A/Solids 23:467-484, 2004). The viscous part amounts to Kelvin-Voigt damping depending linearly on the time derivatives of the linearized strain and curvature. The dynamical equations are formulated, and a mathematical analysis of them is presented. Specifically, following Gajewski et al. (Nichtlineare Operatorgleichungen und Operatordifferentialgleichungen. Akademie-Verlag, Berlin, 1974), the theorems of existence, uniqueness and regularity of the solution of the dynamical equations are recapitulated and specialized for our purposes, to support the numerical analysis applied previously in Lucchesi and Pintucchi (Eur J Mech A/Solids 26:88-105, 2007). As usual, the Galerkin method has been used for this purpose. As an illustration, two numerical examples (a slender masonry tower and a masonry arch) are presented in this paper, with the applied forces corresponding to the acceleration in the earthquake in Emilia Romagna on May 29, 2012.
Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.
2013-12-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
Tuuli, Methodius G; Odibo, Anthony O
2011-08-01
The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bower, G.
We summarize the current status and future developments of the North American Group's Java-based system for studying physics and detector design issues at a linear collider. The system is built around Java Analysis Studio (JAS), an experiment-independent Java-based utility for data analysis. Although the system is an integrated package running in JAS, many parts of it are also standalone Java utilities.
Design and analysis of a field modulated magnetic screw for artificial heart
NASA Astrophysics Data System (ADS)
Ling, Zhijian; Ji, Jinghua; Wang, Fangqun; Bian, Fangfang
2017-05-01
This paper proposes a new electromechanical energy conversion system, called the Field Modulated Magnetic Screw (FMMS), as a high-force-density linear actuator for an artificial heart. The device is based on the concepts of the magnetic screw and the linear magnetic gear. The proposed FMMS consists of three parts: the outer and inner parts carry radially magnetized helical permanent magnets (PMs), and the intermediate part has a set of helical ferromagnetic pole pieces, which modulate the magnetic fields produced by the PMs. The configuration of the newly designed FMMS is presented and its electromagnetic performance is analyzed using finite-element analysis, verifying the advantages of the proposed structure.
Aerodynamic preliminary analysis system. Part 1: Theory. [linearized potential theory
NASA Technical Reports Server (NTRS)
Bonner, E.; Clever, W.; Dunn, K.
1978-01-01
A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three dimensional configurations with or without jet flaps having multiple non-planar surfaces of arbitrary planform and open or closed slender bodies of non-circular contour may be analyzed. Longitudinal and lateral-directional static and rotary derivative solutions may be generated. The analysis was implemented on a time sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. Nominal case computation time of 45 CPU seconds on the CDC 175 for a 200 panel simulation indicates the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filipkowski, M.E.; Budnick, J.I.
1991-11-15
We describe a quantitative analysis of the low-temperature (T < 300 K) susceptibility χ(T) of La2-xSrxCuO4+y for dopant concentrations in the vicinity of the superconducting phase boundary (SPB) at x = 0.055. This analysis is based on a phenomenological model for the temperature dependence consisting of a Curie-like 1/T term plus a term linear in T. We find that the former exhibits nontrivial doping dependence at the SPB, while the T-linear part accepts decomposition into a Pauli contribution and a portion which can be understood using spin-wave theory.
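A minimal curve-fitting sketch of one reading of this phenomenological form, χ(T) = C/T + χ0 + bT, is shown below with synthetic data; the exact functional form, parameter names and values are illustrative assumptions, not the published analysis.

```python
# Illustrative fit of a Curie-like term plus a T-linear part to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def chi_model(T, C, chi0, b):
    return C / T + chi0 + b * T

T = np.linspace(5.0, 300.0, 60)                  # temperature grid (K), hypothetical
chi = chi_model(T, 2.0e-3, 1.0e-4, 3.0e-7)       # synthetic "data"
chi += 1.0e-6 * np.random.default_rng(0).normal(size=T.size)

(C_fit, chi0_fit, b_fit), _ = curve_fit(chi_model, T, chi, p0=[1e-3, 1e-4, 1e-7])
print(f"Curie term C = {C_fit:.3e}, constant part = {chi0_fit:.3e}, slope = {b_fit:.3e}")
```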
Diffusion of passive particles in active suspensions
NASA Astrophysics Data System (ADS)
Mussler, Matthias; Rafai, Salima; John, Thomas; Peyla, Philippe; Wagner, Christian
2013-11-01
We study how an active suspension consisting of a definite volume fraction of the microswimmer Chlamydomonas reinhardtii modifies the Brownian motion of small to medium size microspheres. We present measurements and simulations of trajectories of microspheres with a diameter of 20 μm in suspensions of Chlamydomonas reinhardtii, a so-called "puller," and show that the mean squared displacement of such trajectories consists of a parabolic and a linear part. The linear part is due to the hydrodynamic noise of the microswimmers, while the parabolic part is a consequence of directed motion events that occur randomly, when a microsphere is transported by a microswimmer on a timescale that is an order of magnitude larger than that of the Brownian-like hydrodynamic interaction. In addition, we theoretically describe this effect with a dimensional analysis that takes into account the force dipole model used to describe "puller"-like Chlamydomonas reinhardtii.
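A small sketch of how the diffusive (linear) and directed (parabolic) contributions can be separated from a measured mean squared displacement is given below, assuming the common 2D form MSD(t) = 4Dt + (vt)^2; the data arrays are synthetic placeholders.

```python
# Separate linear (diffusive) and parabolic (directed) contributions to an MSD curve.
import numpy as np
from scipy.optimize import curve_fit

def msd_model(t, D, v):
    return 4.0 * D * t + (v * t) ** 2

t = np.linspace(0.1, 10.0, 100)                        # lag time (s)
msd = msd_model(t, 0.05, 0.3)                          # synthetic MSD (um^2)
msd *= 1.0 + 0.05 * np.random.default_rng(1).normal(size=t.size)

(D_fit, v_fit), _ = curve_fit(msd_model, t, msd, p0=[0.01, 0.1])
print(f"effective diffusivity D = {D_fit:.3f} um^2/s, drift speed v = {v_fit:.3f} um/s")
```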
Mathematics and Statistics Research Department progress report, period ending June 30, 1982
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denson, M.V.; Funderlic, R.E.; Gosslee, D.G.
1982-08-01
This report is the twenty-fifth in the series of progress reports of the Mathematics and Statistics Research Department of the Computer Sciences Division, Union Carbide Corporation Nuclear Division (UCC-ND). Part A records research progress in analysis of large data sets, biometrics research, computational statistics, materials science applications, moving boundary problems, numerical linear algebra, and risk analysis. Collaboration and consulting with others throughout the UCC-ND complex are recorded in Part B. Included are sections on biology, chemistry, energy, engineering, environmental sciences, health and safety, materials science, safeguards, surveys, and the waste storage program. Part C summarizes the various educational activities in which the staff was engaged. Part D lists the presentations of research results, and Part E records the staff's other professional activities during the report period.
Mathematics and statistics research department. Progress report, period ending June 30, 1981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lever, W.E.; Kane, V.E.; Scott, D.S.
1981-09-01
This report is the twenty-fourth in the series of progress reports of the Mathematics and Statistics Research Department of the Computer Sciences Division, Union Carbide Corporation - Nuclear Division (UCC-ND). Part A records research progress in biometrics research, materials science applications, model evaluation, moving boundary problems, multivariate analysis, numerical linear algebra, risk analysis, and complementary areas. Collaboration and consulting with others throughout the UCC-ND complex are recorded in Part B. Included are sections on biology and health sciences, chemistry, energy, engineering, environmental sciences, health and safety research, materials sciences, safeguards, surveys, and uranium resource evaluation. Part C summarizes the various educational activities in which the staff was engaged. Part D lists the presentations of research results, and Part E records the staff's other professional activities during the report period.
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, based on images from video sequences, dedicated to the identification of persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007); it combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes part, 98.16% for the nose part and 97.25% for the whole face).
NASA Astrophysics Data System (ADS)
Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu
2017-12-01
In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part. (ii) The deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and sensitivity analysis of various parameters as illustrations of the theoretical results.
Modulation linearization of a frequency-modulated voltage controlled oscillator, part 3
NASA Technical Reports Server (NTRS)
Honnell, M. A.
1975-01-01
An analysis is presented for the voltage versus frequency characteristics of a varactor modulated VHF voltage controlled oscillator in which the frequency deviation is linearized by using the nonlinear characteristics of a field effect transistor as a signal amplifier. The equations developed are used to calculate the oscillator output frequency in terms of pertinent circuit parameters. It is shown that the nonlinearity exponent of the FET has a pronounced influence on frequency deviation linearity, whereas the junction exponent of the varactor controls total frequency deviation for a given input signal. A design example for a 250 MHz frequency modulated oscillator is presented.
Applications of statistics to medical science, III. Correlation and regression.
Watanabe, Hiroshi
2012-01-01
In this third part of a series surveying medical statistics, the concepts of correlation and regression are reviewed. In particular, methods of linear regression and logistic regression are discussed. Arguments related to survival analysis will be made in a subsequent paper.
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used for solving the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, applied to energy complexes that include renewable energy installations (solar, wind), diesel generators and energy storage, are considered. The analysis of decomposition algorithms for various schemes of energy complexes was carried out. The results of optimization calculations for energy complexes operated autonomously and as part of a distribution grid are presented.
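A toy example of the kind of linear program involved, using scipy's linprog for a diesel-plus-solar-plus-storage dispatch over a few hours, is sketched below; all variable names, constraints and numbers are illustrative assumptions rather than the paper's formulation.

```python
# Toy dispatch LP: choose diesel output and battery discharge per hour to minimize
# fuel cost while covering demand net of solar. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

hours = 4
demand = np.array([30.0, 45.0, 50.0, 40.0])   # kW
solar = np.array([5.0, 20.0, 25.0, 10.0])     # kW available
fuel_cost = 0.30                              # $/kWh of diesel energy
battery_energy = 40.0                         # kWh available for discharge

# Decision vector x = [diesel_1..4, discharge_1..4]
c = np.concatenate([fuel_cost * np.ones(hours), np.zeros(hours)])

# Balance: diesel_h + discharge_h = demand_h - solar_h
A_eq = np.hstack([np.eye(hours), np.eye(hours)])
b_eq = demand - solar

# Total battery discharge limited by stored energy
A_ub = np.concatenate([np.zeros(hours), np.ones(hours)])[None, :]
b_ub = [battery_energy]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 60)] * hours + [(0, 25)] * hours, method="highs")
print(res.x, res.fun)
```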
NASA Astrophysics Data System (ADS)
Hutterer, Rudi
2018-01-01
The author discusses methods for the fluorometric determination of affinity constants by linear and nonlinear fitting methods. This is outlined in particular for the interaction between cyclodextrins and several anesthetic drugs including benzocaine. Special emphasis is given to the limitations of certain fits, and the impact of such studies on enzyme-substrate interactions is demonstrated. Both the experimental part and the methods of analysis are well suited for students in an advanced lab.
Control System for Prosthetic Devices
NASA Technical Reports Server (NTRS)
Bozeman, Richard J. (Inventor)
1996-01-01
A control system and method for prosthetic devices is provided. The control system comprises a transducer for receiving movement from a body part for generating a sensing signal associated with that movement. The sensing signal is processed by a linearizer for linearizing the sensing signal to be a linear function of the magnitude of the distance moved by the body part. The linearized sensing signal is normalized to be a function of the entire range of body part movement from the no-shrug position of the moveable body part through the full-shrug position of the moveable body part. The normalized signal is divided into a plurality of discrete command signals. The discrete command signals are used by typical converter devices which are in operational association with the prosthetic device. The converter device uses the discrete command signals for driving the moveable portions of the prosthetic device and its sub-prosthesis. The method for controlling a prosthetic device associated with the present invention comprises the steps of receiving the movement from the body part, generating a sensing signal in association with the movement of the body part, linearizing the sensing signal to be a linear function of the magnitude of the distance moved by the body part, normalizing the linear signal to be a function of the entire range of the body part movement, dividing the normalized signal into a plurality of discrete command signals, and implementing the plurality of discrete command signals for driving the respective moveable prosthesis device and its sub-prosthesis.
Control system and method for prosthetic devices
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1992-01-01
A control system and method for prosthetic devices is provided. The control system comprises a transducer for receiving movement from a body part for generating a sensing signal associated with that movement. The sensing signal is processed by a linearizer for linearizing the sensing signal to be a linear function of the magnitude of the distance moved by the body part. The linearized sensing signal is normalized to be a function of the entire range of body part movement from the no-shrug position of the movable body part through the full-shrug position of the movable body part. The normalized signal is divided into a plurality of discrete command signals. The discrete command signals are used by typical converter devices which are in operational association with the prosthetic device. The converter device uses the discrete command signals for driving the movable portions of the prosthetic device and its sub-prosthesis. The method for controlling a prosthetic device associated with the present invention comprises the steps of receiving the movement from the body part, generating a sensing signal in association with the movement of the body part, linearizing the sensing signal to be a linear function of the magnitude of the distance moved by the body part, normalizing the linear signal to be a function of the entire range of the body part movement, dividing the normalized signal into a plurality of discrete command signals, and implementing the plurality of discrete command signals for driving the respective movable prosthesis device and its sub-prosthesis.
He, ZeFang; Zhao, Long
2014-01-01
An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that the quadrotor tends to be unstable. This problem is caused by the narrow definition domain of attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller with PD parameters tuned by Ziegler-Nichols rules and acts on the quadrotor decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization item which converts a nonlinear system into a linear system. It can be seen from the simulation results that the attitude controller proposed in this paper is highly robust, and its control effect is better than the other two nonlinear controllers. The nonlinear parts of the other two nonlinear controllers are the same as the attitude controller proposed in this paper. The linear part involves a PID (proportional-integral-derivative) controller with the PID controller parameters tuned by Ziegler-Nichols rules and a PD controller with the PD controller parameters tuned by GA (genetic algorithms). Moreover, this attitude controller is simple and easy to implement.
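A minimal sketch of the classic Ziegler-Nichols ultimate-cycle PD tuning rule that such a linear part could use is given below; the ultimate gain Ku and period Tu are placeholders that would come from driving the feedback-linearized attitude loop to sustained oscillation, and the numbers are invented.

```python
# Ziegler-Nichols ultimate-cycle PD tuning and a simple PD law for one attitude axis.
def zn_pd_gains(Ku, Tu):
    """Classic Ziegler-Nichols PD rules: Kp = 0.8*Ku, Td = Tu/8, Kd = Kp*Td."""
    Kp = 0.8 * Ku
    Kd = Kp * Tu / 8.0
    return Kp, Kd

def pd_control(error, d_error, Kp, Kd):
    """PD law applied to an attitude-angle error and its rate."""
    return Kp * error + Kd * d_error

Kp, Kd = zn_pd_gains(Ku=4.0, Tu=0.5)        # made-up ultimate gain and period
u = pd_control(error=0.1, d_error=-0.05, Kp=Kp, Kd=Kd)
print(Kp, Kd, u)
```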
NASA Astrophysics Data System (ADS)
Jintao, Xue; Yufei, Liu; Liming, Ye; Chunyan, Li; Quanwei, Yang; Weiying, Wang; Yun, Jing; Minxiang, Zhang; Peng, Li
2018-01-01
Near-Infrared Spectroscopy (NIRS) was first used to develop a method for rapid and simultaneous determination of 5 active alkaloids (berberine, coptisine, palmatine, epiberberine and jatrorrhizine) in 4 parts (rhizome, fibrous root, stem and leaf) of Coptidis Rhizoma. A total of 100 samples from 4 main places of origin were collected and studied. With HPLC analysis values as calibration reference, the quantitative analysis of 5 marker components was performed by two different modeling methods, partial least-squares (PLS) regression as linear regression and artificial neural networks (ANN) as non-linear regression. The results indicated that the 2 types of models established were robust, accurate and repeatable for the five active alkaloids; the ANN models were more suitable for the determination of berberine, coptisine and palmatine, while the PLS model was more suitable for the analysis of epiberberine and jatrorrhizine. The performance of the optimal models was achieved as follows: the correlation coefficient (R) for berberine, coptisine, palmatine, epiberberine and jatrorrhizine was 0.9958, 0.9956, 0.9959, 0.9963 and 0.9923, respectively; the root mean square error of prediction (RMSEP) was 0.5093, 0.0578, 0.0443, 0.0563 and 0.0090, respectively. Furthermore, for the comprehensive exploitation and utilization of the plant resource of Coptidis Rhizoma, the established NIR models were used to analyse the content of the 5 active alkaloids in the 4 parts of Coptidis Rhizoma and from the 4 main places of origin. This work demonstrated that NIRS may be a promising method for routine screening, off-line fast analysis or on-line quality assessment of traditional Chinese medicine (TCM).
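To illustrate the linear calibration step, the following sketch builds a PLS regression between synthetic NIR-like spectra and reference assay values with scikit-learn; array sizes, the number of components and the reported metrics are stand-ins for the study's real data.

```python
# PLS calibration sketch: spectra as predictors, HPLC reference values as response.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))                               # 100 samples x 500 wavelengths (synthetic)
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=100)   # synthetic alkaloid content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(mean_squared_error(y_te, y_hat))
r = np.corrcoef(y_te, y_hat)[0, 1]
print(f"R = {r:.4f}, RMSEP = {rmsep:.4f}")
```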
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
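As a quick illustration of the technique discussed in the article, the sketch below fits a multiple linear regression with an interaction term using statsmodels; the outcome (sbp) and predictors are invented for demonstration and are not taken from the series.

```python
# Multiple linear regression with an interaction term on synthetic clinical-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "age": rng.uniform(20, 80, n),
    "weight": rng.normal(75, 12, n),
    "treatment": rng.integers(0, 2, n),
})
# Synthetic outcome with a treatment-by-age interaction.
df["sbp"] = (100 + 0.4 * df["age"] + 0.2 * df["weight"]
             - 5 * df["treatment"] - 0.1 * df["treatment"] * df["age"]
             + rng.normal(0, 8, n))

model = smf.ols("sbp ~ age + weight + treatment + treatment:age", data=df).fit()
print(model.summary())   # coefficients with exact confidence intervals
```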
Dimension Reduction With Extreme Learning Machine.
Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou
2016-08-01
Data may often contain noise or irrelevant information, which negatively affect the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoder (AE), is to reduce the noise or irrelevant information of the data. The features of PCA (eigenvectors) and linear AE are not able to represent data as parts (e.g. nose in a face image). On the other hand, NMF and non-linear AE are hampered by slow learning speed and RP only represents a subspace of original data. This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set, CIFAR-10 object recognition, and NORB object recognition data set show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
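The following numpy sketch captures the general ELM-AE idea as described above (random orthogonalized input weights, analytically solved output weights, projection of the data onto those weights); it is an interpretation for illustration, not the authors' reference implementation, and all parameter choices are assumptions.

```python
# Rough ELM-AE style dimension reduction: random orthogonal hidden weights,
# ridge-regularized least-squares output weights beta, reduced features X @ beta.T.
import numpy as np

def elm_ae_reduce(X, n_hidden, seed=0, ridge=1e-3, nonlinear=True):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = rng.normal(size=(d, n_hidden))
    A, _ = np.linalg.qr(A)                 # orthogonal input weights (d >= n_hidden assumed)
    b = rng.normal(size=n_hidden)
    H = X @ A + b
    if nonlinear:
        H = 1.0 / (1.0 + np.exp(-H))       # sigmoid hidden activations
    # Solve (H^T H + ridge*I) beta = H^T X for the output weights.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ X)
    return X @ beta.T                      # reduced representation, shape (n, n_hidden)

X = np.random.default_rng(1).normal(size=(300, 64))
Z = elm_ae_reduce(X, n_hidden=10)
print(Z.shape)   # (300, 10)
```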
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented and the existence of the initial non-linear part is shown as a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves or the values of the minima of the quantum requirement curves cannot be used for estimation of the exact value of the maximum quantum efficiency and the minimum quantum requirement. The estimation of the maximum quantum efficiency or the minimum quantum requirement should be performed only after extrapolation of the linear part at higher light intensities of the quantum requirement curves to "0" light intensity.
Unified Framework for Deriving Simultaneous Equation Algorithms for Water Distribution Networks
The known formulations for steady state hydraulics within looped water distribution networks are re-derived in terms of linear and non-linear transformations of the original set of partly linear and partly non-linear equations that express conservation of mass and energy. All of ...
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
NASA Astrophysics Data System (ADS)
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly Singapore tourist arrivals time series. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel performs better than the RBF kernel, and that the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
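A rough sketch of a wavelet-plus-SVR forecaster in this spirit is shown below using PyWavelets and scikit-learn; the series, wavelet choice, lag structure and hyperparameters are invented for illustration and are not the authors' settings.

```python
# Wavelet decomposition of a monthly series, lagged sub-series features, linear SVR.
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(0)
months = np.arange(120)
y = 100 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, months.size)

# Reconstruct each wavelet sub-series (approximation and details) separately.
coeffs = pywt.wavedec(y, "db4", level=2)
subseries = [
    pywt.waverec([c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)], "db4")[: y.size]
    for i in range(len(coeffs))
]

# Lagged wavelet features to predict next month's value.
lags = 3
t_idx = range(lags, y.size - 1)
X = np.array([[s[t - lag] for s in subseries for lag in range(1, lags + 1)] for t in t_idx])
target = np.array([y[t + 1] for t in t_idx])

n_train = 100
svr = SVR(kernel="linear", C=10.0).fit(X[:n_train], target[:n_train])
pred = svr.predict(X[n_train:])
print("test RMSE:", np.sqrt(np.mean((pred - target[n_train:]) ** 2)))
```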
Refining and end use study of coal liquids II - linear programming analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
Control method for prosthetic devices
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1995-01-01
A control system and method for prosthetic devices is provided. The control system comprises a transducer for receiving movement from a body part for generating a sensing signal associated with that movement. The sensing signal is processed by a linearizer for linearizing the sensing signal to be a linear function of the magnitude of the distance moved by the body part. The linearized sensing signal is normalized to be a function of the entire range of body part movement from the no-shrug position of the moveable body part through the full-shrug position of the moveable body part. The normalized signal is divided into a plurality of discrete command signals. The discrete command signals are used by typical converter devices which are in operational association with the prosthetic device. The converter device uses the discrete command signals for driving the moveable portions of the prosthetic device and its sub-prosthesis. The method for controlling a prosthetic device associated with the present invention comprises the steps of receiving the movement from the body part, generating a sensing signal in association with the movement of the body part, linearizing the sensing signal to be a linear function of the magnitude of the distance moved by the body part, normalizing the linear signal to be a function of the entire range of the body part movement, dividing the normalized signal into a plurality of discrete command signals, and implementing the plurality of discrete command signals for driving the respective moveable prosthesis device and its sub-prosthesis.
Online beam energy measurement of Beijing electron positron collider II linear accelerator
NASA Astrophysics Data System (ADS)
Wang, S.; Iqbal, M.; Liu, R.; Chi, Y.
2016-02-01
This paper describes the online beam energy measurement of the Beijing Electron Positron Collider upgraded version II linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides some verification. The method described here measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and is much better than the approach using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as forming part of a beam energy feedback system. The reliability of this method is also discussed and demonstrated in this paper.
NASA Astrophysics Data System (ADS)
Widowati; Putro, S. P.; Silfiana
2018-05-01
Integrated Multi-Trophic Aquaculture (IMTA) is a polyculture in which several biota are maintained together to optimize waste recycling as a food source. The interaction between phytoplankton and nitrogen waste from fish cultivation, including ammonia, nitrite, and nitrate, is studied in the form of a mathematical model. The model is a non-linear system of differential equations in four variables. Analytical methods were used to study the dynamic behavior of this model. Local stability analysis is performed at the equilibrium point: the model is first linearized using a Taylor series expansion, and the Jacobian matrix is then determined. If all eigenvalues have negative real parts, then the equilibrium of the system is locally asymptotically stable. Some numerical simulations were also carried out to verify our analytical results.
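The linearization and eigenvalue check described above can be sketched generically with sympy and numpy as follows; the right-hand sides, symbols and parameter values are placeholders of the same flavour, not the actual IMTA model equations.

```python
# Generic local stability check: Jacobian at an equilibrium, signs of eigenvalue real parts.
import numpy as np
import sympy as sp

P, A, Ni, Na = sp.symbols("P A Ni Na", positive=True)   # phytoplankton and nitrogen pools (assumed meaning)
r, k1, k2, k3, u = sp.symbols("r k1 k2 k3 u", positive=True)

f = sp.Matrix([
    r * P * Na - u * P,        # phytoplankton grows on nitrate, dies at rate u
    k1 - k2 * A,               # ammonia input and oxidation to nitrite
    k2 * A - k3 * Ni,          # nitrite balance
    k3 * Ni - r * P * Na,      # nitrate balance
])
state = sp.Matrix([P, A, Ni, Na])
J = f.jacobian(state)

params = {r: sp.Rational(1, 2), k1: 1, k2: sp.Rational(4, 5),
          k3: sp.Rational(3, 5), u: sp.Rational(1, 5)}
eq = sp.solve(list(f.subs(params)), list(state), dict=True)[0]   # positive equilibrium

eigs = np.linalg.eigvals(np.array(J.subs(params).subs(eq), dtype=float))
print(eigs.real)   # all negative real parts => locally asymptotically stable
```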
Simulation of crash tests for high impact levels of a new bridge safety barrier
NASA Astrophysics Data System (ADS)
Drozda, Jiří; Rotter, Tomáš
2017-09-01
The purpose is to demonstrate the potential of non-linear dynamic impact simulation and to explain the possibility of using the finite element method (FEM) for developing new designs of safety barriers. The main challenge is to determine the means to create and validate the finite element (FE) model. The results of accurate impact simulations can help to reduce the costs necessary for developing a new safety barrier. The introductory part deals with the creation of the FE model, which includes the newly designed safety barrier, and focuses on the application of an experimental modal analysis (EMA). The FE model has been created in ANSYS Workbench and is formed from shell and solid elements. The experimental modal analysis, which was performed on a real pattern, was employed for measuring the modal frequencies and shapes. After performing the EMA, the FE mesh was calibrated by comparing the measured modal frequencies with the calculated ones. The last part describes the process of the numerical non-linear dynamic impact simulation in LS-DYNA. This simulation was validated by comparing the measured ASI index with the calculated one. The aim of the study is to improve knowledge of dynamic non-linear impact simulations among the professional public. This should ideally lead to safer, more accurate and more profitable designs.
SAMPA: A free software tool for skin and membrane permeation data analysis.
Bezrouk, Aleš; Fiala, Zdeněk; Kotingová, Lenka; Krulichová, Iva Selke; Kopečná, Monika; Vávrová, Kateřina
2017-10-01
Skin and membrane permeation experiments comprise an important step in the development of a transdermal or topical formulation or toxicological risk assessment. The standard method for analyzing these data relies on the linear part of a permeation profile. However, it is difficult to objectively determine when the profile becomes linear, or the experiment duration may be insufficient to reach a maximum or steady state. Here, we present a software tool for Skin And Membrane Permeation data Analysis, SAMPA, that is easy to use and overcomes several of these difficulties. The SAMPA method and software have been validated on in vitro and in vivo permeation data on human, pig and rat skin and model stratum corneum lipid membranes using compounds that range from highly lipophilic polycyclic aromatic hydrocarbons to a highly hydrophilic antiviral drug, with and without two permeation enhancers. The SAMPA performance was compared with the standard method using a linear part of the permeation profile and a complex mathematical model. SAMPA is a user-friendly, open-source software tool for analyzing the data obtained from skin and membrane permeation experiments. It runs on a Microsoft Windows platform and is freely available as a Supporting file to this article.
Test and Analysis of a Hyper-X Carbon-Carbon Leading Edge Chine
NASA Technical Reports Server (NTRS)
Smith, Russell W.; Sikora, Joseph G.; Lindell, Michael C.
2005-01-01
During parts production for the X43A Mach 10 hypersonic vehicle, nondestructive evaluation (NDE) of a leading edge chine detected an imbedded delamination near the lower surface of the part. An ultimate proof test was conducted to verify the ultimate strength of this leading edge chine part. The ultimate proof test setup used a pressure bladder design to impose a uniform distributed pressure field over the bi-planar surface of the chine test article. A detailed description of the chine test article and experimental test setup is presented. Analysis results from a linear status model of the test article are also presented and discussed. Post-test inspection of the specimen revealed no visible failures or areas of delamination.
NASA Astrophysics Data System (ADS)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.
Geomorphic domains and linear features on Landsat images, Circle Quadrangle, Alaska
Simpson, S.L.
1984-01-01
A remote sensing study using Landsat images was undertaken as part of the Alaska Mineral Resource Assessment Program (AMRAP). Geomorphic domains A and B, identified on enhanced Landsat images, divide Circle quadrangle south of Tintina fault zone into two regional areas having major differences in surface characteristics. Domain A is a roughly rectangular, northeast-trending area of relatively low relief and simple, widely spaced drainages, except where igneous rocks are exposed. In contrast, domain B, which bounds two sides of domain A, is more intricately dissected showing abrupt changes in slope and relatively high relief. The northwestern part of geomorphic domain A includes a previously mapped tectonostratigraphic terrane. The southeastern boundary of domain A occurs entirely within the adjoining tectonostratigraphic terrane. The sharp geomorphic contrast along the southeastern boundary of domain A and the existence of known faults along this boundary suggest that the southeastern part of domain A may be a subdivision of the adjoining terrane. Detailed field studies would be necessary to determine the characteristics of the subdivision. Domain B appears to be divisible into large areas of different geomorphic terrains by east-northeast-trending curvilinear lines drawn on Landsat images. Segments of two of these lines correlate with parts of boundaries of mapped tectonostratigraphic terranes. On Landsat images prominent north-trending lineaments together with the curvilinear lines form a large-scale regional pattern that is transected by mapped north-northeast-trending high-angle faults. The lineaments indicate possible lithologic variations and/or structural boundaries. A statistical strike-frequency analysis of the linear features data for Circle quadrangle shows that northeast-trending linear features predominate throughout, and that most northwest-trending linear features are found south of Tintina fault zone. A major trend interval of N.64-72E. in the linear feature data, corresponds to the strike of foliations in metamorphic rocks and magnetic anomalies reflecting compositional variations suggesting that most linear features in the southern part of the quadrangle probably are related to lithologic variations brought about by folding and foliation of metamorphic rocks. A second important trend interval, N.14-35E., may be related to thrusting south of the Tintina fault zone, as high concentrations of linear features within this interval are found in areas of mapped thrusts. Low concentrations of linear features are found in areas of most igneous intrusives. High concentrations of linear features do not correspond to areas of mineralization in any consistent or significant way that would allow concentration patterns to be easily used as an aid in locating areas of mineralization. The results of this remote sensing study indicate that there are several possibly important areas where further detailed studies are warranted.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
DOT National Transportation Integrated Search
1975-05-31
Prediction of wheel displacements and wheel-rail forces is a prerequisite to the evaluation of the curving performance of rail vehicles. This information provides part of the basis for the rational design of wheels and suspension components, for esta...
New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design
ERIC Educational Resources Information Center
von Davier, Alina A.
2008-01-01
The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…
Demand for prescription drugs under non-linear pricing in Medicare Part D.
Jung, Kyoungrae; Feldman, Roger; McBean, A Marshall
2014-03-01
We estimate the price elasticity of prescription drug use in Medicare Part D, which features a non-linear price schedule due to a coverage gap. We analyze patterns of drug utilization prior to the coverage gap, where the "effective price" is higher than the actual copayment for drugs because consumers anticipate that more spending will make them more likely to reach the gap. We find that enrollees' total pre-gap drug spending is sensitive to their effective prices: the estimated price elasticity of drug spending ranges between -0.14 and -0.36. This finding suggests that filling in the coverage gap, as mandated by the health care reform legislation passed in 2010, will influence drug utilization prior to the gap. A simulation analysis indicates that closing the gap could increase Part D spending by a larger amount than projected, with additional pre-gap costs among those who do not hit the gap.
The Shock and Vibration Bulletin. Part 3. Shock Testing, Shock Analysis
1974-08-01
APPROXIMATE TRANSFORMATION, C.S. O’Hearne and J.W. Shipley, Martin Marietta Aerospace, Orlando, Florida
LINEAR LUMPED-MASS MODELING TECHNIQUES FOR BLAST LOADED... Leppert, B.K. Wada, Jet Propulsion Laboratory, Pasadena, California, and R. Miyakawa, Martin Marietta Aerospace, Denver, Colorado (assigned to the Jet...
...Wilmington, Delaware
Vibration Testing and Analysis
DEVELOPMENT OF SAM-D MISSILE RANDOM VIBRATION RESPONSE LOADS, P.G. Hahn, Martin Marietta Aerospace
Rigorous approaches to tether dynamics in deployment and retrieval
NASA Technical Reports Server (NTRS)
Antona, Ettore
1987-01-01
Dynamics of tethers in a linearized analysis can be considered as the superposition of propagating waves. This approach permits a new way of analyzing tether behavior during deployment and retrieval, where a tether is composed of a part at rest and a part subjected to propagation phenomena, with the separating section depending on time. The dependence on time of the separating section requires the analysis of the reflection of the waves travelling toward the part at rest. Such a reflection generates a reflected wave, whose characteristics are determined. The propagation phenomena of major interest in a tether are transverse waves and longitudinal waves, all mathematically modelled by the vibrating chord equations, if the tension is considered constant along the tether. An interesting problem also considered concerns the dependence of the tether tension on the longitudinal position, due to microgravity, and the influence of this dependence on the propagating waves.
Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Heiss, Anabell; Handels, Heinz
2010-11-01
Motivated by radiotherapy of lung cancer, non-linear registration is applied to estimate 3D motion fields for local lung motion analysis in thoracic 4D CT images. Reliability of analysis results depends on the registration accuracy. Therefore, our study consists of two parts: optimization and evaluation of a non-linear registration scheme for motion field estimation, followed by a registration-based analysis of lung motion patterns. The study is based on 4D CT data of 17 patients. Different distance measures and force terms for thoracic CT registration are implemented and compared: sum of squared differences versus a force term related to Thirion's demons registration; masked versus unmasked force computation. The most accurate approach is applied to local lung motion analysis. Masked Thirion forces outperform the other force terms. The mean target registration error is 1.3 ± 0.2 mm, which is in the order of voxel size. Based on resulting motion fields and inter-patient normalization of inner lung coordinates and breathing depths, a non-linear dependency between inner lung position and corresponding strength of motion is identified. The dependency is observed for all patients without or with only small tumors. Quantitative evaluation of the estimated motion fields indicates high spatial registration accuracy. It allows for reliable registration-based local lung motion analysis. The large amount of information encoded in the motion fields makes it possible to draw detailed conclusions, e.g., to identify the dependency of inner lung localization and motion. Our examinations illustrate the potential of registration-based motion analysis.
Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beggs, W.J.
1981-02-01
This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.
NASA Technical Reports Server (NTRS)
Chapman, Dean R
1952-01-01
A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.
Data dependent systems approach to modal analysis Part 1: Theory
NASA Astrophysics Data System (ADS)
Pandit, S. M.; Mehta, N. P.
1988-05-01
The concept of Data Dependent Systems (DDS) and its applicability in the context of modal vibration analysis is presented. The ability of the DDS difference equation models to provide a complete representation of a linear dynamic system from its sampled response data forms the basis of the approach. The models are decomposed into deterministic and stochastic components so that system characteristics are isolated from noise effects. The modelling strategy is outlined, and the method of analysis associated with modal parameter identification is described in detail. Advantages and special features of the DDS methodology are discussed. Since the correlated noise is appropriately and automatically modelled by the DDS, the modal parameters are shown to be estimated very accurately and hence no preprocessing of the data is needed. Complex mode shapes and non-classical damping are as easily analyzed as the classical normal mode analysis. These features are illustrated by using simulated data in this Part I and real data on a disc-brake rotor in Part II.
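To make the idea concrete, here is a generic sketch (not the authors' DDS implementation) of how modal frequencies and damping ratios can be extracted from the autoregressive part of an ARMA model fitted to sampled response data; the sampling interval, model orders and the synthetic signal are assumptions.

```python
# Fit an ARMA model to sampled response data and convert AR roots to modal parameters.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

dt = 0.001                                         # sampling interval (s), assumed
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, dt)
# Synthetic lightly damped 50 Hz mode plus noise as stand-in response data.
x = np.exp(-2 * np.pi * 50 * 0.02 * t) * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)

res = ARIMA(x, order=(4, 0, 3), trend="n").fit()   # ARMA(4,3); order choice is ad hoc
roots = np.roots(np.r_[1.0, -res.arparams]).astype(complex)

lam = np.log(roots) / dt                           # map discrete roots to continuous poles
wn = np.abs(lam)                                   # natural frequencies (rad/s)
zeta = -lam.real / wn                              # damping ratios
for w, z in zip(wn, zeta):
    print(f"f = {w / (2 * np.pi):6.1f} Hz, damping ratio = {z:5.3f}")
```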
A Multilevel Analysis of Phase II of the Louisiana School Effectiveness Study.
ERIC Educational Resources Information Center
Kennedy, Eugene; And Others
This paper presents findings of a study that used conventional modeling strategies (student- and school-level) and a new multilevel modeling strategy, Hierarchical Linear Modeling, to investigate school effects on student-achievement outcomes for data collected as part of Phase 2 of the Louisiana School Effectiveness Study. The purpose was to…
U. S. Fourth Graders' Informational Text Comprehension: Indicators from NAEP
ERIC Educational Resources Information Center
Schugar, Heather R.; Dreher, Miriam Jean
2017-01-01
This study is a secondary analysis of reading data collected from over 165,000 fourth graders as part of the U.S. National Assessment of Educational Progress. Using hierarchical linear modelling, the authors investigated factors associated with students' informational text comprehension, including out-of-school reading engagement, and in-school…
Understanding a Normal Distribution of Data (Part 2).
Maltenfort, Mitchell
2016-02-01
Completing the discussion of data normality, advanced techniques for analysis of non-normal data are discussed including data transformation, Generalized Linear Modeling, and bootstrapping. Relative strengths and weaknesses of each technique are helpful in choosing a strategy, but help from a statistician is usually necessary to analyze non-normal data using these methods.
Global Nonlinear Analysis of Piezoelectric Energy Harvesting from Ambient and Aeroelastic Vibrations
NASA Astrophysics Data System (ADS)
Abdelkefi, Abdessattar
Converting vibrations to a usable form of energy has been the topic of many recent investigations. The ultimate goal is to convert ambient or aeroelastic vibrations to operate low-power consumption devices, such as microelectromechanical systems, heath monitoring sensors, wireless sensors or replacing small batteries that have a finite life span or would require hard and expensive maintenance. The transduction mechanisms used for transforming vibrations to electric power include: electromagnetic, electrostatic, and piezoelectric mechanisms. Because it can be used to harvest energy over a wide range of frequencies and because of its ease of application, the piezoelectric option has attracted significant interest. In this work, we investigate the performance of different types of piezoelectric energy harvesters. The objective is to design and enhance the performance of these harvesters. To this end, distributed-parameter and phenomenological models of these harvesters are developed. Global analysis of these models is then performed using modern methods of nonlinear dynamics. In the first part of this Dissertation, global nonlinear distributed-parameter models for piezoelectric energy harvesters under direct and parametric excitations are developed. The method of multiple scales is then used to derive nonlinear forms of the governing equations and associated boundary conditions, which are used to evaluate their performance and determine the effects of the nonlinear piezoelectric coefficients on their behavior in terms of softening or hardening. In the second part, we assess the influence of the linear and nonlinear parameters on the dynamic behavior of a wing-based piezoaeroelastic energy harvester. The system is composed of a rigid airfoil that is constrained to pitch and plunge and supported by linear and nonlinear torsional and flexural springs with a piezoelectric coupling attached to the plunge degree of freedom. Linear analysis is performed to determine the effects of the linear spring coefficients and electrical load resistance on the flutter speed. Then, the normal form of the Hopf bifurcation ( utter) is derived to characterize the type of instability and determine the effects of the aerodynamic nonlinearities and the nonlinear coefficients of the springs on the system's stability near the bifurcation. This is useful to characterize the effects of different parameters on the system's output and ensure that subcritical or "catastrophic" bifurcation does not take place. Both linear and nonlinear analyses are then used to design and enhance the performance of these harvesters. In the last part, the concept of energy harvesting from vortex-induced vibrations of a circular cylinder is investigated. The power levels that can be generated from these vibrations and the variations of these levels with the freestream velocity are determined. A mathematical model that accounts for the coupled lift force, cylinder motion and generated voltage is presented. Linear analysis of the electromechanical model is performed to determine the effects of the electrical load resistance on the natural frequency of the rigid cylinder and the onset of the synchronization region. The impacts of the nonlinearities on the cylinder's response and energy harvesting are then investigated.
Critical analysis of commonly used fluorescence metrics to characterize dissolved organic matter.
Korak, Julie A; Dotson, Aaron D; Summers, R Scott; Rosario-Ortiz, Fernando L
2014-02-01
The use of fluorescence spectroscopy for the analysis and characterization of dissolved organic matter (DOM) has gained widespread interest over the past decade, in part because of its ease of use and ability to provide bulk DOM chemical characteristics. However, the lack of standard approaches for analysis and data evaluation has complicated its use. This study utilized comparative statistics to systematically evaluate commonly used fluorescence metrics for DOM characterization, such as peak-picking methods, carbon-normalized metrics and the fluorescence index (FI), to provide insight into the implications for data analysis and interpretation. The uncertainty associated with peak-picking methods was evaluated, including the reporting of peak intensity and peak position. The linear relationship between fluorescence intensity and dissolved organic carbon (DOC) concentration was found to deviate from linearity at environmentally relevant concentrations and simultaneously across all peak regions. Comparative analysis suggests that the loss of linearity is composition specific and likely due to non-ideal intermolecular interactions of the DOM rather than inner filter effects. For some DOM sources, Peak A deviated from linearity at optical densities a factor of 2 higher than that of Peak C. For carbon-normalized fluorescence intensities, the error associated with DOC measurements significantly decreases the ability to distinguish compositional differences. An in-depth analysis of FI determined that the metric is mostly driven by peak emission wavelength and less by emission spectra slope. This study also demonstrates that fluorescence intensity follows property balance principles, but the fluorescence index does not.
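For readers who want to reproduce such metrics, the snippet below computes one commonly reported form of the fluorescence index from an emission scan; the 470/520 nm emission wavelengths at 370 nm excitation are an assumption here, since conventions differ between studies, and the spectrum is synthetic.

```python
# Compute a fluorescence index as the ratio of two emission intensities.
import numpy as np

def fluorescence_index(em_wavelengths, em_intensities, w1=470.0, w2=520.0):
    """Interpolate an emission scan (excitation ~370 nm) and return I(w1)/I(w2)."""
    i1 = np.interp(w1, em_wavelengths, em_intensities)
    i2 = np.interp(w2, em_wavelengths, em_intensities)
    return i1 / i2

em = np.arange(400, 601, 2, dtype=float)               # emission grid (nm)
spectrum = np.exp(-0.5 * ((em - 480) / 40.0) ** 2)     # synthetic emission scan
print(round(fluorescence_index(em, spectrum), 3))
```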
Nonclassical point of view of the Brownian motion generation via fractional deterministic model
NASA Astrophysics Data System (ADS)
Gilardi-Velázquez, H. E.; Campos-Cantón, E.
In this paper, we present a dynamical system based on the Langevin equation without a stochastic term and using fractional derivatives that exhibits properties of Brownian motion, i.e. a deterministic model to generate Brownian motion is proposed. The stochastic process is replaced by considering an additional degree of freedom in the second-order Langevin equation. Thus, it is transformed into a system of three first-order linear differential equations; additionally, α-fractional derivatives are considered, which allow us to obtain better statistical properties. Switching surfaces are established as part of the fluctuating acceleration. The final system of three α-order linear differential equations does not contain a stochastic term, so the system generates motion in a deterministic way. Nevertheless, from the time series analysis, we found that the behavior of the system exhibits statistical properties of Brownian motion, such as linear growth in time of the mean square displacement and a Gaussian distribution. Furthermore, we use detrended fluctuation analysis to prove the Brownian character of this motion.
Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.
Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier
2011-11-01
One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
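A hedged sketch of how growth-curve models like (a) and (c) above might be fitted with statsmodels is given below; the file and column names are hypothetical and do not come from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per adolescent per assessment wave.
# Columns: awareness (overall drug-awareness score), time (wave), schooling_years,
#          group (motivational interviewing vs standard), subject (id).
df = pd.read_csv("awareness_long.csv")   # hypothetical file name

# (a) unconditional linear growth curve: random intercept and slope per subject
m_a = smf.mixedlm("awareness ~ time", df, groups="subject", re_formula="~time").fit()

# (c) growth model with subject-level predictors (schooling years, intervention group)
m_c = smf.mixedlm("awareness ~ time * group + schooling_years",
                  df, groups="subject", re_formula="~time").fit()
print(m_c.summary())
```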
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J.S.; Moeller, D.W.; Cooper, D.W.
1985-07-01
Analysis of the radiological health effects of nuclear power plant accidents requires models for predicting early health effects, cancers and benign thyroid nodules, and genetic effects. Since the publication of the Reactor Safety Study, additional information on radiological health effects has become available. This report summarizes the efforts of a program designed to provide revised health effects models for nuclear power plant accident consequence modeling. The new models for early effects address four causes of mortality and nine categories of morbidity. The models for early effects are based upon two-parameter Weibull functions. They permit evaluation of the influence of dose protraction and address the issue of variation in radiosensitivity among the population. The piecewise-linear dose-response models used in the Reactor Safety Study to predict cancers and thyroid nodules have been replaced by linear and linear-quadratic models. The new models reflect the most recently reported results of the follow-up of the survivors of the bombings of Hiroshima and Nagasaki and permit analysis of both morbidity and mortality. The new models for genetic effects allow prediction of genetic risks in each of the first five generations after an accident and include information on the relative severity of various classes of genetic effects. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of risks. An approach is outlined for summarizing the health consequences of nuclear power plant accidents. 298 refs., 9 figs., 49 tabs.
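The following sketch illustrates the general functional forms mentioned above, a two-parameter Weibull dose-response for early effects and a linear-quadratic excess-risk model for cancers; all parameter values are hypothetical and are not those recommended in the report.

```python
import numpy as np

def weibull_early_effect_risk(dose, d50, shape):
    """Two-parameter Weibull dose-response, R(D) = 1 - exp(-ln(2) * (D/D50)**shape);
    D50 and shape here are hypothetical, not the report's recommended values."""
    dose = np.asarray(dose, dtype=float)
    return 1.0 - np.exp(-np.log(2.0) * (dose / d50) ** shape)

def linear_quadratic_excess_risk(dose, alpha, beta):
    """Linear-quadratic excess risk, R(D) = alpha*D + beta*D**2 (low-dose form)."""
    dose = np.asarray(dose, dtype=float)
    return alpha * dose + beta * dose ** 2

doses = np.array([0.5, 1.0, 2.0, 4.0])   # Gy, illustrative
print(weibull_early_effect_risk(doses, d50=3.0, shape=6.0))
print(linear_quadratic_excess_risk(doses, alpha=5e-2, beta=1e-2))
```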
Non-linear dynamic analysis of geared systems, part 2
NASA Technical Reports Server (NTRS)
Singh, Rajendra; Houser, Donald R.; Kahraman, Ahmet
1990-01-01
A good understanding of the steady state dynamic behavior of a geared system is required in order to design reliable and quiet transmissions. This study focuses on a system containing a spur gear pair with backlash and periodically time-varying mesh stiffness, and rolling element bearings with clearance type non-linearities. A dynamic finite element model of the linear time-invariant (LTI) system is developed. Effects of several system parameters, such as torsional and transverse flexibilities of the shafts and prime mover/load inertias, on free and forced vibration characteristics are investigated. Several reduced order LTI models are developed and validated by comparing their eigensolutions with the finite element model results. Several key system parameters such as mean load and damping ratio are identified and their effects on the non-linear frequency response are evaluated quantitatively. Other fundamental issues such as the dynamic coupling between non-linear modes, dynamic interactions between component non-linearities and time-varying mesh stiffness, and the existence of subharmonic and chaotic solutions including routes to chaos have also been examined in depth.
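A minimal sketch of the clearance-type non-linearity described above is given below: a single-degree-of-freedom, dimensionless gear-pair model with backlash and periodically time-varying mesh stiffness, integrated numerically. The parameter values are illustrative and the model is far simpler than the finite element and reduced-order models of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta, eps, Omega = 0.05, 0.2, 0.7     # damping ratio, stiffness variation, mesh frequency
Fm, Fa = 0.1, 0.05                    # mean and alternating load (dimensionless)

def backlash(x, b=1.0):
    """Clearance (dead-zone) restoring displacement for a total backlash of 2*b."""
    return np.where(x > b, x - b, np.where(x < -b, x + b, 0.0))

def rhs(t, y):
    x, v = y
    k_t = 1.0 + eps * np.cos(Omega * t)        # periodically time-varying mesh stiffness
    return [v, Fm + Fa * np.cos(Omega * t) - 2.0 * zeta * v - k_t * backlash(x)]

t_end = 200 * 2 * np.pi / Omega                # integrate many mesh periods
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=0.05, rtol=1e-8)
x_ss = sol.y[0][sol.t > 0.75 * t_end]          # discard the transient
print("steady-state peak-to-peak transmission error: %.3f" % (x_ss.max() - x_ss.min()))
```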
NASA Astrophysics Data System (ADS)
Singh, S.; Jaishi, H. P.; Tiwari, R. P.; Tiwari, R. C.
2017-07-01
This paper reports the analysis of soil radon data recorded in seismic zone V, located in the northeastern part of India (latitude 23.73°N, longitude 92.73°E). Continuous measurements of soil-gas emission along the Chite fault in Mizoram (India) were carried out, with solid-state nuclear track detectors replaced at weekly intervals. The study covers the period from March 2013 to May 2015 and used LR-115 Type II detectors manufactured by Kodak Pathe, France. In order to reduce the influence of meteorological parameters, statistical analysis tools such as multiple linear regression and artificial neural networks have been used. A decrease in radon concentration was recorded prior to some earthquakes that occurred during the observation period. Some false anomalies were also recorded, which may be attributed to ongoing crustal deformation that was not major enough to produce an earthquake.
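A minimal sketch of the meteorological correction step via multiple linear regression is shown below; the file names and the choice of predictors are hypothetical, and the study's artificial neural network step is not reproduced.

```python
import numpy as np

# Hypothetical weekly series: radon track density plus co-recorded meteorological data
radon = np.loadtxt("radon_weekly.txt")            # hypothetical file names
temp, press, rain = (np.loadtxt(f) for f in
                     ("temperature.txt", "pressure.txt", "rainfall.txt"))

# Multiple linear regression: radon ~ a0 + a1*T + a2*P + a3*R (ordinary least squares)
X = np.column_stack([np.ones_like(radon), temp, press, rain])
coef, *_ = np.linalg.lstsq(X, radon, rcond=None)
residual = radon - X @ coef                       # meteorology-corrected signal

# Flag weeks where the corrected signal departs by more than 2 standard deviations
anomalies = np.where(np.abs(residual) > 2.0 * residual.std())[0]
print("candidate anomaly weeks:", anomalies)
```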
Approximate analytical solutions in the analysis of elastic structures of complex geometry
NASA Astrophysics Data System (ADS)
Goloskokov, Dmitriy P.; Matrosov, Alexander V.
2018-05-01
A method of analytical decomposition for the analysis of plane structures of complex configuration is presented. For each rectangular part of the structure, all the components of the stress-strain state are constructed by the superposition method. The method is based on two solutions derived in the form of trigonometric series with unknown coefficients using the method of initial functions. The coefficients are determined from the system of linear algebraic equations obtained by satisfying the boundary conditions and the conditions for joining the structure parts. The components of the stress-strain state of a bent plate with holes are calculated using the analytical decomposition method.
NASA Astrophysics Data System (ADS)
Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.
2016-08-01
Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Quantification of the dimensional accuracy was achieved by measuring the deviations of some geometric parameters of the part (two lengths and two radii). It was found that for Ramor 400 steel, at the jet inlet, the deviations of the part radii are quite small for all three analysed processes; for the linear dimensions, in contrast, the deviations are small only in the case of laser cutting. At the jet outlet, the deviations increased slightly compared to those obtained at the jet inlet, for both materials and for all three processes. For Ramor 550 steel, at the jet inlet the deviations of the part radii are very small in the case of AWJ and laser cutting but larger in the case of plasma cutting. At the jet outlet, the deviations of the part radii are very small for all processes; for the linear dimensions, very small deviations were obtained only in the case of laser processing, the other two processes leading to very large deviations.
NASA Astrophysics Data System (ADS)
Pipkins, Daniel Scott
Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly; hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and a von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem, which differ substantially.
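To illustrate the transform-then-invert idea (though not the paper's FEM formulation itself), the sketch below evaluates the exact Laplace-domain tip displacement of a fixed-free rod with light damping under a step load and inverts it numerically. It assumes a recent mpmath that provides invertlaplace; numerical inversion of lightly damped wave responses is delicate, so the de Hoog method is chosen and all parameter values are hypothetical.

```python
import mpmath as mp

# Illustrative fixed-free steel rod with a step force F0*H(t) at the free end and
# light mass-proportional damping gamma (all values hypothetical).
E, rho, A, L, F0, gamma = 200e9, 7800.0, 1e-4, 1.0, 1000.0, 200.0

def tip_displacement_laplace(s):
    """Exact transform-domain tip displacement u(L, s); in the dissertation's method
    the FEM system would be assembled and solved at each such s before inversion."""
    q = mp.sqrt(rho * (s**2 + gamma * s) / E)
    return F0 * mp.tanh(q * L) / (E * A * q * s)

for t in (1e-4, 2e-4, 5e-4, 2e-3):                     # seconds
    u = mp.invertlaplace(tip_displacement_laplace, t, method='dehoog')
    print(f"t = {t:.1e} s   u(L,t) = {float(u):.3e} m")
```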
Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander
2015-12-01
This paper presents an empirical linear equation to predict peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun arrays signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III
1994-01-01
This paper describes the mechanical design, analysis, fabrication, testing, and lessons learned by developing a uniquely designed spaceflight-like actuator. The linear proof mass actuator (LPMA) was designed to attach to both a large space structure and a ground test model without modification. Previous designs lacked the power to perform in a terrestrial environment while other designs failed to produce the desired accelerations or frequency range for spaceflight applications. Thus, the design for a unique actuator was conceived and developed at NASA Langley Research Center. The basic design consists of four large mechanical parts (mass, upper housing, lower housing, and center support) and numerous smaller supporting components including an accelerometer, encoder, and four drive motors. Fabrication personnel were included early in the design phase of the LPMA as part of an integrated manufacturing process to alleviate potential difficulties in machining an already challenging design. Operational testing of the LPMA demonstrated that the actuator is capable of various types of load functions.
NASA Technical Reports Server (NTRS)
Holloway, S. E., III
1995-01-01
This paper describes the mechanical design, analysis, fabrication, testing, and lessons learned by developing a uniquely designed spaceflight-like actuator. The Linear Proof Mass Actuator (LPMA) was designed to attach to both a large space structure and a ground test model without modification. Previous designs lacked the power to perform in a terrestrial environment while other designs failed to produce the desired accelerations or frequency range for spaceflight applications. Thus, the design for a unique actuator was conceived and developed at NASA Langley Research Center. The basic design consists of four large mechanical parts (Mass, Upper Housing, Lower Housing, and Center Support) and numerous smaller supporting components including an accelerometer, encoder, and four drive motors. Fabrication personnel were included early in the design phase of the LPMA as part of an integrated manufacturing process to alleviate potential difficulties in machining an already challenging design. Operational testing of the LPMA demonstrated that the actuator is capable of various types of load functions.
ERIC Educational Resources Information Center
Larson, Christine
2010-01-01
Little is known about the variety of ways students conceptualize matrix multiplication, yet this is a fundamental part of most introductory linear algebra courses. My dissertation follows a three-paper format, with the three papers exploring conceptualizations of matrix multiplication from a variety of viewpoints. In these papers, I explore (1)…
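Two of the standard conceptualizations of matrix multiplication (not taken from the dissertation) can be contrasted in a few lines:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# View 1: entry-wise "row times column" dot products
C_entrywise = np.array([[A[i, :] @ B[:, j] for j in range(2)] for i in range(2)])

# View 2: each column of AB is a linear combination of the columns of A,
# with weights given by the corresponding column of B
C_columns = np.column_stack([sum(B[k, j] * A[:, k] for k in range(2)) for j in range(2)])

assert np.allclose(C_entrywise, A @ B) and np.allclose(C_columns, A @ B)
```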
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide vanes are redesigned for reduced downstream radiated noise. In addition, a framework detailing how the two-dimensional version of the method may be used to redesign three-dimensional geometries is presented.
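The key computational saving described above, reusing the nominal factorization for the sensitivity solve, can be sketched on a generic linear system as follows; the matrices here are random stand-ins, not flow Jacobians.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n)) + n * np.eye(n)    # stand-in for the nominal system matrix
b = rng.normal(size=n)                         # stand-in for the nominal right-hand side

lu, piv = lu_factor(A)                         # factor the nominal system once
x = lu_solve((lu, piv), b)                     # nominal solution

# First-order sensitivity to a small design change: A -> A + dA, b -> b + db.
# Differentiating A x = b gives A dx = db - dA x, which reuses the same factors.
dA = 1e-3 * rng.normal(size=(n, n))
db = 1e-3 * rng.normal(size=n)
dx = lu_solve((lu, piv), db - dA @ x)          # no additional factorization needed

x_exact = np.linalg.solve(A + dA, b + db)
print("first-order error:", np.linalg.norm(x + dx - x_exact) / np.linalg.norm(x_exact))
```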
Information theoretic analysis of linear shift-invariant edge-detection operators
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2012-06-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive information-theoretic analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
Chandra, Preeti; Kannujia, Rekha; Saxena, Ankita; Srivastava, Mukesh; Bahadur, Lal; Pal, Mahesh; Singh, Bhim Pratap; Kumar Ojha, Sanjeev; Kumar, Brijesh
2016-09-10
An ultra-high performance liquid chromatography electrospray ionization tandem mass spectrometry method has been developed and validated for simultaneous quantification of six major bioactive compounds in five varieties of Withania somnifera in various plant parts (leaf, stem and root). The analysis was accomplished on a Waters ACQUITY UPLC BEH C18 column with linear gradient elution of water/formic acid (0.1%) and acetonitrile at a flow rate of 0.3 mL min(-1). The proposed method was validated with acceptable linearity (r(2), 0.9989-0.9998), precision (RSD, 0.16-2.01%), stability (RSD, 1.04-1.62%) and recovery (RSD ≤2.45%) under optimum conditions. The method was also successfully applied for the simultaneous determination of the six marker compounds in twenty-six marketed formulations. Hierarchical cluster analysis and principal component analysis were applied to discriminate these twenty-six batches based on characteristics of the bioactive compounds. The results indicated that this method is rapid, sensitive and suitable for revealing the quality of Withania somnifera, and is also capable of quality evaluation of polyherbal formulations having similar markers/raw herbs. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J.S.; Abrahmson, S.; Bender, M.A.
1993-10-01
This report is a revision of NUREG/CR-4214, Rev. 1, Part 1 (1990), Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. This revision has been made to incorporate changes to the Health Effects Models recommended in two addenda to the NUREG/CR-4214, Rev. 1, Part II, 1989 report. The first of these addenda provided recommended changes to the health effects models for low-LET radiations based on recent reports from UNSCEAR, ICRP and NAS/NRC (BEIR V). The second addendum presented changes needed to incorporate alpha-emitting radionuclides into the accident exposure source term. As in the earlier version of this report, models are provided for early and continuing effects, cancers and thyroid nodules, and genetic effects. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other". For most cancers, both incidence and mortality are addressed. Five classes of genetic diseases -- dominant, X-linked, aneuploidy, unbalanced translocations, and multifactorial diseases -- are also considered. Data are provided that should enable analysts to consider the timing and severity of each type of health risk.
Brightness analysis of an electron beam with a complex profile
NASA Astrophysics Data System (ADS)
Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi
2018-05-01
We propose a novel analysis method to obtain the bright core part of an electron beam with a complex phase-space profile. This method is useful for evaluating simulation data of a linear accelerator (linac), such as an x-ray free-electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared to a Gaussian beam in a synchrotron. In this analysis, the brightness of the undulator radiation is calculated and the core of the electron beam is determined by maximizing the brightness. We successfully extracted core electrons from a complex beam profile of XFEL simulation data, which could not be expressed by a set of slice parameters. FEL simulations showed that the FEL intensity was largely preserved even after extracting the core part. Consequently, the FEL performance can be estimated by this analysis without time-consuming FEL simulations.
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centers for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effects of the different weighting structures adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained by restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
Next Generation Robots for STEM Education and Research at Huston Tillotson University
2017-11-10
...dynamics through the following command: roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch. Part B: Gravity Inversion: after the system's natural dynamics are understood, gravity inversion is created using the following command: roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch. Gravity inversion is just one...
1979-09-01
a " high performance fast timing" engine thrust with a mismatch between right and left SRfls...examine the dynamic behavior of a blade having a root geometry compatible with low frictional forces at high rotational speeds , somewhat like a "Christmas...Tree" root, but with a gap introduced which will close up only at high speed . Approximate non-linear equations of motion are derived and solved
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Chu, Dong-Kai; Dong, Xin-Ran; Zhou, Chu; Li, Hai-Tao; Luo-Zhi; Hu, You-Wang; Zhou, Jian-Ying; Cong-Wang; Duan, Ji-An
2016-03-01
A highly sensitive refractive index (RI) sensor based on a Mach-Zehnder interferometer (MZI) in a conventional single-mode optical fiber is proposed, fabricated by a femtosecond laser transversal-scanning inscription method and chemical etching. A rectangular cavity structure is formed in part of the fiber core and cladding interface. The MZI sensor shows excellent refractive index sensitivity and linearity, exhibiting an extremely high RI sensitivity of -17197 nm/RIU (refractive index unit) with a linearity of 0.9996 within the refractive index range of 1.3371-1.3407. The experimental results are consistent with theoretical analysis.
NASA Astrophysics Data System (ADS)
Rattez, Hadrien; Stefanou, Ioannis; Sulem, Jean; Veveakis, Manolis; Poulet, Thomas
2018-06-01
In this paper we study the phenomenon of localization of deformation in fault gouges during seismic slip. This process is of key importance for understanding frictional heating and the energy budget during an earthquake. An infinite layer of fault gouge is modeled as a Cosserat continuum taking into account Thermo-Hydro-Mechanical (THM) couplings. The theoretical aspects of the problem are presented in the companion paper (Rattez et al., 2017a), together with a linear stability analysis to determine the conditions of localization and estimate the shear band thickness. In this Part II of the study, we investigate the post-bifurcation evolution of the system by integrating numerically the full system of non-linear equations using the method of Finite Elements. The problem is formulated in the framework of Cosserat theory, which makes it possible to introduce information about the microstructure of the material into the constitutive equations and to regularize the mathematical problem in the post-localization regime. We emphasize the influence of the size of the microstructure and of the softening law on the material response and the strain localization process. The weakening effect of pore fluid thermal pressurization induced by shear heating is examined and quantified. It enhances the weakening process and contributes to the narrowing of the shear band thickness. Moreover, due to THM couplings an apparent rate-dependency is observed, even for rate-independent material behavior. Finally, comparisons show that when the perturbed field of shear deformation dominates, the estimation of the shear band thickness obtained from linear stability analysis differs from the one obtained from the finite element computations, demonstrating the importance of post-localization numerical simulations.
Mechanical Design of Innovative Electromagnetic Linear Actuators for Marine Applications
NASA Astrophysics Data System (ADS)
Muscia, Roberto
2017-11-01
We describe an engineering solution to manufacture electromagnetic linear actuators for moving rudders and fin stabilizers of military ships
NASA Technical Reports Server (NTRS)
Tanveer, Saleh
1989-01-01
The analysis is extended to determine the linear stability of a bubble in a Hele-Shaw cell analytically. Only the solution branch corresponding to largest possible bubble velocity U for given surface tension is found to be stable, while all the others are unstable, in accordance with earlier numerical results.
Structural and lithologic study of Northern Coast Range and Sacramento Valley, California
NASA Technical Reports Server (NTRS)
Rich, E. I. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Preliminary analysis of the data received has disclosed two potentially important northwest-trending systems of linear features within the Northern California Coast Ranges. A third system, which trends northeast, can be traced with great uncertainty across the alluviated part of the Sacramento Valley and into the foothills of the Sierra Nevada. These linear features may represent fault systems or zones of shearing. Of interest, although not yet verified, is the observation that some of the mercury concentrations and some of the geothermally active areas of California may be located at the intersection of the Central and the Valley Systems. One, perhaps two, stratigraphic unconformities within the Late Mesozoic sedimentary rocks were detected during preliminary examination of the imagery; however, more analysis is necessary in order to verify this preliminary interpretation. A heretofore unrecognized, large circular depression, about 15 km in diameter, was detected within the alluviated part of the Sacramento Valley. The depression is adjacent to a large laccolithic intrusion and may be geologically related to it. Changes in the photogeologic characteristics of this feature will continue to be monitored.
Design and evaluation of a parametric model for cardiac sounds.
Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador
2017-10-01
Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we conducted a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model rigorously evaluated as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
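A compact sketch of the deterministic-plus-stochastic decomposition described above is given below: a greedy matching pursuit over a Gabor dictionary for the deterministic part, followed by autocorrelation-method LPC of the residual. The dictionary, record and parameters are synthetic and simplified relative to the paper's model.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def gabor_dictionary(n, widths=(64, 128, 256), freqs=np.linspace(0.01, 0.2, 20)):
    """Unit-norm Gabor atoms (Gaussian-windowed cosines) spread across the record."""
    t, atoms = np.arange(n), []
    for s in widths:
        for c in range(0, n, s // 2):
            for f in freqs:
                g = np.exp(-0.5 * ((t - c) / s) ** 2) * np.cos(2 * np.pi * f * (t - c))
                atoms.append(g / np.linalg.norm(g))
    return np.array(atoms)

def matching_pursuit(x, D, n_atoms=30):
    """Greedy MP: repeatedly subtract the best-matching atom (deterministic part)."""
    residual, approx = x.copy(), np.zeros_like(x)
    for _ in range(n_atoms):
        corr = D @ residual
        k = np.argmax(np.abs(corr))
        approx += corr[k] * D[k]
        residual -= corr[k] * D[k]
    return approx, residual

def lpc(residual, order=10):
    """Autocorrelation-method LPC (AR model) of the stochastic residual."""
    r = np.correlate(residual, residual, "full")[len(residual) - 1:]
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

n = 1024
D = gabor_dictionary(n)
rng = np.random.default_rng(0)
pcg = D[5] + 0.6 * D[40] + 0.05 * rng.normal(size=n)   # synthetic PCG-like record
deterministic, residual = matching_pursuit(pcg, D)
ar_coefficients = lpc(residual)
print("energy captured by MP: %.1f%%" % (100 * deterministic @ deterministic / (pcg @ pcg)))
```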
A procedure to determine the radiation isocenter size in a linear accelerator.
González, A; Castro, I; Martínez, J A
2004-06-01
Measurement of the radiation isocenter is a fundamental part of commissioning and quality assurance (QA) for a linear accelerator (linac). In this work we present an automated procedure for the analysis of the star-shots employed in the radiation isocenter determination. Once the star-shot film has been developed and digitized, the resulting image is analyzed by scanning concentric circles centered around the intersection of the lasers that had been previously marked on the film. The center and the radius of the minimum circle intersecting the central rays are determined with an accuracy and precision better than 1% of the pixel size. The procedure is applied to the position and size determination of the radiation isocenter by means of the analysis of star-shots placed in different planes with respect to the gantry, couch and collimator rotation axes.
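Geometrically, once the central rays have been extracted from the digitized star-shot, the isocenter can be taken as the center of the smallest circle intersecting all rays, i.e. the point minimizing the maximum perpendicular distance to the rays. The sketch below illustrates this with hypothetical ray data and is not the paper's image-scanning procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical central rays from a digitized star-shot: each ray passes through a
# point (x0, y0) with direction angle theta (radians), in pixel coordinates.
rays = [((120.0, 118.0), 0.02), ((119.5, 121.0), np.pi / 3), ((121.0, 119.0), 2 * np.pi / 3)]

def max_distance_to_rays(center):
    """Largest perpendicular distance from 'center' to any central ray."""
    cx, cy = center
    d = []
    for (x0, y0), th in rays:
        nx, ny = -np.sin(th), np.cos(th)          # unit normal to the ray
        d.append(abs((cx - x0) * nx + (cy - y0) * ny))
    return max(d)

res = minimize(max_distance_to_rays, x0=[120.0, 120.0], method="Nelder-Mead")
print("isocenter (pixels):", res.x, " radius of minimum intersecting circle:", res.fun)
```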
NASA Astrophysics Data System (ADS)
Beardsell, Alec; Collier, William; Han, Tao
2016-09-01
There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise- torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.
Automatic design of synthetic gene circuits through mixed integer non-linear programming.
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
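As a toy illustration of the part-selection objective (not the paper's MINLP formulation or part library), the following exhaustively enumerates a tiny hypothetical promoter library subject to a user-defined constraint; for such small instances brute force is also deterministic and globally optimal.

```python
from itertools import product

# Hypothetical characterized part library: promoter strengths (relative units)
library = {"gene1": {"pLow": 1.0, "pMed": 5.0, "pHigh": 20.0},
           "gene2": {"pLow": 1.0, "pMed": 5.0, "pHigh": 20.0}}
target = {"gene1": 4.0, "gene2": 18.0}          # desired expression levels
max_total_strength = 24.0                        # example user-defined constraint

best, best_cost = None, float("inf")
for choice in product(*(library[g].items() for g in library)):
    strengths = {g: s for g, (_, s) in zip(library, choice)}
    if sum(strengths.values()) > max_total_strength:
        continue                                 # constraint violated
    cost = sum((strengths[g] - target[g]) ** 2 for g in target)
    if cost < best_cost:
        best, best_cost = {g: name for g, (name, _) in zip(library, choice)}, cost

print("selected parts:", best, " objective:", best_cost)
```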
Development of a linearized unsteady aerodynamic analysis for cascade gust response predictions
NASA Technical Reports Server (NTRS)
Verdon, Joseph M.; Hall, Kenneth C.
1990-01-01
A method for predicting the unsteady aerodynamic response of a cascade of airfoils to entropic, vortical, and acoustic gust excitations is being developed. Here, the unsteady flow is regarded as a small perturbation of a nonuniform isentropic and irrotational steady background flow. A splitting technique is used to decompose the linearized unsteady velocity into rotational and irrotational parts leading to equations for the complex amplitudes of the linearized unsteady entropy, rotational velocity, and velocity potential that are coupled only sequentially. The entropic and rotational velocity fluctuations are described by transport equations for which closed-form solutions in terms of the mean-flow drift and stream functions can be determined. The potential fluctuation is described by an inhomogeneous convected wave equation in which the source term depends on the rotational velocity field, and is determined using finite-difference procedures. The analytical and numerical techniques used to determine the linearized unsteady flow are outlined. Results are presented to indicate the status of the solution procedure and to demonstrate the impact of blade geometry and mean blade loading on the aerodynamic response of cascades to vortical gust excitations. The analysis described herein leads to very efficient predictions of cascade unsteady aerodynamic response phenomena making it useful for turbomachinery aeroelastic and aeroacoustic design applications.
AITRAC: Augmented Interactive Transient Radiation Analysis by Computer. User's information manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-10-01
AITRAC is a program designed for on-line, interactive, DC, and transient analysis of electronic circuits. The program solves linear and nonlinear simultaneous equations which characterize the mathematical models used to predict circuit response. The program features 100 external node--200 branch capability; conversational, free-format input language; built-in junction, FET, MOS, and switch models; sparse matrix algorithm with extended-precision H matrix and T vector calculations, for fast and accurate execution; linear transconductances: beta, GM, MU, ZM; accurate and fast radiation effects analysis; special interface for user-defined equations; selective control of multiple outputs; graphical outputs in wide and narrow formats; and on-line parameter modification capability. The user describes the problem by entering the circuit topology and part parameters. The program then automatically generates and solves the circuit equations, providing the user with printed or plotted output. The circuit topology and/or part values may then be changed by the user, and a new analysis requested. Circuit descriptions may be saved on disk files for storage and later use. The program contains built-in standard models for resistors, voltage and current sources, capacitors, inductors including mutual couplings, switches, junction diodes and transistors, FETs, and MOS devices. Nonstandard models may be constructed from standard models or by using the special equations interface. Time functions may be described by straight-line segments or by sine, damped sine, and exponential functions. 42 figures, 1 table. (RWR)
Meta-analysis of thirty-two case-control and two ecological radon studies of lung cancer.
Dobrzynski, Ludwik; Fornalski, Krzysztof W; Reszczynska, Joanna
2018-03-01
A re-analysis has been carried out of thirty-two case-control and two ecological studies concerning the influence of radon, a radioactive gas, on the risk of lung cancer. Three mathematically simplest dose-response relationships (models) were tested: constant (zero health effect), linear, and parabolic (linear-quadratic). Health effect end-points reported in the analysed studies are odds ratios or relative risk ratios, related either to morbidity or mortality. In our preliminary analysis, we show that the results of dose-response fitting are qualitatively (within uncertainties, given as error bars) the same, whichever of these health effect end-points are applied. Therefore, we deemed it reasonable to aggregate all response data into the so-called Relative Health Factor and jointly analysed such mixed data, to obtain better statistical power. In the second part of our analysis, robust Bayesian and classical methods of analysis were applied to this combined dataset. In this part of our analysis, we selected different subranges of radon concentrations. In view of substantial differences between the methodology used by the authors of case-control and ecological studies, the mathematical relationships (models) were applied mainly to the thirty-two case-control studies. The degree to which the two ecological studies, analysed separately, affect the overall results when combined with the thirty-two case-control studies, has also been evaluated. In all, as a result of our meta-analysis of the combined cohort, we conclude that the analysed data concerning radon concentrations below ~1000 Bq/m3 (~20 mSv/year of effective dose to the whole body) do not support the thesis that radon may be a cause of any statistically significant increase in lung cancer incidence.
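The three candidate dose-response shapes can be compared on aggregated data along the following lines; the concentrations, Relative Health Factor values and uncertainties below are hypothetical, not those of the meta-analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical aggregated data: mean radon concentration per stratum (Bq/m^3) and
# a pooled Relative Health Factor with its standard error.
conc  = np.array([25., 75., 150., 300., 600., 900.])
rhf   = np.array([1.00, 0.97, 0.99, 1.03, 1.05, 1.10])
sigma = np.array([0.04, 0.04, 0.05, 0.06, 0.08, 0.10])

models = {
    "constant":         lambda x, a: a + 0 * x,
    "linear":           lambda x, a, b: a + b * x,
    "linear-quadratic": lambda x, a, b, c: a + b * x + c * x ** 2,
}

for name, f in models.items():
    p0 = [1.0] + [0.0] * (f.__code__.co_argcount - 2)
    p, _ = curve_fit(f, conc, rhf, sigma=sigma, absolute_sigma=True, p0=p0)
    chi2 = np.sum(((rhf - f(conc, *p)) / sigma) ** 2)
    print(f"{name:17s} chi2 = {chi2:5.2f}  parameters = {np.round(p, 6)}")
```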
Simulation Analysis of Zero Mean Flow Edge Turbulence in LAPD
NASA Astrophysics Data System (ADS)
Friedman, Brett Cory
I model, simulate, and analyze the turbulence in a particular experiment on the Large Plasma Device (LAPD) at UCLA. The experiment, conducted by Schaffner et al. [D. Schaffner et al., Phys. Rev. Lett. 109, 135002 (2012)], nulls out the intrinsic mean flow in LAPD by limiter biasing. The model that I use in the simulation is an electrostatic reduced Braginskii two-fluid model that describes the time evolution of density, electron temperature, electrostatic potential, and parallel electron velocity fluctuations in the edge region of LAPD. The spatial domain is annular, encompassing the radial coordinates over which a significant equilibrium density gradient exists. My model breaks the independent variables in the equations into time-independent equilibrium parts and time-dependent fluctuating parts, and I use experimentally obtained values as input for the equilibrium parts. After an initial exponential growth period due to a linear drift wave instability, the fluctuations saturate and the frequency and azimuthal wavenumber spectra become broadband with no visible coherent peaks, at which point the fluctuations become turbulent. The turbulence develops intermittent pressure and flow filamentary structures that grow and dissipate, but look much different than the unstable linear drift waves, primarily in the extremely long axial wavelengths that the filaments possess. An energy dynamics analysis that I derive reveals the mechanism that drives these structures. The long k|| ˜ 0 intermittent potential filaments convect equilibrium density across the equilibrium density gradient, setting up local density filaments. These density filaments, also with k || ˜ 0, produce azimuthal density gradients, which drive radially propagating secondary drift waves. These finite k|| drift waves nonlinearly couple to one another and reinforce the original convective filament, allowing the process to bootstrap itself. The growth of these structures is by nonlinear instability because they require a finite amplitude to start, and they require nonlinear terms in the equations to sustain their growth. The reason why k|| ˜ 0 structures can grow and support themselves in a dynamical system with no k|| = 0 linear instability is because the linear eigenmodes of the system are nonorthogonal. Nonorthogonal eigenmodes that individually decay under linear dynamics can transiently inject energy into the system, allowing for instability. The instability, however, can only occur when the fluctuations have a finite starting amplitude, and nonlinearities are available to mix energy among eigenmodes. Finally, I attempt to figure out how many effective degrees of freedom control the turbulence to determine whether it is stochastic or deterministic. Using two different methods - permutation entropy analysis by means of time delay trajectory reconstruction and Proper Orthogonal Decomposition - I determine that more than a few degrees of freedom, possibly even dozens or hundreds, are all active. The turbulence, while not stochastic, is not a manifestation of low-dimensional chaos - it is high-dimensional.
NASA Astrophysics Data System (ADS)
Shcherbakov, Alexandre S.; Campos Acosta, Joaquin; Moreno Zarate, Pedro; Pons Aglio, Alicia
2011-02-01
An advanced qualitative characterization of various simultaneously existing low-power trains of ultra-short optical pulses with internal frequency modulation in a distributed laser system based on a semiconductor heterostructure is presented. The scheme represents a hybrid cavity consisting of a single-mode heterolaser operating in the active mode-locking regime and an external long single-mode optical fiber exhibiting square-law dispersion, cubic Kerr nonlinearity, and linear optical losses. In fact, we consider trains of optical dissipative solitons, which appear under a double balance between the second-order dispersion and the cubic nonlinearity, as well as between the active-medium gain and the linear optical losses, in the hybrid cavity. Moreover, we operate on specially designed modulating signals providing non-conventional composite regimes of simultaneous multi-pulse active mode-locking. As a result, the mode-locking process allows shaping regular trains of picosecond optical pulses excited by mutually independent multi-pulse sequences of periodic modulation. In so doing, we consider the hybrid cavity as a combination of a quasi-linear part responsible for the active mode-locking itself and a nonlinear part determining the regime of dissipative soliton propagation. Initially, these parts are analyzed individually, and the data obtained are then coordinated with each other. Within this approach, the contribution of the resulting cubically nonlinear Ginzburg-Landau operator is analyzed by exploiting an approximate variational procedure involving the technique of trial functions.
NASA Technical Reports Server (NTRS)
Lund, Kurt O.
1991-01-01
The simplified geometry for the analysis is an infinite, axisymmetric annulus with a specified solar flux at the outer radius. The inner radius is either adiabatic (modeling Flight Experiment conditions) or convective (modeling Solar Dynamic conditions). Liquid LiF either contacts the outer wall (modeling ground-based testing) or faces a void gap at the outer wall (modeling possible space-based conditions). The analysis is presented in three parts: part 3 considers an adiabatic inner wall and linearized radiation equations; part 2 adds effects of convection at the inner wall; and part 1 includes the effect of the void gap, as well as the previous effects, and develops the radiation model further. The main results are the differences in melting behavior which can occur between ground-based 1 g experiments and the microgravity flight experiments. Under 1 g, melted PCM will always contact the outer wall carrying the heat flux source, thus providing conductance from this source to the phase-change front. In space-based tests, where a void gap is likely to form during solidification, the situation is reversed: radiation is then the only mode of heat transfer, and the majority of melting takes place from the inner wall.
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, to artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or to any combination thereof. In this study we have simulated 20 time series with different stochastic characteristics, such as white, flicker or random walk noise, each 23 years long. The noise amplitude was assumed at 1 mm/y-/4. Then, we added the deterministic part consisting of a linear trend of 20 mm/y (which represents an average horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/y^2. For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking into account the non-linear term. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y^2 and -4.5±3.3 mm/y^2 for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
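The deterministic part of the model (offset, velocity, acceleration and an annual term) can be estimated by ordinary least squares as sketched below on synthetic data; this ignores the coloured-noise treatment that the study performs with MLE in Hector, and all values are illustrative.

```python
import numpy as np

# Synthetic daily position component (mm) spanning 23 years, with velocity,
# a quadratic (acceleration) term and an annual signal (all values hypothetical)
t = np.arange(0.0, 23.0, 1.0 / 365.25)                  # time in years
rng = np.random.default_rng(2)
pos = (3.0 + 20.0 * t - 0.4 * t**2
       + 2.0 * np.sin(2 * np.pi * t) + rng.normal(scale=2.0, size=t.size))

# Design matrix: offset, velocity, quadratic term, annual sine/cosine
A = np.column_stack([np.ones_like(t), t, t**2,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
print(f"velocity = {coef[1]:.2f} mm/y, quadratic coefficient = {coef[2]:.3f} mm/y^2")
```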
NASA Astrophysics Data System (ADS)
Pang, A. L.; Ismail, H.; Abu Bakar, A.
2018-02-01
Linear low-density polyethylene (LLDPE)/poly (vinyl alcohol) (PVOH) filled with untreated kenaf (UT-KNF) and eco-friendly coupling agent (ECA)-treated kenaf (ECAT-KNF) were prepared using ThermoHaake internal mixer, respectively. Filler loadings of UT-KNF and ECAT-KNF used in this study are 10 and 40 parts per hundred parts of resin (phr). The effect of ECA on tensile properties and water absorption of LLDPE/PVOH/KNF composites were investigated. Field emission scanning electron microscopy (FESEM) analysis was applied to visualize filler-matrix adhesion. The results indicate LLDPE/PVOH/ECAT-KNF composites possess higher tensile strength and tensile modulus, but lower elongation at break compared to LLDPE/PVOH/UT-KNF composites. The morphological studies of tensile fractured surfaces using FESEM support the increment in tensile properties of LLDPE/PVOH/ECAT-KNF composites. Nevertheless, LLDPE/PVOH/UT-KNF composites reveal higher water absorption compared to LLDPE/PVOH/ECAT-KNF composites.
Linear analysis of the evolution of nearly polar low-mass circumbinary discs
NASA Astrophysics Data System (ADS)
Lubow, Stephen H.; Martin, Rebecca G.
2018-01-01
In a recent paper Martin & Lubow showed through simulations that an initially tilted disc around an eccentric binary can evolve to polar alignment in which the disc lies perpendicular to the binary orbital plane. We apply linear theory to show both analytically and numerically that a nearly polar aligned low-mass circumbinary disc evolves to polar alignment and determine the alignment time-scale. Significant disc evolution towards the polar state around moderately eccentric binaries can occur for typical protostellar disc parameters in less than a typical disc lifetime for binaries with orbital periods of order 100 yr or less. Resonant torques are much less effective at truncating the inner parts of circumbinary polar discs than the inner parts of coplanar discs. For polar discs, they vanish for a binary eccentricity of unity. The results agree with the simulations in showing that discs can evolve to a polar state. Circumbinary planets may then form in such discs and reside on polar orbits.
NASA Astrophysics Data System (ADS)
Çeçen, Yiğit; Gülümser, Tuğçe; Yazgan, Çağrı; Dapo, Haris; Üstün, Mahmut; Boztosun, Ismail
2017-09-01
In cancer treatment, high-energy X-rays produced by linear accelerators (LINACs) are used. If the energy of these beams is over 8 MeV, photonuclear reactions occur between the bremsstrahlung photons and the metallic parts of the LINAC. As a result of these (γ,n) interactions, neutrons are also produced as secondary radiation products, which are called photoneutrons. The study aims to map the photoneutron flux distribution within the LINAC bunker via neutron activation analysis (NAA) using indium-cadmium foils. Irradiations were made at different gantry angles (0°, 90°, 180° and 270°) at a total of 91 positions in the Philips SLI-25 linear accelerator treatment room, and the location-based distribution of the thermal neutron flux was obtained. Gamma spectrum analysis was carried out with a high-purity germanium (HPGe) detector. Results of the analysis showed that the maximum neutron flux in the room occurred just above the LINAC head (1.2×10^5 neutrons/cm^2·s), which is comparable to that of an americium-beryllium (Am-Be) neutron source. There was a 90% decrease in flux at the walls and at the start of the maze with respect to the maximum neutron flux. Just in front of the LINAC door, inside the room, the neutron flux was measured to be less than 1% of the maximum.
Wang, Kun; Jiang, Tianzi; Liang, Meng; Wang, Liang; Tian, Lixia; Zhang, Xinqing; Li, Kuncheng; Liu, Zhening
2006-01-01
In this work, we proposed a discriminative model of Alzheimer's disease (AD) on the basis of multivariate pattern classification and functional magnetic resonance imaging (fMRI). This model used the correlation/anti-correlation coefficients of two intrinsically anti-correlated networks in resting brains, which have been suggested by two recent studies, as the feature of classification. Pseudo-Fisher Linear Discriminative Analysis (pFLDA) was then performed on the feature space and a linear classifier was generated. Using leave-one-out (LOO) cross validation, our results showed a correct classification rate of 83%. We also compared the proposed model with another one based on the whole brain functional connectivity. Our proposed model outperformed the other one significantly, and this implied that the two intrinsically anti-correlated networks may be a more susceptible part of the whole brain network in the early stage of AD.
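A hedged sketch of the classification step is given below, using scikit-learn's standard linear discriminant analysis with leave-one-out cross-validation as a stand-in for pFLDA; the feature matrix is synthetic, not the fMRI-derived correlation coefficients of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical feature matrix: one row per subject, columns are the
# correlation/anti-correlation coefficients of the two resting-state networks.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-0.3, 0.15, size=(20, 4)),    # AD group (synthetic)
               rng.normal(-0.6, 0.15, size=(20, 4))])   # controls (synthetic)
y = np.array([1] * 20 + [0] * 20)

clf = LinearDiscriminantAnalysis()                       # stands in for pFLDA
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out correct classification rate: %.0f%%" % (100 * scores.mean()))
```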
Perturbative stability of SFT-based cosmological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, Federico; Koshelev, Alexey S., E-mail: fgalli@tena4.vub.ac.be, E-mail: alexey.koshelev@vub.ac.be
2011-05-01
We review the appearance of multiple scalar fields in linearized SFT based cosmological models with a single non-local scalar field. Some of these local fields are canonical real scalar fields and some are complex fields with unusual coupling. These systems only admit numerical or approximate analysis. We introduce a modified potential for multiple scalar fields that makes the system exactly solvable in the cosmological context of Friedmann equations and at the same time preserves the asymptotic behavior expected from SFT. The main part of the paper consists of the analysis of inhomogeneous cosmological perturbations in this system. We show numerically that perturbations corresponding to the new type of complex fields always vanish. As an example of application of this model we consider an explicit construction of the phantom divide crossing and prove the perturbative stability of this process at the linear order. The issue of ghosts and ways to resolve it are briefly discussed.
Ionization effects and linear stability in a coaxial plasma device
NASA Astrophysics Data System (ADS)
Kurt, Erol; Kurt, Hilal; Bayhan, Ulku
2009-03-01
A 2-D computer simulation of a coaxial plasma device is carried out, based on the conservation equations of electrons, ions and excited atoms together with the Poisson equation for a plasma gun. Some characteristics of the plasma focus device (PF), such as the critical wave numbers a_c and voltages U_c for various pressures P, are estimated via a linear analysis in order to satisfy the necessary conditions for traveling particle densities (i.e. plasma patterns). Oscillatory solutions are characterized by a nonzero imaginary part of the growth rate Im(σ) in all cases. The model also predicts the minimal voltage ranges of the system for certain pressure intervals.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
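To make the LFT structure concrete, the sketch below evaluates an upper LFT, F_u(M, Δ) = M22 + M21 Δ (I − M11 Δ)⁻¹ M12, for a small numerical example; the block partitioning and matrix values are illustrative assumptions, not taken from the paper's construction method.

```python
# Sketch: evaluate an upper LFT F_u(M, Delta) = M22 + M21 @ Delta @ inv(I - M11 @ Delta) @ M12.
# Block sizes and numeric values are arbitrary placeholders.
import numpy as np

def upper_lft(M11, M12, M21, M22, Delta):
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.solve(np.eye(n) - M11 @ Delta, M12)

# 2x2 uncertainty block Delta pulled out of a nominal 3-output system (illustrative numbers)
M11 = np.array([[0.1, 0.0], [0.2, 0.1]])
M12 = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.0]])
M21 = np.array([[0.3, 0.0], [0.0, 0.4], [0.1, 0.1]])
M22 = np.eye(3)
Delta = np.diag([0.05, -0.02])   # normalized parameter variations

print(upper_lft(M11, M12, M21, M22, Delta))
```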
NASA Technical Reports Server (NTRS)
Pelletier, R. E.
1984-01-01
A need exists for digitized information pertaining to linear features such as roads, streams, water bodies and agricultural field boundaries as component parts of a data base. For many areas where this data may not yet exist or is in need of updating, these features may be extracted from remotely sensed digital data. This paper examines two approaches for identifying linear features, one utilizing raw data and the other classified data. Each approach uses a series of data enhancement procedures including derivation of standard deviation values, principal component analysis and filtering procedures using a high-pass window matrix. Just as certain bands better classify different land covers, so too do these bands exhibit high spectral contrast by which boundaries between land covers can be delineated. A few applications for this kind of data are briefly discussed, including its potential in a Universal Soil Loss Equation Model.
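As a rough sketch of the window-based enhancement steps mentioned above (local standard deviation and high-pass filtering), the code below applies a 3×3 high-pass kernel and a moving-window standard deviation to a single raster band; the kernel, threshold and data are illustrative choices, not the paper's exact procedure.

```python
# Sketch: high-pass window filtering and local standard deviation for enhancing
# boundaries (candidate linear features) in one image band. Data are illustrative.
import numpy as np
from scipy import ndimage

band = np.random.default_rng(1).integers(0, 255, size=(100, 100)).astype(float)

# 3x3 high-pass (Laplacian-like) window matrix emphasising local contrast boundaries
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)
high_pass = ndimage.convolve(band, kernel, mode="reflect")

# moving-window standard deviation: sqrt(E[x^2] - E[x]^2) over a 3x3 neighbourhood
mean = ndimage.uniform_filter(band, size=3)
mean_sq = ndimage.uniform_filter(band ** 2, size=3)
local_std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))

edges = np.abs(high_pass) > np.percentile(np.abs(high_pass), 95)  # crude boundary mask
print(edges.sum(), "candidate boundary pixels")
```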
Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A
2005-04-15
A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid-level biochemical descriptors are based on the calculation of linear maps on R^n [f_k(x_mi): R^n --> R^n] in the canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on R^n. That is, the kth total linear indices are linear maps from R^n to the scalar R [f_k(x_m): R^n --> R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test sets, respectively. It shows a high Matthews correlation coefficient (MCC = 0.952) for the training set and an MCC = 0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (R_canc = 0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the piecewise linear regression model compared favorably with the linear regression one in predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R = 0.90 and s = 4.29), and the LOO PRESS statistics evidenced its predictive ability (q² = 0.72 and s_cv = 4.79). Moreover, the TOMOCOMD-CAMPS method produced a piecewise linear regression (R = 0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 °C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, the linear discriminant analysis and piecewise models can be used in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such a folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
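As a schematic reading of the descriptor definitions above (not the authors' TOMOCOMD-CAMPS implementation), local linear indices can be pictured as the components of A^k applied to a vector of amino-acid properties, with the total index as their sum; the adjacency matrix and property values below are invented for illustration only.

```python
# Schematic sketch of kth-order local and total linear indices from an alpha-carbon
# pseudograph adjacency matrix A and an amino-acid property vector x.
# The 5-residue chain and property values are made up; this is an interpretation
# of the abstract, not the published algorithm.
import numpy as np

A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # linear chain of 5 residues
x = np.array([1.8, -4.5, 2.5, -0.7, 4.2])       # hypothetical amino-acid property values

for k in range(4):
    local_k = np.linalg.matrix_power(A, k) @ x  # kth local (amino-acid level) indices
    total_k = local_k.sum()                     # kth total (whole-protein) index
    print(k, np.round(local_k, 2), round(total_k, 2))
```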
Electromagnetic Cyclotron Waves in the Solar Wind: Wind Observation and Wave Dispersion Analysis
NASA Technical Reports Server (NTRS)
Jian, L. K.; Moya, P. S.; Vinas, A. F.; Stevens, M.
2016-01-01
Wind observed long-lasting electromagnetic cyclotron waves near the proton cyclotron frequency on 11 March 2005, in the descending part of a fast wind stream. Bi-Maxwellian velocity distributions are fitted for core protons, beam protons, and alpha-particles. Using the fitted plasma parameters we conduct kinetic linear dispersion analysis and find that ion cyclotron and/or firehose instabilities grow in six of 10 wave intervals. After Doppler shift, some of the waves have frequency and polarization consistent with observation and thus may correspond to the cyclotron waves observed.
Electromagnetic cyclotron waves in the solar wind: Wind observation and wave dispersion analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jian, L. K., E-mail: lan.jian@nasa.gov; Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771; Moya, P. S.
2016-03-25
Wind observed long-lasting electromagnetic cyclotron waves near the proton cyclotron frequency on 11 March 2005, in the descending part of a fast wind stream. Bi-Maxwellian velocity distributions are fitted for core protons, beam protons, and α-particles. Using the fitted plasma parameters we conduct kinetic linear dispersion analysis and find that ion cyclotron and/or firehose instabilities grow in six of 10 wave intervals. After Doppler shift, some of the waves have frequency and polarization consistent with observation and thus may correspond to the cyclotron waves observed.
A method for evaluating dynamical friction in linear ball bearings.
Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P; Mitatha, Somsak
2010-01-01
A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in their internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the accelerations of two different points on it are measured using a dual-axis optical interferometer.
Supporting Marine Corps Enhanced Company Operations: A Quantitative Analysis
2010-06-01
by decomposition into simple independent parts. Agents interact with each other in non-linear ways and "adapt" to their local environment. SUMMARY: The modern irregular warfare environment has dramatically impacted the battle space assignments and mission scope of tactical units that now
Li, Jie; Na, Lixin; Ma, Hao; Zhang, Zhe; Li, Tianjiao; Lin, Liqun; Li, Qiang; Sun, Changhao; Li, Ying
2015-01-01
The effects of prenatal nutrition on adult cognitive function have been reported for one generation. However, human evidence for multigenerational effects is lacking. We examined whether prenatal exposure to the Chinese famine of 1959–61 affects adult cognitive function in two consecutive generations. In this retrospective family cohort study, we investigated 1062 families consisting of 2124 parents and 1215 offspring. We assessed parental and offspring cognitive performance by means of a comprehensive test battery. Generalized linear regression model analysis in the parental generation showed that prenatal exposure to famine was associated with an 8.1 (95% CI 5.8 to 10.4) second increase in trail making test part A, a 7.0 (1.5 to 12.5) second increase in trail making test part B, and a 5.5 (−7.3 to −3.7) score decrease in the Stroop color-word test in adulthood, after adjustment for potential confounders. In the offspring generation, linear mixed model analysis found no significant association between parental prenatal exposure to famine and offspring cognitive function in adulthood after adjustment for potential confounders. In conclusion, prenatal exposure to severe malnutrition is negatively associated with visual-motor skill, mental flexibility, and selective attention in adulthood. However, these associations are limited to only one generation. PMID:26333696
Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P
2016-01-01
Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.
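A minimal sketch of the monitoring idea, assuming the interaction energies are available as a noisy time series: fit a smoothing spline and flag frames where the smoothed signal jumps unusually sharply. The synthetic trajectory, smoothing factor and selection threshold are placeholders, not the authors' filtering criteria.

```python
# Sketch: smooth a noisy interaction-energy time series with a spline and flag
# putative transitions between parts of phase space as unusually large jumps in
# the smoothed signal. The synthetic data and threshold are illustrative.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
frames = np.arange(500.0)
energy = np.where(frames < 300, -45.0, -30.0) + rng.normal(0, 3.0, frames.size)

spline = UnivariateSpline(frames, energy, s=frames.size * 9.0)  # s ~ N * noise variance
smooth = spline(frames)

jumps = np.abs(np.diff(smooth))
suspect = np.where(jumps > 5.0 * np.median(jumps))[0]           # arbitrary selection criterion
print("largest jump near frame", int(np.argmax(jumps)))
print("frames flagged as possible transitions:", suspect[:10])
```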
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida
2012-01-01
Malaria is one of the most serious global health problems, causing widespread suffering and deaths in various parts of the world. With the large number of cases diagnosed over the years, early detection and accurate diagnosis, which facilitates prompt treatment, is an essential requirement to control malaria. For centuries now, manual microscopic examination of blood slides has remained the gold standard for malaria diagnosis. However, low contrast of the malaria images and variable smear quality are some factors that may influence the accuracy of interpretation by microbiologists. In order to reduce this problem, this paper aims to investigate the performance of the proposed contrast enhancement techniques, namely modified global and modified linear contrast stretching, as well as the conventional global and linear contrast stretching, applied to malaria images of the P. vivax species. The results show that the proposed modified global and modified linear contrast stretching techniques have successfully increased the contrast of the parasites and the infected red blood cells compared to the conventional global and linear contrast stretching. Hence, the resultant images would become useful to microbiologists for identification of the various stages and species of malaria.
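For reference, conventional linear contrast stretching maps a chosen input intensity range onto the full display range; a minimal sketch is shown below. The percentile-based clipping limits are an assumption, and the "modified" variants proposed in the paper are not reproduced here.

```python
# Sketch of conventional linear contrast stretching of a grayscale image.
# Percentile-based clipping limits are an illustrative choice.
import numpy as np

def linear_stretch(img, lo_pct=1.0, hi_pct=99.0):
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

img = np.random.default_rng(3).integers(90, 160, size=(64, 64)).astype(np.uint8)  # low-contrast stand-in
out = linear_stretch(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```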
NASA Astrophysics Data System (ADS)
Philip, Jimmy; Karp, Michael; Cohen, Jacob
2016-01-01
Streaks and hairpin-vortices are experimentally generated in a laminar plane Poiseuille crossflow by injecting a continuous jet through a streamwise slot normal to the crossflow, with air as the working medium. Small disturbances form stable streaks; however, stronger disturbances cause the formation of streaks which undergo instability, leading to the generation of hairpin vortices. Particular emphasis is placed on the flow conditions close to the generation of hairpin-vortices. Measurements are carried out for natural and phase-locked disturbances employing smoke visualisation, particle image velocimetry, and hot-wire anemometry, which include the dominant frequency, wavelength, and the disturbance shape (or eigenfunctions) associated with the coherent part of the velocity field. A linear stability analysis for both one- and two-dimensional base-flows is carried out to understand the mechanism of instability, and good agreement of wavelength and eigenfunctions is obtained when compared to the experimental data, with a slight under-prediction of the growth rates by the linear stability analysis, consistent with the final nonlinear stages in transitional flows. Furthermore, an energy analysis for both the temporal and spatial stability analyses reveals the dominance of the symmetric varicose mode, again in agreement with the experiments, which is found to be governed by the balance of the wall-normal shear and dissipative effects rather than the spanwise shear. In all cases the anti-symmetric sinuous modes governed by the spanwise shear are found to be damped both in the analysis and in our experiments.
From Arithmetic Sequences to Linear Equations
ERIC Educational Resources Information Center
Matsuura, Ryota; Harless, Patrick
2012-01-01
The first part of the article focuses on deriving the essential properties of arithmetic sequences by appealing to students' sense making and reasoning. The second part describes how to guide students to translate their knowledge of arithmetic sequences into an understanding of linear equations. Ryota Matsuura originally wrote these lessons for…
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
ERIC Educational Resources Information Center
Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris
2017-01-01
As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…
Economic analysis and assessment of syngas production using a modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hakkwan; Parajuli, Prem B.; Yu, Fei
Economic analysis and modeling are essential and important issues for the development of current feedstock and process technology for bio-gasification. The objective of this study was to develop an economic model and apply it to predict the unit cost of syngas production from a micro-scale bio-gasification facility. An economic model was programmed in the C++ computer programming language and developed using a parametric cost approach, which included processes to calculate the total capital costs and the total operating costs. The model used measured economic data from the bio-gasification facility at Mississippi State University. The modeling results showed that the unit cost of syngas production was $1.217 for a 60 Nm³ h⁻¹ capacity bio-gasifier. The operating cost was the major part of the total production cost. The equipment purchase cost and the labor cost were the largest parts of the total capital cost and the total operating cost, respectively. Sensitivity analysis indicated that labor cost ranks at the top, followed by equipment cost, loan life, feedstock cost, interest rate, utility cost, and waste treatment cost. The unit cost of syngas production increased with the increase of all parameters with the exception of loan life. The annual costs of equipment, labor, feedstock, waste treatment, and utilities showed a linear relationship with percent changes, while loan life and annual interest rate showed a non-linear relationship. This study provides useful information for economic analysis and assessment of syngas production using a modeling approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, T.; et al.
This Resource Book reviews the physics opportunities of a next-generation e+e- linear collider and discusses options for the experimental program. Part 3 reviews the possible experiments that can be done at a linear collider on strongly coupled electroweak symmetry breaking, exotic particles, and extra dimensions, and on the top quark, QCD, and two-photon physics. It also discusses the improved precision electroweak measurements that this collider will make available.
Reges, José E. O.; Salazar, A. O.; Maitelli, Carla W. S. P.; Carvalho, Lucas G.; Britto, Ursula J. B.
2016-01-01
This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved. PMID:27420068
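The linearized uncertainty analysis mentioned above follows the standard first-order propagation rule u_y² = Σ (∂f/∂x_i)² u_xi². The sketch below applies it to a generic exponential temperature-decay model with made-up parameter values; this is a stand-in, not the exact Ramey model or the paper's numbers.

```python
# Sketch: first-order (linearized) uncertainty propagation through a generic
# exponential temperature model T(z) = T_res + (T_in - T_res) * exp(-z / L).
# All parameter values and uncertainties are invented placeholders.
import numpy as np

def model(T_in, T_res, L, z=500.0):
    return T_res + (T_in - T_res) * np.exp(-z / L)

params = {"T_in": 30.0, "T_res": 60.0, "L": 800.0}   # hypothetical values
uncert = {"T_in": 0.5, "T_res": 1.0, "L": 40.0}      # hypothetical standard uncertainties

y0 = model(**params)
var = 0.0
for name, u in uncert.items():
    p = dict(params)
    p[name] += 1e-6 * max(abs(params[name]), 1.0)    # small step for a numerical derivative
    dfdx = (model(**p) - y0) / (p[name] - params[name])
    var += (dfdx * u) ** 2

print(f"T = {y0:.2f} +/- {np.sqrt(var):.2f} (first-order propagation)")
```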
A nonlinear H-infinity approach to optimal control of the depth of anaesthesia
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Rigatou, Efthymia; Zervos, Nikolaos
2016-12-01
Controlling the level of anaesthesia is important for improving the success rate of surgeries and for reducing the risks to which operated patients are exposed. This paper proposes a nonlinear H-infinity approach to optimal control of the level of anaesthesia. The dynamic model of the anaesthesia, which describes the concentration of the anaesthetic drug in different parts of the body, is subjected to linearization at local operating points. These are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and of the last control input that was exerted on it. For this linearization Taylor series expansion is performed and the system's Jacobian matrices are computed. For the linearized model an H-infinity controller is designed. The feedback control gains are found by solving at each iteration of the control algorithm an algebraic Riccati equation. The modelling errors due to this approximate linearization are considered as disturbances which are compensated by the robustness of the control loop. The stability of the control loop is confirmed through Lyapunov analysis.
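To make the per-iteration linear-control step concrete, the sketch below takes a toy linearized two-compartment drug model and solves an algebraic Riccati equation for a state-feedback gain. An LQR-type Riccati equation is used here as a simplified stand-in for the paper's H-infinity Riccati equation, and the model matrices and weights are invented.

```python
# Sketch: state feedback from an algebraic Riccati equation for a linearized
# two-compartment model x' = A x + B u. LQR-type Riccati is a simplified stand-in
# for the H-infinity design; A, B, Q, R are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-0.5,  0.2],
              [ 0.1, -0.3]])      # Jacobian of the drug dynamics at the operating point (assumed)
B = np.array([[1.0],
              [0.0]])             # infusion enters the central compartment (assumed)
Q = np.diag([10.0, 1.0])          # state weighting
R = np.array([[0.1]])             # control effort weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # feedback gain, u = -K x
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```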
Evans function computation for the stability of travelling waves
NASA Astrophysics Data System (ADS)
Barker, B.; Humpherys, J.; Lyng, G.; Lytle, J.
2018-04-01
In recent years, the Evans function has become an important tool for the determination of stability of travelling waves. This function, a Wronskian of decaying solutions of the eigenvalue equation, is useful both analytically and computationally for the spectral analysis of the linearized operator about the wave. In particular, Evans-function computation allows one to locate any unstable eigenvalues of the linear operator (if they exist); this allows one to establish spectral stability of a given wave and identify bifurcation points (loss of stability) as model parameters vary. In this paper, we review computational aspects of the Evans function and apply it to multidimensional detonation waves. This article is part of the theme issue `Stability of nonlinear waves and patterns and related topics'.
NASA Astrophysics Data System (ADS)
Ege, Kerem; Boutillon, Xavier; Rébillat, Marc
2013-03-01
The piano soundboard transforms the string vibration into sound and therefore, its vibrations are of primary importance for the sound characteristics of the instrument. An original vibro-acoustical method is presented to isolate the soundboard nonlinearity from that of the exciting device (here: a loudspeaker) and to measure it. The nonlinear part of the soundboard response to an external excitation is quantitatively estimated for the first time, at ≈-40 dB below the linear part at the ff nuance. Given this essentially linear response, a modal identification is performed up to 3 kHz by means of a novel high resolution modal analysis technique [K. Ege, X. Boutillon, B. David, High-resolution modal analysis, Journal of Sound and Vibration 325 (4-5) (2009) 852-869]. Modal dampings (which, so far, were unknown for the piano in this frequency range) are determined in the mid-frequency domain where FFT-based methods fail to evaluate them with an acceptable precision. They turn out to be close to those imposed by wood. A finite-element modelling of the soundboard is also presented. The low-order modal shapes and the comparison between the corresponding experimental and numerical modal frequencies suggest that the boundary conditions can be considered as blocked, except at very low frequencies. The frequency-dependency of the estimated modal densities and the observation of modal shapes reveal two well-separated regimes. Below ≈1 kHz, the soundboard vibrates more or less like a homogeneous plate. Above that limit, the structural waves are confined by ribs, as already noticed by several authors, and localised in restricted areas (one or a few inter-rib spaces), presumably due to a slightly irregular spacing of the ribs across the soundboard.
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.; Dercas, N.; Londra, P. A.
2009-01-01
The Soil Conservation Service Curve Number (SCS-CN) method is widely used for predicting direct runoff volume for a given rainfall event. The applicability of the SCS-CN method and the runoff generation mechanism were thoroughly analysed in a Mediterranean experimental watershed in Greece. The region is characterized by a Mediterranean semi-arid climate. A detailed land cover and soil survey using remote sensing and GIS techniques, showed that the watershed is dominated by coarse soils with high hydraulic conductivities, whereas a smaller part is covered with medium textured soils and impervious surfaces. The analysis indicated that the SCS-CN method fails to predict runoff for the storm events studied, and that there is a strong correlation between the CN values obtained from measured runoff and the rainfall depth. The hypothesis that this correlation could be attributed to the existence of an impermeable part in a very permeable watershed was examined in depth, by developing a numerical simulation water flow model for predicting surface runoff generated from each of the three soil types of the watershed. Numerical runs were performed using the HYDRUS-1D code. The results support the validity of this hypothesis for most of the events examined where the linear runoff formula provides better results than the SCS-CN method. The runoff coefficient of this formula can be taken equal to the percentage of the impervious area. However, the linear formula should be applied with caution in case of extreme events with very high rainfall intensities. In this case, the medium textured soils may significantly contribute to the total runoff and the linear formula may significantly underestimate the runoff produced.
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.; Dercas, N.; Londra, P. A.
2009-05-01
The Soil Conservation Service Curve Number (SCS-CN) method is widely used for predicting direct runoff volume for a given rainfall event. The applicability of the SCS-CN method and the direct runoff generation mechanism were thoroughly analysed in a Mediterranean experimental watershed in Greece. The region is characterized by a Mediterranean semi-arid climate. A detailed land cover and soil survey using remote sensing and GIS techniques, showed that the watershed is dominated by coarse soils with high hydraulic conductivities, whereas a smaller part is covered with medium textured soils and impervious surfaces. The analysis indicated that the SCS-CN method fails to predict runoff for the storm events studied, and that there is a strong correlation between the CN values obtained from measured runoff and the rainfall depth. The hypothesis that this correlation could be attributed to the existence of an impermeable part in a very permeable watershed was examined in depth, by developing a numerical simulation water flow model for predicting surface runoff generated from each of the three soil types of the watershed. Numerical runs were performed using the HYDRUS-1D code. The results support the validity of this hypothesis for most of the events examined where the linear runoff formula provides better results than the SCS-CN method. The runoff coefficient of this formula can be taken equal to the percentage of the impervious area. However, the linear formula should be applied with caution in case of extreme events with very high rainfall intensities. In this case, the medium textured soils may significantly contribute to the total runoff and the linear formula may significantly underestimate the runoff produced.
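For orientation, the standard SCS-CN runoff relation and the simple linear runoff formula discussed in the two records above can be compared as follows; the curve number, runoff coefficient (impervious fraction) and rainfall depths are illustrative values, not the watershed's calibrated parameters.

```python
# Sketch: SCS-CN direct runoff vs. a linear runoff formula Q = c * P (depths in mm).
# CN, impervious fraction c and rainfall depths are illustrative values only.
import numpy as np

def scs_cn_runoff(P_mm, CN, lam=0.2):
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = lam * S                      # initial abstraction
    return np.where(P_mm > Ia, (P_mm - Ia) ** 2 / (P_mm - Ia + S), 0.0)

P = np.array([10.0, 25.0, 50.0, 100.0])   # event rainfall depths (mm)
CN = 70.0                                  # hypothetical curve number
c = 0.10                                   # runoff coefficient = impervious area fraction (assumed)

print("SCS-CN :", np.round(scs_cn_runoff(P, CN), 2))
print("linear :", np.round(c * P, 2))
```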
Recurrent jellyfish blooms are a consequence of global oscillations.
Condon, Robert H; Duarte, Carlos M; Pitt, Kylie A; Robinson, Kelly L; Lucas, Cathy H; Sutherland, Kelly R; Mianzan, Hermes W; Bogeberg, Molly; Purcell, Jennifer E; Decker, Mary Beth; Uye, Shin-ichi; Madin, Laurence P; Brodeur, Richard D; Haddock, Steven H D; Malej, Alenka; Parry, Gregory D; Eriksen, Elena; Quiñones, Javier; Acha, Marcelo; Harvey, Michel; Arthur, James M; Graham, William M
2013-01-15
A perceived recent increase in global jellyfish abundance has been portrayed as a symptom of degraded oceans. This perception is based primarily on a few case studies and anecdotal evidence, but a formal analysis of global temporal trends in jellyfish populations has been missing. Here, we analyze all available long-term datasets on changes in jellyfish abundance across multiple coastal stations, using linear and logistic mixed models and effect-size analysis to show that there is no robust evidence for a global increase in jellyfish. Although there has been a small linear increase in jellyfish since the 1970s, this trend was unsubstantiated by effect-size analysis that showed no difference in the proportion of increasing vs. decreasing jellyfish populations over all time periods examined. Rather, the strongest nonrandom trend indicated jellyfish populations undergo larger, worldwide oscillations with an approximate 20-y periodicity, including a rising phase during the 1990s that contributed to the perception of a global increase in jellyfish abundance. Sustained monitoring is required over the next decade to elucidate with statistical confidence whether the weak increasing linear trend in jellyfish after 1970 is an actual shift in the baseline or part of an oscillation. Irrespective of the nature of increase, given the potential damage posed by jellyfish blooms to fisheries, tourism, and other human industries, our findings foretell recurrent phases of rise and fall in jellyfish populations that society should be prepared to face.
The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation
NASA Astrophysics Data System (ADS)
Lian, Tao
2016-04-01
In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of local multi-scale internal variation. One can thus use the record of a specified period to arbitrarily determine the value and the sign of the long-term linear trend in regional SST, leading to controversial conclusions on how global SST responds to global warming in recent history. Analyzing the linear trend coefficient estimated by the ordinary least-squares method indicates that the linear trend consists of two parts: one related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold which scales the influence from the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation or, in other words, on the period being used. An improved least-squares method is then proposed to reduce the theoretical threshold. When applying the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of the Pacific, the southern Indian Ocean and the North Atlantic, the influence of the multi-scale internal variation on the sign of the linear trend coefficient cannot be excluded. Therefore, the resulting warming and/or cooling linear trends in these regions cannot be fully attributed to global warming.
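A minimal sketch of the point being made: an ordinary least-squares trend fitted to a short window of a synthetic series (a weak long-term trend plus a strong ~20-year oscillation) can change sign depending on the window chosen. The amplitudes, periods and windows are invented.

```python
# Sketch: OLS linear trend of a synthetic SST-like series = weak long-term trend
# + strong multidecadal oscillation + noise. Amplitudes and periods are invented.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1881, 2014)
sst = (0.004 * (years - years[0])                            # weak long-term change (deg C / yr)
       + 0.3 * np.sin(2 * np.pi * (years - 1881) / 20.0)     # ~20-yr internal oscillation
       + rng.normal(0, 0.1, years.size))

def ols_trend(y, x):
    return np.polyfit(x, y, 1)[0]                            # slope in deg C per year

for lo, hi in [(1881, 2013), (1900, 1920), (1990, 2010)]:
    mask = (years >= lo) & (years <= hi)
    print(f"{lo}-{hi} trend: {ols_trend(sst[mask], years[mask]):+.4f} deg C/yr")
```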
Update on Chemical Analysis of Recovered Hydrazine Family Fuels for Recycling
NASA Technical Reports Server (NTRS)
Davis, C. L.
1997-01-01
The National Aeronautics and Space Administration, Kennedy Space Center, has developed a program to re-use and/or recycle hypergolic propellants recovered from propellant systems. As part of this effort, new techniques were developed to analyze recovered propellants. At the 1996 PDCS, the paper 'Chemical Analysis of Recovered Hydrazine Family Fuels For Recycling' presented analytical techniques used in accordance with KSC specifications which define what recovered propellants are acceptable for recycling. This paper is a follow up to the 1996 paper. Lower detection limits and response linearity were examined for two gas chromatograph methods.
A Comparison Study between a Traditional and Experimental Program.
ERIC Educational Resources Information Center
Dogan, Hamide
This paper is part of a dissertation defended in January 2001 as part of the author's Ph.D. requirement. The study investigated the effects of use of Mathematica, a computer algebra system, in learning basic linear algebra concepts, It was done by means of comparing two first year linear algebra classes, one traditional and one Mathematica…
Method for measuring the contour of a machined part
Bieg, L.F.
1995-05-30
A method is disclosed for measuring the contour of a machined part with a contour gage apparatus, having a probe assembly including a probe tip for providing a measure of linear displacement of the tip on the surface of the part. The contour gage apparatus may be moved into and out of position for measuring the part while the part is still carried on the machining apparatus. Relative positions between the part and the probe tip may be changed, and a scanning operation is performed on the machined part by sweeping the part with the probe tip, whereby data points representing linear positions of the probe tip at prescribed rotation intervals in the position changes between the part and the probe tip are recorded. The method further allows real-time adjustment of the apparatus machining the part, including real-time adjustment of the machining apparatus in response to wear of the tool that occurs during machining. 5 figs.
Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary
2016-07-12
Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.
Method for measuring the contour of a machined part
Bieg, Lothar F.
1995-05-30
A method for measuring the contour of a machined part with a contour gage apparatus, having a probe assembly including a probe tip for providing a measure of linear displacement of the tip on the surface of the part. The contour gage apparatus may be moved into and out of position for measuring the part while the part is still carried on the machining apparatus. Relative positions between the part and the probe tip may be changed, and a scanning operation is performed on the machined part by sweeping the part with the probe tip, whereby data points representing linear positions of the probe tip at prescribed rotation intervals in the position changes between the part and the probe tip are recorded. The method further allows real-time adjustment of the apparatus machining the part, including real-time adjustment of the machining apparatus in response to wear of the tool that occurs during machining.
NASA Technical Reports Server (NTRS)
Schlaefke, Karlhans
1954-01-01
This paper, which is presented in three parts, is an analytical study of the behavior of landing gear shock struts, with various types of assumptions for the shock-strut characteristics. The effects of tire springing are neglected. The first part compares the behavior of struts with linear and quadratic damping. The second part considers struts with nonlinear spring characteristics and linear or quadratic damping. The third part treats the oleo-pneumatic strut with air-compression springing without damping and with damping proportional to velocity. It is indicated how the damping factor can be determined by experiment.
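To illustrate the distinction between the first two cases studied (linear versus quadratic damping on a linear spring), the sketch below integrates two single-degree-of-freedom strut models numerically; the mass, stiffness, damping constants and touchdown velocity are arbitrary illustrative values, not taken from the report.

```python
# Sketch: one-DOF shock-strut models, m*x'' + damping + k*x = 0, with either linear
# or quadratic (velocity-squared) damping, starting from a touchdown velocity.
# Parameter values are arbitrary illustrations.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 100.0, 2.0e4            # mass (kg), spring rate (N/m)
c_lin, c_quad = 1.5e3, 4.0e3   # linear (N s/m) and quadratic (N s^2/m^2) damping constants

def strut(t, y, quadratic):
    x, v = y
    damping = c_quad * v * abs(v) if quadratic else c_lin * v
    return [v, -(damping + k * x) / m]

y0 = [0.0, -3.0]               # touchdown: zero stroke, 3 m/s closure velocity
t_eval = np.linspace(0.0, 1.0, 200)
for quad in (False, True):
    sol = solve_ivp(strut, (0.0, 1.0), y0, args=(quad,), t_eval=t_eval, rtol=1e-8)
    label = "quadratic" if quad else "linear   "
    print(label, "max stroke:", round(-sol.y[0].min(), 4), "m")
```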
Optimal inventories for overhaul of repairable redundant systems - A Markov decision model
NASA Technical Reports Server (NTRS)
Schaefer, M. K.
1984-01-01
A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
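The linear-programming step can be sketched for a toy discounted Markov decision problem (two spare-part states, two actions). The transition probabilities, costs and discount factor are invented, and the classical LP formulation of a discounted MDP is used rather than the paper's specific model.

```python
# Sketch: solve a tiny discounted MDP by linear programming.
# minimize sum_s V(s)  subject to  V(s) >= R(s,a) + gamma * sum_s' P(s'|s,a) V(s').
# States, actions, rewards and transitions are invented placeholders.
import numpy as np
from scipy.optimize import linprog

gamma = 0.95
# P[s, a, s'] and R[s, a]; s=0: spare available, s=1: stockout; a=0: hold, a=1: order
P = np.array([[[0.9, 0.1], [0.95, 0.05]],
              [[0.3, 0.7], [0.80, 0.20]]])
R = np.array([[-1.0, -3.0],     # holding cost vs. holding + ordering cost
              [-20.0, -8.0]])   # shortage cost vs. shortage + expedited order

n_s, n_a = R.shape
A_ub, b_ub = [], []
for s in range(n_s):
    for a in range(n_a):
        row = gamma * P[s, a].copy()
        row[s] -= 1.0                      # gamma*P*V - V <= -R  <=>  V >= R + gamma*P*V
        A_ub.append(row)
        b_ub.append(-R[s, a])

res = linprog(c=np.ones(n_s), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_s, method="highs")
print("optimal values V(s):", np.round(res.x, 3))
```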
NASA Astrophysics Data System (ADS)
Schnelle-Kreis, Jürgen; Sklorz, Martin; Peters, Anette; Cyrys, Josef; Zimmermann, Ralf
PM2.5 particle-associated semi-volatile organic compounds (SVOC) were determined in the city of Augsburg, Germany. Daily samples were collected at a central monitoring station from late summer to late autumn 2002. The concentrations of polycyclic aromatic hydrocarbons (PAH), oxidized PAH (O-PAH), n-alkanes, hopanes and long-chain linear alkylbenzenes were determined by direct thermal desorption-gas chromatography-time of flight mass spectrometry (DTD-GC-TOFMS). Additionally, PM2.5 particle mass and number concentrations were measured. The sampling campaign can be divided into two parts, distinguished by a lower temperature level in the second part of the campaign. The particulate mass concentration showed no significant changes, whereas most of the SVOC had significantly higher mean and peak concentrations in the colder period. The analysis of the data showed an increased influence of non-traffic sources in the colder period, reflected by a weak shift in the PAH profile and a significant shift in the hopane pattern. Statistical analysis of the inter-group correlations was carried out. Eight clusters, partly representing different sources of the aerosol, have been identified.
Roy, Banibrata; Ripstein, Ira; Perry, Kyle; Cohen, Barry
2016-01-01
To determine whether the pre-medical Grade Point Average (GPA), Medical College Admission Test (MCAT), Internal examinations (Block) and National Board of Medical Examiners (NBME) scores are correlated with and predict the Medical Council of Canada Qualifying Examination Part I (MCCQE-1) scores. Data from 392 admitted students in the graduating classes of 2010-2013 at University of Manitoba (UofM), College of Medicine was considered. Pearson's correlation to assess the strength of the relationship, multiple linear regression to estimate MCCQE-1 score and stepwise linear regression to investigate the amount of variance were employed. Complete data from 367 (94%) students were studied. The MCCQE-1 had a moderate-to-large positive correlation with NBME scores and Block scores but a low correlation with GPA and MCAT scores. The multiple linear regression model gives a good estimate of the MCCQE-1 (R2 =0.604). Stepwise regression analysis demonstrated that 59.2% of the variation in the MCCQE-1 was accounted for by the NBME, but only 1.9% by the Block exams, and negligible variation came from the GPA and the MCAT. Amongst all the examinations used at UofM, the NBME is most closely correlated with MCCQE-1.
Improving Machining Accuracy of CNC Machines with Innovative Design Methods
NASA Astrophysics Data System (ADS)
Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.
2018-03-01
The article considers achieving the machining accuracy of CNC machines by applying innovative methods in modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has visual clarity, which is inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to reduce considerably the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0–6000 min⁻¹ and improve machining accuracy.
Strength computation of forged parts taking into account strain hardening and damage
NASA Astrophysics Data System (ADS)
Cristescu, Michel L.
2004-06-01
Modern non-linear simulation software, such as FORGE 3 (registered trademark of TRANSVALOR), is able to compute the residual stresses, the strain hardening and the damage during the forging process. A thermally dependent elasto-visco-plastic law is used to simulate the behavior of the material of the hot-forged piece. A modified Lemaitre law coupled with elasticity, plasticity and thermal effects is used to simulate the damage. After the simulation of the different steps of the forging process, the part is cooled and then virtually machined in order to obtain the finished part. An elastic computation is then performed to equilibrate the residual stresses, so that we obtain the true geometry of the finished part after machining. The response of the part to the loadings it will sustain during its life is then computed, taking into account the residual stresses, the strain hardening and the damage that occur during forging. This process is illustrated by the forging, virtual machining and stress analysis of an aluminium wheel hub.
Katkov, Igor I
2011-06-01
The Boyle-van't Hoff (BVH) law of physics has been widely used in cryobiology for calculation of the key osmotic parameters of cells and optimization of cryo-protocols. The proper use of linearization of the Boyle-van't Hoff relationship for the osmotically inactive volume (v(b)) has been discussed in a rigorous way in (Katkov, Cryobiology, 2008, 57:142-149). Nevertheless, scientists in the field have continued to use inappropriate methods of linearization (and curve fitting) of the BVH data, plotting of the BVH line and calculation of v(b). Here, we discuss the sources of incorrect linearization of the BVH relationship using concrete examples from recent publications, analyze the properties of the correct BVH line (which is unique for a given v(b)), provide appropriate statistical formulas for calculation of v(b) from the experimental data, and propose simple instructions (standard operating procedure, SOP) for proper normalization of the data, appropriate linearization and construction of the BVH plots, and correct calculation of v(b). The possible sources of non-linear behavior or poor fit of the data to the proper BVH line, such as active water and/or solute transport, which can result in a large discrepancy between the hyperosmotic and hypoosmotic parts of the BVH plot, are also discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
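A minimal sketch of the standard linearization, assuming equilibrium volumes normalized to the isotonic volume are regressed against inverse normalized osmolality so that the intercept estimates v_b; the data points below are synthetic and do not reproduce the paper's recommended statistics.

```python
# Sketch: Boyle-van't Hoff plot. Normalized equilibrium cell volume v is regressed
# against inverse normalized osmolality (pi_iso / pi); the intercept estimates the
# osmotically inactive fraction v_b. Data are synthetic with a true v_b of 0.3.
import numpy as np

pi_ratio = np.array([0.5, 0.75, 1.0, 1.5, 2.0])           # pi_iso / pi
v_true = 0.3 + (1 - 0.3) * pi_ratio                       # BVH relation with v_b = 0.3
rng = np.random.default_rng(5)
v_meas = v_true + rng.normal(0, 0.02, pi_ratio.size)      # measurement noise

slope, intercept = np.polyfit(pi_ratio, v_meas, 1)
print(f"estimated v_b = {intercept:.3f} (slope = {slope:.3f}, expected ~ 1 - v_b)")
```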
NASA Astrophysics Data System (ADS)
Chen, Jiangwei; Dai, Yuyao; Yan, Lin; Zhao, Huimin
2018-04-01
In this paper, we demonstrate theoretically that a steady bound electromagnetic eigenstate can arise in an infinite homogeneous isotropic linear metamaterial with a zero real part of the impedance and a nonzero imaginary part of the wave vector, which is partly attributed to the fact that, here, the nonzero imaginary part of the wave vector does not involve energy losses or gain. Altering the value of the real part of the impedance of the metamaterial, the bound electromagnetic eigenstate may become a progressive wave. Our work may be useful for further understanding the energy conversion and conservation properties of electromagnetic waves in dispersive and absorptive media, and it provides a feasible route to stop, store and release electromagnetic waves (light) conveniently by using a metamaterial with a near-zero real part of the impedance.
Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.
Rowe, Michael H; Neiman, Alexander B
2012-01-24
We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first order statistics; 2) a regular discharge, as measured by normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal to noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate and fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems. This article is part of a Special Issue entitled: Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.
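As a minimal stand-in for the leaky integrate-and-fire model mentioned in point 5 (parameters and stimulus invented, not fitted to posterior canal afferent data), the sketch below drives an LIF neuron with a noisy sinusoidal current and reports firing rate and discharge regularity (ISI coefficient of variation).

```python
# Sketch: leaky integrate-and-fire neuron driven by a noisy sinusoidal current.
# Membrane parameters and stimulus values are illustrative assumptions.
import numpy as np

dt, T = 1e-4, 5.0                          # time step and duration (s)
tau_m, v_th, v_reset = 0.01, 1.0, 0.0      # membrane time constant (s), threshold, reset
sigma = 0.2                                # intrinsic noise amplitude
rng = np.random.default_rng(6)

t = np.arange(0.0, T, dt)
drive = 1.2 + 0.3 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz sinusoidal "stimulus"

v, spike_times = 0.0, []
for i in range(t.size):
    v += (-v + drive[i]) / tau_m * dt + sigma * np.sqrt(dt / tau_m) * rng.normal()
    if v >= v_th:                          # threshold crossing: emit spike and reset
        spike_times.append(t[i])
        v = v_reset

isi = np.diff(spike_times)
print(f"mean rate {len(spike_times) / T:.1f} spikes/s, ISI CV {isi.std() / isi.mean():.2f}")
```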
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of the strain and to project the element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
Linear model for fast background subtraction in oligonucleotide microarrays.
Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico
2009-11-16
One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
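The key computational point, that a cost function quadratic in the fitting parameters reduces background estimation to a single linear solve, can be sketched as an ordinary least-squares problem. The design matrix below (an intercept, a neighbor-intensity term and a toy sequence score) is a simplified stand-in for the published model, and the data are synthetic.

```python
# Sketch: background model that is linear in its parameters, fitted by one
# linear-algebra solve (least squares). Regressors are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
neighbor_mean = rng.normal(7.0, 1.0, n)       # log-intensity of neighboring features
seq_score = rng.normal(0.0, 1.0, n)           # toy sequence-dependent affinity term
true_bg = 0.5 + 0.6 * neighbor_mean + 0.3 * seq_score
observed = true_bg + rng.normal(0, 0.2, n)    # noisy background observations

X = np.column_stack([np.ones(n), neighbor_mean, seq_score])
params, *_ = np.linalg.lstsq(X, observed, rcond=None)
print("fitted parameters:", np.round(params, 3))   # expected near [0.5, 0.6, 0.3]
```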
NASA Astrophysics Data System (ADS)
Różyło, Patryk; Debski, Hubert; Kral, Jan
2018-01-01
The subject of the research was a short thin-walled top-hat cross-section composite profile. The tested structure was subjected to axial compression. As part of the critical-state research, the critical load and the corresponding buckling mode were determined. Later in the study, laminate damage areas were determined through numerical analysis. It was assumed that the profile is simply supported at the ends of its cross-section. Experimental tests were carried out on a universal testing machine Zwick Z100 and the results were compared with the results of numerical calculations. The eigenvalue problem and a non-linear problem of stability of thin-walled structures were solved using the commercial software ABAQUS®. In the presented cases, it was assumed that the material is linear-elastic and the non-linearity of the model results from the large displacements. The solution of the geometrically nonlinear problem was obtained by the incremental-iterative Newton-Raphson method.
Missing-value estimation using linear and non-linear regression with Bayesian gene selection.
Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R
2003-11-22
Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).
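The QR-based parameter estimation mentioned for fast implementation can be sketched as follows: decompose the design matrix of selected predictor genes and back-substitute, avoiding explicit normal equations. The data and dimensions are synthetic placeholders.

```python
# Sketch: least-squares regression coefficients via QR decomposition,
# X = Q R  =>  beta solves R beta = Q^T y by back-substitution. Data are synthetic.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(8)
n_arrays, n_genes = 40, 5                   # selected predictor genes for one missing value
X = rng.normal(size=(n_arrays, n_genes))
beta_true = np.array([1.0, -0.5, 0.0, 2.0, 0.3])
y = X @ beta_true + rng.normal(0, 0.1, n_arrays)

Q, R = np.linalg.qr(X)                      # reduced (economic) QR factorization
beta = solve_triangular(R, Q.T @ y)         # back-substitution on upper-triangular R
print("estimated beta:", np.round(beta, 3))
```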
Modeling lateral geniculate nucleus response with contrast gain control. Part 2: Analysis
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E.
2014-01-01
Cope, Blakeslee and McCourt (2013) proposed a class of models for LGN ON-cell behavior consisting of a linear response with divisive normalization by local stimulus contrast. Here we analyze a specific model with the linear response defined by a difference-of-Gaussians filter and a circular Gaussian for the gain pool weighting function. For sinusoidal grating stimuli, the parameter region for band-pass behavior of the linear response is determined, the gain control response is shown to act as a switch (changing from “off” to “on” with increasing spatial frequency), and it is shown that large gain pools stabilize the optimal spatial frequency of the total nonlinear response at a fixed value independent of contrast and stimulus magnitude. Under- and super-saturation as well as contrast saturation occur as typical effects of stimulus magnitude. For circular spot stimuli, it is shown that large gain pools stabilize the spot size that yields the maximum response. PMID:24562034
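A minimal one-dimensional sketch of the model class analyzed (a linear difference-of-Gaussians response divided by a local contrast signal from a Gaussian gain pool); the filter widths, gain-pool width and semi-saturation constant are invented values, not the paper's parameters.

```python
# Sketch: 1-D difference-of-Gaussians linear response to a sinusoidal grating,
# divisively normalized by local contrast from a Gaussian gain pool.
# All spatial scales and the semi-saturation constant are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-5, 5, 2001)                # degrees of visual angle
dx = x[1] - x[0]
contrast, sf = 0.5, 1.0                     # Michelson contrast, cycles/deg
stimulus = contrast * np.sin(2 * np.pi * sf * x)

center = gaussian_filter1d(stimulus, 0.2 / dx)      # center sigma = 0.2 deg
surround = gaussian_filter1d(stimulus, 0.6 / dx)    # surround sigma = 0.6 deg
linear = center - surround                          # DOG linear response

local_contrast = np.sqrt(gaussian_filter1d(stimulus ** 2, 1.5 / dx))  # gain pool, sigma = 1.5 deg
c50 = 0.2                                            # semi-saturation constant
response = linear / (c50 + local_contrast)           # divisive contrast gain control

print("peak linear:", round(linear.max(), 3), " peak normalized:", round(response.max(), 3))
```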
NASA Astrophysics Data System (ADS)
Böberg, L.; Brösa, U.
1988-09-01
Turbulence in a pipe is derived directly from the Navier-Stokes equation. Analysis of numerical simulations revealed that small disturbances called 'mothers' induce other, much stronger disturbances called 'daughters'. Daughters determine the look of turbulence, while mothers control the transfer of energy from the basic flow to the turbulent motion. From a practical point of view, ruling mothers means ruling turbulence. For theory, the mother-daughter process represents a mechanism permitting chaotic motion in a linearly stable system. The mechanism relies on a property of the linearized problem according to which the eigenfunctions become more and more collinear as the Reynolds number increases. The mathematical methods are described, comparisons with experiments are made, mothers and daughters are analyzed, also graphically, with full particulars, and the systematic construction of small systems of differential equations to mimic the non-linear process by means as simple as possible is explained. We suggest that more than 20 but fewer than 180 essential degrees of freedom take part in the onset of turbulence.
NASA Technical Reports Server (NTRS)
Matthews, Clarence W
1953-01-01
An analysis is made of the effects of compressibility on the pressure coefficients about several bodies of revolution by comparing experimentally determined pressure coefficients with corresponding pressure coefficients calculated by the use of the linearized equations of compressible flow. The results show that the theoretical methods predict the subsonic pressure-coefficient changes over the central part of the body but do not predict the pressure-coefficient changes near the nose. Extrapolation of the linearized subsonic theory into the mixed subsonic-supersonic flow region fails to predict a rearward movement of the negative pressure-coefficient peak which occurs after the critical stream Mach number has been attained. Two equations developed from a consideration of the subsonic compressible flow about a prolate spheroid are shown to predict, approximately, the change with Mach number of the subsonic pressure coefficients for regular bodies of revolution of fineness ratio 6 or greater.
Tackling non-linearities with the effective field theory of dark energy and modified gravity
NASA Astrophysics Data System (ADS)
Frusciante, Noemi; Papadomanolakis, Georgios
2017-12-01
We present the extension of the effective field theory framework to mildly non-linear scales. The effective field theory approach has been successfully applied to the late-time cosmic acceleration phenomenon and has been shown to be a powerful method for obtaining predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed first by identifying the necessary operators which need to be included in the effective field theory Lagrangian in order to go beyond linear order in perturbations, and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single-field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we proceed to map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derive the fourth-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear-order perturbations which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation for the scalar mode, are automatically satisfied once the viability of the theory is demanded at the linear level. The approach we present here will allow one to construct, in a model-independent way, all the relevant predictions on observables at mildly non-linear scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Bo; Kowalski, Karol
In this paper we derive basic properties of the Green's function matrix elements stemming from the exponential coupled cluster (CC) parametrization of the ground-state wave function. We demonstrate that all intermediates used to express the retarded (or equivalently, ionized) part of the Green's function in the ω-representation can be expressed through connected diagrams only. Similar properties are also shared by the first-order ω-derivatives of the retarded part of the CC Green's function. This property can be extended to any order of ω-derivatives of the Green's function. Through the Dyson equation of the CC Green's function, the derivatives of the corresponding CC self-energy can be evaluated analytically. In analogy to the CC Green's function, the corresponding CC self-energy is expressed in terms of connected diagrams only. Moreover, the ionized part of the CC Green's function satisfies a non-homogeneous linear system of ordinary differential equations, whose solution may be represented in exponential form. Our analysis can be easily generalized to the advanced part of the CC Green's function.
High-performance liquid chromatographic analysis of methadone hydrochloride oral solution.
Beasley, T H; Ziegler, H W
1977-12-01
A direct and rapid high-performance liquid chromatographic assay for methadone hydrochloride in a flavored oral solution dosage form is described. A syrup sample, one part diluted with three parts of water, is introduced onto a column packed with octadecylsilane bonded on 10 micrometer porous silica gel (reversed phase). A formic acid-ammonium formate-buffered mobile phase is linear programmed with acetonitrile. The absorbance is monitored continuously at 280 or 254 nm, using a flow-through, UV, double-beam photometer. An aqueous methadone hydrochloride solution is used for external standardization. The relative standard deviation was not more than 1.0%. Drug recovery from a syrup base was better than 99.8%.
MRI and Related Astrophysical Instabilities in the Lab
NASA Astrophysics Data System (ADS)
Goodman, Jeremy
2018-06-01
The dynamics of accretion in astronomical disks is only partly understood. Magnetorotational instability (MRI) is surely important but has been studied largely through linear analysis and numerical simulations rather than experiments. Also, it is unclear whether MRI is effective in protostellar disks, which are likely poor electrical conductors. Shear-driven hydrodynamic turbulence is very familiar in terrestrial flows, but simulations indicate that it is inhibited in disks. I summarize experimental progress and challenges relevant to both types of instability.
Aerodynamic preliminary analysis system. Part 2: User's manual and program description
NASA Technical Reports Server (NTRS)
Divan, P.; Dunn, K.; Kojima, J.
1978-01-01
A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three-dimensional configurations, with or without jet flaps, having multiple nonplanar surfaces of arbitrary planform and open or closed slender bodies of noncircular contour are analyzed. Longitudinal and lateral-directional static and rotary derivative solutions are generated. The analysis is implemented on a time-sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. A nominal case computation time of 45 CPU seconds on the CDC 175 for a 200-panel simulation indicates that the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.
Determination of authenticity of brand perfume using electronic nose prototypes
NASA Astrophysics Data System (ADS)
Gebicki, Jacek; Szulczynski, Bartosz; Kaminski, Marian
2015-12-01
The paper presents the practical application of an electronic nose technique for fast and efficient discrimination between authentic and fake perfume samples. Two self-built electronic nose prototypes equipped with a set of semiconductor sensors were employed for that purpose. Additionally, 10 volunteers took part in the sensory analysis. The following perfumes and their fake counterparts were analysed: Dior—Fahrenheit, Eisenberg—J’ose, YSL—La nuit de L’homme, 7 Loewe and Spice Bomb. The investigations were carried out using the headspace of the aqueous solutions. Data analysis utilized multidimensional techniques: principal component analysis (PCA), linear discriminant analysis (LDA) and k-nearest neighbour (k-NN). The results obtained confirmed the legitimacy of the electronic nose technique as an alternative to sensory analysis as far as the determination of the authenticity of perfume is concerned.
Standardisation of DNA quantitation by image analysis: quality control of instrumentation.
Puech, M; Giroud, F
1999-05-01
DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of the analysis and particularly on the quality of the image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of the systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena, and segmentation efficiency. Some systems have been controlled with these tools and quality control procedures. Interpretation criteria and accuracy limits for these quality control procedures are proposed according to the conclusions of a European project called the PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to perform precise DNA analysis. The different procedures presented in this work determine whether an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems fall outside the defined limits, some recommendations are given to find a solution to the problem.
ERIC Educational Resources Information Center
British Standards Institution, London (England).
To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…
NASA Technical Reports Server (NTRS)
Rauw, Marc O.
1993-01-01
The design of advanced Automatic Aircraft Control Systems (AACS's) can be improved upon considerably if the designer can access all models and tools required for control system design and analysis through a graphical user interface, from within one software environment. This MSc-thesis presents the first step in the development of such an environment, which is currently being done at the Section for Stability and Control of Delft University of Technology, Faculty of Aerospace Engineering. The environment is implemented within the commercially available software package MATLAB/SIMULINK. The report consists of two parts. Part 1 gives a detailed description of the AACS design environment. The heart of this environment is formed by the SIMULINK implementation of a nonlinear aircraft model in block-diagram format. The model has been worked out for the old laboratory aircraft of the Faculty, the DeHavilland DHC-2 'Beaver', but due to its modular structure, it can easily be adapted for other aircraft. Part 1 also describes MATLAB programs which can be applied for finding steady-state trimmed-flight conditions and for linearization of the aircraft model, and it shows how the built-in simulation routines of SIMULINK have been used for open-loop analysis of the aircraft dynamics. Apart from the implementation of the models and tools, a thorough treatment of the theoretical backgrounds is presented. Part 2 of this report presents a part of an autopilot design process for the 'Beaver' aircraft, which clearly demonstrates the power and flexibility of the AACS design environment from part 1. Evaluations of all longitudinal and lateral control laws by means of nonlinear simulations are treated in detail. A floppy disk containing all relevant MATLAB programs and SIMULINK models is provided as a supplement.
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
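To illustrate the point made above, the following is a minimal sketch (assuming NumPy, SciPy, pandas and statsmodels are available; the group means, sample sizes and seed are invented for illustration) of a one-way ANOVA setting in which the pooled response is clearly non-normal while the residuals are well behaved:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Three groups with very different means: the pooled response is multimodal
# (hence non-normal), but the within-group errors are Gaussian.
groups = np.repeat(["a", "b", "c"], 100)
means = {"a": 0.0, "b": 5.0, "c": 10.0}
y = np.array([means[g] for g in groups]) + rng.normal(0, 1, size=300)
df = pd.DataFrame({"y": y, "group": groups})

fit = smf.ols("y ~ C(group)", data=df).fit()

# Shapiro-Wilk on the raw response (normality rejected) vs. on the residuals (not rejected).
print("raw response:", stats.shapiro(df["y"]))
print("residuals:   ", stats.shapiro(fit.resid))
```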
Chen, Shaoqiang; Yoshita, Masahiro; Sato, Aya; Ito, Takashi; Akiyama, Hidefumi; Yokoyama, Hiroyuki
2013-05-06
Picosecond-pulse-generation dynamics and pulse-width limiting factors via spectral filtering from intensely pulse-excited gain-switched 1.55-μm distributed-feedback laser diodes were studied. The spectral and temporal characteristics of the spectrally filtered pulses indicated that the short-wavelength component stems from the initial part of the gain-switched main pulse and has a nearly linear down-chirp of 5.2 ps/nm, whereas long-wavelength components include chirped pulse-lasing components and steady-state-lasing components. Rate-equation calculations with a model of linear change in refractive index with carrier density explained the major features of the experimental results. The analysis of the expected pulse widths with optimum spectral widths was also consistent with the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korotkevich, Alexander O.; Lushnikov, Pavel M., E-mail: plushnik@math.unm.edu; Landau Institute for Theoretical Physics, 2 Kosygin Str., Moscow 119334
2015-01-15
We developed a linear theory of backward stimulated Brillouin scatter (BSBS) of a spatially and temporally random laser beam relevant for laser fusion. Our analysis reveals a new collective regime of BSBS (CBSBS). Its intensity threshold is controlled by diffraction, once cT_c exceeds a laser speckle length, with T_c the laser coherence time. The BSBS spatial gain rate is approximately the sum of that due to CBSBS, and a part which is independent of diffraction and varies linearly with T_c. The CBSBS spatial gain rate may be reduced significantly by the temporal bandwidth of KrF-based laser systems compared to the bandwidth currently available to temporally smoothed glass-based laser systems.
L-shaped piezoelectric motor--part I: design and experimental analysis.
Avirovik, Dragan; Priya, Shashank
2012-01-01
This paper proposes an L-shaped piezoelectric motor consisting of two piezoelectric bimorphs of different lengths arranged perpendicularly to each other. The coupling of the bending vibration mode of the bimorphs results in an elliptical motion at the tip. A detailed finite element model was developed to optimize the dimensions of the bimorph to achieve an effective coupling at the resonance frequency of 246 Hz. The motor was characterized by developing rotational and linear stages. The linear stage was tested with different friction contact surfaces and the maximum velocity was measured to be 12 mm/s. The rotational stage was used to obtain additional performance characteristics from the motor: maximum velocity of 120 rad/s, mechanical torque of 4.7 × 10⁻⁵ N·m, and efficiency of 8.55%. © 2012 IEEE
How Can Health System Efficiency Be Improved in Canada?
Allin, Sara; Veillard, Jeremy; Wang, Li; Grignon, Michel
2015-01-01
Improving value for money in the health system is an often-stated policy goal. This study is the first to systematically measure the efficiency of health regions in Canada in producing health gains with their available resources, and to identify the factors that are associated with increased efficiency. Based on the objective elicited from decision-makers that the health system should ensure access to care for Canadians when they need it, we measured the efficiency with which regions reduce causes of death that are amenable to healthcare interventions using a linear programming approach (data envelopment analysis). Variations in efficiency were explained in part by public health factors, such as the prevalence of obesity and smoking in the population; in part by characteristics of the population, such as their average income; and in part by managerial factors, such as hospital readmissions. PMID:26571467
Wen, Fangfang; Cheng, Xuemei; Liu, Wei; Xuan, Min; Zhang, Lei; Zhao, Xin; Shan, Meng; Li, Yan; Teng, Liang; Wang, Zhengtao; Wang, Changhong
2014-12-01
The aerial parts of the genus Peganum are officially used in traditional Chinese medicine. This paper aims to establish a high-performance liquid chromatography (HPLC) method for fingerprint analysis and simultaneous determination of three alkaloids and two flavonoids in aerial parts of the genus Peganum, and to analyze the accumulation differences of secondary metabolites among species, individual plants, inter-/intra-populations and growing seasons. HPLC analysis was performed on a C18 column with gradient elution using 0.1% trifluoroacetic acid and acetonitrile as the mobile phase and detection at 265 nm, with conventional methodology validation. For fingerprint analysis, the RSDs of the relative retention time and relative peak area of the characteristic peaks were within 0.07-0.78 and 0.94-9.09%, respectively. For the simultaneous determination of vasicine, harmaline, harmine, deacetylpeganetin and peganetin, all calibration curves showed good linearity (r > 0.9990) within the test range. The relative standard deviations of the precision, repeatability and stability tests did not exceed 2.37, 2.68 and 2.67%, respectively. The average recoveries for the five analytes were between 96.47 and 101.20%. HPLC fingerprints play a minor role in authenticating and differentiating the herbs of different species of the genus Peganum. However, the secondary metabolite levels of alkaloids and flavonoids in aerial parts of the genus Peganum rely on species-, habitat- and growth-season-dependent accumulation. Copyright © 2014 John Wiley & Sons, Ltd.
A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.
Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan
2015-01-01
A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg⁻¹) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R² = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil affected carrot yield before it affected food quality. The major soil factors controlling Cr phytotoxicity and the corresponding prediction models were further identified and developed using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
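The linear-linear segmented (broken-stick) regression mentioned above can be fitted in several ways; a minimal sketch, assuming SciPy is available and using invented data in place of the measured soil pH values and carrot Cr concentrations, is:

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_linear(pH, b0, b1, b2, breakpoint):
    """Two straight lines joined continuously at `breakpoint`."""
    return np.where(
        pH <= breakpoint,
        b0 + b1 * pH,
        b0 + b1 * breakpoint + b2 * (pH - breakpoint),
    )

# Hypothetical data standing in for soil pH and carrot Cr concentration.
pH = np.linspace(4.5, 8.5, 20)
cr = linear_linear(pH, 0.1, 0.02, 0.35, 7.5) + np.random.default_rng(0).normal(0, 0.02, 20)

params, _ = curve_fit(linear_linear, pH, cr, p0=[0.0, 0.0, 0.1, 7.0])
print("estimated breakpoint pH:", params[3])
```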
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or by pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation; therefore, they cannot be applied to this case.
Stienen, Martin N; Netuka, David; Demetriades, Andreas K; Ringel, Florian; Gautschi, Oliver P; Gempt, Jens; Kuhlen, Dominique; Schaller, Karl
2016-10-01
Substantial country differences in neurosurgical training throughout Europe have recently been described, ranging from subjective rating of training quality to objective working hours per week. The aim of this study was to analyse whether these differences translate into the results of the written and oral part of the European Board Examination in Neurological Surgery (EBE-NS). Country-specific composite scores for satisfaction with quality of theoretical and practical training, as well as working hours per week, were obtained from an electronic survey distributed among European neurosurgical residents between June 2014 and March 2015. These were related to anonymous country-specific results of the EBE-NS between 2009 and 2016, using uni- and multivariate linear regression analysis. A total of n = 1025 written and n = 63 oral examination results were included. There was a significant linear relationship between the country-specific EBE-NS result in the written part and the country-specific composite score for satisfaction with quality of theoretical training [adjusted regression coefficient (RC) -3.80, 95 % confidence interval (CI) -5.43 to -2.17, p < 0.001], but not with practical training or working time. For the oral part, there was a linear relationship between the country-specific EBE-NS result and the country-specific composite score for satisfaction with quality of practical training (RC 9.47, 95 % CI 1.47 to 17.47, p = 0.021), but neither with satisfaction with quality of theoretical training nor with working time. With every one-step improvement on the country-specific satisfaction score for theoretical training, the score in the EBE-NS Part 1 increased by 3.8 %. With every one-step improvement on the country-specific satisfaction score for practical training, the score in the EBE-NS Part 2 increased by 9.47 %. Improving training conditions is likely to have a direct positive influence on the knowledge level of trainees, as measured by the EBE-NS. The effect of the actual working time on the theoretical and practical knowledge of neurosurgical trainees appears to be insignificant.
Nonlinear Transient Problems Using Structure Compatible Heat Transfer Code
NASA Technical Reports Server (NTRS)
Hou, Gene
2000-01-01
The report documents a recent effort to enhance a transient linear heat transfer code so that it can solve nonlinear problems. The linear heat transfer code was originally developed by Dr. Kim Bey of NASA Langley and is called the Structure-Compatible Heat Transfer (SCHT) code. The report consists of four parts. The first part outlines the formulation of the heat transfer problem of concern. The second and third parts give detailed procedures for constructing the nonlinear finite element equations and the required Jacobian matrices for the nonlinear iterative method, the Newton-Raphson method. The final part summarizes the results of the numerical experiments on the newly enhanced SCHT code.
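The Newton-Raphson iteration referred to above can be sketched as follows. This is not the SCHT code itself; the residual and Jacobian below are hypothetical stand-ins for the assembled nonlinear heat transfer equations (here a tiny two-unknown system with temperature-dependent conductivity):

```python
import numpy as np

def residual(T):
    # Hypothetical nonlinear "heat balance" R(T) = K(T) T - f with a
    # conductivity that grows linearly with temperature.
    K = np.array([[2.0 + 0.01 * T[0], -1.0],
                  [-1.0, 2.0 + 0.01 * T[1]]])
    f = np.array([1.0, 2.0])
    return K @ T - f

def jacobian(T):
    # Analytical derivative dR/dT of the residual above.
    return np.array([[2.0 + 0.02 * T[0], -1.0],
                     [-1.0, 2.0 + 0.02 * T[1]]])

T = np.zeros(2)                       # initial temperature guess
for it in range(20):                  # Newton-Raphson loop
    R = residual(T)
    if np.linalg.norm(R) < 1e-10:
        break
    T -= np.linalg.solve(jacobian(T), R)
print("iterations:", it, "solution:", T)
```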
Arbitrary order 2D virtual elements for polygonal meshes: part II, inelastic problem
NASA Astrophysics Data System (ADS)
Artioli, E.; Beirão da Veiga, L.; Lovadina, C.; Sacco, E.
2017-10-01
The present paper is the second part of a twofold work, whose first part is reported in Artioli et al. (Comput Mech, 2017. doi: 10.1007/s00466-017-1404-5), concerning a newly developed Virtual element method (VEM) for 2D continuum problems. The first part of the work proposed a study for linear elastic problem. The aim of this part is to explore the features of the VEM formulation when material nonlinearity is considered, showing that the accuracy and easiness of implementation discovered in the analysis inherent to the first part of the work are still retained. Three different nonlinear constitutive laws are considered in the VEM formulation. In particular, the generalized viscoelastic model, the classical Mises plasticity with isotropic/kinematic hardening and a shape memory alloy constitutive law are implemented. The versatility with respect to all the considered nonlinear material constitutive laws is demonstrated through several numerical examples, also remarking that the proposed 2D VEM formulation can be straightforwardly implemented as in a standard nonlinear structural finite element method framework.
Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.
Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander
2017-01-01
Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
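One homoscedasticity-based diagnostic of the kind described above can be sketched as follows; this is not the authors' DDA implementation, only an illustration (assuming statsmodels) of comparing a Breusch-Pagan test on the residuals of the two candidate models x → y and y → x for simulated nonnormal data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)   # nonnormal true predictor
y = 0.8 * x + rng.normal(0, 1, size=500)   # data-generating model: x -> y

def bp_pvalue(response, predictor):
    X = sm.add_constant(predictor)
    fit = sm.OLS(response, X).fit()
    # het_breuschpagan returns (LM stat, LM p-value, F stat, F p-value)
    return het_breuschpagan(fit.resid, X)[1]

print("x -> y (correct)  Breusch-Pagan p-value:", bp_pvalue(y, x))
print("y -> x (reversed) Breusch-Pagan p-value:", bp_pvalue(x, y))
```

In line with the argument above, the reversed model tends to show residual heteroscedasticity when the true predictor is nonnormal, while the correctly specified model does not.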
2016-01-01
The general morphological shapes of plant-resembling fish and plant parts were compared using a geometric morphometrics approach. Three plant-mimetic fish species, Lobotes surinamensis (Lobotidae), Platax orbicularis (Ephippidae) and Canthidermis maculata (Balistidae), were compared during their early developmental stages with accompanying plant debris (i.e., leaves of several taxa) in the coastal subtropical waters around Kuchierabu-jima Island, closely facing the Kuroshio Current. The degree of similarity shared between the plant parts and co-occurring fish species was quantified; however, the fish remained morphologically distinct from their plant models. Such similarities were corroborated by analysis of covariance and linear discriminant analysis, in which the relative body areas of the fish were strongly related to the plant models. Our results strengthen the paradigm that morphological clues can lead to ecological evidence allowing predictions of behavioural and habitat choice by mimetic fish, according to the degree of similarity shared with their respective models. The resemblance to plant parts detected in the three fish species may provide fitness advantages via convergent evolutionary effects. PMID:27547571
Fatigue analyses of the prototype Francis runners based on site measurements and simulations
NASA Astrophysics Data System (ADS)
Huang, X.; Chamberland-Lauzon, J.; Oram, C.; Klopfer, A.; Ruchonnet, N.
2014-03-01
With the increasing development of solar power and wind power, which deliver a variable output to the electrical grid, hydropower is required to provide rapid and flexible compensation, and hydraulic turbines have to operate at off-design conditions frequently. Prototype Francis runners suffer from strong vibrations induced by high pressure pulsations at part load, low part load, speed-no-load and during start-stops and load rejections. Fatigue and damage may be caused by the alternating stress on the runner blades. Therefore, it becomes increasingly important to carry out fatigue analysis and lifetime assessment of prototype Francis runners, especially at off-design conditions. This paper presents the fatigue analyses of prototype Francis runners based on strain gauge site measurements and numerical simulations. In the case of low part load, speed-no-load and transient events, since the Francis runners are subjected to complex hydraulic loading with a stochastic character, the rainflow counting method is used to obtain the number of cycles for various dynamic amplitude ranges. From middle load to full load, pressure pulsations caused by rotor-stator interaction become the dominant hydraulic excitation of the runners. Forced response analysis is performed to calculate the maximum dynamic stress. The agreement between numerical and experimental stresses is evaluated using a linear regression method. Taking into account the effect of the static stress on the S-N curve, Miner's rule, a linear cumulative fatigue damage theory, is employed to calculate the damage factors of the prototype Francis runners at various operating conditions. The relative damage factors of the runners at different operating points are compared and discussed in detail.
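The damage calculation described above can be outlined schematically. A minimal sketch, assuming a power-law S-N curve (N = C / S^m, with C and m hypothetical material constants) and cycle counts of the kind produced by rainflow counting:

```python
# Minimal Miner's-rule damage accumulation from rainflow-counted stress ranges.
# The S-N constants C and m are hypothetical, not those of a real runner material.
C, m = 1.0e12, 3.0

def cycles_to_failure(stress_range):
    return C / stress_range**m

# (stress range in MPa, counted cycles) pairs, e.g. from a rainflow count
counted = [(40.0, 1.0e5), (80.0, 5.0e3), (120.0, 2.0e2)]

damage = sum(n / cycles_to_failure(s) for s, n in counted)
print("Miner damage factor:", damage)   # failure is expected when the factor reaches 1
```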
Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation
NASA Astrophysics Data System (ADS)
Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi
2016-09-01
We propose a statistical modeling method of wind power output for very short-term prediction. The model is nonlinear and has a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is used for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
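A minimal sketch of the cascade structure described above: a linear autoregressive part followed by a static distribution-matching transform. The AR order, the use of empirical quantile mapping for the static part and the synthetic data are all assumptions made for illustration, not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic "wind power" series standing in for measured output (bounded, skewed).
power = np.clip(np.abs(np.cumsum(rng.normal(0, 0.05, 2000))) % 1.0, 0, 1)

# Static part: map the skewed observations to a Gaussian via empirical quantiles.
ranks = (np.argsort(np.argsort(power)) + 0.5) / len(power)
z = norm.ppf(ranks)                     # Gaussianized series

# Linear dynamic part: fit an AR(2) model to the Gaussianized series by least squares.
p = 2
X = np.column_stack([z[p - k - 1:len(z) - k - 1] for k in range(p)])
coef, *_ = np.linalg.lstsq(X, z[p:], rcond=None)

# One-step-ahead predictions, mapped back through the inverse static transform.
z_hat = X @ coef
power_hat = np.quantile(power, norm.cdf(z_hat))
print("AR coefficients:", coef, " last one-step-ahead prediction:", power_hat[-1])
```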
Lorkiewicz, Wiesław; Płoszaj, Tomasz; Jędrychowska-Dańska, Krystyna; Żądzińska, Elżbieta; Strapagiel, Dominik; Haduch, Elżbieta; Szczepanek, Anita; Grygiel, Ryszard; Witas, Henryk W
2015-01-01
For a long time, anthropological and genetic research on the Neolithic revolution in Europe was mainly concentrated on the mechanism of agricultural dispersal over different parts of the continent. Recently, attention has shifted towards population processes that occurred after the arrival of the first farmers, transforming the genetically very distinctive early Neolithic Linear Pottery Culture (LBK) and Mesolithic forager populations into present-day Central Europeans. The latest studies indicate that significant changes in this respect took place within the post-Linear Pottery cultures of the Early and Middle Neolithic which were a bridge between the allochthonous LBK and the first indigenous Neolithic culture of north-central Europe--the Funnel Beaker culture (TRB). The paper presents data on mtDNA haplotypes of a Middle Neolithic population dated to 4700/4600-4100/4000 BC belonging to the Brześć Kujawski Group of the Lengyel culture (BKG) from the Kuyavia region in north-central Poland. BKG communities constituted the border of the "Danubian World" in this part of Europe for approx. seven centuries, neighboring foragers of the North European Plain and the southern Baltic basin. MtDNA haplogroups were determined in 11 individuals, and four mtDNA macrohaplogroups were found (H, U5, T, and HV0). The overall haplogroup pattern did not deviate from other post-Linear Pottery populations from central Europe, although a complete lack of N1a and the presence of U5a are noteworthy. Of greatest importance is the observed link between the BKG and the TRB horizon, confirmed by an independent analysis of the craniometric variation of Mesolithic and Neolithic populations inhabiting central Europe. Estimated phylogenetic pattern suggests significant contribution of the post-Linear BKG communities to the origin of the subsequent Middle Neolithic cultures, such as the TRB.
Lattice algebra approach to multispectral analysis of ancient documents.
Valdiviezo-N, Juan C; Urcid, Gonzalo
2013-02-01
This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
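Once a set of pure pigment spectra (endmembers) has been identified, the abundance estimation step mentioned above amounts to a constrained linear unmixing at each pixel. The following is only an illustrative sketch of that step using non-negative least squares with invented spectra; it is not the min/max lattice associative memory construction used in the paper:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical endmember spectra (columns): 3 pigments measured in 8 bands.
E = np.abs(rng.normal(size=(8, 3)))

# A pixel spectrum that is a linear mixture of the endmembers plus noise.
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund + rng.normal(0, 0.01, size=8)

# Non-negative abundances; renormalize to sum to one for an abundance map.
abund, _ = nnls(E, pixel)
abund /= abund.sum()
print("estimated abundances:", abund)
```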
Recurrent jellyfish blooms are a consequence of global oscillations
Condon, Robert H.; Duarte, Carlos M.; Pitt, Kylie A.; Robinson, Kelly L.; Lucas, Cathy H.; Sutherland, Kelly R.; Mianzan, Hermes W.; Bogeberg, Molly; Purcell, Jennifer E.; Decker, Mary Beth; Uye, Shin-ichi; Madin, Laurence P.; Brodeur, Richard D.; Haddock, Steven H. D.; Malej, Alenka; Parry, Gregory D.; Eriksen, Elena; Quiñones, Javier; Acha, Marcelo; Harvey, Michel; Arthur, James M.; Graham, William M.
2013-01-01
A perceived recent increase in global jellyfish abundance has been portrayed as a symptom of degraded oceans. This perception is based primarily on a few case studies and anecdotal evidence, but a formal analysis of global temporal trends in jellyfish populations has been missing. Here, we analyze all available long-term datasets on changes in jellyfish abundance across multiple coastal stations, using linear and logistic mixed models and effect-size analysis to show that there is no robust evidence for a global increase in jellyfish. Although there has been a small linear increase in jellyfish since the 1970s, this trend was unsubstantiated by effect-size analysis that showed no difference in the proportion of increasing vs. decreasing jellyfish populations over all time periods examined. Rather, the strongest nonrandom trend indicated jellyfish populations undergo larger, worldwide oscillations with an approximate 20-y periodicity, including a rising phase during the 1990s that contributed to the perception of a global increase in jellyfish abundance. Sustained monitoring is required over the next decade to elucidate with statistical confidence whether the weak increasing linear trend in jellyfish after 1970 is an actual shift in the baseline or part of an oscillation. Irrespective of the nature of increase, given the potential damage posed by jellyfish blooms to fisheries, tourism, and other human industries, our findings foretell recurrent phases of rise and fall in jellyfish populations that society should be prepared to face. PMID:23277544
An Application of Linear Covariance Analysis to the Design of Responsive Near-Rendezvous Missions
2007-06-01
accurately before making large maneuvers. A fifth type of error is maneuver knowledge error (MKER). This error accounts for how well a spacecraft is able...utilized due in a large part to the cost of designing and launching spacecraft, in a market where currently there are not many options for launching...is then ordered to fire its thrusters to increase its orbital altitude to 800 km. Before the maneuver the spacecraft is moving with some velocity, V
Nonparallel stability of three-dimensional compressible boundary layers. Part 1: Stability analysis
NASA Technical Reports Server (NTRS)
El-Hady, N. M.
1980-01-01
A compressible linear stability theory is presented for nonparallel three-dimensional boundary-layer flows, taking into account the normal velocity component as well as the streamwise and spanwise variations of the basic flow. The method of multiple scales is used to account for the nonparallelism of the basic flow, and equations are derived for the spatial evolution of the disturbance amplitude and wavenumber. The numerical procedure for obtaining the solution of the nonparallel problem is outlined.
Quick-Turn Finite Element Analysis for Plug-and-Play Satellite Structures
2007-03-01
produced from 0.375 inch round stock and turned on a machine lathe to achieve the shoulder feature and drilled to make it hollow. Figure 3.1...component, a linear taper was machined from the connection shoulder to the solar panel connecting fork. The part was then turned using the machine lathe...utilizing a modern five-axis Computer Numerical Control (CNC) machine mill, the process time could be reduced by as much as seventy-five percent and the
Analytic methods for questions pertaining to a randomized pretest, posttest, follow-up design.
Rausch, Joseph R; Maxwell, Scott E; Kelley, Ken
2003-09-01
Delineates 5 questions regarding group differences that are likely to be of interest to researchers within the framework of a randomized pretest, posttest, follow-up (PPF) design. These 5 questions are examined from a methodological perspective by comparing and discussing analysis of variance (ANOVA) and analysis of covariance (ANCOVA) methods and briefly discussing hierarchical linear modeling (HLM) for these questions. This article demonstrates that the pretest should be utilized as a covariate in the model rather than as a level of the time factor or as part of the dependent variable within the analysis of group differences. It is also demonstrated that how the posttest and the follow-up are utilized in the analysis of group differences is determined by the specific question asked by the researcher.
Computational Aeroelastic Analysis of the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model
NASA Technical Reports Server (NTRS)
Sanetrik, Mark D.; Silva, Walter A.; Hur, Jiyoung
2012-01-01
A summary of the computational aeroelastic analysis for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analysis techniques, including linear, nonlinear and Reduced Order Models (ROMs) were employed in support of a series of aeroelastic (AE) and aeroservoelastic (ASE) wind-tunnel tests conducted in the Transonic Dynamics Tunnel (TDT) at NASA Langley Research Center. This research was performed in support of the ASE element in the Supersonics Program, part of NASA's Fundamental Aeronautics Program. The analysis concentrated on open-loop flutter predictions, which were in good agreement with experimental results. This paper is one in a series that comprise a special S4T technical session, which summarizes the S4T project.
Evidence of codon usage in the nearest neighbor spacing distribution of bases in bacterial genomes
NASA Astrophysics Data System (ADS)
Higareda, M. F.; Geiger, O.; Mendoza, L.; Méndez-Sánchez, R. A.
2012-02-01
Statistical analysis of whole genomic sequences usually assumes a homogeneous nucleotide density throughout the genome, an assumption that has been proved incorrect for several organisms, since the nucleotide density is only locally homogeneous. To avoid assigning a single numerical value to this variable property, we propose the use of spectral statistics, which characterizes the density of nucleotides as a function of their position in the genome. We show that the cumulative density of bases in bacterial genomes can be separated into an average (or secular) part plus a fluctuating part. Bacterial genomes can be divided into two groups according to the qualitative description of their secular part: linear and piecewise linear. These two groups of genomes show different properties when their nucleotide spacing distribution is studied. In order to analyze genomes having a variable nucleotide density statistically, unfolding is necessary, i.e., a separation of the secular part from the fluctuations. The unfolding allows an adequate comparison with the statistical properties of other genomes. With this methodology, four genomes were analyzed: Burkholderia, Bacillus, Clostridium and Corynebacterium. Interestingly, the nearest neighbor spacing distributions, or detrended distance distributions, are very similar for species within the same genus but very different for species from different genera. This difference can be attributed to the difference in codon usage.
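The unfolding step described above can be sketched generically as follows; the synthetic base positions and the choice of a low-order polynomial fit for the secular part are illustrative assumptions, one common option rather than the authors' specific procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Positions of a chosen base along a synthetic genome with slowly varying density.
positions = np.sort(rng.uniform(0, 1, 5000) ** 1.3 * 4e6)

# Cumulative count N(x): number of occurrences up to position x.
counts = np.arange(1, len(positions) + 1)

# Secular part: smooth fit of the cumulative count (here a cubic polynomial).
secular = np.polynomial.Polynomial.fit(positions, counts, deg=3)

# Unfolded positions have unit mean density by construction, so their spacings
# can be compared across genomes with different local densities.
unfolded = secular(positions)
spacings = np.diff(unfolded)
print("mean spacing (should be ~1):", spacings.mean())
```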
Feature selection from hyperspectral imaging for guava fruit defects detection
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Tan, Sou Ching
2017-06-01
Advances in technology have made hyperspectral imaging widely used for defect detection. In this research, a hyperspectral imaging system was set up in the laboratory for guava fruit defect detection. Guava was selected as the object because, to our knowledge, few attempts have been made at guava defect detection based on hyperspectral imaging. A common fluorescent light source was used to represent uncontrolled lighting conditions in the laboratory, and analysis was carried out in a specific wavelength range because of the inefficiency of this particular light source. Based on the data, the reflectance intensity of this specific setup could be categorized into two groups. Sequential feature selection with linear discriminant (LD) and quadratic discriminant (QD) functions was used to select features that could potentially be used in defect detection. Besides the ordinary training method, the training dataset for the discriminants was split in two to cater for the uncontrolled lighting condition; the two parts corresponded to the brighter and dimmer areas. Four configurations were evaluated: LD with the common training method, QD with the common training method, LD with the two-part training method and QD with the two-part training method. These configurations were evaluated using the F1-score on a total of 48 defected areas. The experiments showed that the F1-score of the linear discriminant with the compensated (two-part) method reached 0.8, the highest score among all.
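A minimal sketch of the kind of analysis described above, using scikit-learn's discriminant classifiers, sequential feature selection and the F1-score; the band count, sample sizes and labels are placeholders, not the actual guava dataset:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Placeholder reflectance spectra: 200 samples x 50 bands, binary defect label.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X[y == 1, 10:15] += 0.8            # make a few bands informative for "defect"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("LD", LinearDiscriminantAnalysis()),
                  ("QD", QuadraticDiscriminantAnalysis())]:
    selector = SequentialFeatureSelector(clf, n_features_to_select=5).fit(X_tr, y_tr)
    Xs_tr, Xs_te = selector.transform(X_tr), selector.transform(X_te)
    pred = clf.fit(Xs_tr, y_tr).predict(Xs_te)
    print(name, "selected bands:", np.flatnonzero(selector.get_support()),
          "F1:", round(f1_score(y_te, pred), 3))
```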
Linear and Non-linear Information Flows In Rainfall Field
NASA Astrophysics Data System (ADS)
Molini, A.; La Barbera, P.; Lanza, L. G.
The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. Understanding and quantifying how the non-linearity of the generating process influences single rain events are relevant research issues in hydro-meteorology, especially in those applications where timely and effective forecasting of heavy rain events can reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of the survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research considers three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between different locations in space, different instants in time and, unless the hypothesis of scale invariance is verified a priori, the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods; a survey of the information flows within the field is then developed by means of techniques borrowed from information theory; and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.
Time-Frequency Analyses of Tide-Gauge Sensor Data
Erol, Serdar
2011-01-01
The real-world phenomena being observed by sensors are generally non-stationary in nature. The classical linear techniques for analyzing and modeling natural time-series observations are inefficient and should be replaced by non-linear techniques, whose theoretical aspects and performance vary. Accordingly, adopting the most appropriate technique and strategy is essential when evaluating sensor data. In this study, two different time-series analysis approaches, namely least squares spectral analysis (LSSA) and wavelet analysis (continuous wavelet transform, cross wavelet transform and wavelet coherence algorithms as extensions of wavelet analysis), are applied to sea-level observations recorded by tide-gauge sensors, and the advantages and drawbacks of these methods are reviewed. The analyses were carried out using sea-level observations recorded at the Antalya-II and Erdek tide-gauge stations of the Turkish National Sea-Level Monitoring System. In the analyses, the useful information hidden in the noisy signals was detected, and the common features between the two sea-level time series were clarified. The tide-gauge records have data gaps in time because of issues such as instrumental shortcomings and power outages. Given the difficulties of time-frequency analysis of data with voids, the sea-level observations were preprocessed, and the missing parts were predicted using the neural network method prior to the analysis. In conclusion, the merits and limitations of the techniques in evaluating non-stationary observations from tide-gauge sensor records are documented and an analysis strategy for sequential sensor observations is presented. PMID:22163829
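For the least squares spectral analysis (LSSA) part, a closely related and widely used formulation is the Lomb-Scargle periodogram, which handles the data gaps mentioned above directly. A minimal sketch with synthetic, irregularly sampled "sea-level" data (the sampling pattern, tidal amplitude and noise level are invented):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Irregular sampling times (hours) over ~60 days, with gaps, and a synthetic
# semidiurnal tide plus noise standing in for a gappy tide-gauge record.
t = np.sort(rng.choice(np.arange(0.0, 24 * 60, 0.5), size=1500, replace=False))
period_h = 12.42                               # principal lunar semidiurnal period
y = 0.5 * np.sin(2 * np.pi * t / period_h) + rng.normal(0, 0.1, t.size)

# Angular frequencies to scan and the Lomb-Scargle (least-squares) periodogram.
periods = np.linspace(6, 30, 2000)
omega = 2 * np.pi / periods
pgram = lombscargle(t, y - y.mean(), omega, normalize=True)
print("detected period (h):", periods[np.argmax(pgram)])
```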
Moment method analysis of linearly tapered slot antennas
NASA Technical Reports Server (NTRS)
Koeksal, Adnan
1993-01-01
A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs an unequal-size rectangular sectioning for the conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of the conductor current. The effect of the dielectric is incorporated in the model by using an equivalent volume polarization current density and solving the equivalent problem in free space. The feed section of the antenna, including the microstripline, is handled rigorously in the MOM model by including slotline short-circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. Validity of the model is also verified by extending it to handle the analysis of the skew-plate antenna and comparing the results to skew-segmentation modeling results for the same structure and to available data in the literature. Variation of the radiation pattern of the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.
NASA Astrophysics Data System (ADS)
Rattez, Hadrien; Stefanou, Ioannis; Sulem, Jean
2018-06-01
A Thermo-Hydro-Mechanical (THM) model for Cosserat continua is developed to explore the influence of frictional heating and thermal pore fluid pressurization on the strain localization phenomenon. A general framework is presented to conduct a bifurcation analysis for elasto-plastic Cosserat continua with THM couplings and predict the onset of instability. The presence of internal lengths in Cosserat continua makes it possible to estimate the thickness of the localization zone. This is done by performing a linear stability analysis of the system and looking for the selected wavelength corresponding to the instability mode with the fastest finite growth coefficient. These concepts are applied to the study of fault zones under fast shearing. To do so, we consider a model of a sheared saturated infinite granular layer. The influence of THM couplings on the bifurcation state and the shear band width is investigated. Taking representative parameters for a centroidal fault gouge, the evolution of the thickness of the localized zone under continuous shear is studied. Furthermore, the effect of grain crushing inside the shear band is explored by varying the internal length of the constitutive law.
Archetypes for Organisational Safety
NASA Technical Reports Server (NTRS)
Marais, Karen; Leveson, Nancy G.
2003-01-01
We propose a framework using system dynamics to model the dynamic behavior of organizations in accident analysis. Most current accident analysis techniques are event-based and do not adequately capture the dynamic complexity and non-linear interactions that characterize accidents in complex systems. In this paper we propose a set of system safety archetypes that model common safety culture flaws in organizations, i.e., the dynamic behavior of organizations that often leads to accidents. As accident analysis and investigation tools, the archetypes can be used to develop dynamic models that describe the systemic and organizational factors contributing to the accident. The archetypes help clarify why safety-related decisions do not always result in the desired behavior, and how independent decisions in different parts of the organization can combine to impact safety.
NASA Technical Reports Server (NTRS)
Wolf, S. F.; Lipschutz, M. E.
1993-01-01
Multivariate statistical analysis techniques (linear discriminant analysis and logistic regression) can provide powerful discrimination tools which are generally unfamiliar to the planetary science community. Fall parameters were used to identify a group of 17 H chondrites (Cluster 1) that were part of a coorbital stream which intersected Earth's orbit in May, from 1855 to 1895, and can be distinguished from all other H chondrite falls. Using multivariate statistical techniques, it was demonstrated that, by a totally different criterion, the labile trace element contents, and hence thermal histories, of 13 Cluster 1 meteorites are distinguishable from those of 45 non-Cluster 1 H chondrites. Here, we focus upon the principles of multivariate statistical techniques and illustrate their application using non-meteoritic and meteoritic examples.
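A minimal illustration of the kind of linear discriminant analysis referred to above, assuming scikit-learn and using invented trace-element abundances in place of the measured meteorite data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder labile trace element contents for two groups of H chondrites:
# 13 "Cluster 1" falls and 45 other falls, 5 hypothetical elements each.
cluster1 = rng.normal(loc=1.0, scale=0.3, size=(13, 5))
others = rng.normal(loc=0.6, scale=0.3, size=(45, 5))
X = np.vstack([cluster1, others])
y = np.array([1] * 13 + [0] * 45)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```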
Integrated Composite Analyzer (ICAN): Users and programmers manual
NASA Technical Reports Server (NTRS)
Murthy, P. L. N.; Chamis, C. C.
1986-01-01
The use of and relevant equations programmed in a computer code designed to carry out a comprehensive linear analysis of multilayered fiber composites is described. The analysis contains the essential features required to effectively design structural components made from fiber composites. The inputs to the code are constituent material properties, factors reflecting the fabrication process, and composite geometry. The code performs micromechanics, macromechanics, and laminate analysis, including the hygrothermal response of fiber composites. The code outputs are the various ply and composite properties, composite structural response, and composite stress analysis results with details on failure. The code is in Fortran IV and can be used efficiently as a package in complex structural analysis programs. The input-output format is described extensively through the use of a sample problem. The program listing is also included. The code manual consists of two parts.
Inci, Ercan; Ekizoglu, Oguzhan; Turkay, Rustu; Aksoy, Sema; Can, Ismail Ozgur; Solmaz, Dilek; Sayin, Ibrahim
2016-10-01
Morphometric analysis of the mandibular ramus (MR) provides highly accurate data to discriminate sex. The objective of this study was to demonstrate the utility and accuracy of MR morphometric analysis for sex identification in a Turkish population. Four hundred fifteen Turkish patients (18-60 y; 201 male and 214 female) who had previously had multidetector computed tomography scans of the cranium were included in the study. Multidetector computed tomography images were obtained using three-dimensional reconstructions and a volume-rendering technique, and 8 linear and 3 angular values were measured. Univariate, bivariate, and multivariate discriminant analyses were performed, and the accuracy rates for determining sex were calculated. Mandibular ramus values produced accuracy rates of 51% to 95.6%. Upper ramus vertical height had the highest rate at 95.6%, and bivariate analysis showed accuracy rates of 89.7% to 98.6%, with the highest rates obtained from the combination of mandibular flexure upper border and maximum ramus breadth. Stepwise discriminant analysis gave a 99% accuracy rate for all MR variables. Our study showed that the MR, in particular morphometric measures of the upper part of the ramus, can provide valuable data to determine sex in a Turkish population. The method combines both anthropological and radiologic studies.
ON THE DECOMPOSITION OF STRESS AND STRAIN TENSORS INTO SPHERICAL AND DEVIATORIC PARTS
Augusti, G.; Martin, J. B.; Prager, W.
1969-01-01
It is well known that Hooke's law for a linearly elastic, isotropic solid may be written in the form of two relations that involve only the spherical or only the deviatoric parts of the tensors of stress and strain. The example of the linearly elastic, transversely isotropic solid is used to show that this decomposition is not, in general, feasible for linearly elastic, anisotropic solids. The discussion is extended to a large class of work-hardening rigid, plastic solids, and it is shown that the considered decomposition can only be achieved for the incompressible solids of this class. PMID:16591754
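For reference, the isotropic decomposition recalled in the abstract can be written as follows (the notation is chosen here, not taken from the paper: s and e denote the stress and strain deviators, μ the shear modulus, K the bulk modulus):

    \sigma_{ij} = \tfrac{1}{3}\sigma_{kk}\,\delta_{ij} + s_{ij}, \qquad
    \varepsilon_{ij} = \tfrac{1}{3}\varepsilon_{kk}\,\delta_{ij} + e_{ij}, \qquad
    s_{ij} = 2\mu\, e_{ij}, \qquad \sigma_{kk} = 3K\,\varepsilon_{kk}.

The spherical and deviatoric parts thus decouple in the isotropic case; the paper's point is that no choice of moduli achieves such a decoupling for a general anisotropic (e.g. transversely isotropic) elastic solid.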
Fu, Jingni; Zhang, Luning
2018-01-16
Relying on the nanometer-thick water core and large surface area-to-volume ratio (~2 × 10⁸ m⁻¹) of common black film (CBF), we are able to use a pH-sensitive dye (carboxy-seminaphthorhodafluor-1, SNARF-1) to detect ammonia and acetic acid gas adsorption into the CBF, with the limit of detection reaching 0.8 ppm for NH₃ gas and 3 ppb for CH₃COOH gas in the air. Data analysis reveals that fluorescence signal change is linearly proportional to the gas concentration up to 15 ppm and 65 ppb for NH₃ and CH₃COOH, respectively.
NASA Astrophysics Data System (ADS)
Gehrels, J. C.; van Geer, F. C.; de Vries, J. J.
1994-05-01
Time series analysis of the fluctuations in shallow groundwater levels in the Netherlands lowlands has revealed a large-scale decline in head during recent decades as a result of an increase in land drainage and groundwater withdrawal. The situation is more ambiguous in large groundwater bodies located in the eastern part of the country, where the thickness of the unsaturated zone increases from near zero along the edges to about 40 m in the centre of the area. As the depth of the unsaturated zone increases, the groundwater level reacts with an increasing delay to fluctuations in climate and influences of human activities. The aim of the present paper is to model groundwater level fluctuations in these areas using a linear stochastic transfer function model, relating groundwater levels to estimated precipitation excess, and to separate artificial components from the natural groundwater regime. In this way, the impact of groundwater withdrawal and of the reclamation of a 1000 km² polder area on the groundwater levels in the adjoining higher ground could be assessed. It became evident that the linearity assumption of the transfer functions becomes a serious drawback in areas with the deepest groundwater levels, because of non-linear processes in the deep unsaturated zone and the non-synchronous arrival of recharge in the saturated zone. Comparison of the results from modelling the influence of reclamation with an analytical solution showed that the lowering of the groundwater level is partly compensated by reduced discharge and is therefore less than expected.
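A minimal sketch of a linear transfer-function model of this type is given below: groundwater head is modelled as precipitation excess convolved with an impulse response, and an artificial component (withdrawal, reclamation) is recovered as the residual between an "observed" series and the simulated natural regime. The exponential impulse response and all parameter values are illustrative assumptions, not the authors' calibrated model:

    import numpy as np

    rng = np.random.default_rng(1)
    n_months = 240
    precip_excess = rng.gamma(shape=2.0, scale=15.0, size=n_months)    # mm/month

    tau = 12.0                                       # response time [months], assumed
    lags = np.arange(120)
    impulse_response = np.exp(-lags / tau) / tau     # exponential transfer function

    base_level = -2.0                                # head below reference [m], assumed
    natural = base_level + 0.01 * np.convolve(precip_excess, impulse_response)[:n_months]
    observed = natural - 0.3 + rng.normal(scale=0.02, size=n_months)   # 0.3 m artificial lowering

    # the artificial component is estimated as the residual between the observed
    # series and the simulated natural regime
    print("estimated artificial lowering:", (natural - observed).mean().round(2), "m")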
Wiggly tails: A gravitational wave signature of massive fields around black holes
NASA Astrophysics Data System (ADS)
Degollado, Juan Carlos; Herdeiro, Carlos A. R.
2014-09-01
Massive fields can exist in long-lived configurations around black holes. We examine how the gravitational wave signal of a perturbed black hole is affected by such "dirtiness" within linear theory. As a concrete example, we consider the gravitational radiation emitted by the infall of a massive scalar field into a Schwarzschild black hole. Whereas part of the scalar field is absorbed/scattered by the black hole and triggers gravitational wave emission, another part lingers in long-lived quasibound states. Solving numerically the Teukolsky master equation for gravitational perturbations coupled to the massive Klein-Gordon equation, we find a characteristic gravitational wave signal, composed of a quasinormal ringing followed by a late time tail. In contrast to "clean" black holes, however, the late time tail contains small amplitude wiggles with the frequency of the dominating quasibound state. Additionally, an observer dependent beating pattern may also be seen. These features were already observed in fully nonlinear studies; our analysis shows they are present at the linear level and, since it reduces to a 1+1 dimensional numerical problem, allows for cleaner numerical data. Moreover, we discuss the power law of the tail and show that it only becomes universal sufficiently far away from the dirty black hole. The wiggly tails, by contrast, are a generic feature that may be used as a smoking gun for the presence of massive fields around black holes, either as a linear cloud or as fully nonlinear hair.
NASA Astrophysics Data System (ADS)
Nasir, M. N. M.; Mezeix, L.; Aminanda, Y.; Seman, M. A.; Rivai, A.; Ali, K. M.
2016-02-01
This paper presents an original method for predicting the spring-back of composite aircraft structures using non-linear Finite Element Analysis (FEA) and is an extension of a previous accompanying study on flat-geometry samples. Firstly, unidirectional prepreg lay-up samples are fabricated on moulds with different corner angles (30°, 45° and 90°) and the effect on spring-back deformation is observed. Then, the FEA model that was developed in the previous study on flat samples is utilized. The model retains the physical mechanisms of spring-back, such as ply stretching and tool-part interface properties, with additional mechanisms for the corner effect and geometrical changes in the tool, part and tool-part interface components. The comparative study between the experimental data and the FEA results shows that the FEA model adequately predicts the spring-back deformation within the range of corner angles tested.
NASA Technical Reports Server (NTRS)
Basilevsky, A. T.; Neukam, G.; Ivanov, B. A.; Werner, S. C.; vanGesselt, S.; Head, J. W.; Hauber, E.
2005-01-01
This study is based on the geological analysis of the HRSC images taken on orbit 0143 (12 m/px in the nadir channel). The study area includes the western segment of Olympus Mons and the adjacent lowland plains (Fig. 1). The part of the volcano above the scarp is rather flat and is called the "summit plateau" below. What is often called the volcano scarp is a slope classified into three morphologic types: Type 1 (S1 in Fig. 1) is the steepest and is dominated by ravines in its upper part and by talus beneath; Type 2 (S2) is intermediate in steepness and dominated by downslope-trending linear depressions, some of which have channel-like morphology; and Type 3 (S3) is the most gentle and is covered by lava flows continuing from the summit plateau down to the lowland plains.
How Can Health System Efficiency Be Improved in Canada?
Allin, Sara; Veillard, Jeremy; Wang, Li; Grignon, Michel
2015-08-01
Improving value for money in the health system is an often-stated policy goal. This study is the first to systematically measure the efficiency of health regions in Canada in producing health gains with their available resources, and to identify the factors that are associated with increased efficiency. Based on the objective elicited from decision-makers that the health system should ensure access to care for Canadians when they need it, we measured the efficiency with which regions reduce causes of death that are amenable to healthcare interventions using a linear programming approach (data envelopment analysis). Variations in efficiency were explained in part by public health factors, such as the prevalence of obesity and smoking in the population; in part by characteristics of the population, such as their average income; and in part by managerial factors, such as hospital readmissions. Copyright © 2015 Longwoods Publishing.
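A minimal sketch of the data envelopment analysis step is given below: for each region, an input-oriented efficiency score is obtained from a small linear program (CCR envelopment form) solved with scipy. The input and output numbers are invented; in the study, the inputs would be regional health resources and the output a measure of amenable-mortality reduction:

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[3.0, 2.0, 5.0, 4.0]])   # one input (e.g. spending) for 4 regions
    Y = np.array([[1.0, 1.0, 2.0, 1.5]])   # one output (e.g. amenable-mortality gain)
    n = X.shape[1]

    def dea_efficiency(j0):
        # decision vector z = [theta, lambda_1 .. lambda_n]; minimize theta
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.hstack([-X[:, [j0]], X])                  # sum lam*x <= theta*x_j0
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # sum lam*y >= y_j0
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return res.x[0]

    for j in range(n):
        print(f"region {j}: DEA efficiency = {dea_efficiency(j):.2f}")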
NASA Astrophysics Data System (ADS)
Pujiyanto; Yasin, M.; Rusydi, F.
2018-03-01
Development of lead ion detection systems is expected to offer advantages in terms of device simplicity and easy analysis of lead ion concentration with very high performance. One important part of a lead ion detection system is the electrical signal acquisition stage. The electrical signal acquisition part uses the following main electronic components: a non-inverting op-amp, an instrumentation amplifier, a multiplier circuit and a logarithmic amplifier. Here, the performance of the lead ion detection system is shown when the electrical signal processing is built from commercial electronic components. The results of this experiment show that the lead ion sensor that has been developed can detect lead ions with a sensitivity of 10.48 mV/ppm, a linearity of 97.11% and a measurement range of 0.1 ppm to 80 ppm.
Hamiltonian modelling of relative motion.
Kasdin, N Jeremy; Gurfil, Pini
2004-05-01
This paper presents a Hamiltonian approach to modelling relative spacecraft motion based on the derivation of canonical coordinates for the relative state-space dynamics. The Hamiltonian formulation facilitates the modelling of high-order terms and orbital perturbations while allowing us to obtain closed-form solutions to the relative motion problem. First, the Hamiltonian is partitioned into a linear term and a high-order term. The Hamilton-Jacobi equations are solved for the linear part by separation, and new constants for the relative motion are obtained; these are called epicyclic elements. The influence of higher order terms and perturbations, such as the oblateness of the Earth, is incorporated into the analysis by a variation of parameters procedure. Closed-form solutions for J2- and J4-invariant orbits and for periodic high-order unperturbed relative motion, in terms of the relative motion elements only, are obtained.
Temporal Gain Correction for X-Ray Calorimeter Spectrometers
NASA Technical Reports Server (NTRS)
Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.
2016-01-01
Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, the bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and often the event analysis, i.e., shaping, optimal filters, etc., adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10⁴ over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
33 CFR 207.460 - Fox River, Wis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... desiring to use the Kaukauna drydock will give notice to the U.S. Assistant Engineer in local charge at... per linear foot; $25 minimum charge. Barges, dump scows, and derrick boats, 65 cents per linear foot... made on such Sundays and holidays): For all vessels, 20 cents per linear foot per calendar day or part...
33 CFR 207.460 - Fox River, Wis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... desiring to use the Kaukauna drydock will give notice to the U.S. Assistant Engineer in local charge at... per linear foot; $25 minimum charge. Barges, dump scows, and derrick boats, 65 cents per linear foot... made on such Sundays and holidays): For all vessels, 20 cents per linear foot per calendar day or part...
33 CFR 207.460 - Fox River, Wis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... desiring to use the Kaukauna drydock will give notice to the U.S. Assistant Engineer in local charge at... per linear foot; $25 minimum charge. Barges, dump scows, and derrick boats, 65 cents per linear foot... made on such Sundays and holidays): For all vessels, 20 cents per linear foot per calendar day or part...
40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures
Code of Federal Regulations, 2012 CFR
2012-07-01
... Systems 1.2.1Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...
40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures
Code of Federal Regulations, 2013 CFR
2013-07-01
... Systems 1.2.1Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...
Linearization: Students Forget the Operating Point
ERIC Educational Resources Information Center
Roubal, J.; Husek, P.; Stecha, J.
2010-01-01
Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
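A compact way to make the operating-point issue concrete is the generic recipe below: linearize dx/dt = f(x, u) about (x0, u0) with numerical Jacobians, and remember that the resulting A and B matrices govern deviations from that point, not the absolute state. The pendulum model and the numbers are an illustrative choice, not taken from the paper:

    import numpy as np

    def f(x, u, g=9.81, L=1.0, b=0.1):
        """Nonlinear pendulum: state x = (angle, angular rate), input torque u."""
        theta, omega = x
        return np.array([omega, -(g / L) * np.sin(theta) - b * omega + u])

    def linearize(f, x0, u0, eps=1e-6):
        """Numerical Jacobians A = df/dx, B = df/du at the operating point."""
        f0 = f(x0, u0)
        n = len(x0)
        A = np.zeros((n, n))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - f0) / eps
        B = ((f(x0, u0 + eps) - f0) / eps).reshape(n, 1)
        return A, B

    x0, u0 = np.array([0.0, 0.0]), 0.0      # operating point: hanging at rest
    A, B = linearize(f, x0, u0)
    # the linear model d(dx)/dt = A*dx + B*du describes DEVIATIONS from (x0, u0);
    # forgetting this offset is exactly the mistake discussed above
    print(A); print(B)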
Understanding Linear Functions and Their Representations
ERIC Educational Resources Information Center
Wells, Pamela J.
2015-01-01
Linear functions are an important part of the middle school mathematics curriculum. Students in the middle grades gain fluency by working with linear functions in a variety of representations (NCTM 2001). Presented in this article is an activity that was used with five eighth-grade classes at three different schools. The activity contains 15 cards…
Linear Classification of Dairy Cattle. Slide Script.
ERIC Educational Resources Information Center
Sipiorski, James; Spike, Peter
This slide script, part of a series of slide scripts designed for use in vocational agriculture classes, deals with principles of the linear classification of dairy cattle. Included in the guide are narrations for use with 63 slides, which illustrate the following areas that are considered in the linear classification system: stature, strength,…
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by some stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Metrics-driven adaptive control is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hau, Jan-Niklas, E-mail: hau@fdy.tu-darmstadt.de; Oberlack, Martin; GSC CE, Technische Universität Darmstadt, Dolivostraße 15, 64293 Darmstadt
2015-12-15
Aerodynamic sound generation in shear flows is investigated in the light of the breakthrough in hydrodynamic stability theory in the 1990s, when generic phenomena of non-normal shear flow systems were understood. By applying the short-time/non-modal approach that emerged from this work, the sole linear mechanism of wave generation by vortices in shear flows was captured [G. D. Chagelishvili, A. Tevzadze, G. Bodo, and S. S. Moiseev, “Linear mechanism of wave emergence from vortices in smooth shear flows,” Phys. Rev. Lett. 79, 3178-3181 (1997); B. F. Farrell and P. J. Ioannou, “Transient and asymptotic growth of two-dimensional perturbations in viscous compressible shear flow,” Phys. Fluids 12, 3021-3028 (2000); N. A. Bakas, “Mechanism underlying transient growth of planar perturbations in unbounded compressible shear flow,” J. Fluid Mech. 639, 479-507 (2009); and G. Favraud and V. Pagneux, “Superadiabatic evolution of acoustic and vorticity perturbations in Couette flow,” Phys. Rev. E 89, 033012 (2014)]. Its source is the non-normality induced linear mode-coupling, which becomes efficient at moderate Mach numbers, defined for each perturbation harmonic as the ratio of the shear rate to its characteristic frequency. Based on the results of the non-modal approach, we investigate a two-dimensional homentropic constant shear flow and focus on the dynamical characteristics in the wavenumber plane. This allows us to separate from each other the participants of the dynamical processes — vortex and wave modes — and to estimate the efficacy of the process of linear wave generation. This process is analyzed and visualized on the example of a packet of vortex modes localized in both the spectral and physical planes. Further, by employing direct numerical simulations, the wave generation by chaotically distributed vortex modes is analyzed and the involved linear and nonlinear processes are identified. The generated acoustic field is anisotropic in the wavenumber plane, which results in highly directional linear sound radiation, whereas the nonlinearly generated waves are almost omni-directional. As part of this analysis, we compare the effectiveness of the linear and nonlinear mechanisms of wave generation within the range of validity of the rapid distortion theory and show the dominance of the linear aerodynamic sound generation. Finally, topological differences between the linear source term of the acoustic analogy equation and the anisotropic non-normality induced linear mechanism of wave generation are found.
Linearization of digital derived rate algorithm for use in linear stability analysis
NASA Technical Reports Server (NTRS)
Graham, R. E.; Porada, T. W.
1985-01-01
The digital derived rate (DDR) algorithm is used to calculate the rate of rotation of the Centaur upper-stage rocket. The DDR is a highly nonlinear algorithm, and classical linear stability analysis of the spacecraft cannot be performed without linearization. The performance of this rate algorithm is characterized by a gain curve and a phase curve that drop off at the same frequency. This characteristic is desirable for many applications. A linearization technique for the DDR algorithm is investigated and described. Examples of the results of the linearization technique are illustrated, and the effects of linearization are described. A linear digital filter may be used as a substitute for the DDR when performing classical linear stability analyses, while the DDR itself may be used in time response analysis.
Rasmuson, James O; Roggli, Victor L; Boelter, Fred W; Rasmuson, Eric J; Redinger, Charles F
2014-01-01
A detailed evaluation of the correlation and linearity of industrial hygiene retrospective exposure assessment (REA) for cumulative asbestos exposure with asbestos lung burden analysis (LBA) has not been previously performed, but both methods are utilized for case-control and cohort studies and other applications such as setting occupational exposure limits. The objectives were (a) to correlate REA with asbestos LBA for a large number of cases from varied industries and exposure scenarios; (b) to evaluate the linearity, precision, and applicability of both industrial hygiene exposure reconstruction and LBA; and (c) to demonstrate validation methods for REA. A panel of four experienced industrial hygiene raters independently estimated the cumulative asbestos exposure for 363 cases with limited exposure details in which asbestos LBA had been independently determined. LBA for asbestos bodies was performed by a pathologist by both light microscopy and scanning electron microscopy (SEM), and free asbestos fibers were analyzed by SEM. Precision, reliability, correlation and linearity were evaluated via intraclass correlation, regression analysis and analysis of covariance. Plaintiff's answers to interrogatories, work history sheets, work summaries or plaintiff's discovery depositions that were obtained in court cases involving asbestos were utilized by the pathologist to provide a brief summary of the asbestos exposure and work history for each of the 363 cases. Linear relationships between REA and LBA were found when adjustment was made for asbestos fiber-type exposure differences. Significant correlation between REA and LBA was found with amphibole asbestos lung burden and mixed fiber-types, but not with chrysotile. The intraclass correlation coefficients (ICC) for the precision of the industrial hygiene rater cumulative asbestos exposure estimates and the precision of repeated laboratory analysis were found to be in the excellent range. The ICC estimates were performed independent of specific asbestos fiber-type. Both REA and pathology assessment are reliable and complementary predictive methods to characterize asbestos exposures. Correlation analysis between the two methods effectively validates both the REA methodology and the LBA procedures within the determined precision, particularly for cumulative amphibole asbestos exposures, since chrysotile fibers, for the most part, are not retained in the lung for an extended period of time.
Design of two-dimensional channels with prescribed velocity distributions along the channel walls
NASA Technical Reports Server (NTRS)
Stanitz, John D
1953-01-01
A general method of design is developed for two-dimensional unbranched channels with prescribed velocities as a function of arc length along the channel walls. The method is developed for both compressible and incompressible, irrotational, nonviscous flow and applies to the design of elbows, diffusers, nozzles, and so forth. In part I solutions are obtained by relaxation methods; in part II solutions are obtained by a Green's function. Five numerical examples are given in part I including three elbow designs with the same prescribed velocity as a function of arc length along the channel walls but with incompressible, linearized compressible, and compressible flow. One numerical example is presented in part II for an accelerating elbow with linearized compressible flow, and the time required for the solution by a Green's function in part II was considerably less than the time required for the same solution by relaxation methods in part I.
Rowan, L.C.; Trautwein, C.M.; Purdy, T.L.
1990-01-01
This study was undertaken as part of the Conterminous U.S. Mineral Assessment Program (CUSMAP). The purpose of the study was to map linear features on Landsat Multispectral Scanner (MSS) images and a proprietary side-looking airborne radar (SLAR) image mosaic and to determine the spatial relationship between these linear features and the locations of metallic mineral occurrences. The results show a close spatial association of linear features with metallic mineral occurrences in parts of the quadrangle, but in other areas the association is less well defined. Linear features are defined as distinct linear and slightly curvilinear elements mappable on MSS and SLAR images. The features generally represent linear segments of streams, ridges, and terminations of topographic features; however, they may also represent tonal patterns that are related to variations in lithology and vegetation. Most linear features in the Butte quadrangle probably represent underlying structural elements, such as fractures (with and without displacement), dikes, and alignment of fold axes. However, in areas underlain by sedimentary rocks, some of the linear features may reflect bedding traces. This report describes the geologic setting of the Butte quadrangle, the procedures used in mapping and analyzing the linear features, and the results of the study. Relationships of these features to placer and non-metal deposits were not analyzed in this study and are not discussed in this report.
Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure
NASA Astrophysics Data System (ADS)
Santoso, D. A.; Farid, A.; Ulum, B.
2017-06-01
Evaluation and assessment are an important part of learning. In the evaluation process of learning, written tests are still commonly used. However, the tests are usually not followed up by further evaluation. The process often stops at the grading stage and does not evaluate the process and the errors made by students, whereas if a student shows a pattern of errors and process errors, the actions taken can be more focused on the fault and on why it happens. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. As a result, the mistakes most often made by students occur in the modeling (transformation) phase and in process skills, with overall percentages of 20% and 15%, respectively. According to the observations, these errors occur most commonly due to students' lack of precision in modeling and to hasty calculation. Through this error analysis, it is expected that educators can determine or use the right way to address these errors in the next lesson.
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Weatherill, W. H.
1982-01-01
A finite difference method for solving the unsteady transonic flow about harmonically oscillating wings is investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady velocity potential is linear with spatially varying coefficients and with the time variable eliminated by assuming harmonic motion. A study is presented of the shock motion associated with an oscillating airfoil and its representation by the harmonic procedure. The effects of the shock motion and the resulting pressure pulse are shown to be included in the harmonic pressure distributions and the corresponding generalized forces. Analytical and experimental pressure distributions for the NACA 64A010 airfoil are compared for Mach numbers of 0.75, 0.80 and 0.842. A typical section, two-degree-of-freedom flutter analysis of a NACA 64A010 airfoil is performed. The results show a sharp transonic bucket in one case and abrupt changes in instability modes.
Sex assessment using measurements of the first lumbar vertebra.
Zheng, Wen Xu; Cheng, Fu Bo; Cheng, Kai Liang; Tian, Yong; Lai, Ying; Zhang, Wen Song; Zheng, Ya Juan; Li, You Qiong
2012-06-10
Sex determination is a vital part of the medico-legal system but can be difficult in cases where the integrity of the body has been compromised. The purpose of this study was to develop a technique for sex assessment from measurements of the first lumbar vertebra. Twenty-nine linear measurements and five ratios were collected from 113 Chinese adult males and 97 Chinese adult females using digital three-dimensional anthropometry methods. Using discriminant analysis, we found that 23 linear measurements and two ratios identified sexual dimorphism (P<0.01), with predictive accuracy ranging from 57.1% to 86.6%. Using a stepwise method of discriminant function analysis, we found that three dimensions predicted sex with 88.6% accuracy: (a) upper end-plate width (EPWu), (b) left pedicle height (PHl), and (c) middle end-plate depth (EPDm). This study shows that a single first lumbar vertebra can be used for this purpose, and that the discriminant equation will help forensic determination of sex in the Chinese population. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
State-variable analysis of non-linear circuits with a desk computer
NASA Technical Reports Server (NTRS)
Cohen, E.
1981-01-01
State variable analysis was used to analyze the transient performance of non-linear circuits on a desktop computer. The non-linearities considered were not restricted to any particular circuit element. All that is required for the analysis is that the relationship defining each non-linearity be known in terms of points on a curve.
Quantitative Anatomy of the Trapezius Muscle in the Human Fetus.
Badura, Mateusz; Grzonkowska, Magdalena; Baumgart, Mariusz; Szpinda, Michał
2016-01-01
The trapezius muscle consists of three parts that are capable of functioning independently. Its superior part, together with the levator scapulae and rhomboids, elevates the shoulder; the middle part retracts the scapula, while the inferior part lowers the shoulder. The present study aimed to supplement numerical data and to provide the growth dynamics of the trapezius in the human fetus. Using methods of anatomical dissection, digital image analysis (NIS Elements AR 3.0), and statistics (Student's t-test, regression analysis), we measured the length, the width and the surface area of the trapezius in 30 fetuses of both sexes (13 and 17) aged 13-19 weeks. Neither sex nor laterality differences were found. All the studied parameters of the trapezius increased proportionately with age. The linear functions were computed as follows: y = -103.288 + 10.514 × age (r = 0.957) for the total length of the trapezius muscle, y = -67.439 + 6.689 × age (r = 0.856) for the length of its descending part, y = -8.493 + 1.033 × age (r = 0.53) for the length of its transverse part, y = -27.545 + 2.802 × age (r = 0.791) for the length of its ascending part, y = -19.970 + 2.505 × age (r = 0.875) for the width of the trapezius muscle, and y = -2670.458 + 212.029 × age (r = 0.915) for its surface area. Neither sex nor laterality differences exist in the numerical data of the trapezius muscle in the human fetus. The descending part of the trapezius is the longest, while its transverse part is the shortest. The growth dynamics of the fetal trapezius muscle follow a proportionate, linear pattern.
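For readers who want to reproduce this kind of growth function, the sketch below fits y = a + b × age by least squares and reports the correlation coefficient; the (age, length) pairs are invented stand-ins for the fetal measurements:

    import numpy as np
    from scipy import stats

    age_weeks = np.array([13, 14, 15, 16, 17, 18, 19], dtype=float)
    length_mm = np.array([33.0, 44.0, 55.0, 63.0, 76.0, 85.0, 97.0])   # invented values

    fit = stats.linregress(age_weeks, length_mm)
    print(f"y = {fit.intercept:.3f} + {fit.slope:.3f} * age,  r = {fit.rvalue:.3f}")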
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins (PCMs) is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes that the simulation models be applied to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
Some aspects of the analysis of geodetic strain observations in kinematic models
NASA Astrophysics Data System (ADS)
Welsch, W. M.
1986-11-01
Frequently, deformation processes are analyzed in static models. In many cases, this procedure is justified, in particular if the deformation occurring is a singular event. If, however, the deformation is a continuous process, as is the case, for instance, with recent crustal movements, the analysis in kinematic models is more commensurate with the problem because the factor "time" is considered an essential part of the model. Some special aspects have to be considered when analyzing geodetic strain observations in kinematic models. They are dealt with in this paper. After a brief derivation of the basic kinematic model and the kinematic strain model, the following subjects are treated: the adjustment of the pointwise velocity field and the derivation of strain-rate parameters; the fixing of the kinematic reference system as part of the geodetic datum; statistical tests of models by testing linear hypotheses; the invariance of kinematic strain-rate parameters with respect to transformations of the coordinate system and the geodetic datum; and the interpolation of strain rates by finite-element methods. After the representation of some advanced models for the description of secular and episodic kinematic processes, data analysis in dynamic models is regarded as a further generalization of deformation analysis.
Silva, Daniel R.; Brenzan, Mislaine A.; Kambara, Lauro M.; Cortez, Lucia E. R.; Cortez, Diógenes A. G.
2013-01-01
Background: Piper ovatum (Piperaceae) has been used in traditional medicine for the treatment of inflammations and as an analgesic. Previous studies have shown important biological activities of the extracts and amides from P. ovatum leaves. Objective: In this study, a high-performance liquid chromatography (HPLC) method was developed and validated for the quantitative determination of the amides in different parts of Piper ovatum. Materials and Methods: The analysis was carried out on a Metasil ODS column (150 × 4.6 mm, 5 μm) at room temperature. The mobile phase consisted of acetonitrile (A) and water (B) containing 1.0% acetic acid. The gradient elution used was 0-30 min, 0-60% A; 30-40 min, 60% A. The flow rate was 1.0 mL/min, with detection at 280 nm. Results: The validation, using piperlonguminine as the standard, demonstrated that the method shows linearity (linear correlation coefficient = 0.998), precision (relative standard deviation <5%) and accuracy (mean recovery = 103.78%) in the concentration range 31.25-500 μg/mL. The limits of detection and quantification were 1.21 and 4.03 μg/mL, respectively. This method allowed the identification and quantification of piperlonguminine and piperovatine in the hydroethanolic extracts of P. ovatum obtained from the leaves, stems and roots. All the extracts showed the same chromatographic profile. The leaves and roots contained the highest concentrations of piperlonguminine, and the stems and leaves showed the highest concentrations of piperovatine. Conclusion: This HPLC method is suitable for routine quantitative analysis of amides in extracts of Piper ovatum and phytopharmaceuticals containing this herb. PMID:24174818
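As a hedged illustration of the calibration figures of merit reported above, the sketch below fits a calibration line to invented peak areas and computes the linearity (r) together with LOD and LOQ using the common 3.3σ/slope and 10σ/slope convention, which may differ from the exact procedure the authors used:

    import numpy as np
    from scipy import stats

    conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0])          # ug/mL standards
    area = np.array([1.05e4, 2.02e4, 4.10e4, 8.35e4, 1.64e5])    # invented peak areas

    fit = stats.linregress(conc, area)
    resid_sd = np.std(area - (fit.intercept + fit.slope * conc), ddof=2)
    lod = 3.3 * resid_sd / fit.slope
    loq = 10.0 * resid_sd / fit.slope
    print(f"r = {fit.rvalue:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")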
Bibliography on Cold Regions Science and Technology. Volume 40, Part 1, 1986
1986-12-01
water migration in an unsaturated frozen soil, Morin clay, was determined in horizontally closed soil columns under linear temperature gradients...Peninsula At both ice fronts there is significant tidal height energy in the first seven tidal species, indicating strong non-linear interaction, not all...dry soil weight, and increases with the increase in the molality linearly because of the linear freezing point depression. The curves of the
Comparison of linear synchronous and induction motors
DOT National Transportation Integrated Search
2004-06-01
A propulsion trade study was conducted as part of the Colorado Maglev Project of FTA's Urban Maglev Technology Development Program to identify and evaluate prospective linear motor designs that could potentially meet the system performance requiremen...
Supporting Students' Understanding of Linear Equations with One Variable Using Algebra Tiles
ERIC Educational Resources Information Center
Saraswati, Sari; Putri, Ratu Ilma Indra; Somakim
2016-01-01
This research aimed to describe how algebra tiles can support students' understanding of linear equations with one variable. This article is part of a larger research project on the learning design of linear equations with one variable using algebra tiles combined with the balancing method. Therefore, it will merely discuss one activity focused on how students…
ADDING REALISM TO NUCLEAR MATERIAL DISSOLVING ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, B.
2011-08-15
Two new criticality modeling approaches have greatly increased the efficiency of dissolver operations in H-Canyon. The first new approach takes credit for the linear, physical distribution of the mass throughout the entire length of the fuel assembly. This distribution of mass is referred to as the linear density. Crediting the linear density of the fuel bundles results in using lower fissile concentrations, which allows higher masses to be charged to the dissolver. Also, this approach takes credit for the fact that only part of the fissile mass is wetted at a time. There are multiple assemblies stacked on top of each other in a bundle. On average, only 50-75% of the mass (the bottom two or three assemblies) is wetted at a time. This means that only 50-75% (depending on operating level) of the mass is moderated and is contributing to the reactivity of the system. The second new approach takes credit for the progression of the dissolving process. Previously, dissolving analysis looked at a snapshot in time where the same fissile material existed both in the wells and in the bulk solution at the same time. The second new approach models multiple consecutive phases that simulate the fissile material moving from a high concentration in the wells to a low concentration in the bulk solution. This approach is more realistic and allows higher fissile masses to be charged to the dissolver.
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-08-01
Orthodontic tooth movement is a complex procedure that occurs due to various biomechanical changes in the periodontium. Optimal orthodontic forces yield maximum tooth movement, whereas forces beyond the optimal threshold can cause deleterious effects. Among the various types of tooth movement, intrusion and lingual root torque are associated with root resorption, especially in the incisors. Therefore, in this study, the stress patterns in the periodontal ligament (PDL) were evaluated for intrusion and lingual root torque using the finite element method (FEM). A three-dimensional (3D) FEM model of the maxillary incisors was generated using SOLIDWORKS modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software with linear stress analysis. It was observed that with the application of an intrusive load, compressive stresses were distributed at the apex, whereas tensile stress was seen at the cervical margin. With the application of lingual root torque, maximum compressive stress was distributed at the apex and tensile stress was distributed throughout the PDL. For intrusive and lingual root torque movements, the stress values over the PDL were within the range of optimal stress values as proposed by Lee, with the force system given by Proffit as optimum forces for orthodontic tooth movement, using linear properties.
Investigation of x-ray spectra for iodinated contrast-enhanced dedicated breast CT
Glick, Stephen J.; Makeev, Andrey
2017-01-01
Screening for breast cancer with mammography has been very successful, contributing in part to a reduction of breast cancer mortality by approximately 39% since 1990. However, mammography still has limitations in performance, especially for women with dense breast tissue. Iodinated contrast-enhanced, dedicated breast CT (BCT) has been proposed to improve lesion analysis and the accuracy of diagnostic workup for patients suspected of having breast cancer. A mathematical analysis to explore the use of various x-ray filters for iodinated contrast-enhanced BCT is presented. To assess task-based performance, the ideal linear observer signal-to-noise ratio (SNR) is used as a figure of merit under the assumptions of a linear, shift-invariant imaging system. To estimate signal and noise propagation through the BCT detector, a parallel-cascade model was used. The lesion model was embedded into a structured background and included a realistic level of iodine uptake. SNR was computed for 84,000 different exposure settings by varying the kV setting, x-ray filter materials and thicknesses, breast size and composition, and radiation dose. It is shown that some x-ray filter material/thickness combinations can provide up to a 75% improvement in the linear ideal observer SNR over a conventionally used x-ray filter for BCT. This improvement in SNR can be traded off for substantial reductions in mean glandular dose. PMID:28149923
Hemanth, M; deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-01-01
Background: Orthodontic tooth movement is a complex procedure that occurs due to various biomechanical changes in the periodontium. Optimal orthodontic forces yield maximum tooth movement, whereas forces beyond the optimal threshold can cause deleterious effects. Among the various types of tooth movement, intrusion and lingual root torque are associated with root resorption, especially in the incisors. Therefore, in this study, the stress patterns in the periodontal ligament (PDL) were evaluated for intrusion and lingual root torque using the finite element method (FEM). Materials and Methods: A three-dimensional (3D) FEM model of the maxillary incisors was generated using SOLIDWORKS modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software with linear stress analysis. Results: It was observed that with the application of an intrusive load, compressive stresses were distributed at the apex, whereas tensile stress was seen at the cervical margin. With the application of lingual root torque, maximum compressive stress was distributed at the apex and tensile stress was distributed throughout the PDL. Conclusion: For intrusive and lingual root torque movements, the stress values over the PDL were within the range of optimal stress values as proposed by Lee, with the force system given by Proffit as optimum forces for orthodontic tooth movement, using linear properties. PMID:26464555
Prediction of aquatic toxicity mode of action using linear discriminant and random forest models.
Martin, Todd M; Grulke, Christopher M; Young, Douglas M; Russom, Christine L; Wang, Nina Y; Jackson, Crystal R; Barron, Mace G
2013-09-23
The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to relatively few MOAs, have high uncertainty, or rely on professional judgment. In this study, machine learning algorithms (linear discriminant analysis and random forest) were used to develop models for assigning aquatic toxicity MOA. These methods were selected because they have been shown to correlate diverse data sets and provide an indication of the most important descriptors. A data set of MOA assignments for 924 chemicals was developed using a combination of high-confidence assignments, international consensus classifications, ASTER (ASsessment Tools for the Evaluation of Risk) predictions, and weight-of-evidence professional judgment based on an assessment of structure and literature information. The overall data set was randomly divided into a training set (75%) and a validation set (25%) and then used to develop linear discriminant analysis (LDA) and random forest (RF) MOA assignment models. The LDA and RF models had high internal concordance and specificity and produced overall prediction accuracies ranging from 84.5 to 87.7% for the validation set. These results demonstrate that computational chemistry approaches can be used to determine the acute toxicity MOA across a large range of structures and mechanisms.
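A minimal sketch of the random forest half of such a workflow is shown below; the descriptor matrix and MOA labels are random stand-ins for the curated 924-chemical data set, so the reported accuracy is meaningless except as a demonstration of the mechanics (training/validation split, fitting, descriptor importances):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(924, 20))       # 20 hypothetical chemical descriptors
    y = rng.integers(0, 6, size=924)     # 6 hypothetical MOA classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print("validation accuracy:", accuracy_score(y_te, rf.predict(X_te)))
    print("descriptor importances (first five):", rf.feature_importances_[:5])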
NASA Astrophysics Data System (ADS)
Nampally, Subhadra; Padhy, Simanchal; Trupti, S.; Prabhakar Prasad, P.; Seshunarayana, T.
2018-05-01
We study local site effects with detailed geotechnical and geophysical site characterization to evaluate the site-specific seismic hazard for the seismic microzonation of the Chennai city in South India. A Maximum Credible Earthquake (MCE) of magnitude 6.0 is considered based on the available seismotectonic and geological information of the study area. We synthesized strong ground motion records for this target event using a stochastic finite-fault technique, based on a dynamic corner frequency approach, at different sites in the city, with the model parameters for the source, site, and path (attenuation) most appropriately selected for this region. We tested the influence of several model parameters on the characteristics of ground motion through simulations and found that stress drop largely influences both the amplitude and frequency of ground motion. To minimize its influence, we estimated the stress drop after finite-bandwidth correction, as expected for an M6 earthquake in the Indian peninsula shield, for accurately predicting the level of ground motion. Estimates of shear wave velocity averaged over the top 30 m of soil (VS30) were obtained from multichannel analysis of surface waves (MASW) at 210 sites at depths of 30 to 60 m below the ground surface. Using these VS30 values, along with the available geotechnical information and the synthetic ground motion database obtained, equivalent linear one-dimensional site response analysis, which approximates the nonlinear soil behavior within a linear analysis framework, was performed using the computer program SHAKE2000. The fundamental natural frequency, Peak Ground Acceleration (PGA) at the surface and rock levels, response spectra at the surface level for different damping coefficients, and amplification factors are presented for different sites of the city. A liquefaction study was carried out based on the VS30 and PGA values obtained. The major findings show that the northeast part of the city is characterized by (i) low VS30 values (< 200 m/s) associated with alluvial deposits, (ii) a relatively high PGA value at the surface of about 0.24 g, and (iii) a factor of safety against liquefaction below unity at three sites (no. 12, no. 37, and no. 70). Thus, this part of the city is expected to experience damage in the expected M6 target event.
Improving Incremental Balance in the GSI 3DVAR Analysis System
NASA Technical Reports Server (NTRS)
Errico, Ronald M.; Yang, Runhua; Kleist, Daryl T.; Parrish, David F.; Derber, John C.; Treadon, Russ
2008-01-01
The Gridpoint Statistical Interpolation (GSI) analysis system is a unified global/regional 3DVAR analysis code that has been under development for several years at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center. It has recently been implemented into operations at NCEP in both the global and North American data assimilation systems (GDAS and NDAS). An important aspect of this development has been improving the balance of the analysis produced by GSI. The improved balance between variables has been achieved through the inclusion of a Tangent Linear Normal Mode Constraint (TLNMC). The TLNMC method has proven to be very robust and effective. The TLNMC as part of the global GSI system has resulted in substantial improvement in data assimilation both at NCEP and at the NASA Global Modeling and Assimilation Office (GMAO).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
Statistical Techniques for Analyzing Process or "Similarity" Data in TID Hardness Assurance
NASA Technical Reports Server (NTRS)
Ladbury, R.
2010-01-01
We investigate techniques for estimating the contributions to TID hardness variability for families of linear bipolar technologies, determining how part-to-part and lot-to-lot variability change for different part types in the process.
Variable selection for marginal longitudinal generalized linear models.
Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio
2005-06-01
Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast the results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GC(p).
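For orientation, the classical Mallows' C(p) that GC(p) generalizes can be computed as in the sketch below, where a candidate subset is judged adequate when C(p) is close to its number of parameters; the data are simulated, and this is not the GEE-based GC(p) of the article:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    X_full = rng.normal(size=(n, 5))                     # 5 candidate predictors
    y = 2.0 * X_full[:, 0] - X_full[:, 1] + rng.normal(size=n)

    def sse_and_p(X):
        Xd = np.column_stack([np.ones(n), X])            # add intercept
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        r = y - Xd @ beta
        return r @ r, Xd.shape[1]

    sse_full, p_full = sse_and_p(X_full)
    sigma2 = sse_full / (n - p_full)                     # error variance from the full model
    sse_sub, p_sub = sse_and_p(X_full[:, :2])            # candidate subset: first two predictors
    cp = sse_sub / sigma2 - n + 2 * p_sub
    print(f"Cp = {cp:.2f}  (close to p = {p_sub} suggests an adequate subset)")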
Overview of NASA's Integrated Design and Engineering Analysis (IDEA)Environment
NASA Technical Reports Server (NTRS)
Robinson, Jeffrey S.; Martin John G.
2008-01-01
Historically, the design of subsonic and supersonic aircraft has been divided into separate technical disciplines (such as propulsion, aerodynamics and structures), each of which performs its design and analysis in relative isolation from the others. This is possible in most cases either because the amount of interdisciplinary coupling is minimal or because the interactions can be treated as linear. The design of hypersonic airbreathing vehicles, like NASA's X-43, is quite the opposite. Such systems are dominated by strong non-linear interactions between disciplines. The design of these systems demands that a multi-disciplinary approach be taken. Furthermore, increased analytical fidelity at the conceptual design phase is highly desirable, as many of the non-linearities are not captured by lower fidelity tools. Only when these systems are designed from a true multi-disciplinary perspective can the real performance benefits be achieved and complete vehicle systems be fielded. Toward this end, the Vehicle Analysis Branch at NASA Langley Research Center has been developing the Integrated Design & Engineering Analysis (IDEA) Environment. IDEA is a collaborative environment for parametrically modeling conceptual and preliminary launch vehicle configurations using the Adaptive Modeling Language (AML) as the underlying framework. The environment integrates geometry, configuration, propulsion, aerodynamics, aerothermodynamics, trajectory, closure and structural analysis into a generative, parametric, unified computational model where data are shared seamlessly between the different disciplines. Plans are also in place to incorporate life cycle analysis tools into the environment, which will estimate vehicle operability, reliability and cost. IDEA is currently being funded by NASA's Hypersonics Project, a part of the Fundamental Aeronautics Program within the Aeronautics Research Mission Directorate. The environment is currently focused on a two-stage-to-orbit configuration with a turbine-based combined cycle (TBCC) first stage and a reusable rocket second stage. This paper provides an overview of the development of the IDEA environment, a description of its current status, and details of future plans.
Numerical analysis of composite steel-concrete sections using integral equation of Volterra
NASA Astrophysics Data System (ADS)
Partov, Doncho; Kantchev, Vesselin
2011-09-01
The paper presents an analysis of the changes in stresses and deflections due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium and compatibility and the constitutive relationships, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann-Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time "t", two independent Volterra integral equations of the second kind have been derived. A numerical method based on a linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 model and the ACI 209R-92 model. The elastic modulus of concrete Ec(t) is assumed to be constant in time 't'.
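A minimal sketch of a step-by-step solver for a Volterra integral equation of the second kind, x(t) = g(t) + integral_0^t K(t,s) x(s) ds, is given below using trapezoidal weights (a piecewise-linear treatment of the integrand, analogous in spirit to the kernel approximation described above but not identical to it). The kernel and free term are toy choices with a known exact solution, not the Boltzmann-Volterra creep kernel of the paper:

    import numpy as np

    def solve_volterra2(g, K, t):
        """Solve x(t) = g(t) + int_0^t K(t,s) x(s) ds on the uniform grid t."""
        h = t[1] - t[0]
        x = np.empty_like(t)
        x[0] = g(t[0])
        for i in range(1, len(t)):
            w = np.full(i + 1, h)
            w[0] = w[-1] = 0.5 * h                       # trapezoidal weights
            rhs = g(t[i]) + np.sum(w[:-1] * K(t[i], t[:i]) * x[:i])
            x[i] = rhs / (1.0 - w[-1] * K(t[i], t[i]))   # implicit final node
        return x

    # toy problem with known solution x(t) = 1 + t
    g = lambda s: 1.0
    K = lambda ti, s: np.exp(-(ti - s))
    t = np.linspace(0.0, 2.0, 201)
    x = solve_volterra2(g, K, t)
    print("x(2) =", round(float(x[-1]), 4), "(exact: 3.0)")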
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package, MPFEA (Massively Parallel-vector Finite Element Analysis), is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Krumin, Michael; Shoham, Shy
2010-01-01
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis, which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method. PMID:20454705
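For orientation, the sketch below shows a standard Yule-Walker fit of a first-order multivariate autoregressive model to continuous-valued series; the paper's contribution is a generalized, distortion-corrected version of such equations for "hidden" MVAR models behind Linear-Nonlinear-Poisson spike trains, which this plain fit does not include.

```python
import numpy as np

def fit_var1_yule_walker(x):
    """Fit x_t = A x_{t-1} + e_t from sample covariances via the Yule-Walker
    relation Gamma(1) = A Gamma(0), i.e. A = Gamma(1) Gamma(0)^{-1}.
    x is a (T, n) array of n simultaneously recorded series."""
    x = x - x.mean(axis=0)
    T = x.shape[0]
    gamma0 = x[:-1].T @ x[:-1] / (T - 1)      # lag-0 covariance
    gamma1 = x[1:].T @ x[:-1] / (T - 1)       # lag-1 cross covariance E[x_t x_{t-1}^T]
    A = gamma1 @ np.linalg.inv(gamma0)
    resid = x[1:] - x[:-1] @ A.T
    sigma = resid.T @ resid / (T - 1)         # innovation covariance
    return A, sigma
```

Granger-causal influence of unit j on unit i can then be probed, for example, by comparing the innovation variance of unit i with and without unit j included in the model.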
NASA Astrophysics Data System (ADS)
Moon, Parry Hiram; Spencer, Domina Eberle
2005-09-01
Preface; Nomenclature; Historical introduction; Part I. Holors: 1. Index notation; 2. Holor algebra; 3. Gamma products; Part II. Transformations: 4. Tensors; 5. Akinetors; 6. Geometric spaces; Part III. Holor Calculus: 7. The linear connection; 8. The Riemann-Christoffel tensors; Part IV. Space Structure: 9. Non-Riemannian spaces; 10. Riemannian space; 11. Euclidean space; References; Index.
NASA Astrophysics Data System (ADS)
An, S.; Chen, X.
2015-12-01
Based on the MODIS MCD12Q2 remote sensing phenology product, we analyzed spatiotemporal variations of vegetation green-up, maturity, senescence and brown-off dates, and their relation to spatiotemporal patterns of air temperature and precipitation on the Qinghai-Tibet Plateau (QTP). From 2001 to 2012, phenological time series at about 11.7%-15.1% of pixels indicate significant linear trends (P<0.1) with strong spatial consistency. Namely, pixels with significant phenological advancement and growing season lengthening are mainly distributed in the middle and eastern parts of the QTP, while pixels with significant phenological delay and growing season shortening are mainly distributed in the western and southern parts as well as the eastern edge of the QTP. Similar spatial patterns for positive and negative linear trends of the minimum and maximum EVI, and the time-integrated EVI during the growing season, were detected in the above two regions, respectively. With regard to climatic factors, mean annual temperature shows an increasing trend over the QTP except for the eastern edge, whereas annual precipitation displays an increasing trend in the middle and eastern parts but a decreasing trend in the western and southern parts as well as the eastern edge of the QTP. These findings suggest that phenological advancement, growing season lengthening, and vegetation activity enhancement in the middle and eastern parts might be attributed to the coincident temperature and precipitation increase. By contrast, phenological delay, growing season shortening, and vegetation activity reduction in the western and southern parts as well as the eastern edge might be caused by opposite changes of temperature and precipitation, and by evaporation-induced water shortage. Furthermore, a partial correlation analysis indicates that green-up, maturity, and brown-off dates were influenced by preceding temperature and precipitation, while senescence date was affected by preceding precipitation.
Predictive models of safety based on audit findings: Part 2: Measurement of model validity.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-07-01
Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and proved its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus classifications of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for the validity of their predictions of the safety data, and only the Neural Network method showed substantial, significant predictive ability for each airline. Alternative predictions from counts of audit findings and from the time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Arbitrary-Order Conservative and Consistent Remapping and a Theory of Linear Maps: Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullrich, Paul A.; Devendran, Dharshi; Johansen, Hans
2016-04-01
The focus of this series of articles is the generation of accurate, conservative, consistent, and (optionally) monotone linear offline maps. This paper is the second in the series. It extends the first part by describing four examples of 2D linear maps that can be constructed in accordance with the theory of the earlier work. The focus is again on spherical geometry, although these techniques can be readily extended to arbitrary manifolds. The four maps include conservative, consistent, and (optionally) monotone linear maps (i) between two finite-volume meshes, (ii) from finite-volume to finite-element meshes using a projection-type approach, (iii) from finite-volume to finite-element meshes using volumetric integration, and (iv) between two finite-element meshes. Arbitrary order of accuracy is supported for each of the described nonmonotone maps.
Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft
2014-12-31
We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.
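A minimal sketch of the early-warning quantity described above, under the assumption that J is the Jacobian of the mean-field equations at a quasistable configuration and x is the instantaneous configuration vector of the stochastic system (names are hypothetical, not the authors' code):

```python
import numpy as np

def unstable_overlap(J, x):
    """Overlap of the configuration x with the unstable eigendirections of J,
    i.e. the eigenvectors whose eigenvalues have positive real part."""
    eigvals, eigvecs = np.linalg.eig(J)
    unstable = eigvecs[:, eigvals.real > 0.0]
    if unstable.shape[1] == 0:
        return 0.0                      # no unstable direction, no warning signal
    xn = x / np.linalg.norm(x)
    proj = unstable.conj().T @ xn       # overlap with each unstable eigenvector
    # summed squared overlap (eigenvectors of a non-symmetric J need not be
    # orthogonal, so this is an indicator rather than an exact projection)
    return float(np.sum(np.abs(proj) ** 2))
```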
NASA Astrophysics Data System (ADS)
Chen, Ying-Ying; Jin, Fei-Fei
2018-03-01
The eastern equatorial Pacific has a pronounced westward propagating SST annual cycle resulting from ocean-atmosphere interactions with equatorial semiannual solar forcing and off-equatorial annual solar forcing conveyed to the equator. In this two-part paper, a simple linear coupled framework is proposed to quantify the internal dynamics and external forcing for a better understanding of the linear part of the dynamics of the annual cycle. It is shown that an essential internal dynamical factor is the SST damping rate, which measures the coupled stability in a similar way as the Bjerknes instability index for the El Niño-Southern Oscillation. It comprises three major negative terms (dynamic damping due to the Ekman pumping feedback, mean circulation advection, and thermodynamic feedback) and two positive terms (thermocline feedback and zonal advection). Another dynamical factor is the westward-propagation speed, which is mainly determined by the thermodynamic feedback, the Ekman pumping feedback, and the mean circulation. The external forcing is measured by the annual and semiannual forcing factors. These linear internal and external factors, which can be estimated from data, determine the amplitude of the annual cycle.
Effects of Buffer Size and Shape on Associations between the Built Environment and Energy Balance
Berrigan, David; Hart, Jaime E.; Hipp, J. Aaron; Hoehner, Christine M.; Kerr, Jacqueline; Major, Jacqueline M.; Oka, Masayoshi; Laden, Francine
2014-01-01
Uncertainty in the relevant spatial context may drive heterogeneity in findings on the built environment and energy balance. To estimate the effect of this uncertainty, we conducted a sensitivity analysis defining intersection and business densities and counts within different buffer sizes and shapes on associations with self-reported walking and body mass index. Linear regression results indicated that the scale and shape of buffers influenced study results and may partly explain the inconsistent findings in the built environment and energy balance literature. PMID:24607875
Ionizing radiation measurements on LDEF: A0015 Free flyer biostack experiment
NASA Technical Reports Server (NTRS)
Benton, E. V.; Frank, A. L.; Benton, E. R.; Csige, I.; Frigo, L. A.
1995-01-01
This report covers the analysis of passive radiation detectors flown as part of the A0015 Free Flyer Biostack on LDEF (Long Duration Exposure Facility). LET (linear energy transfer) spectra and track density measurements were made with CR-39 and polycarbonate plastic nuclear track detectors. Measurements of total absorbed dose were carried out using thermoluminescent detectors. Thermal and resonance neutron dose equivalents were measured with LiF/CR-39 detectors. High energy neutron and proton dose equivalents were measured with fission foil/CR-39 detectors.
Sedentary lifestyle and state variation in coronary heart disease mortality.
Yeager, K K; Anda, R F; Macera, C A; Donehoo, R S; Eaker, E D
1995-01-01
Using linear regression, the authors demonstrated a strong association between State-specific coronary heart disease mortality rates and State prevalence of sedentary lifestyle (r2 = 0.34; P = 0.0002) that remained significant after controlling for the prevalence of diagnosed hypertension, smoking, and overweight among the State's population. This ecologic analysis suggests that sedentary lifestyle may explain State variation in coronary heart disease mortality and reinforces the need to include physical activity promotion as a part of programs in the States to prevent heart disease. PMID:7838933
1989-10-15
... exhibit important failure/sample behavior in mode I, mode II and mixed mode I/II, as well as in compression. The failure sequence ... In addition, when K0/Kc = 2.5, the linear perturbation results overestimate KI/Kc along CF. Both of the previous observations would imply that crack growth ... "Viscoelastic Regions with Application to 3-D Crack Analysis," Ph.D. thesis, Massachusetts Institute of Technology, 1987. Fares, N. and Li, V.C. (1988
Stark cell optoacoustic detection of constituent gases in sample
NASA Technical Reports Server (NTRS)
Margolis, J. S.; Shumate, M. S. (Inventor)
1980-01-01
An optoacoustic detector for gas analysis is implemented with Stark effect cell modulation for switching a beam in and out of coincidence with a spectral line of a constituent gas in order to eliminate the heating effect of laser energy in the cell as a source of background noise. By linearly sweeping the DC bias voltage while exciting the cell with a multiline laser, it is possible to obtain a spectrum from which to determine the combination of excited constituents and their concentrations in parts per million.
Trace gas emissions to the atmosphere by biomass burning in the west African savannas
NASA Technical Reports Server (NTRS)
Frouin, Robert J.; Iacobellis, Samuel F.; Razafimpanilo, Herisoa; Somerville, Richard C. J.
1994-01-01
Savanna fires and atmospheric carbon dioxide (CO2), and the estimation of burned area using Advanced Very High Resolution Radiometer (AVHRR) reflectance data, are investigated in this two-part research project. The first part involves carbon dioxide flux estimates and a three-dimensional transport model to quantify the effect of north African savanna fires on atmospheric CO2 concentration, including CO2 spatial and temporal variability patterns and their significance to global emissions. The second article describes two methods used to determine burned area from AVHRR data. The article discusses the relationship between the percentage of burned area and AVHRR channel 2 reflectance (the linear method) and the Normalized Difference Vegetation Index (NDVI) (the nonlinear method). A comparative performance analysis of each method is described.
A general methodology for population analysis
NASA Astrophysics Data System (ADS)
Lazov, Petar; Lazov, Igor
2014-12-01
For a given population with N the current and M the maximum number of entities, modeled by a Birth-Death Process (BDP) with size M+1, we introduce a utilization parameter ρ, the ratio of the primary birth and death rates in that BDP, which physically determines the (equilibrium) macrostates of the population, and an information parameter ν, which can be interpreted as the population information stiffness. The BDP modeling the population is in state n, n=0,1,…,M, if N=n. With these two key metrics, applying the continuity law, the equilibrium balance equations for the probability distribution pn=Prob{N=n}, n=0,1,…,M, of the quantity N, and the conservation law, and relying on the fundamental concepts of population information and population entropy, we develop a general methodology for population analysis; by definition, population entropy is the uncertainty related to the population. The essential contribution of this approach is that the population information consists of three basic parts: an elastic (Hooke's) or absorption/emission part, a synchronization or inelastic part, and a null part; the first two parts, which uniquely determine the null part (the null part connects them), are the two basic components of the Information Spectrum of the population. Population entropy, as the mean value of population information, follows this division of the information. A given population can function in an information elastic, antielastic or inelastic regime. In an information linear population, the synchronization part of the information and entropy is absent. The population size, M+1, is the third key metric in this methodology. Indeed, if a population of infinite size is assumed, most of the key quantities and results for populations of finite size that emerge in this methodology vanish.
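For the birth-death part of the framework, the equilibrium distribution p_n of a finite BDP follows from detailed balance; the sketch below computes it for given birth and death rates and a constant utilization ρ (this is the generic BDP computation only, not the paper's information-spectrum decomposition).

```python
import numpy as np

def bdp_equilibrium(birth, death):
    """Equilibrium distribution p_n, n = 0..M, of a finite birth-death process.
    birth[k] is the rate of k -> k+1 and death[k] the rate of k+1 -> k, so
    detailed balance gives p_{k+1} = p_k * birth[k] / death[k]."""
    ratios = np.asarray(birth, float) / np.asarray(death, float)
    p = np.concatenate(([1.0], np.cumprod(ratios)))   # unnormalized p_0..p_M
    return p / p.sum()

# Example: constant primary rates with utilization rho = 0.8 and M = 20
rho = 0.8
p = bdp_equilibrium([rho] * 20, [1.0] * 20)
entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))        # population entropy
```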
Analysis and application of ERTS-1 data for regional geological mapping
NASA Technical Reports Server (NTRS)
Gold, D. P.; Parizek, R. R.; Alexander, S. A.
1973-01-01
Combined visual and digital techniques of analyzing ERTS-1 data for geologic information have been tried on selected areas in Pennsylvania. The major physiographic and structural provinces show up well. Supervised mapping, following the imaged expression of known geologic features on ERTS band 5 enlargements (1:250,000) of parts of eastern Pennsylvania, delimited the Diabase Sills and the Precambrian rocks of the Reading Prong with remarkable accuracy. From unsupervised mapping, transgressive linear features are apparent in unexpected density, and exhibit strong control over river valley and stream channel directions. They are unaffected by bedrock type, age, or primary structural boundaries, which suggests they are either rejuvenated basement joint directions on different scales, or they are a recently impressed structure possibly associated with a drifting North American plate. With ground mapping and underflight data, six scales of linear features have been recognized.
Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing
2018-02-01
Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first divided into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden layer structure and the parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
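The time/space separation and the linear part of such a hybrid model can be sketched as follows; this is a simplified illustration (PCA by SVD plus an independent AR fit per temporal mode), omitting the exogenous input and the RBF/genetic-algorithm residual model of the paper.

```python
import numpy as np

def pca_modes(Y, n_modes):
    """Split a spatio-temporal field Y (time x space) into dominant spatial
    basis functions and their temporal coefficient series via SVD-based PCA."""
    Y0 = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y0, full_matrices=False)
    phi = Vt[:n_modes]                 # spatial modes (n_modes x space)
    a = Y0 @ phi.T                     # temporal coefficients (time x n_modes)
    return phi, a

def fit_decoupled_ar(a, order=2):
    """Fit an independent linear AR model to each temporal series; the part of
    the dynamics it misses is the nonlinear residual modeled by RBFs in the paper."""
    coeffs, residuals = [], []
    for y in a.T:
        X = np.column_stack([y[order - k - 1:len(y) - k - 1] for k in range(order)])
        target = y[order:]
        c, *_ = np.linalg.lstsq(X, target, rcond=None)
        coeffs.append(c)
        residuals.append(target - X @ c)
    return coeffs, residuals
```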
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
NASA Astrophysics Data System (ADS)
Guelachvili, G.
This document is part of Subvolume B `Linear Triatomic Molecules', Part 9, of Volume 20 `Molecular Constants mostly from Infrared Spectroscopy' of Landolt-Börnstein Group II `Molecules and Radicals'.
A new standing-wave-type linear ultrasonic motor based on in-plane modes.
Shi, Yunlai; Zhao, Chunsheng
2011-05-01
This paper presents a new standing-wave-type linear ultrasonic motor using a combination of the first longitudinal and the second bending modes. Two piezoelectric plates in combination with a thin metal plate are used to construct the stator. A distinctive feature of the stator is its isosceles triangular structure, which, when the stator operates in the first longitudinal mode, amplifies the horizontal displacement of the stator in the perpendicular direction. The influence of the base angle θ of the triangular part on the amplitude of the driving foot has been analyzed numerically. Four prototype stators with different angles θ have been fabricated, and the experimental investigation of these stators has validated the numerical simulation. The overall dimensions of the prototype stators are no more than 40 mm (length) × 20 mm (width) × 5 mm (thickness). Driven by an AC signal at a frequency of 53.3 kHz, the no-load speed and the maximal thrust of the prototype motor using the stator with base angle 20° were 98 mm/s and 3.2 N, respectively. The effective elliptical motion trajectory of the contact point of the stator can be achieved by the isosceles triangular structure using only two PZTs, which makes the motor low cost in fabrication, simple in structure and easy to miniaturize. Copyright © 2010 Elsevier B.V. All rights reserved.
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this paper is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
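For the saturation case mentioned above, the statistical-linearization (quasi-Gaussian) equivalent gain has a closed form; the sketch below evaluates it and is only an illustration of the linearization step, not the paper's full covariance and optimization algorithm.

```python
import math

def sat_equivalent_gain(limit, sigma):
    """Equivalent gain of a unit-slope saturation with limits +/- limit, driven
    by a zero-mean Gaussian input of standard deviation sigma:
    K = E[x sat(x)] / E[x^2] = erf(limit / (sqrt(2) * sigma))."""
    return math.erf(limit / (math.sqrt(2.0) * sigma))

# The effective loop gain drops as the input spread grows past the limit,
# which is what a covariance analysis must iterate on.
print(sat_equivalent_gain(1.0, 0.5), sat_equivalent_gain(1.0, 2.0))
```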
THE RESPONSE OF DRUG EXPENDITURE TO NON-LINEAR CONTRACT DESIGN: EVIDENCE FROM MEDICARE PART D*
Einav, Liran; Finkelstein, Amy; Schrimpf, Paul
2016-01-01
We study the demand response to non-linear price schedules using data on insurance contracts and prescription drug purchases in Medicare Part D. We exploit the kink in individuals’ budget set created by the famous “donut hole,” where insurance becomes discontinuously much less generous on the margin, to provide descriptive evidence of the drug purchase response to a price increase. We then specify and estimate a simple dynamic model of drug use that allows us to quantify the spending response along the entire non-linear budget set. We use the model for counterfactual analysis of the increase in spending from “filling” the donut hole, as will be required by 2020 under the Affordable Care Act. In our baseline model, which considers spending decisions within a single year, we estimate that “filling” the donut hole will increase annual drug spending by about $150, or about 8 percent. About one-quarter of this spending increase reflects “anticipatory” behavior, coming from beneficiaries whose spending prior to the policy change would leave them short of reaching the donut hole. We also present descriptive evidence of cross-year substitution of spending by individuals who reach the kink, which motivates a simple extension to our baseline model that allows – in a highly stylized way – for individuals to engage in such cross year substitution. Our estimates from this extension suggest that a large share of the $150 drug spending increase could be attributed to cross-year substitution, and the net increase could be as little as $45 per year. PMID:26769984
A nonlinear optimal control approach to stabilization of a macroeconomic development model
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.
2017-11-01
A nonlinear optimal (H-infinity) control approach is proposed for the problem of stabilization of the dynamics of a macroeconomic development model known as the Grossman-Helpman model of endogenous product cycles. The dynamics of the macroeconomic development model is divided into two parts. The first describes economic activities in a developed country, and the second describes the variation of economic activities in a country under development which tries to modify its production so as to serve the needs of the developed country. The article shows that through control of the macroeconomic model of the developed country, one can finally control the dynamics of the economy in the country under development. The control method through which this is achieved is nonlinear H-infinity control. The macroeconomic model for the country under development undergoes approximate linearization round a temporary operating point. This is defined at each time instant by the present value of the system's state vector and the last value of the control input vector that was exerted on it. The linearization is based on Taylor series expansion and the computation of the associated Jacobian matrices. For the linearized model an H-infinity feedback controller is computed. The controller's gain is calculated by solving an algebraic Riccati equation at each iteration of the control method. The asymptotic stability of the control approach is proven through Lyapunov analysis. This assures that the state variables of the macroeconomic model of the country under development will finally converge to the designated reference values.
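The per-iteration computation has the structure sketched below: numerically linearize the dynamics at the current operating point and solve an algebraic Riccati equation for the feedback gain. For simplicity the sketch solves the standard LQR-type Riccati equation with SciPy; the H-infinity design of the paper adds a disturbance-attenuation term to this equation, and the function names are illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (the Taylor-series linearization step)."""
    fx = np.asarray(f(x))
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x))
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J

def feedback_gain(A, B, Q, R):
    """Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 and return K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)
```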
Fricova, Dominika; Valach, Matus; Farkas, Zoltan; Pfeiffer, Ilona; Kucsera, Judit; Tomaska, Lubomir; Nosek, Jozef
2010-01-01
As a part of our initiative aimed at a large-scale comparative analysis of fungal mitochondrial genomes, we determined the complete DNA sequence of the mitochondrial genome of the yeast Candida subhashii and found that it exhibits a number of peculiar features. First, the mitochondrial genome is represented by linear dsDNA molecules of uniform length (29 795 bp), with an unusually high content of guanine and cytosine residues (52.7 %). Second, the coding sequences lack introns; thus, the genome has a relatively compact organization. Third, the termini of the linear molecules consist of long inverted repeats and seem to contain a protein covalently bound to terminal nucleotides at the 5′ ends. This architecture resembles the telomeres in a number of linear viral and plasmid DNA genomes classified as invertrons, in which the terminal proteins serve as specific primers for the initiation of DNA synthesis. Finally, although the mitochondrial genome of C. subhashii contains essentially the same set of genes as other closely related pathogenic Candida species, we identified additional ORFs encoding two homologues of the family B protein-priming DNA polymerases and an unknown protein. The terminal structures and the genes for DNA polymerases are reminiscent of linear mitochondrial plasmids, indicating that this genome architecture might have emerged from fortuitous recombination between an ancestral, presumably circular, mitochondrial genome and an invertron-like element. PMID:20395267
Analysis of the two-point velocity correlations in turbulent boundary layer flows
NASA Technical Reports Server (NTRS)
Oberlack, M.
1995-01-01
The general objective of the present work is to explore the use of Rapid Distortion Theory (RDT) in the analysis of the two-point statistics of the log-layer. RDT is applicable only to unsteady flows where the non-linear turbulence-turbulence interaction can be neglected in comparison to linear turbulence-mean interactions. Here we propose to use RDT to examine the structure of the large energy-containing scales and their interaction with the mean flow in the log-region. The work consists of two parts. First, two-point analysis methods will be used to derive the law-of-the-wall for the special case of zero mean pressure gradient. The basic assumptions needed are one-dimensionality of the mean flow and homogeneity of the fluctuations. It will be shown that a formal solution of the two-point correlation equation can be obtained as a power series in the von Karman constant, known to be on the order of 0.4. In the second part, a detailed analysis of the two-point correlation function in the log-layer will be given. The fundamental set of equations and a functional relation for the two-point correlation function will be derived. An asymptotic expansion procedure will be used in the log-layer to match Kolmogorov's universal range and the one-point correlations to the inviscid outer region valid for large correlation distances.
NASA Astrophysics Data System (ADS)
Fujimura, Toshio; Takeshita, Kunimasa; Suzuki, Ryosuke O.
2018-04-01
An analytical approximate solution to the non-linear solute- and heat-transfer equations in the unsteady-state mushy zone of Fe-C plain steel has been obtained, assuming a linear relationship between the solid fraction and the temperature of the mushy zone. The heat transfer equations for both the solid and liquid zones, along with the boundary conditions, have been linked with these equations to solve the whole system. The model predictions (e.g., the solidification constants and the effective partition ratio) agree with the generally accepted values and with a separately performed numerical analysis. The solidus temperature predicted by the model is in the intermediate range of the reported formulas. The model and Neumann's solution are consistent in the low carbon range. A conventional numerical heat analysis (i.e., an equivalent specific heat method using the solidus temperature predicted by the model) is consistent with the model predictions for Fe-C plain steels. The model presented herein simplifies the computations to solve the simultaneous solute- and heat-transfer equations while searching for a solidus temperature as part of the solution. Thus, this model can reduce the complexity of analyses considering the heat- and solute-transfer phenomena in the mushy zone.
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
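As a pointer to the core computation, the maximum-likelihood (generalized least-squares) GLM estimate for a known (or previously estimated, e.g. by ReML) error covariance V is sketched below; the covariance structure and design used here are toy assumptions, not the paper's fMRI data.

```python
import numpy as np

def gls_beta(X, y, V):
    """GLM coefficient estimate under error covariance V:
    beta = (X^T V^{-1} X)^{-1} X^T V^{-1} y."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Toy AR(1)-like (non-spherical) error covariance and a two-column design
T, rho = 200, 0.3
V = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
X = np.column_stack([np.ones(T), np.sin(np.linspace(0, 8 * np.pi, T))])
y = X @ np.array([1.0, 2.0]) + np.random.default_rng(0).multivariate_normal(np.zeros(T), V)
print(gls_beta(X, y, V))
```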
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-01-01
Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
Long linear MWIR and LWIR HgCdTe infrared detection arrays for high resolution imaging
NASA Astrophysics Data System (ADS)
Chamonal, Jean-Paul; Audebert, Patrick; Medina, Philippe; Destefanis, Gérard; Deschamps, Joel R.; Girard, Michel; Chatard, Jean-Pierre
2018-04-01
This paper, "Long linear MWIR and LWIR HgCdTe infrared detection arrays for high resolution imaging," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.
NASA Astrophysics Data System (ADS)
Zhang, Meng; Sun, Chen-Nan; Zhang, Xiang; Goh, Phoi Chin; Wei, Jun; Li, Hua; Hardacre, David
2018-03-01
The laser powder bed fusion (L-PBF) technique builds parts with higher static strength than the conventional manufacturing processes through the formation of ultrafine grains. However, its fatigue endurance strength σ_f does not match the increased monotonic tensile strength σ_b. This work examines the monotonic and fatigue properties of as-built and heat-treated L-PBF stainless steel 316L. It was found that the general linear relation σ_f = m σ_b for describing conventional ferrous materials is not applicable to L-PBF parts because of the influence of porosity. Instead, the ductility parameter correlated linearly with fatigue strength and was proposed as the new fatigue assessment criterion for porous L-PBF parts. Annealed parts conformed to the strength-ductility trade-off. Fatigue resistance was reduced at short lives, but the effect was partially offset by the higher ductility such that, compared with an as-built part of equivalent monotonic strength, the heat-treated parts were more fatigue resistant.
Effects of urbanization on climate of İstanbul and Ankara
NASA Astrophysics Data System (ADS)
Karaca, Mehmet; Tayanç, Mete; Toros, Hüseyin
The purpose of this work is to study regional climate change and investigate the effects of urbanization on the climates of the two largest cities in Turkey: İstanbul and Ankara. Air temperature (mean, maximum and minimum) data of İstanbul and Ankara are analyzed to study regional climate change and to understand the possible effects of urbanization on the climate of these regions owing to industrialization and the large flux of migration from rural parts of the country. For the trend analysis, linear regression and the sequential version of the Mann-Kendall test are used. A significant upward trend is found in the urban temperatures of southern İstanbul, which is the most highly populated and industrialized part of the city compared to its rural parts. Northern stations do not show any warming trend; instead, they have a cooling trend. Urbanization and industrialization in the southern part of İstanbul have a negative effect on regional cooling. In spite of Ankara's urban geometry and air pollution problem, the urban station in Ankara does not show any warming trend. A significant urban heat island intensity (urban minus rural) is not observed in Ankara.
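A minimal sketch of the trend machinery referred to above, reduced to the basic (non-sequential) Mann-Kendall test applied to a single station's annual series; ties are ignored in the variance for brevity, and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test for a univariate series (e.g. annual mean
    temperatures of one station). Returns the S statistic and two-sided p-value."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, p_value
```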
Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time window for BLSA is also evaluated with respect to meeting the stability margin criteria.
Computational fluid dynamic modelling of cavitation
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Structural Code Considerations for Solar Rooftop Installations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred
2014-12-01
Residential rooftop solar panel installations are limited in part by the high cost of structural related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, the analysis can result in an overly conservative structural analysis. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.
SCBUCKLE user's manual: Buckling analysis program for simple supported and clamped panels
NASA Technical Reports Server (NTRS)
Cruz, Juan R.
1993-01-01
The program SCBUCKLE calculates the buckling loads and mode shapes of cylindrically curved, rectangular panels. The panel is assumed to have no imperfections. SCBUCKLE is capable of analyzing specially orthotropic symmetric panels (i.e., A_16 = A_26 = 0.0, D_16 = D_26 = 0.0, B_ij = 0.0). The analysis includes first-order transverse shear theory and is capable of modeling sandwich panels. The analysis supports two types of boundary conditions: either simply supported or clamped on all four edges. The panel can be subjected to linearly varying normal loads N_x and N_y in addition to a constant shear load N_xy. The applied loads can be divided into two parts: a preload component and a variable (eigenvalue-dependent) component. The analysis is based on the modified Donnell's equations for shallow shells. The governing equations are solved by Galerkin's method.
Urs Buehlmann; D. Earl Kline; Janice K. Wiedenbeck; R., Jr. Noble
2008-01-01
Cutting-bill requirements, among other factors, influence the yield obtained when cutting lumber into parts. The first part of this 2-part series described how different cutting-bill part sizes, when added to an existing cutting-bill, affect lumber yield, and quantified these observations. To accomplish this, the study employed linear least squares estimation technique...
NASA Astrophysics Data System (ADS)
Konor, Celal S.; Randall, David A.
2018-05-01
We have used a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the nonhydrostatic anelastic inertia-gravity modes on a midlatitude f plane. The dispersion equations are derived from the linearized anelastic equations that are discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of both horizontal grid spacing and vertical wavenumber are analyzed, and the role of nonhydrostatic effects is discussed. We also compare the results of the normal-mode analyses with numerical solutions obtained by running linearized numerical models based on the various horizontal grids. The sources and behaviors of the computational modes in the numerical simulations are also examined. Our normal-mode analyses with the Z, C, D, A, E and B grids generally confirm the conclusions of previous shallow-water studies for the cyclone-resolving scales (with low horizontal wavenumbers). We conclude that, aided by nonhydrostatic effects, the Z and C grids become overall more accurate for cloud-resolving resolutions (with high horizontal wavenumbers) than for the cyclone-resolving scales. A companion paper, Part 2, discusses the impacts of the discretization on the Rossby modes on a midlatitude β plane.
Part mutual information for quantifying direct associations in networks.
Zhao, Juan; Zhou, Yiwei; Zhang, Xiujun; Chen, Luonan
2016-05-03
Quantitatively identifying direct dependencies between variables is an important task in data analysis, in particular for reconstructing various types of networks and causal relations in science and engineering. One of the most widely used criteria is partial correlation, but it can only measure linear direct associations and misses nonlinear ones. However, based on conditional independence, conditional mutual information (CMI) is able to quantify nonlinear direct relationships among variables from the observed data, superior to linear measures, but suffers from a serious problem of underestimation, in particular for those variables with tight associations in a network, which severely limits its applications. In this work, we propose a new concept, "partial independence," with a new measure, "part mutual information" (PMI), which not only can overcome the problem of CMI but also retains the quantification properties of both mutual information (MI) and CMI. Specifically, we first defined PMI to measure nonlinear direct dependencies between variables and then derived its relations with MI and CMI. Finally, we used a number of simulated data sets as benchmark examples to numerically demonstrate PMI features and further used real gene expression data from Escherichia coli and yeast to reconstruct gene regulatory networks, which all validated the advantages of PMI for accurately quantifying nonlinear direct associations in networks.
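For comparison with the linear baseline mentioned above, partial correlation (the linear direct-association measure that PMI generalizes) can be read off the precision matrix; the paper's PMI itself requires estimating joint and conditional densities and is not reproduced here.

```python
import numpy as np

def partial_correlation(X):
    """Partial correlation matrix of the columns of X, from the inverse
    covariance (precision) matrix P:  rho_ij = -P_ij / sqrt(P_ii P_jj)."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R
```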
NASA Astrophysics Data System (ADS)
Hidalgo-Salazar, Miguel A.; Correa, Juan P.
2018-03-01
In this work, Linear Low Density Polyethylene-nonwoven industrial Fique fiber mat (LLDPE-Fique) and Epoxy Resin-nonwoven industrial Fique fiber mat (EP-Fique) biocomposites were prepared using thermocompression and resin film infusion processes. Neat polymeric matrices and their biocomposites were tested following ASTM standards in order to evaluate tensile and flexural mechanical properties. Also, the thermal behavior of these materials was studied by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). Tensile and flexural tests revealed that nonwoven Fique reinforced composites exhibited higher modulus and strength but lower deformation capability as compared with the LLDPE and EP neat matrices. TG thermograms showed that the incorporation of nonwoven Fique fibers affects the thermal stability of the composites. On the other hand, Fique fibers did not change the crystallization and melting processes of the LLDPE matrix, but they restrict the motion of EP macromolecular chains, thus increasing the Tg of the EP-Fique composite. Finally, this work opens the possibility of considering nonwoven Fique fibers as a reinforcement material with high potential for the manufacture of biocomposites for automotive applications. In addition to processing test specimens, it was also possible to manufacture one LLDPE-Fique part and one EP-Fique part.
Advanced Statistical Analyses to Reduce Inconsistency of Bond Strength Data.
Minamino, T; Mine, A; Shintani, A; Higashi, M; Kawaguchi-Uemura, A; Kabetani, T; Hagino, R; Imai, D; Tajiri, Y; Matsumoto, M; Yatani, H
2017-11-01
This study was designed to clarify the interrelationship of factors that affect the value of microtensile bond strength (µTBS), focusing on nondestructive testing by which information on the specimens can be stored and quantified. µTBS test specimens were prepared from 10 noncarious human molars. Six factors of the µTBS test specimens were evaluated: presence of voids at the interface, X-ray absorption coefficient of resin, X-ray absorption coefficient of dentin, length of the dentin part, size of the adhesion area, and individual differences between teeth. All specimens were observed nondestructively by optical coherence tomography and micro-computed tomography before µTBS testing. After µTBS testing, the effect of these factors on the µTBS data was analyzed by the general linear model, the linear mixed effects regression model, and the nonlinear regression model with 95% confidence intervals. With the general linear model, a significant effect of individual differences between teeth was observed (P < 0.001). A significantly positive correlation was shown between µTBS and length of the dentin part (P < 0.001); however, there was no significant nonlinearity (P = 0.157). Moreover, a significantly negative correlation was observed between µTBS and size of the adhesion area (P = 0.001), with significant nonlinearity (P = 0.014). No correlation was observed between µTBS and the X-ray absorption coefficient of resin (P = 0.147), and there was no significant nonlinearity (P = 0.089). Additionally, a significantly positive correlation was observed between µTBS and the X-ray absorption coefficient of dentin (P = 0.022), with significant nonlinearity (P = 0.036). A significant difference was also observed between the presence and absence of voids by linear mixed effects regression analysis. Our results showed correlations between various parameters of tooth specimens and µTBS data. To evaluate the performance of the adhesive more precisely, the effect of tooth variability and a method to reduce variation in bond strength values should also be considered.
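A sketch of the mixed-effects step, with a random intercept per tooth standing in for the "individual differences of teeth" factor; the column names (mtbs, dentin_len, area, tooth) are hypothetical placeholders for the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_mixed_model(df: pd.DataFrame):
    """Linear mixed-effects regression of bond strength on fixed covariates,
    with a random intercept for each tooth."""
    model = smf.mixedlm("mtbs ~ dentin_len + area", data=df, groups=df["tooth"])
    return model.fit()
```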
Corner-point criterion for assessing nonlinear image processing imagers
NASA Astrophysics Data System (ADS)
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of a single minority-value pixel among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to color imaging is proposed, with a discussion of the choice of the working color space depending on the type of image enhancement processing used.
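A simplified reading of the corner-point definition (one minority-value pixel in a 2×2 block of a binary image) can be sketched as follows; the paper's full multi-resolution CP transformation and its inverse are not reproduced here.

```python
import numpy as np

def corner_points(binary_img):
    """Detect corner points: 2x2 blocks in which exactly one pixel holds the
    minority value; the minority pixel's position (0..3) is the CP direction.
    Returns an array of (row, col, direction) triples."""
    img = np.asarray(binary_img, dtype=int)
    cps = []
    for r in range(img.shape[0] - 1):
        for c in range(img.shape[1] - 1):
            block = img[r:r + 2, c:c + 2].ravel()
            s = block.sum()
            if s == 1:                      # a single '1' among '0's
                cps.append((r, c, int(np.argmax(block))))
            elif s == 3:                    # a single '0' among '1's
                cps.append((r, c, int(np.argmin(block))))
    return np.array(cps)
```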
Lorkiewicz, Wiesław; Płoszaj, Tomasz; Jędrychowska-Dańska, Krystyna; Żądzińska, Elżbieta; Strapagiel, Dominik; Haduch, Elżbieta; Szczepanek, Anita; Grygiel, Ryszard; Witas, Henryk W.
2015-01-01
For a long time, anthropological and genetic research on the Neolithic revolution in Europe was mainly concentrated on the mechanism of agricultural dispersal over different parts of the continent. Recently, attention has shifted towards population processes that occurred after the arrival of the first farmers, transforming the genetically very distinctive early Neolithic Linear Pottery Culture (LBK) and Mesolithic forager populations into present-day Central Europeans. The latest studies indicate that significant changes in this respect took place within the post-Linear Pottery cultures of the Early and Middle Neolithic which were a bridge between the allochthonous LBK and the first indigenous Neolithic culture of north-central Europe—the Funnel Beaker culture (TRB). The paper presents data on mtDNA haplotypes of a Middle Neolithic population dated to 4700/4600–4100/4000 BC belonging to the Brześć Kujawski Group of the Lengyel culture (BKG) from the Kuyavia region in north-central Poland. BKG communities constituted the border of the “Danubian World” in this part of Europe for approx. seven centuries, neighboring foragers of the North European Plain and the southern Baltic basin. MtDNA haplogroups were determined in 11 individuals, and four mtDNA macrohaplogroups were found (H, U5, T, and HV0). The overall haplogroup pattern did not deviate from other post-Linear Pottery populations from central Europe, although a complete lack of N1a and the presence of U5a are noteworthy. Of greatest importance is the observed link between the BKG and the TRB horizon, confirmed by an independent analysis of the craniometric variation of Mesolithic and Neolithic populations inhabiting central Europe. Estimated phylogenetic pattern suggests significant contribution of the post-Linear BKG communities to the origin of the subsequent Middle Neolithic cultures, such as the TRB. PMID:25714361
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
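A minimal worked example of such a computer solution, using SciPy's linprog with illustrative (hypothetical) numbers; the dual values returned by the HiGHS backend support the reduced-cost and range analysis mentioned above.

```python
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2 subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0,
# posed as a minimization of the negated objective.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)              # optimal plan (2, 6) and objective value 36
print(res.ineqlin.marginals)        # shadow prices of the inequality constraints
```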
Polysaccharides from cell walls of Aureobasidium (Pullularia) pullulans. Part I. Glucans
The cell wall of Aureobasidium (Pullularia) pullulans contains three types of beta-glucan. One, extracted with dilute alkali, has a linear backbone... insoluble in dilute alkali contains a highly crystalline, essentially linear linked glucan and an amorphous glucan. (Author)
Analysis of the urban green areas of Nicosia: the case study of Linear Park of Pedieos River
NASA Astrophysics Data System (ADS)
Zanos, Pavlos; Georgi, Julia
2017-09-01
At present, the need for creating outdoor green areas is unquestionable. Their value is shown through their use for recreation, sports, and cultural and socioeconomic purposes, and through their ecology, especially biodiversity, which has been considered one of the most important factors in recent years and will remain so in the future. With the creation of new parks and open green spaces, a legacy is continued for the next generations, with designs that will be pleasantly utilized through the years. In the first part of this study, we examined the way the largest urban green spaces in Nicosia affect and contribute to the lifestyle of the inhabitants of the city, as well as the reasons why the citizens of Cyprus have embraced urban parks in their everyday life, making them so popular. The present paper therefore analyses both the effect and the changes in the urban structure as urban green spaces are created in the city of Nicosia, as well as which areas are affected, how they are affected and to what extent. We conducted a field-based survey, providing the urban parks' visitors with questionnaires. This enabled us to draw a wealth of essential conclusions concerning the visitors' preferences. We also list both the positive and negative impacts of urban green spaces on the economic and urban design sectors, as well as on Cypriots' recreation time. The green areas of Nicosia, along with their detailed analysis, are extensively presented in this study. Moreover, in the second part of this study, GIS software was used to create a spatial presentation of the urban linear park of Pedieos, where the area was mapped and the positive and negative elements of the park were analysed. In this part, ways to address the emerging issues are also proposed.
NASA Astrophysics Data System (ADS)
Walker, Ernest L.
1994-05-01
This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the alpha -profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
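To make the spreading step concrete, here is a hedged Python sketch of how a family of Gold sequences can be generated from two maximal-length LFSR sequences and mapped to bipolar chips for DS-BPSK spreading. The tap sets below correspond to a commonly cited preferred pair for length-31 codes and are taken here as an assumption; nothing in the snippet reproduces the paper's fiber channel or receiver model.

```python
import numpy as np

def lfsr(taps, state, length):
    """Generate a binary m-sequence from a Fibonacci LFSR.
    taps: 1-indexed feedback stage positions; state: initial register contents."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out, dtype=int)

n = 5
N = 2**n - 1  # sequence length 31
# Taps below correspond to x^5 + x^2 + 1 and x^5 + x^4 + x^3 + x^2 + 1, an often-quoted
# preferred pair for 31-chip Gold codes (assumption -- verify for your application).
u = lfsr([5, 2], [1, 0, 0, 0, 0], N)
v = lfsr([5, 4, 3, 2], [1, 0, 0, 0, 0], N)

# The Gold family: u, v, and u XOR all cyclic shifts of v.
gold = [u, v] + [np.bitwise_xor(u, np.roll(v, k)) for k in range(N)]
bipolar = [1 - 2 * g for g in gold]   # map {0,1} -> {+1,-1} chips for BPSK spreading
print(len(bipolar), bipolar[2][:10])
```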
NASA Astrophysics Data System (ADS)
Hennenberg, M.; Slavtchev, S.; Valchev, G.
2013-12-01
When an isothermal ferrofluid is submitted to an oscillating magnetic field, the initially motionless liquid free surface can start to oscillate. This physical phenomenon is similar to the Faraday instability for usual Newtonian liquids subjected to a mechanical oscillation. In the present paper, we consider the magnetic field as a sum of a constant part and a time periodic part. Two different cases for the constant part of the field, being vertical in the first one or horizontal in the second one are studied. Assuming both ferrofluid magnetization and magnetic field to be collinear, we develop the linear stability analysis of the motionless reference state taking into account the Kelvin magnetic forces. The Laplace law describing the free surface deformation reduces to Hill's equation, which is studied using the classical method of Ince and Erdelyi. Inside this framework, we obtain the transition conditions leading to the free surface oscillations.
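For orientation, the reduced stability problem has the generic form of a damped Hill (Mathieu-type) equation for each surface mode; the expression below is only a schematic sketch with placeholder coefficients, not the paper's exact derivation.

```latex
% Schematic damped Hill/Mathieu form for a surface-mode amplitude \xi_k(t);
% \gamma_k, \omega_k, \epsilon and the T-periodic function p(t) are placeholders.
\ddot{\xi}_k + 2\gamma_k\,\dot{\xi}_k + \omega_k^2\left[1 + \epsilon\, p(t)\right]\xi_k = 0,
\qquad p(t+T) = p(t).
```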
NASA Astrophysics Data System (ADS)
Durand, S.; Tellier, C. R.
1996-02-01
This paper constitutes the first part of a work devoted to applications of piezoresistance effects in germanium and silicon semiconductors. In this part, emphasis is placed on a formal explanation of non-linear effects. We propose a brief phenomenological description based on the multi-valley model of semiconductors before adopting a macroscopic tensorial model from which general analytical expressions for primed non-linear piezoresistance coefficients are derived. Graphical representations of linear and non-linear piezoresistance coefficients allow us to characterize the influence of the two angles of cut and of the directions of alignment. The second part will primarily deal with specific applications for piezoresistive sensors.
NASA Astrophysics Data System (ADS)
Ibrahim, Raouf A.
2005-06-01
The problem of liquid sloshing in moving or stationary containers remains of great concern to aerospace, civil, and nuclear engineers; physicists; designers of road tankers and ship tankers; and mathematicians. Beginning with the fundamentals of liquid sloshing theory, this book takes the reader systematically from basic theory to advanced analytical and experimental results in a self-contained and coherent format. The book is divided into four sections. Part I deals with the theory of linear liquid sloshing dynamics; Part II addresses the nonlinear theory of liquid sloshing dynamics, Faraday waves, and sloshing impacts; Part III presents the problem of linear and nonlinear interaction of liquid sloshing dynamics with elastic containers and supported structures; and Part IV considers the fluid dynamics in spinning containers and microgravity sloshing. This book will be invaluable to researchers and graduate students in mechanical and aeronautical engineering, designers of liquid containers, and applied mathematicians.
Features of control systems analysis with discrete control devices using mathematical packages
NASA Astrophysics Data System (ADS)
Yakovleva, E. M.; Faerman, V. A.
2017-02-01
The article presents the basic provisions of the theory of automatic pulse (sampled-data) control systems, together with methods for analysing such systems using mathematical software widespread in the academic environment. The pulse systems under study are treated as interacting continuous and discrete parts, including sensors, amplifiers, controlled objects, and discrete control devices. Such systems are described by difference equations and discrete (pulse) transfer functions. The transfer function of the open-loop system, which is central to control-system analysis, is obtained with the mathematical packages Mathcad and Matlab. Although both tools yield the same result, the way it is reached differs from the user's point of view: Matlab works with a structural model of the control system, while Mathcad only allows the execution of a chain of operator transforms. These differences make it possible to view the transformation of signals at the interface between the linear continuous part and the discrete part of the control system from different perspectives, which can be used in teaching to help students better assimilate a course on control-system theory.
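As a small companion example in the same spirit (though using SciPy rather than the packages discussed above), the sketch below discretizes a hypothetical continuous plant with a zero-order hold and forms a closed-loop pulse transfer function; the plant, sampling period, and controller gain are invented placeholders.

```python
import numpy as np
from scipy import signal

# Hypothetical continuous-time plant G(s) = 1 / (s^2 + 2s + 1) sampled with a
# zero-order hold of period T -- stand-ins, not the systems studied in the article.
num, den = [1.0], [1.0, 2.0, 1.0]
T = 0.1

numd, dend, _ = signal.cont2discrete((num, den), T, method="zoh")
Gz = signal.TransferFunction(np.squeeze(numd), dend, dt=T)  # pulse (discrete) transfer function
print(Gz)

# Closed-loop pulse transfer function with a unity-feedback proportional controller K:
K = 2.0
ol_num, ol_den = K * np.squeeze(numd), dend
cl = signal.TransferFunction(ol_num, np.polyadd(ol_den, ol_num), dt=T)
print(np.abs(np.roots(cl.den)))  # closed-loop poles inside the unit circle => stable
```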
Evaluation and Analysis of F-16XL Wind Tunnel Data From Static and Dynamic Tests
NASA Technical Reports Server (NTRS)
Kim, Sungwan; Murphy, Patrick C.; Klein, Vladislav
2004-01-01
A series of wind tunnel tests were conducted in the NASA Langley Research Center as part of an ongoing effort to develop and test mathematical models for aircraft rigid-body aerodynamics in nonlinear unsteady flight regimes. Analysis of measurement accuracy, especially for nonlinear dynamic systems that may exhibit complicated behaviors, is an essential component of this ongoing effort. In this report, tools for harmonic analysis of dynamic data and assessing measurement accuracy are presented. A linear aerodynamic model is assumed that is appropriate for conventional forced-oscillation experiments, although more general models can be used with these tools. Application of the tools to experimental data is demonstrated and results indicate the levels of uncertainty in output measurements that can arise from experimental setup, calibration procedures, mechanical limitations, and input errors.
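For readers unfamiliar with the harmonic-analysis step, the hedged Python sketch below shows one standard way to extract in-phase and out-of-phase components from a forced-oscillation time history by linear least squares; the signal, noise level, and derivative values are invented for illustration and are not the F-16XL data.

```python
import numpy as np

# Hypothetical forced-oscillation record: pitch angle alpha(t) = A*sin(w*t) and a
# measured moment coefficient Cm(t); all names and numbers are illustrative only.
rng = np.random.default_rng(0)
w, A, dt = 2 * np.pi * 1.0, np.deg2rad(5.0), 0.001
t = np.arange(0.0, 5.0, dt)
alpha = A * np.sin(w * t)
Cm = 0.02 - 0.8 * alpha - 0.15 * A * np.cos(w * t) + 0.002 * rng.standard_normal(t.size)

# Least-squares fit of the bias, in-phase, and out-of-phase components at the drive
# frequency; the out-of-phase part carries the damping-type (unsteady) information.
X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
bias, in_phase, out_phase = np.linalg.lstsq(X, Cm, rcond=None)[0]
print(in_phase / A, out_phase / A)   # estimates of the static and dynamic derivatives
```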
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
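The phrase "Amperian current equivalents" refers to the textbook replacement of a magnetized region by equivalent bound currents; as a reminder, the general relations are sketched below (these are the standard formulas, not the paper's finite element formulation).

```latex
% Bound (Amperian) current equivalents of a region with magnetization \mathbf{M}:
\mathbf{J}_b = \nabla \times \mathbf{M} \quad \text{(volume current density)}, \qquad
\mathbf{K}_b = \mathbf{M} \times \hat{\mathbf{n}} \quad \text{(surface current density)}.
```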
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
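As a self-contained illustration of the class of problems these optimizers target, the snippet below solves a small non-linearly constrained problem with SciPy's SLSQP wrapper; the objective, constraints, and starting point are invented and have nothing to do with the OpenMx models benchmarked in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# A small non-linearly constrained problem solved with an SQP-type method (SLSQP).
objective = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
constraints = [
    {"type": "ineq", "fun": lambda x: x[0]**2 + x[1]**2 - 1.0},   # stay outside the unit circle
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0},          # stay on the line x0 + x1 = 2
]
res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
               constraints=constraints, bounds=[(0, None), (0, None)])
print(res.x, res.fun, res.success)
```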
Fu, Haiyan; Fan, Yao; Zhang, Xu; Lan, Hanyue; Yang, Tianming; Shao, Mei; Li, Sihan
2015-01-01
As an effective method, the fingerprint technique, which emphasizes the whole composition of samples, has already been used in various fields, especially in identifying and assessing the quality of herbal medicines. High-performance liquid chromatography (HPLC) and near-infrared (NIR) spectroscopy, with their unique characteristics of reliability, versatility, precision, and simple measurement, play an important role among the fingerprint techniques. In this paper, a supervised pattern recognition method based on the PLSDA algorithm applied to HPLC and NIR data was established to identify Hibiscus mutabilis L. and Berberidis radix, two common herbal medicines. Comparing principal component analysis (PCA), linear discriminant analysis (LDA), and particularly partial least squares discriminant analysis (PLSDA) with different fingerprint preprocessing of the NIR spectral variables, the PLSDA model performed well in the analysis of the samples as well as of the chromatograms. Most importantly, this pattern recognition method based on HPLC and NIR can be used to identify different collection parts, collection times, and different origins or various species belonging to the same genus of herbal medicines, which proves it to be a promising approach for the identification of the complex information of herbal medicines. PMID:26345990
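A common way to implement the PLSDA step described above is PLS regression onto one-hot class indicators followed by an argmax class assignment; the scikit-learn sketch below illustrates this on random stand-in "fingerprints" (the data, dimensions, and component count are assumptions, not the paper's).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

# Random stand-ins for preprocessed HPLC/NIR fingerprints: 60 samples x 200 variables,
# three hypothetical classes (e.g., collection parts or origins).
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 200))
y = np.repeat([0, 1, 2], 20)
Y = np.eye(3)[y]                       # one-hot (dummy) response matrix for PLS-DA

Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=5).fit(Xs, Y)
pred = pls.predict(Xs).argmax(axis=1)  # assign each sample to the largest predicted indicator
print((pred == y).mean())              # apparent (resubstitution) accuracy
```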
Van de Voorde, Tim; Vlaeminck, Jeroen; Canters, Frank
2008-01-01
Urban growth and its related environmental problems call for sustainable urban management policies to safeguard the quality of urban environments. Vegetation plays an important part in this as it provides ecological, social, health and economic benefits to a city's inhabitants. Remotely sensed data are of great value to monitor urban green and despite the clear advantages of contemporary high resolution images, the benefits of medium resolution data should not be discarded. The objective of this research was to estimate fractional vegetation cover from a Landsat ETM+ image with sub-pixel classification, and to compare accuracies obtained with multiple stepwise regression analysis, linear spectral unmixing and multi-layer perceptrons (MLP) at the level of meaningful urban spatial entities. Despite the small, but nevertheless statistically significant differences at pixel level between the alternative approaches, the spatial pattern of vegetation cover and estimation errors is clearly distinctive at neighbourhood level. At this spatially aggregated level, a simple regression model appears to attain sufficient accuracy. For mapping at a spatially more detailed level, the MLP seems to be the most appropriate choice. Brightness normalisation only appeared to affect the linear models, especially the linear spectral unmixing. PMID:27879914
Iterative Methods to Solve Linear RF Fields in Hot Plasma
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2014-10-01
Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is mostly prohibited, with memory demands of a direct solver placing a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
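The role of the initial guess can be sketched with an off-the-shelf Krylov solver: below, a toy sparse system stands in for the discretized wave operator, and a crude approximate solution supplies x0 for GMRES. This is purely illustrative; it does not reproduce the hot-plasma conductivity kernel or the actual discretization.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Toy tridiagonal sparse system standing in for a discretized wave operator.
n = 2000
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x0 = b / 2.5                                   # crude initial guess: ignore off-diagonal coupling
x, info = gmres(A, b, x0=x0, restart=50, maxiter=2000)
print(info, np.linalg.norm(A @ x - b))         # info == 0 indicates convergence
```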
Aschenbrenner, Anna-Katharina; Kwon, Moonhyuk; Conrad, Jürgen; Ro, Dae-Kyun; Spring, Otmar
2016-04-01
Sunflower is known to produce a variety of bisabolene-type sesquiterpenes and accumulates these substances in trichomes of leaves, stems and flowering parts. A bioinformatics approach was used to identify the enzyme responsible for the initial step in the biosynthesis of these compounds from their precursor farnesyl pyrophosphate. Based on sequence similarity with a known bisabolene synthase from Arabidopsis thaliana, AtTPS12, candidate genes of Helianthus were searched for in an EST database and used to design specific primers. PCR experiments identified two candidates in the RNA pool of linear glandular trichomes of sunflower. Their sequences contained the typical motifs of sesquiterpene synthases, and their expression in yeast functionally characterized them as bisabolene synthases. Spectroscopic analysis identified the stereochemistry of the product of both enzymes as (Z)-γ-bisabolene. The origin of the two sunflower bisabolene synthase genes from the transcripts of linear trichomes indicates that they may be involved in the synthesis of the sesquiterpenes produced in these trichomes. Comparison of the amino acid sequences of the sunflower bisabolene synthases showed high similarity with sesquiterpene synthases from other species of the Asteraceae and indicated a putative evolutionary origin from a β-farnesene synthase. Copyright © 2016 Elsevier Ltd. All rights reserved.
Skylab study of water quality. [Kansas reservoirs
NASA Technical Reports Server (NTRS)
Yarger, H. L. (Principal Investigator); Mccauley, J. R.
1974-01-01
The author has identified the following significant results. Analysis of S-190A imagery from one EREP pass over three reservoirs in Kansas establishes a strong linear correlation between the red/green radiance ratio and suspended solids. This result compares quite favorably with ERTS MSS CCT results: the RMS of the linear fit is 6 ppm for Skylab, compared with 12 ppm for ERTS. All of the ERTS satellite passes yielded fairly linear results with typical RMS values of 12 ppm, although a few individual passes yielded RMS values of 5 or 6 ppm, comparable to the one Skylab pass analyzed. Given that good results were obtained despite the cloudy conditions in the Skylab photos, the indications are that the S-190A may do somewhat better than the ERTS MSS in determining suspended load; more S-190A data are needed to confirm this. As was the case with the ERTS MSS, the Skylab S-190A showed no strong correlation with other water quality parameters. Because of their high resolution, S-190B photos can provide much first-look information on relative degrees of turbidity within various parts of large lakes and among smaller bodies of water.
NASA Technical Reports Server (NTRS)
Hoppin, R. A. (Principal Investigator); Caldwell, J.; Lehman, D.; Palmer, S.; Pan, K. L.; Swenson, A.
1976-01-01
The author has identified the following significant results. S190B imagery was the best single product from which fairly detailed structural and some lithologic mapping could be accomplished in the Big Horn basin, the Owl Creek Mountains, and the northern Big Horn Mountains. The Nye-Bowler lineament could not be extended east of its presently mapped location, although a linear (fault or monocline) that may be part of the lineament was noted north of the postulated extensions. Much more structure was discernible in the Big Horn basin than could be seen on LANDSAT-1 imagery; RB-57 color IR photography, in turn, revealed additional folds and faults. A number of linears, several of which could be identified as faults and one as a monocline, cut obliquely across the east-west-trending Owl Creek uplift. The heavy forest cover of the Black Hills makes direct lithologic delineation impossible; however, drainage and linear overlays revealed differences in pattern between the areas of exposed Precambrian crystalline core and the flanking Paleozoic rocks. S192 data, even precision-corrected segments, were not of much use.
NASA Technical Reports Server (NTRS)
Lee, Kang N.; Arya, Vinod K.; Halford, Gary R.; Barrett, Charles A.
1996-01-01
Sapphire fiber-reinforced MA956 composites hold promise for significant weight savings and increased high-temperature structural capability, as compared to unreinforced MA956. As part of an overall assessment of the high-temperature characteristics of this material system, cyclic oxidation behavior was studied at 1093 C and 1204 C. Initially, both sets of coupons exhibited parabolic oxidation kinetics. Later, monolithic MA956 exhibited spallation and a linear weight loss, whereas the composite showed a linear weight gain without spallation. Weight loss of the monolithic MA956 resulted from the linking of a multiplicity of randomly oriented and closely spaced surface cracks that facilitated ready spallation. By contrast, cracking of the composite's oxide layer was nonintersecting and aligned nominally parallel with the orientation of the subsurface reinforcing fibers. Oxidative lifetime of monolithic MA956 was projected from the observed oxidation kinetics. Linear elastic, finite element continuum, and micromechanics analyses were performed on coupons of the monolithic and composite materials. Results of the analyses qualitatively agreed well with the observed oxide cracking and spallation behavior of both the MA956 and the Sapphire/MA956 composite coupons.
A European multicenter study on the analytical performance of the VERIS HBV assay.
Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Izopet, Jacques; Lombardi, Alessandra; Mancon, Alessandro; Marcos, Maria Angeles; Sauné, Karine; O Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel
Hepatitis B viral load monitoring is an essential part of managing patients with chronic Hepatitis B infection. Beckman Coulter has developed the VERIS HBV Assay for use on the fully automated Beckman Coulter DxN VERIS Molecular Diagnostics System. OBJECTIVES: To evaluate the analytical performance of the VERIS HBV Assay at multiple European virology laboratories. Precision, analytical sensitivity, negative sample performance, linearity, and performance with major HBV genotypes/subtypes were evaluated for the VERIS HBV Assay. Precision showed an SD of 0.15 log10 IU/mL or less for each level tested. Analytical sensitivity determined by probit analysis was between 6.8 and 8.0 IU/mL. Clinical specificity on 90 unique patient samples was 100.0%. Performance with 754 negative samples demonstrated 100.0% not-detected results, and a carryover study showed no cross contamination. Linearity using clinical samples was shown from 1.23 to 8.23 log10 IU/mL, and the assay detected and showed linearity with major HBV genotypes/subtypes. The VERIS HBV Assay demonstrated analytical performance comparable to other currently marketed assays for HBV DNA monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies between simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discuss the observed differences, point out the importance of de-embedding the measured parameters and of appropriate modelling of discrete components, and give specific recipes for good modelling practice.
Social determinants of childhood asthma symptoms: an ecological study in urban Latin America.
Fattore, Gisel L; Santos, Carlos A T; Barreto, Mauricio L
2014-04-01
Asthma is an important public health problem in urban Latin America. This study aimed to analyze the role of socioeconomic and environmental factors as potential determinants of the prevalence of asthma symptoms in children from Latin American (LA) urban centers. We selected 31 LA urban centers with complete data and performed an ecological analysis. According to our theoretical framework, the explanatory variables were classified into three levels: distal, intermediate, and proximate. The association between variables at the three levels and the prevalence of asthma symptoms was examined by bivariate and multivariate linear regression analysis weighted by sample size. In a second stage, we fitted several linear regression models, introducing the variables sequentially according to the predefined hierarchy. In the final hierarchical model, the Gini index, crowding, sanitation, variation in infant mortality rates, and homicide rates explained a great part of the variance in asthma prevalence between centers (R(2) = 75.0%). We found a strong association between socioeconomic and environmental variables and the prevalence of asthma symptoms in LA urban children and, according to our hierarchical framework and the results found, we suggest that social inequality (measured by the Gini index) is a central determinant explaining the high prevalence of asthma in LA.
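A hedged sketch of the kind of weighted center-level regression described above is given below using statsmodels; every variable and number is an invented placeholder, intended only to show the weighting-by-sample-size mechanics, not to reproduce the study's data or model hierarchy.

```python
import numpy as np
import statsmodels.api as sm

# Invented center-level data: 31 urban centers with two explanatory variables and
# survey sample sizes used as regression weights.
rng = np.random.default_rng(2)
n_centers = 31
gini = rng.uniform(0.35, 0.60, n_centers)
crowding = rng.uniform(0.5, 2.5, n_centers)
sample_size = rng.integers(800, 4000, n_centers)
asthma_prev = 5 + 20 * gini + 2 * crowding + rng.normal(0, 1.5, n_centers)

X = sm.add_constant(np.column_stack([gini, crowding]))
model = sm.WLS(asthma_prev, X, weights=sample_size).fit()   # weighted least squares
print(model.params, model.rsquared)
```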
Drew, L.J.; Grunsky, E.C.; Sutphin, D.M.; Woodruff, L.G.
2010-01-01
Soils collected in 2004 along two North American continental-scale transects were subjected to geochemical and mineralogical analyses. In previous interpretations of these analyses, data were expressed in weight percent and parts per million and thus were subject to the effect of the constant-sum phenomenon. In a new approach to the data, this effect was removed by using centered log-ratio transformations to 'open' the mineralogical and geochemical arrays. Multivariate analyses of the centered log-ratio data, including principal component and linear discriminant analyses, reveal effects of soil-forming processes at the continental scale, including soil parent material, weathering, and soil age, that were not readily apparent in the more conventionally presented data. Linear discriminant analysis of the data arrays indicates that the majority of the soil samples collected along the transects can be more successfully classified into Level 1 ecological regional-scale classes by soil geochemistry than by soil mineralogy. A primary objective of this study is to discover and describe, in a parsimonious way, geochemical processes that are both independent and inter-dependent and that are manifested through compositional data, including estimates of the elements and the corresponding mineralogy. © 2010.
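The centered log-ratio (clr) transform used to 'open' the constant-sum data is straightforward to apply; a minimal sketch (with a made-up composition) is given below.

```python
import numpy as np

def clr(x, axis=1):
    """Centered log-ratio transform of a compositional array (rows sum to a constant).
    Zeros must be replaced (e.g., by a small detection-limit value) before calling."""
    logx = np.log(x)
    return logx - logx.mean(axis=axis, keepdims=True)

# Example: a 3-part composition in weight percent; clr removes the constant-sum constraint
# so that PCA or linear discriminant analysis can be applied to the opened data.
comp = np.array([[60.0, 25.0, 15.0],
                 [45.0, 40.0, 15.0]])
print(clr(comp))
print(clr(comp).sum(axis=1))   # rows of clr-transformed data sum to ~0 by construction
```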
Linear Back-Drive Differentials
NASA Technical Reports Server (NTRS)
Waydo, Peter
2003-01-01
Linear back-drive differentials have been proposed as alternatives to conventional gear differentials for applications in which there is only limited rotational motion (e.g., oscillation). The finite nature of the rotation makes it possible to optimize a linear back-drive differential in ways that would not be possible for gear differentials or other differentials that are required to be capable of unlimited rotation. As a result, relative to gear differentials, linear back-drive differentials could be more compact and less massive, could contain fewer complex parts, and could be less sensitive to variations in the viscosities of lubricants. Linear back-drive differentials would operate according to established principles of power ball screws and linear-motion drives, but would utilize these principles in an innovative way. One major characteristic of such mechanisms that would be exploited in linear back-drive differentials is the possibility of designing them to drive or back-drive with similar efficiency and energy input: in other words, such a mechanism can be designed so that a rotating screw can drive a nut linearly or the linear motion of the nut can cause the screw to rotate. A linear back-drive differential (see figure) would include two collinear shafts connected to two parts that are intended to engage in limited opposing rotations. The linear back-drive differential would also include a nut that would be free to translate along its axis but not to rotate. The inner surface of the nut would be right-hand threaded at one end and left-hand threaded at the opposite end to engage corresponding right- and left-handed threads on the shafts. A rotation and torque introduced into the system via one shaft would drive the nut in linear motion. The nut, in turn, would back-drive the other shaft, creating a reaction torque. Balls would reduce friction, making it possible for the shaft/nut coupling on each side to operate with 90 percent efficiency.
Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H
2017-05-10
We described the time trend of the acute myocardial infarction (AMI) incidence rate in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
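For readers who want to reproduce this kind of comparison on their own data, a minimal implementation of the standard Cochran-Armitage trend statistic is sketched below; the yearly counts and denominators are invented placeholders, not the Tianjin data.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, totals, scores=None):
    """Two-sided Cochran-Armitage test for trend in proportions across ordered groups
    (standard form, no continuity correction)."""
    cases, totals = np.asarray(cases, float), np.asarray(totals, float)
    scores = np.arange(len(cases), dtype=float) if scores is None else np.asarray(scores, float)
    N, R = totals.sum(), cases.sum()
    p_bar = R / N
    T = np.sum(scores * (cases - totals * p_bar))
    var_T = p_bar * (1 - p_bar) * (np.sum(totals * scores**2) - np.sum(totals * scores)**2 / N)
    z = T / np.sqrt(var_T)
    return z, 2 * norm.sf(abs(z))

# Illustrative (made-up) yearly event counts and population denominators:
z, p = cochran_armitage(cases=[120, 135, 150, 170, 190],
                        totals=[100000, 101000, 102500, 104000, 105000])
print(z, p)
```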
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral
2002-01-01
An aeroelastic analysis system for flutter and forced response analysis of turbomachines based on a two-dimensional linearized unsteady Euler solver has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two dimensional aerodynamic model with a three dimensional structural model. The code was modified to include forced response capability. The formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE for ASTROP2 code using Linearized Euler aerodynamics, is validated by comparing the predictions with those obtained using linear unsteady aerodynamic solutions.
1992-01-01
the uncertainty. The above method can give an estimate of the precision of the analysis. However, determining the accuracy can not be done as... speciation has been determined from analyzing model samples as well as comparison with other methods and combinations of other methods with this method... laboratory. The output of the sensor is characterized over its working range and an appropriate response factor determined by linear regression of the
Large-Nc masses of light mesons from QCD sum rules for nonlinear radial Regge trajectories
NASA Astrophysics Data System (ADS)
Afonin, S. S.; Solomko, T. D.
2018-04-01
The large-Nc masses of light vector, axial, scalar, and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. This form of the corrections was proposed some time ago for consistency with the analytical structure of the Operator Product Expansion of the two-point correlation functions. We revised that original analysis and found a second solution of the proposed sum rules. This solution gives a better description of the spectrum of vector and axial mesons.
Comments on the "Byzantine Self-Stabilizing Pulse Synchronization" Protocol: Counter-examples
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.; Siminiceanu, Radu
2006-01-01
Embedded distributed systems have become an integral part of many safety-critical applications. There have been many attempts to solve the self-stabilization problem of clocks across a distributed system. An analysis of one such protocol called the Byzantine Self-Stabilizing Pulse Synchronization (BSS-Pulse-Synch) protocol from a paper entitled "Linear Time Byzantine Self-Stabilizing Clock Synchronization" by Daliot, et al., is presented in this report. This report also includes a discussion of the complexity and pitfalls of designing self-stabilizing protocols and provides counter-examples for the claims of the above protocol.
Artificial Neural Networks: an overview and their use in the analysis of the AMPHORA-3 dataset.
Buscema, Paolo Massimo; Massini, Giulia; Maurelli, Guido
2014-10-01
The Artificial Adaptive Systems (AAS) are theories with which generative algebras are able to create artificial models simulating natural phenomena. Artificial Neural Networks (ANNs) are the most widespread and best-known learning-system models among the AAS. This article provides an overview of ANNs, noting their advantages and limitations for analyzing dynamic, complex, non-linear, multidimensional processes. An example of a specific ANN application to alcohol consumption in Spain during 1961-2006, as part of the EU AMPHORA-3 project, is presented. The study's limitations are noted, and future research needs using ANN methodologies are suggested.
Threshold law for positron-atom impact ionisation
NASA Technical Reports Server (NTRS)
Temkin, A.
1982-01-01
The threshold law for ionization of atoms by positron impact is adduced in analogy with our approach to electron-atom ionization. It is concluded that the Coulomb-dipole region of the potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process that enters positron ionization is positronium formation in the continuum, but it will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law recently derived by Klar on the basis of a Wannier-type analysis.
Effects of buffer size and shape on associations between the built environment and energy balance.
James, Peter; Berrigan, David; Hart, Jaime E; Hipp, J Aaron; Hoehner, Christine M; Kerr, Jacqueline; Major, Jacqueline M; Oka, Masayoshi; Laden, Francine
2014-05-01
Uncertainty in the relevant spatial context may drive heterogeneity in findings on the built environment and energy balance. To estimate the effect of this uncertainty, we conducted a sensitivity analysis defining intersection and business densities and counts within different buffer sizes and shapes on associations with self-reported walking and body mass index. Linear regression results indicated that the scale and shape of buffers influenced study results and may partly explain the inconsistent findings in the built environment and energy balance literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
Linear quadratic regulators with eigenvalue placement in a horizontal strip
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Dib, Hani M.; Ganesan, Sekar
1987-01-01
A method for optimally shifting the imaginary parts of the open-loop poles of a multivariable control system to the desirable closed-loop locations is presented. The optimal solution with respect to a quadratic performance index is obtained by solving a linear matrix Liapunov equation.
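For context, the snippet below computes a standard LQR gain by solving the continuous-time algebraic Riccati equation and then inspects the closed-loop eigenvalues; it is a baseline sketch with placeholder matrices, not the paper's Liapunov-equation procedure for shifting the imaginary parts of the poles.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder system and weighting matrices for a standard LQR baseline.
A = np.array([[0.0, 1.0], [-2.0, -0.4]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)          # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)               # optimal state-feedback gain
closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop))         # inspect real and imaginary parts of the poles
```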
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery... spatial resolution permits different materials present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance in... in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as r = Mα + n (1), where n is included to account for
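As a concrete (and deliberately generic) illustration of inverting the linear mixture model r = Mα + n, the snippet below estimates abundances for a synthetic pixel with non-negative least squares; the endmembers, noise level, and sum-to-one renormalization are assumptions for the sketch, not the constrained factorization studied in the report.

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixture model r = M a + n: each pixel spectrum r is a non-negative combination of
# endmember spectra (the columns m1..mP of M). The data below are synthetic placeholders.
rng = np.random.default_rng(3)
bands, P = 50, 3
M = np.abs(rng.standard_normal((bands, P)))          # endmember matrix
a_true = np.array([0.6, 0.3, 0.1])
r = M @ a_true + 0.01 * rng.standard_normal(bands)   # noisy mixed pixel

a_hat, _ = nnls(M, r)            # non-negativity-constrained abundance estimate
a_hat /= a_hat.sum()             # optional sum-to-one renormalization
print(a_hat)
```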
ERIC Educational Resources Information Center
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
2018-01-01
Purpose: The present study aimed to identify the learning preferences of dental students and to characterize their relationship with academic performance at a dental school in Isfahan, Iran. Methods: This cross-sectional descriptive study included 200 undergraduate dental students from October to November 2016. Data were collected using a 2-part questionnaire. The first part included demographic data, and the second part was a Persian-language version of the visual, aural, read/write, and kinesthetic questionnaire. Data analysis was conducted with the chi-square test, 1-way analysis of variance, and multiple linear regression. Results: The response rate was 86.6%. Approximately half of the students (51.5%) had multimodal learning preferences. Among the unimodal group (48.5%), the most common mode was aural (24.0%), followed by kinesthetic (15.5%), reading-writing (8.0%), and visual (1.0%). There was a significant association between academic performance and the reading/writing learning style preference (P < 0.01). Conclusion: Multimodal learning styles were the most preferred. Among single-mode learning styles, the aural style was most common, followed by the kinesthetic style. Students with a reading/writing preference had better academic performance. The results of this study provide useful information for preparing a more problem-based curriculum with active learning strategies. PMID:29575848
Luong, J; Gras, R; Shellie, R A; Cortes, H J
2013-07-05
The detection of sulfur compounds in different hydrocarbon matrices, from light hydrocarbon feedstocks to medium synthetic crude oil feeds, provides meaningful information for the optimization of refining processes as well as for demonstrating compliance with petroleum product specifications. With the incorporation of planar microfluidic devices in a novel chromatographic configuration, sulfur compounds from hydrogen sulfide to alkyl dibenzothiophenes, and heavier distributions of sulfur compounds, can be characterized over a wide range of matrices spanning a boiling point range of more than 650°C, using one single analytical configuration in less than 25 min. In tandem with the sulfur chemiluminescence detector used for sulfur analysis is a flame ionization detector, which can be used to establish the boiling point range of the sulfur compounds in various hydrocarbon fractions for element-specific simulated distillation analysis as well as for profiling the hydrocarbon matrices for process optimization. Repeatability of less than 3% RSD (n=20) over a range of 0.5-1000 parts per million (v/v) was obtained, with a limit of detection of 50 parts per billion and a linear range of 0.5-1000 parts per million with a correlation coefficient of 0.998. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.
2003-01-01
The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.
Calculating Required Substructure Damping to Meet Prescribed System Damping Levels
2007-06-01
Rorres, Elementary Linear Algebra. New Jersey: John Wiley & Sons, 2005. 2. Klaus-Jurgen Bathe, Finite Element Procedures. New Jersey: Prentice Hall... will be covered in the explanation of orthogonal complement. The definitions are extracted from the book "Linear Algebra and its Applications" by... N(A^T) = left nullspace of A, dimension m-r. Applying the first part of the fundamental theorem of Linear Algebra we can now talk about the orthogonal
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
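A compact sketch of the PRCC computation described above is given below: inputs and output are rank-transformed, the linear effect of the other inputs is regressed out, and the residuals are correlated. The toy data are placeholders and do not represent IMM conditions or outcomes.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation coefficient of each input column of X with output y:
    rank-transform everything, remove the linear effect of the other inputs by
    least-squares regression, then correlate the residuals."""
    Xr = np.column_stack([rankdata(col) for col in X.T])
    yr = rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = pearsonr(rx, ry)[0]
    return out

# Toy stand-in: four "condition count" inputs and one aggregate outcome.
rng = np.random.default_rng(4)
X = rng.poisson(5.0, size=(200, 4)).astype(float)
y = 2 * X[:, 0] + 0.5 * X[:, 2]**2 + rng.normal(0, 1, 200)
print(prcc(X, y))
```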
Study of free-piston Stirling engine driven linear alternators
NASA Technical Reports Server (NTRS)
Nasar, S. A.; Chen, C.
1987-01-01
The analysis, design, and operation of a single-phase, single-slot tubular permanent magnet linear alternator are presented. Included are the no-load and on-load magnetic field investigation, the permanent magnet's leakage field analysis, parameter identification, design guidelines, and an optimal design of a permanent magnet linear alternator. For analysis of the magnetic field, a simplified magnetic circuit is utilized. The analysis accounts for saturation, leakage, and armature reaction.
SPAR reference manual. [for stress analysis
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1974-01-01
SPAR is a system of related programs which may be operated either in batch or demand (teletype) mode. Information exchange between programs is automatically accomplished through one or more direct access libraries, known collectively as the data complex. Card input is command-oriented, in free-field form. Capabilities available in the first production release of the system are fully documented, and include linear stress analysis, linear bifurcation buckling analysis, and linear vibrational analysis.
New analysis of magnetic tornadoes
NASA Astrophysics Data System (ADS)
Arter, Wayne
2017-04-01
The recent work [1] showed how the equations of ideal, compressible magnetohydrodynamics (MHD) may be elegantly formulated in terms of Lie derivatives, building on the work of Helmholtz, Walen and Arnold. The "linear" fields approach reduces ideal MHD to a low order set of non-linear ordinary differential equations capable of further simplification, so has the potential to enrich understanding of this difficult subject, which has application both to laboratory and geophysical/astrophysical plasmas. The just published work [2] extends the linear fields' solution of compressible nonlinear MHD to the case where the magnetic field depends on superlinear powers of position vector, usually but not always, expressed in Cartesian components. Implications of the resulting Lie-Taylor series expansion for physical applicability of the Dolzhansky-Kirchhoff (D-K) "linear field" equations are found to be positive. It is demonstrated how resistivity may be included in the D-K model. Arguments are put forward that the D-K equations may be regarded as illustrating properties of nonlinear MHD in the same sense that the Lorenz equations inform about the onset of convective turbulence. It is thereby suggested that the Lie-Taylor series approach may lead to valuable insights into MHD turbulence, especially fast timescale transients and the role of plasmoids. This work has been part-funded by the RCUK Energy Programme. 1. Arter, W. 2013 "Potential vorticity formulation of compressible magnetohydrodynamics," Phys. Rev. Lett. 110, 015004 (doi:10.1103/PhysRevLett.110.015004). 2. Arter, W. 2017 "Beyond linear fields: the Lie-Taylor expansion," Proc. R. Soc. A473, 20160525; http://dx.doi.org/10.1098/rspa.2016.0525
Theory and praxis of map analysis in CHEF part 1: Linear normal form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling the general strategy to characterize the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads that are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. It is the main result that in this way full-wave simulation results can be reproduced. Furthermore it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between HPEM field excitation and nonlinearly loaded loop antenna.
Grid-Independent Compressive Imaging and Fourier Phase Retrieval
ERIC Educational Resources Information Center
Liao, Wenjing
2013-01-01
This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…
Pharmacists' wages and salaries: The part-time versus full-time dichotomy.
Carvajal, Manuel J; Popovici, Ioana
2016-01-01
Recent years have seen significant growth in part-time work among pharmacy personnel. If the preferences and outlooks of part-time and full-time workers differ, job-related incentives may not have the same effect on both groups, and different management practices may be necessary to cope with rapidly evolving workforces. The objective was to compare wage-and-salary responses to the number of hours worked, human-capital stock, and job-related preferences between full-time and part-time pharmacists. The analysis focused on the pharmacist workforce because, unlike in other professions, remuneration is fairly linear with respect to the amount of time worked. Data were collected from a self-reported survey of licensed pharmacists in southern Florida (U.S. state). The sample consisted of 979 full-time and 254 part-time respondents. Using ordinary least squares, a model was estimated, separately for full-time and part-time pharmacists, expressing annual wage-and-salary earnings as a function of the average workweek, human-capital stock, and job-related preferences. Practitioners working less than 36 h/week were driven almost exclusively by pay, whereas practitioners working 36 h or more exhibited a more comprehensive approach to their work experience that included variables beyond monetary remuneration. Managing part-time pharmacists therefore calls for emphasis on wage-and-salary issues, whereas job-security and gender- and children-related concerns, such as flexibility, should be oriented toward full-time practitioners. Copyright © 2016 Elsevier Inc. All rights reserved.
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.
Russo, Marina; Fanali, Chiara; Tripodo, Giusy; Dugo, Paola; Muleo, Rosario; Dugo, Laura; De Gara, Laura; Mondello, Luigi
2018-06-01
The analysis of pomegranate phenolic compounds belonging to different classes in different fruit parts was performed by high-performance liquid chromatography coupled with photodiode array and mass spectrometry detection. Two different separation methods were optimized for the analysis of anthocyanins and hydrolyzable tannins along with phenolic acids and flavonoids. Two C 18 columns, core-shell and fully porous particle stationary phases, were used. The parameters for separation of phenolic compounds were optimized considering chromatographic resolution and analysis time. Thirty-five phenolic compounds were found, and 28 of them were tentatively identified as belonging to four different phenolic compound classes; namely, anthocyanins, phenolic acids, hydrolyzable tannins, and flavonoids. Quantitative analysis was performed with a mixture of nine phenolic compounds belonging to phenolic compound classes representative of pomegranate. The method was then fully validated in terms of retention time precision, expressed as the relative standard deviation, limit of detection, limit of quantification, and linearity range. Phenolic compounds were analyzed directly in pomegranate juice, and after solvent extraction with a mixture of water and methanol with a small percentage of acid in peel and pulp samples. The accuracy of the extraction method was also assessed, and satisfactory values were obtained. Finally, the method was used to study identified analytes in pomegranate juice, peel, and pulp of six different Italian varieties and one international variety. Differences in phenolic compound profiles among the different pomegranate parts were observed. Pomegranate peel samples showed a high concentration of phenolic compounds, ellagitannins being the most abundant ones, with respect to pulp and juice samples for each variety. With the same samples, total phenols and antioxidant activity were evaluated through colorimetric assays, and the results were correlated among them.
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e. different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long-time (discrete) and short-time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
Automatic classification of spectral units in the Aristarchus plateau
NASA Astrophysics Data System (ADS)
Erard, S.; Le Mouelic, S.; Langevin, Y.
1999-09-01
A reduction scheme has recently been proposed for the NIR images of Clementine (Le Mouelic et al., JGR 1999). This reduction has been used to build an integrated UVvis-NIR image cube of the Aristarchus region, from which compositional and maturity variations can be studied (Pinet et al., LPSC 1999). We will present an analysis of this image cube, providing a classification in spectral types and spectral units. The image cube is processed with G-mode analysis using three different data sets: Normalized spectra provide a classification based mainly on spectral slope variations (i.e., maturity and volcanic glasses). This analysis discriminates between craters plus ejecta, mare basalts, and DMD. Olivine-rich areas and the Aristarchus central peak are also recognized. Continuum-removed spectra provide a classification more related to compositional variations, which correctly identifies olivine- and pyroxene-rich areas (in Aristarchus, Krieger, Schiaparelli, etc.). A third analysis uses spectral parameters related to maturity and Fe composition (reflectance, 1 μm band depth, and spectral slope) rather than intensities. It provides the most spatially consistent picture, but fails in detecting Vallis Schroeteri and DMDs. A supplementary unit, younger and rich in pyroxene, is found on the Aristarchus south rim. In conclusion, G-mode analysis can discriminate between different spectral types already identified with more classic methods (PCA, linear mixing, etc.). No previous assumption is made on the data structure, such as the number and nature of endmembers, or a linear relationship between input variables. The variability of the spectral types is intrinsically accounted for, so that the level of analysis is always restricted to meaningful limits. A complete classification should integrate several analyses based on different sets of parameters. G-mode is therefore a powerful, lightweight tool for performing first-look analysis of spectral imaging data. This research has been partly funded by the French Programme National de Planetologie.
Evaluation of flaws in carbon steel piping. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahoor, A.; Gamble, R.M.; Mehta, H.S.
1986-10-01
The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step by step procedure also is provided to identify the relevant acceptance standard or procedure on a case by case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part through flaw depth is defined as a function of load and flaw length. For non ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.
Pietruski, Piotr; Majak, Marcin; Pawlowska, Elzbieta; Skiba, Adam; Antoszewski, Boguslaw
2017-04-01
The aim of this study was to use a novel system, 'Analyse It Doc' (A.I.D.), for a complex anthropometric analysis of the nasolabial region in patients with repaired unilateral complete cleft lip and palate and in healthy individuals. A set of standardized facial photographs in frontal, lateral and submental views was taken of 50 non-cleft controls (mean age 20.6 years) and 42 patients with repaired unilateral complete cleft lip and palate (mean age 19.57 years). Then, based on linear, angular and area measurements taken from the digital photographs with the aid of the A.I.D. system, a photogrammetric analysis of intergroup differences in nasolabial morphology and symmetry was conducted. Patients with cleft lip and palate differed from the controls in terms of more than half of the analysed angular measurements and proportion indices derived from linear and area measurements of the nasolabial region. The findings presented herein imply that despite primary surgical repair, patients with unilateral complete cleft lip and palate still show some degree of nasolabial dysmorphology. Furthermore, the study demonstrated that the novel computer system is suitable for a reliable, simple and time-efficient anthropometric analysis in a clinical setting. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Wavelets, non-linearity and turbulence in fusion plasmas
NASA Astrophysics Data System (ADS)
van Milligen, B. Ph.
Contents: Introduction; Linear spectral analysis tools; Wavelet analysis; Wavelet spectra and coherence; Joint wavelet phase-frequency spectra; Non-linear spectral analysis tools; Wavelet bispectra and bicoherence; Interpretation of the bicoherence; Analysis of computer-generated data; Coupled van der Pol oscillators; A large eddy simulation model for two-fluid plasma turbulence; A long wavelength plasma drift wave model; Analysis of plasma edge turbulence from Langmuir probe data; Radial coherence observed on the TJ-IU torsatron; Bicoherence profile at the L/H transition on CCT; Conclusions.
Cavalcante Neto, Jorge L; Zamunér, Antonio R; Moreno, Bianca C; Silva, Ester; Tudella, Eloisa
2018-01-01
Children with Developmental Coordination Disorder (DCD) and children at risk for DCD (r-DCD) present motor impairments interfering in their school, leisure and daily activities. In addition, these children may have abnormalities in their cardiac autonomic control, which together with their motor impairments, restrict their health and functionality. Therefore, this study aimed to assess the cardiac autonomic control, by linear and nonlinear analysis, at supine and during an orthostatic stimulus in DCD, r-DCD and typically developed children. Thirteen DCD children (11 boys and 2 girls, aged 8.08 ± 0.79 years), 19 children at risk for DCD (13 boys and 6 girls, aged 8.10 ± 0.96 years) and 18 typically developed children, who constituted the control group (CG) (10 boys and 8 girls, aged 8.50 ± 0.96 years) underwent a heart rate variability (HRV) examination. R-R intervals were recorded in order to assess the cardiac autonomic control using a validated HR monitor. HRV was analyzed by linear and nonlinear methods and compared between r-DCD, DCD, and CG. The DCD group presented blunted cardiac autonomic adjustment to the orthostatic stimulus, which was not observed in r-DCD and CG. Regarding nonlinear analysis of HRV, the DCD group presented lower parasympathetic modulation in the supine position compared to the r-DCD and CG groups. In the within group analysis, only the DCD group did not increase HR from supine to standing posture. Symbolic analysis revealed a significant decrease in 2LV (p < 0.0001) and 2UV (p < 0.0001) indices from supine to orthostatic posture only in the CG. In conclusion, r-DCD and DCD children present cardiac autonomic dysfunction characterized by higher sympathetic, lower parasympathetic and lower complexity of cardiac autonomic control in the supine position, as well as a blunted autonomic adjustment to the orthostatic stimulus. Therefore, cardiovascular health improvement should be part of DCD children's management, even in cases of less severe motor impairment.
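As a rough illustration of the symbolic analysis mentioned above (0V/1V/2LV/2UV pattern fractions), here is a minimal sketch of a Porta-style scheme applied to a synthetic RR-interval series. The 6-level quantization, 3-beat words and the synthetic data are assumptions; this is not the authors' processing pipeline.

```python
import numpy as np

def symbolic_fractions(rr, n_levels=6, word_len=3):
    """Quantize an RR series into n_levels bins spanning its full range, form
    overlapping 3-beat words, and classify each word as 0V (no variation),
    1V (one variation), 2LV (two like variations) or 2UV (two unlike variations)."""
    rr = np.asarray(rr, dtype=float)
    edges = np.linspace(rr.min(), rr.max(), n_levels + 1)
    sym = np.digitize(rr, edges[1:-1])          # symbols 0 .. n_levels-1
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2UV": 0}
    for i in range(len(sym) - word_len + 1):
        d = np.diff(sym[i:i + word_len])
        n_var = np.count_nonzero(d)
        if n_var == 0:
            counts["0V"] += 1
        elif n_var == 1:
            counts["1V"] += 1
        elif d[0] * d[1] > 0:                   # two changes in the same direction
            counts["2LV"] += 1
        else:                                   # two changes in opposite directions
            counts["2UV"] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Example with a synthetic RR series in milliseconds (illustrative only).
rng = np.random.default_rng(1)
rr = 800 + 50 * np.sin(np.arange(300) / 5) + rng.normal(0, 20, 300)
print(symbolic_fractions(rr))
```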
Non-normality and classification of amplification mechanisms in stability and resolvent analysis
NASA Astrophysics Data System (ADS)
Symon, Sean; Rosenberg, Kevin; Dawson, Scott T. M.; McKeon, Beverley J.
2018-05-01
Eigenspectra and pseudospectra of the mean-linearized Navier-Stokes operator are used to characterize amplification mechanisms in laminar and turbulent flows in which linear mechanisms are important. Success of mean flow (linear) stability analysis for a particular frequency is shown to depend on whether two scalar measures of non-normality agree: (1) the product between the resolvent norm and the distance from the imaginary axis to the closest eigenvalue and (2) the inverse of the inner product between the most amplified resolvent forcing and response modes. If they agree, the resolvent operator can be rewritten in its dyadic representation to reveal that the adjoint and forward stability modes are proportional to the forcing and response resolvent modes at that frequency. Hence the real parts of the eigenvalues are important since they are responsible for resonant amplification and the resolvent operator is low rank when the eigenvalues are sufficiently separated in the spectrum. If the amplification is pseudoresonant, then resolvent analysis is more suitable to understand the origin of observed flow structures. Two test cases are studied: low Reynolds number cylinder flow and turbulent channel flow. The first deals mainly with resonant mechanisms, hence the success of both classical and mean stability analysis with respect to predicting the critical Reynolds number and global frequency of the saturated flow. Both scalar measures of non-normality agree for the base and mean flows, and the region where the forcing and response modes overlap scales with the length of the recirculation bubble. In the case of turbulent channel flow, structures result from both resonant and pseudoresonant mechanisms, suggesting that both are necessary elements to sustain turbulence. Mean shear is exploited most efficiently by stationary disturbances while bounds on the pseudospectra illustrate how pseudoresonance is responsible for the most amplified disturbances at spatial wavenumbers and temporal frequencies corresponding to well-known turbulent structures. Some implications for flow control are discussed.
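For a finite-dimensional linear operator, the two scalar measures of non-normality described above can be sketched numerically as follows; the 2x2 matrix and the frequency are toy assumptions, not a discretized Navier-Stokes operator, and the comparison is only meant to show how the quantities are assembled.

```python
import numpy as np

def non_normality_measures(A, omega):
    """(1) resolvent norm times the distance from the imaginary axis to the
    eigenvalue nearest i*omega; (2) inverse of the inner product between the
    most amplified resolvent forcing and response modes. Both are close to 1
    for a normal operator at a resonant frequency."""
    n = A.shape[0]
    R = np.linalg.inv(1j * omega * np.eye(n) - A)      # resolvent operator
    U, s, Vh = np.linalg.svd(R)
    response, forcing = U[:, 0], Vh[0, :].conj()        # most amplified pair
    eigvals = np.linalg.eigvals(A)
    lam = eigvals[np.argmin(np.abs(1j * omega - eigvals))]
    measure_1 = s[0] * abs(lam.real)
    measure_2 = 1.0 / abs(np.vdot(forcing, response))
    return measure_1, measure_2

# Toy non-normal operator (an assumption, chosen only to have strong shear-like coupling).
A = np.array([[-0.1, 5.0],
              [0.0, -0.2]])
print(non_normality_measures(A, omega=1.0))
```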
Discriminative analysis of non-linear brain connectivity for leukoaraiosis with resting-state fMRI
NASA Astrophysics Data System (ADS)
Lai, Youzhi; Xu, Lele; Yao, Li; Wu, Xia
2015-03-01
Leukoaraiosis (LA) describes diffuse white matter abnormalities on CT or MR brain scans, often seen in the normal elderly and in association with vascular risk factors such as hypertension, or in the context of cognitive impairment. The mechanism of cognitive dysfunction is still unclear. Recent clinical studies have revealed that the severity of LA does not correspond to the cognitive level, and functional connectivity analysis is an appropriate method to detect the relation between LA and cognitive decline. However, existing functional connectivity analyses of LA have been mostly limited to linear associations. In this investigation, a novel measure utilizing the extended maximal information coefficient (eMIC) was applied to construct non-linear functional connectivity in 44 LA subjects (9 dementia, 25 mild cognitive impairment (MCI) and 10 cognitively normal (CN)). The strength of non-linear functional connections for the first 1% of discriminative power increased in MCI compared with CN and dementia, which was the opposite of its linear counterpart. Further functional network analysis revealed that the changes in non-linear and linear connectivity have similar but not completely identical spatial distributions in the human brain. In the multivariate pattern analysis with multiple classifiers, the non-linear functional connectivity mostly identified dementia, MCI and CN from LA with a relatively higher accuracy rate than the linear measure. Our findings revealed that the non-linear functional connectivity provided useful discriminative power in the classification of LA, and that the spatially distributed changes between the non-linear and linear measures may indicate the underlying mechanism of cognitive dysfunction in LA.
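The eMIC itself is not reproduced here, but the contrast between a linear and a non-linear dependence measure that motivates it can be illustrated with a minimal sketch comparing Pearson correlation against a crude histogram-based mutual information estimate on synthetic signals; the data, the quadratic coupling and the bin count are assumptions, and mutual information is used only as a simple stand-in for a non-linear connectivity measure.

```python
import numpy as np

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate in nats (crude stand-in
    for a non-linear dependence measure; not the eMIC used in the paper)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# Quadratic coupling: near-zero Pearson correlation, clearly non-zero mutual information.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x ** 2 + 0.5 * rng.normal(size=5000)
print(pearson_r(x, y), mutual_information(x, y))
```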
Change in Stiffness of Pavement Layers in the Linear Discontinuous Deformation Area
NASA Astrophysics Data System (ADS)
Grygierek, Marcin
2017-10-01
Underground mining causes deformations on the surface that are classified as continuous or discontinuous. Mining deformations cause loosening or compression of the subsoil. Loosening reduces the stiffness of the subsoil and, in turn, of the construction layers built on it. Pavement is a specific case: if loosening occurs, the fatigue life of the pavement is reduced and premature damage such as fatigue cracking and/or structural deformation can be observed. Discontinuous deformations are an especially interesting case, because they not only reduce the stiffness of the subsoil and pavement layers but also cause rapid deterioration in roughness. A change in roughness is particularly dangerous on fast roads such as highways. Recently, so-called linear discontinuous surface deformations have been observed in traffic lanes in the mining area. Unfortunately, 'in situ' research on the effect of linear discontinuous deformations on pavements is in short supply. This is especially crucial with regard to the design of pavement reinforcement and the specification of the optimal length of the reinforced part of the road. The article presents the results of 'in situ' tests carried out on selected pavements where so-called linear discontinuous surface deformation has appeared; the damage originates from underground mining exploitation. A Falling Weight Deflectometer (FWD) was used in the tests. Measuring points were spaced at high density, which yielded a very informative distribution of deflections. The distribution of deflections clearly shows the impact of linear discontinuous deformation on the changes in stiffness of the pavement layers. In the analysis of the FWD data, back-calculation was used to determine the moduli of the layers. The results of the tests and analysis made it possible to quantify the scale of the stiffness reduction of the subsoil and pavement layers and, above all, to specify a minimal area of reinforcement. Therefore, the results of the analysis can be very helpful in determining the extent of reinforcement as well as in designing it. Of course, research should be continued to better understand the impact of discontinuous deformations on pavements.
Application of Higuchi's fractal dimension from basic to clinical neurophysiology: A review.
Kesić, Srdjan; Spasić, Sladjana Z
2016-09-01
For more than 20 years, Higuchi's fractal dimension (HFD), as a nonlinear method, has occupied an important place in the analysis of biological signals. The use of HFD has evolved from EEG and single neuron activity analysis to the most recent application in automated assessments of different clinical conditions. Our objective is to provide an updated review of the HFD method applied in basic and clinical neurophysiological research. This article summarizes and critically reviews a broad literature and major findings concerning the applications of HFD for measuring the complexity of neuronal activity during different neurophysiological conditions. The source of information used in this review comes from the PubMed, Scopus, Google Scholar and IEEE Xplore Digital Library databases. The review process substantiated the significance, advantages and shortcomings of HFD application within all key areas of basic and clinical neurophysiology. Therefore, the paper discusses HFD application alone, combined with other linear or nonlinear measures, or as a part of automated methods for analyzing neurophysiological signals. The speed, accuracy and cost of applying the HFD method for research and medical diagnosis make it stand out from the widely used linear methods. However, only a combination of HFD with other nonlinear methods ensures reliable and accurate analysis of a wide range of neurophysiological signals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
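For reference, one common implementation of Higuchi's fractal dimension is sketched below; the choice of k_max and the white-noise test signal are assumptions, and published variants differ slightly in the curve-length normalization.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi's fractal dimension of a 1-D signal (one common implementation)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # curve length for this offset, with the usual normalization factor
            length = np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k)
            Lk.append(length / k)
        L.append(np.mean(Lk))
    # slope of log L(k) versus log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(L), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=2000)))   # white noise: HFD close to 2
```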
International Linear Collider Technical Design Report (Volumes 1 through 4)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison M.
2013-03-27
The design report consists of four volumes: Volume 1, Executive Summary; Volume 2, Physics; Volume 3, Accelerator (Part I, R and D in the Technical Design Phase, and Part II, Baseline Design); and Volume 4, Detectors.
2014-10-20
Lava flows of Daedalia Planum can be seen at the top and bottom portions of this image from NASA 2001 Mars Odyssey spacecraft. The ridge and linear depression in the central part of the image are part of Mangala Fossa, a fault bounded graben.
Linear and Nonlinear Analysis of Brain Dynamics in Children with Cerebral Palsy
ERIC Educational Resources Information Center
Sajedi, Firoozeh; Ahmadlou, Mehran; Vameghi, Roshanak; Gharib, Masoud; Hemmati, Sahel
2013-01-01
This study was carried out to determine linear and nonlinear changes of brain dynamics and their relationships with the motor dysfunctions in CP children. For this purpose power of EEG frequency bands (as a linear analysis) and EEG fractality (as a nonlinear analysis) were computed in eyes-closed resting state and statistically compared between 26…
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
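The two recommended weighting schemes can be written down compactly; the following sketch implements (i) effective-sample-size weighting of Z-scores and (ii) inverse-variance weighting of allelic effects, with the per-study numbers being illustrative assumptions rather than real GWAS summary statistics.

```python
import numpy as np

def meta_z_sample_size(z, n_eff):
    """Effective-sample-size-weighted Z-score (Stouffer-type) meta-analysis."""
    w = np.sqrt(np.asarray(n_eff, dtype=float))
    return np.sum(w * z) / np.sqrt(np.sum(w ** 2))

def meta_inverse_variance(beta, se):
    """Inverse-variance-weighted fixed-effects meta-analysis of allelic effects,
    assumed to have already been converted onto the log-odds scale."""
    w = 1.0 / np.asarray(se, dtype=float) ** 2
    beta_meta = np.sum(w * beta) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    return beta_meta, se_meta, beta_meta / se_meta   # effect, SE, Z

# Illustrative per-study statistics; N_eff = 4 / (1/N_cases + 1/N_controls).
z = np.array([2.1, 1.4, 2.8])
n_eff = np.array([4 / (1/500 + 1/1500), 4 / (1/200 + 1/800), 4 / (1/1000 + 1/1000)])
print(meta_z_sample_size(z, n_eff))
print(meta_inverse_variance(np.array([0.12, 0.08, 0.15]), np.array([0.05, 0.06, 0.05])))
```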
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H.; Wu, S. Z.; Zhou, C. T.
2013-09-15
The dispersion relation of one-dimensional longitudinal plasma waves in relativistic homogeneous plasmas is investigated with both linear theory and Vlasov simulation in this paper. From the Vlasov-Poisson equations, the linear dispersion relation is derived for the proper one-dimensional Jüttner distribution. Numerically obtained linear dispersion relation as well as an approximate formula for plasma wave frequency in the long wavelength limit is given. The dispersion of longitudinal wave is also simulated with a relativistic Vlasov code. The real and imaginary parts of dispersion relation are well studied by varying wave number and plasma temperature. Simulation results are in agreement with established linear theory.
Bearing-Load Modeling and Analysis Study for Mechanically Connected Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2006-01-01
Bearing-load response for a pin-loaded hole is studied within the context of two-dimensional finite element analyses. Pin-loaded-hole configurations are representative of mechanically connected structures, such as a stiffener fastened to a rib of an isogrid panel, that are idealized as part of a larger structural component. Within this context, the larger structural component may be idealized as a two-dimensional shell finite element model to identify load paths and high stress regions. Finite element modeling and analysis aspects of a pin-loaded hole are considered in the present paper including the use of linear and nonlinear springs to simulate the pin-bearing contact condition. Simulating pin-connected structures within a two-dimensional finite element analysis model using nonlinear spring or gap elements provides an effective way for accurate prediction of the local effective stress state and peak forces.
Auditory motion-specific mechanisms in the primate brain
Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.
2017-01-01
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038
NASA Technical Reports Server (NTRS)
Hardrath, H. F.; Newman, J. C., Jr.; Elber, W.; Poe, C. C., Jr.
1978-01-01
The limitations of linear elastic fracture mechanics in aircraft design and in the study of fatigue crack propagation in aircraft structures are discussed. NASA-Langley research to extend the capabilities of fracture mechanics to predict the maximum load that can be carried by a cracked part and to deal with aircraft design problems are reported. Achievements include: (1) improved stress intensity solutions for laboratory specimens; (2) fracture criterion for practical materials; (3) crack propagation predictions that account for mean stress and high maximum stress effects; (4) crack propagation predictions for variable amplitude loading; and (5) the prediction of crack growth and residual stress in built-up structural assemblies. These capabilities are incorporated into a first generation computerized analysis that allows for damage tolerance and tradeoffs with other disciplines to produce efficient designs that meet current airworthiness requirements.
Beam ion acceleration by ICRH in JET discharges
NASA Astrophysics Data System (ADS)
Budny, R. V.; Gorelenkova, M.; Bertelli, N.; JET Collaboration
2015-11-01
The ion Monte-Carlo orbit integrator NUBEAM, used in TRANSP, has been enhanced to include an "RF-kick" operator to simulate the interaction of RF fields and fast ions. The RF quasi-linear operator (localized in space) uses a second R-Z orbit integrator. We apply this to the analysis of recent JET discharges using ICRH with the ITER-like first wall. As an example, for a high-performance Hybrid discharge for which standard TRANSP analysis simulated the DD neutron emission rate below measurements, re-analysis using the RF-kick operator results in increased beam parallel and perpendicular energy densities (~=40% and 15% respectively) and increased beam-thermal neutron emission (~=35%), making the total rate closer to the measurement. Checks of the numerics, comparisons with measurements, and ITER implications will be presented. Supported in part by the US DoE contract DE-AC02-09CH11466 and by EUROfusion No 633053.
On the transition towards slow manifold in shallow-water and 3D Euler equations in a rotating frame
NASA Technical Reports Server (NTRS)
Mahalov, A.
1994-01-01
The long-time, asymptotic state of the rotating homogeneous shallow-water equations is investigated. Our analysis is based on long-time averaged rotating shallow-water equations describing interactions of large-scale, horizontal, two-dimensional motions with the surface inertia-gravity wave field for a shallow, uniformly rotating fluid layer. These equations are obtained in two steps: first by introducing a Poincare/Kelvin linear propagator directly into the classical shallow-water equations, then by averaging. The averaged equations describe the interaction of wave fields with large-scale motions on time scales long compared to the time scale 1/f_0 introduced by rotation (f_0/2 being the angular velocity of the background rotation). The present analysis is similar to the one presented by Waleffe (1991) for the 3D Euler equations in a rotating frame. However, since three-wave interactions in the rotating shallow-water equations are forbidden, the final equations describing the asymptotic state are simplified considerably. Special emphasis is given to a new conservation law found in the asymptotic state and to the decoupling of the dynamics of the divergence-free part of the velocity field. The possible emergence of decoupled dynamics in the asymptotic state is also investigated for homogeneous turbulence subjected to a background rotation. In our analysis we use a long-time expansion, where the velocity field is decomposed into the 'slow manifold' part (the manifold which is unaffected by the linear 'rapid' effects of rotation or the inertial waves) and a formal 3D disturbance. We derive the physical-space version of the long-time averaged equations and consider an invariant, basis-free derivation. This formulation can be used to generalize Waleffe's (1991) helical decomposition to viscous inhomogeneous flows (e.g. problems in cylindrical geometry with no-slip boundary conditions on the cylinder surface and homogeneous in the vertical direction).
NASA Astrophysics Data System (ADS)
Patel, Niravkumar D.; Mehta, Rahul; Ali, Nawab; Soulsby, Michael; Chowdhury, Parimal
2013-04-01
The aim of this study was to determine the composition of the leg bone tissue of rats that were exposed to simulated microgravity by hind-limb suspension (HLS) by the tail for one week. The leg bones were cross-sectioned, cleaned of soft tissues, dried and sputter coated, and then placed horizontally on the stage of a Scanning Electron Microscope (SEM) for analysis. Interaction of a 17.5 keV electron beam, incident from the vertical direction on the sample, generated images using two detectors. X-rays emitted from the sample during electron bombardment were measured with the Energy Dispersive Spectroscopy (EDS) feature of the SEM using a liquid-nitrogen-cooled Si(Li) detector with a resolution of 144 eV at 5.9 keV (Mn Kα x-ray). Kα x-rays from carbon, oxygen, phosphorus and calcium formed the major peaks in the spectrum. Relative percentages of these elements were determined using software that also corrects for the ZAF factors, namely Z (atomic number), A (X-ray absorption) and F (characteristic fluorescence). The x-rays from the control groups and from the experimental (HLS) groups were analyzed on well-defined parts (femur, tibia and knee) of the leg bone. The SEM analysis shows that there are definite changes in the hydroxyl or phosphate group of the main component of the bone structure, hydroxyapatite [Ca10(PO4)6(OH)2], due to hind-limb suspension. In a separate experiment, entire leg bones (both from HLS and control rats) were subjected to mechanical stress by means of a variable force. The stress vs. strain graph was fitted with linear and polynomial functions, and the parameters reflecting the mechanical strength of the bone under increasing stress were calculated. From the slope of the linear part of the graph, the Young's modulus for HLS bones was calculated and found to be 2.49 times smaller than that for control bones.
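The Young's modulus estimate from the slope of the linear part of a stress-strain curve can be illustrated with a short sketch; the synthetic bilinear curve and the fraction of points treated as "linear" are assumptions, not the measured bone data.

```python
import numpy as np

def youngs_modulus(strain, stress, linear_fraction=0.3):
    """Fit the initial (approximately linear) part of a stress-strain curve and
    return the slope as an estimate of Young's modulus."""
    n = max(2, int(len(strain) * linear_fraction))
    slope, intercept = np.polyfit(strain[:n], stress[:n], 1)
    return slope

# Synthetic curve: linear up to a yield strain of 0.01, then softening (illustrative only).
strain = np.linspace(0, 0.02, 200)
stress = np.where(strain < 0.01, 5e9 * strain, 5e7 + 1e9 * (strain - 0.01))
print(youngs_modulus(strain, stress))   # ~5e9 Pa (5 GPa) for the synthetic data
```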
Can the Stark-Einstein law resolve the measurement problem from an animate perspective?
Thaheld, Fred H
2015-09-01
Analysis of the Stark-Einstein law as it applies to the retinal molecule, which is part of the rhodopsin molecule within the rod cells of the retina, reveals that it may provide the solution to the measurement problem from an animate perspective, in that it represents a natural boundary where the Schrödinger equation or wave function automatically goes from linear to nonlinear while remaining in a deterministic state. It will be possible in the near future to subject this theory to empirical tests, as has been previously proposed. This analysis provides a contrast to the inanimate measurement problem, which has been well studied and debated for many decades, and would represent an addition to the Stark-Einstein law involving information carried by the photon. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Myers, David E.; Pineda, Evan J.; Zalewski, Bart F.; Kosareo, Daniel N.; Kellas, Sotiris
2013-01-01
Four honeycomb sandwich panels, representing 1/16th arc segments of a 10-m diameter barrel section of the heavy lift launch vehicle, were manufactured under the NASA Composites for Exploration program and the NASA Space Launch Systems program. Two configurations were chosen for the panels: 6-ply facesheets with 1.125 in. honeycomb core and 8-ply facesheets with 1.000 in. honeycomb core. Additionally, two separate carbon fiber/epoxy material systems were chosen for the facesheets: in-autoclave IM7/977-3 and out-of-autoclave T40-800b/5320-1. Smaller 3.00- by 5.00-ft panels were cut from the 1/16th barrel sections. These panels were tested under compressive loading at the NASA Langley Research Center. Furthermore, linear eigenvalue and geometrically nonlinear finite element analyses were performed to predict the compressive response of the 3.00- by 5.00-ft panels. This manuscript summarizes the experimental and analytical modeling efforts pertaining to the panel composed of 8-ply, IM7/977-3 facesheets (referred to as Panel A). To improve the robustness of the geometrically nonlinear finite element model, measured surface imperfections were included in the geometry of the model. Both the linear and nonlinear models yield good qualitative and quantitative predictions. Additionally, it was predicted correctly that the panel would fail in buckling prior to failing in strength. Furthermore, several imperfection studies were performed to investigate the influence of geometric imperfections, fiber misalignments, and three-dimensional (3-D) effects on the compressive response of the panel.
A practical data processing workflow for multi-OMICS projects.
Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin
2014-01-01
Multi-OMICS approaches aim at the integration of quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur within this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, a software tool called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and annotation with existing knowledge, using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that, for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient, and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable to other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high-throughput techniques. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.
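A minimal version of the Transcriptomics-vs-Proteomics linear regression described above, with an R² to quantify how little variance such a fit captures, might look like the following sketch; the simulated log-ratios and the noise level are assumptions, not PROFILE data.

```python
import numpy as np

def linear_fit_r2(x, y):
    """Ordinary least-squares fit y ~ a*x + b and the coefficient of determination."""
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Illustrative log-ratios for matched transcripts/proteins (assumed data).
rng = np.random.default_rng(0)
mrna = rng.normal(size=500)
protein = 0.4 * mrna + rng.normal(scale=1.0, size=500)   # weak linear coupling
print(linear_fit_r2(mrna, protein))                      # small slope, low R^2
```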
NASA Astrophysics Data System (ADS)
Ge, Fei; Sielmann, Frank; Zhu, Xiuhua; Fraedrich, Klaus; Zhi, Xiefei; Peng, Ting; Wang, Lei
2017-12-01
The thermal forcing of the Tibetan Plateau (TP) is analyzed to investigate the formation and variability of Tibetan Plateau Summer Monsoon (TPSM), which affects the climates of the surrounding regions, in particular the Indian summer monsoon precipitation. Dynamic composites and statistical analyses indicate that the Indian summer monsoon precipitation is less/greater than normal during the strong/weak TPSM. Strong (weak) TPSM is associated with an anomalous near surface cyclone (anticyclone) over the western part of the Tibetan Plateau, enhancing (reducing) the westerly flow along its southern flank, suppressing (favoring) the meridional flow of warm and moist air from the Indian ocean and thus cutting (providing) moisture supply for the northern part of India and its monsoonal rainfall. These results are complemented by a dynamic and thermodynamic analysis: (i) A linear thermal vorticity forcing primarily describes the influence of the asymmetric heating of TP generating an anomalous stationary wave flux. Composite analysis of anomalous stationary wave flux activity (after Plumb in J Atmos Sci 42:217-229, 1985) strongly indicate that non-orographic effects (diabatic heating and/or interaction with transient eddies) of the Tibetan Plateau contribute to the generation of an anomalous cyclone (anti-cyclone) over the western TP. (ii) Anomalous TPSM generation shows that strong TPSM years are related to the positive surface sensible heating anomalies over the eastern TP favoring the strong diabatic heating in summer. While negative TPSM years are associated with the atmospheric circulation anomalies during the preceding spring, enhancing northerly dry-cold air intrusions into TP, which may weaken the condensational heat release in the middle and upper troposphere, leading to a weaker than normal summer monsoon over the TP in summer.
López-de-Ipiña, Karmele; Alonso, Jesus-Bernardino; Travieso, Carlos Manuel; Solé-Casals, Jordi; Egiraun, Harkaitz; Faundez-Zanuy, Marcos; Ezeiza, Aitzol; Barroso, Nora; Ecay-Torres, Miriam; Martinez-Lage, Pablo; de Lizardui, Unai Martinez
2013-01-01
The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine in a pilot study the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to the improvement of diagnosis of AD and its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two human issues have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non invasive, low cost and without any side effects. Obtained experimental results were very satisfactory and promising for early diagnosis and classification of AD patients. PMID:23698268
Modeling and control design of a wind tunnel model support
NASA Technical Reports Server (NTRS)
Howe, David A.
1990-01-01
The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.
Lien, Chi-Hsiang; Tilbury, Karissa; Chen, Shean-Jen; Campagnola, Paul J
2013-01-01
Second Harmonic Generation (SHG) microscopy coupled with polarization analysis has great potential for use in tissue characterization, as molecular and supramolecular structural details can be extracted. Such measurements are difficult to perform quickly and accurately. Here we present a new method that uses a liquid crystal modulator (LCM) located in the infinity space of a SHG laser scanning microscope that allows the generation of any desired linear or circular polarization state. As the device contains no moving parts, polarization can be rotated accurately and faster than by manual or motorized control. The performance in terms of polarization purity was validated using Stokes vector polarimetry, and found to have minimal residual polarization ellipticity. SHG polarization imaging characteristics were validated against well-characterized specimens having cylindrical and/or linear symmetries. The LCM has a small footprint and can be implemented easily in any standard microscope and is cost effective relative to other technologies.
NASA Technical Reports Server (NTRS)
Weatherill, Warren H.; Ehlers, F. Edward
1989-01-01
A finite difference method for solving the unsteady transonic flow about harmonically oscillating wings is investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady potential is linear with spatially varying coefficients and with the time variable eliminated by assuming harmonic motion. Difference equations are derived for harmonic transonic flow to include a coordinate transformation for swept and tapered planforms. A pilot program is developed for three-dimensional planar lifting surface configurations (including thickness) for the CRAY-XMP at Boeing Commercial Airplanes and for the CYBER VPS-32 at the NASA Langley Research Center. An investigation is made of the effect of the location of the outer boundaries on accuracy for very small reduced frequencies. Finally, the pilot program is applied to the flutter analysis of a rectangular wing.
Lien, Chi-Hsiang; Tilbury, Karissa; Chen, Shean-Jen; Campagnola, Paul J.
2013-01-01
Second Harmonic Generation (SHG) microscopy coupled with polarization analysis has great potential for use in tissue characterization, as molecular and supramolecular structural details can be extracted. Such measurements are difficult to perform quickly and accurately. Here we present a new method that uses a liquid crystal modulator (LCM) located in the infinity space of a SHG laser scanning microscope that allows the generation of any desired linear or circular polarization state. As the device contains no moving parts, polarization can be rotated accurately and faster than by manual or motorized control. The performance in terms of polarization purity was validated using Stokes vector polarimetry, and found to have minimal residual polarization ellipticity. SHG polarization imaging characteristics were validated against well-characterized specimens having cylindrical and/or linear symmetries. The LCM has a small footprint and can be implemented easily in any standard microscope and is cost effective relative to other technologies. PMID:24156059
NASA Astrophysics Data System (ADS)
Kwon, Young-Sam; Li, Fucai
2018-03-01
In this paper we study the incompressible limit of the degenerate quantum compressible Navier-Stokes equations in a periodic domain T3 and the whole space R3 with general initial data. In the periodic case, by applying the refined relative entropy method and carrying out the detailed analysis on the oscillations of velocity, we prove rigorously that the gradient part of the weak solutions (velocity) of the degenerate quantum compressible Navier-Stokes equations converge to the strong solution of the incompressible Navier-Stokes equations. Our results improve considerably the ones obtained by Yang, Ju and Yang [25] where only the well-prepared initial data case is considered. While for the whole space case, thanks to the Strichartz's estimates of linear wave equations, we can obtain the convergence of the weak solutions of the degenerate quantum compressible Navier-Stokes equations to the strong solution of the incompressible Navier-Stokes/Euler equations with a linear damping term. Moreover, the convergence rates are also given.
Pulse-by-pulse energy measurement at the Stanford Linear Collider
NASA Astrophysics Data System (ADS)
Blaylock, G.; Briggs, D.; Collins, B.; Petree, M.
1992-01-01
The Stanford Linear Collider (SLC) collides a beam of electrons and positrons at 92 GeV. It is the first colliding linac, and produces Z^0 particles for High-Energy Physics measurements. The energy of each beam must be measured to one part in 10^4 on every collision (120 Hz). An Energy Spectrometer in each beam line after the collision produces two stripes of high-energy synchrotron radiation with critical energy of a few MeV. The distance between these two stripes at an imaging plane measures the beam energy. The Wire-Imaging Synchrotron Radiation Detector (WISRD) system comprises a novel detector, data acquisition electronics, readout, and analysis. The detector comprises an array of wires for each synchrotron stripe. The electronics measure secondary emission charge on each wire of each array. A Macintosh II (using THINK C, THINK Class Library) and DSP coprocessor (using ANSI C) acquire and analyze the data, and display and report the results for SLC operation.
Thermal mechanical analysis of sprag clutches
NASA Technical Reports Server (NTRS)
Mullen, Robert L.; Zab, Ronald Joseph; Kurniawan, Antonius S.
1992-01-01
Work done at Case Western Reserve University on the Thermal Mechanical analysis of sprag helicopter clutches is reported. The report is presented in two parts. The first part is a description of a test rig for the measurement of the heat generated by high speed sprag clutch assemblies during cyclic torsional loading. The second part describes a finite element modeling procedure for sliding contact. The test rig provides a cyclic torsional load of 756 inch-pounds at 5000 rpm using a four-square arrangement. The sprag clutch test unit was placed between the high speed pinions of the circulating power loop. The test unit was designed to have replaceable inner and outer races, which contain the instrumentation to monitor the sprag clutch. The torque loading device was chosen to be a water cooled magnetic clutch, which is controlled either manually or through a computer. In the second part, a Generalized Eulerian-Lagrangian formulation for non-linear dynamic problems is developed for solid materials. This formulation is derived from the basic laws and axioms of continuum mechanics. The novel aspect of this method is that we are able to investigate the physics in the spatial region of interest as material flows through it without having to follow material points. A finite element approximation to the governing equations is developed. Iterative methods for the solution of the discrete finite element equations are explored. A FORTRAN program to implement this formulation is developed and a number of solutions to problems of sliding contact are presented.
D'Antone, Carmelisa; Punturo, Rosalda; Vaccaro, Carmela
2017-04-01
A geochemical and statistical approach has allowed rare earth element (REE) absorption to be identified as a good fingerprint for determining the territoriality and the provenance of Vitis vinifera L. in the district of Mount Etna (southern Italy). Our aim is to define the REE distribution in different parts of plants which grow in the same volcanic soil and under the same climate conditions, and therefore to assess whether the REE distribution reflects the composition of the provenance soil or whether plants selectively absorb REEs, in order to recognize the fingerprint of the Etna volcano soils as well as the REE pattern characteristic of each cultivar of V. vinifera L. The characteristic pattern of REEs has been determined by ICP-MS analyses in the soils and in the selected grapevine varieties for all the following parts: leaves, seeds, juice, skin, and berries. These geochemical criteria, together with the multivariate statistical analyses of principal component analysis (PCA) and linear discriminant analysis (LDA), which can be summarized with box plots, suggest that leaves absorb REEs more than the other parts of the plant. This work investigates the various parts of the plant in order to verify whether each grape variety presents a characteristic geochemical pattern in the absorption of REEs in relation to the geochemical features of the soil, so as to highlight the individual compositional fingerprint. Based on REE patterns, our study is a useful tool that allows characterizing the differences among the grape varieties and lays the foundation for the use of REEs in tracing the geographic origin of the Mount Etna wine district.
A Linear Stochastic Dynamical Model of ENSO. Part II: Analysis.
NASA Astrophysics Data System (ADS)
Thompson, C. J.; Battisti, D. S.
2001-02-01
In this study the behavior of a linear, intermediate model of ENSO is examined under stochastic forcing. The model was developed in a companion paper (Part I) and is derived from the Zebiak-Cane ENSO model. Four variants of the model are used whose stabilities range from slightly damped to moderately damped. Each model is run as a simulation while being perturbed by noise that is uncorrelated (white) in space and time. The statistics of the model output show the moderately damped models to be more realistic than the slightly damped models. The moderately damped models have power spectra that are quantitatively quite similar to observations, and a seasonal pattern of variance that is qualitatively similar to observations. All models produce ENSOs that are phase locked to the annual cycle, and all display the `spring barrier' characteristic in their autocorrelation patterns, though in the models this `barrier' occurs during the summer and is less intense than in the observations (inclusion of nonlinear effects is shown to partially remedy this deficiency). The more realistic models also show a decadal variability in the lagged autocorrelation pattern that is qualitatively similar to observations.Analysis of the models shows that the greatest part of the variability comes from perturbations that project onto the first singular vector, which then grow rapidly into the ENSO mode. Essentially, the model output represents many instances of the ENSO mode, with random phase and amplitude, stimulated by the noise through the optimal transient growth of the singular vectors.The limit of predictability for each model is calculated and it is shown that the more realistic (moderately damped) models have worse potential predictability (9-15 months) than the deterministic chaotic models that have been studied widely in the literature. The predictability limits are strongly correlated with the stability of the models' ENSO mode-the more highly damped models having much shorter limits of predictability. A comparison of the two most realistic models shows that even though these models have similar statistics, they have very different predictability limits. The models have a strong seasonal dependence to their predictability limits.The results of this study (with the companion paper) suggest that the linear, stable dynamical model of ENSO is indeed a plausible hypothesis for the observed ENSO. With very reasonable levels of stochastic forcing, the model produces realistic levels of variance, has a realistic spectrum, and qualitatively reproduces the observed seasonal pattern of variance, the autocorrelation pattern, and the ENSO-like decadal variability.
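The optimal-transient-growth mechanism invoked above (noise projecting onto the leading singular vector and then growing into the ENSO mode) can be sketched for any stable linear system; the 2x2 non-normal matrix and the lead time below are toy assumptions and stand in for, but are not, the Part I model.

```python
import numpy as np
from scipy.linalg import expm

def optimal_growth(A, tau):
    """Singular vectors of the propagator exp(A*tau) for a stable linear system:
    the leading right singular vector is the optimal initial perturbation and
    the square of the leading singular value is the maximum energy growth over tau."""
    M = expm(A * tau)
    U, s, Vh = np.linalg.svd(M)
    return s[0] ** 2, Vh[0, :], U[:, 0]   # growth factor, optimal input, evolved pattern

# Toy damped, non-normal 2x2 system (an illustration, not the Zebiak-Cane-derived model).
A = np.array([[-0.1, 2.0],
              [0.0, -0.3]])
growth, v_in, v_out = optimal_growth(A, tau=6.0)
print(growth)   # > 1 despite both eigenvalues being damped
```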
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konor, Celal S.; Randall, David A.
We use a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the quasi-geostrophic anelastic baroclinic and barotropic Rossby modes on a midlatitude β plane. The dispersion equations are derived for the linearized anelastic system, discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of various horizontal grid spacings and vertical wavenumbers are discussed. A companion paper, Part 1, discusses the impacts of the discretization on the inertia–gravity modes on a midlatitude f plane. The results of our normal-mode analyses for the Rossby waves overall support the conclusions of the previous studies obtained with the shallow-water equations. We identify an area of disagreement with the E-grid solution.
Analytical study on the thermal performance of a partially wet constructal T-shaped fin
NASA Astrophysics Data System (ADS)
Hazarika, Saheera Azmi; Zeeshan, Mohd; Bhanja, Dipankar; Nath, Sujit
2017-07-01
The present paper addresses the thermal analysis of a T-shaped fin under partially wet condition by adopting a cubic variation of the humidity ratio of saturated air with the corresponding fin surface temperature. The point separating the dry and wet parts may lie either in the flange or stem part of the fin and so, two different cases having different governing equations and boundary conditions are analyzed in this paper. Since the governing equations are highly non-linear, they are solved by using an analytical technique called the Differential Transform Method and subsequently, the dry fin length, temperature distribution and fin performances are evaluated and analyzed for a wide range of the various psychometric, geometric and thermo-physical parameters. Finally, it can be highlighted that relative humidity has a pronounced effect on the performance parameters when the fin surface is partially wet whereas this effect is marginally small for fully wet surface.
Advanced Aerodynamic Design of Passive Porosity Control Effectors
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Viken, Sally A.; Wood, Richard M.; Bauer, Steven X. S.
2001-01-01
This paper describes aerodynamic design work aimed at developing a passive porosity control effector system for a generic tailless fighter aircraft. As part of this work, a computational design tool was developed and used to layout passive porosity effector systems for longitudinal and lateral-directional control at a low-speed, high angle of attack condition. Aerodynamic analysis was conducted using the NASA Langley computational fluid dynamics code USM3D, in conjunction with a newly formulated surface boundary condition for passive porosity. Results indicate that passive porosity effectors can provide maneuver control increments that equal and exceed those of conventional aerodynamic effectors for low-speed, high-alpha flight, with control levels that are a linear function of porous area. This work demonstrates the tremendous potential of passive porosity to yield simple control effector systems that have no external moving parts and will preserve an aircraft's fixed outer mold line.
Analysis of Mode II Crack in Bilayered Composite Beam
NASA Astrophysics Data System (ADS)
Rizov, Victor I.; Mladensky, Angel S.
2012-06-01
Mode II crack problem in cantilever bilayered composite beams is considered. Two configurations are analyzed. In the first configuration the crack arms have equal heights while in the second one the arms have different heights. The modulus of elasticity and the shear modulus of the beam un-cracked part in the former case and the moment of inertia in the latter are derived as functions of the two layers characteristics. The expressions for the strain energy release rate,
Are EUR and GBP different words for the same currency?
NASA Astrophysics Data System (ADS)
Ivanova, K.; Ausloos, M.
2002-05-01
The British Pound (GBP) is not part of the Euro (EUR) monetary system. In order to find arguments on whether GBP should join the EUR or not, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 till June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution, but has fat tails for the large-size fluctuations. Within the Detrended Fluctuation Analysis (DFA) statistical method, the power-law behavior describing the root-mean-square deviation of the exchange-rate fluctuations from a linear trend is obtained as a function of time for the time interval of interest. The evolution of the time-dependent exponent of the exchange-rate fluctuations is given. Statistical considerations imply that the GBP is already behaving as a true EUR.
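A bare-bones version of the DFA procedure used here (root-mean-square deviation from a linear trend of the integrated series, as a function of window size) is sketched below; the white-noise return series and the set of window sizes are assumptions for illustration.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (first order): RMS deviation of the
    integrated series from a linear trend, as a function of window size."""
    y = np.cumsum(x - np.mean(x))              # integrated (profile) series
    F = []
    for s in scales:
        n_win = len(y) // s
        sq_dev = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            sq_dev.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq_dev)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
    return np.array(F), alpha

# Illustrative exchange-rate-like series: uncorrelated daily log-return fluctuations.
rng = np.random.default_rng(0)
returns = rng.normal(size=2000)
scales = np.array([8, 16, 32, 64, 128, 256])
print(dfa(returns, scales)[1])   # ~0.5 for uncorrelated fluctuations
```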
NASA Astrophysics Data System (ADS)
Fishkova, T. Ya.
2018-01-01
An optimal set of geometric and electrical parameters of a high-aperture electrostatic charged-particle spectrograph with a range of simultaneously recorded energies of E/E_min = 1-50 has been found by computer simulation, which is especially important for the energy analysis of charged particles during fast processes in various materials. The spectrograph consists of two coaxial electrodes with end faces closed by flat electrodes. The external electrode with a conical-cylindrical form is cut into parts with potentials that increase linearly, except for the last cylindrical part, which is electrically connected to the rear end electrode. The internal cylindrical electrode and the front end electrode are grounded. In the entire energy range, the system is sharply focused on the internal cylindrical electrode, which provides an energy resolution of no worse than 3 × 10^-3.
Konor, Celal S.; Randall, David A.
2018-05-08
We use a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the quasi-geostrophic anelastic baroclinic and barotropic Rossby modes on a midlatitude β plane. The dispersion equations are derived for the linearized anelastic system, discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of various horizontal grid spacings and vertical wavenumbers are discussed. A companion paper, Part 1, discusses the impacts of the discretization on the inertia–gravity modes on a midlatitude f plane. The results of our normal-mode analyses for the Rossby waves overall support the conclusions of the previous studies obtained with the shallow-water equations. We identify an area of disagreement with the E-grid solution.
NASA Astrophysics Data System (ADS)
Konor, Celal S.; Randall, David A.
2018-05-01
We use a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the quasi-geostrophic anelastic baroclinic and barotropic Rossby modes on a midlatitude β plane. The dispersion equations are derived for the linearized anelastic system, discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of various horizontal grid spacings and vertical wavenumbers are discussed. A companion paper, Part 1, discusses the impacts of the discretization on the inertia-gravity modes on a midlatitude f plane. The results of our normal-mode analyses for the Rossby waves overall support the conclusions of the previous studies obtained with the shallow-water equations. We identify an area of disagreement with the E-grid solution.
NASA Technical Reports Server (NTRS)
Blum, P. W.; Harris, I.
1975-01-01
The equations of horizontal motion of the neutral atmosphere between 120 and 500 km are integrated with the inclusion of all nonlinear terms of the convective derivative and the viscous forces due to vertical and horizontal velocity gradients. Empirical models of the distribution of neutral and charged particles are assumed to be known. The model of velocities developed is a steady-state model. In Part I the mathematical method used in the integration of the Navier-Stokes equations is described and the various forces are analyzed. Results of the method given in Part I are presented and compared with previous calculations and observations of upper atmospheric winds. The conclusions are that nonlinear effects are significant only in the equatorial region, especially under solstice conditions, and that nonlinear effects do not produce any superrotation.
Classroom Demonstrations of Polymer Principles Part II. Polymer Formation.
ERIC Educational Resources Information Center
Rodriguez, F.; And Others
1987-01-01
This is part two in a series on classroom demonstrations of polymer principles. Described is how large molecules can be assembled from subunits (the process of polymerization). Examples chosen include both linear and branched or cross-linked molecules. (RH)
Lifelong modelling of properties for materials with technological memory
NASA Astrophysics Data System (ADS)
Falaleev, AP; Meshkov, VV; Vetrogon, AA; Ogrizkov, SV; Shymchenko, AV
2016-10-01
An investigation of real automobile parts produced from dual-phase steel during standard periods of the life cycle is presented, covering such processes as stamping, in-service operation, automobile accidents, and subsequent repair. The development of the phenomenological model of the mechanical properties of such parts was based on the two-surface plasticity theory of Chaboche. As a consequence of the composite structure of dual-phase steel, it was shown that the local mechanical properties of parts produced from this material change significantly during their life cycle, depending on the accumulated plastic deformation and thermal treatments. Such changes in mechanical properties have a considerable impact on the accuracy of computer modelling of automobile behaviour. The most significant modelling errors were obtained at critical operating conditions, such as crashes and accidents. The model developed takes into account kinematic hardening (the Bauschinger effect), isotropic hardening, non-linear elastic behaviour of the steel, and changes caused by thermal treatment. Using finite element analysis, the model allows the evaluation of the passive safety of a repaired car body, and enables increased restoration accuracy following an accident. The model was confirmed experimentally for parts produced from dual-phase steel DP780.
NASA Astrophysics Data System (ADS)
Zlatkina, O. Yu
2018-04-01
There is a relationship between the service properties of component parts and their geometry; therefore, to predict and control the operational characteristics of parts and machines, it is necessary to measure their geometrical specifications. In modern production, the coordinate measuring machine is the most advanced instrument for measuring the geometrical specifications of products. The analysis of publications has shown that the problems of choosing the locating chart of parts and of their coordination during coordinate measurements have not been sufficiently studied. A special role in the coordination of a part is played by the informational content of the coordinate axes. Informational content is the sum of the degrees of freedom constrained by an elementary feature of a part. The coordinate planes of a rectangular coordinate system have different informational content (three, two, and one), while the coordinate axes have informational content of four, two and zero. The higher the informational content of a coordinate plane or axis, the higher its priority for reading angular and linear coordinates. A geometrical model of the object of coordinate measurement that takes into account the informational content of the coordinate planes and axes makes it possible to reveal clearly the interrelationship between the coordinates of deviations in location, the sizes, and the deviations of surface form. The geometrical model helps to select the optimal locating chart of parts for bringing the machine coordinate system to the part coordinate system. The article presents an algorithm for constructing such a model of geometrical specifications, using the example of the piston rod of a compressor.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
40 CFR Appendix II to Part 1042 - Steady-State Duty Cycles
Code of Federal Regulations, 2011 CFR
2011-07-01
... the maximum test power. 3 Advance from one mode to the next within a 20-second transition phase. During the transition phase, command a linear progression from the torque setting of the current mode to... transition phase, command a linear progression from the torque setting of the current mode to the torque...
2015-05-22
The linear wall at the bottom of this image from NASA's 2001 Mars Odyssey spacecraft is a fault. The linear depression caused by faulting is part of a long depression called Mangala Fossae. Orbit Number: 58979 Latitude: -17.9823 Longitude: 210.806 Instrument: VIS Captured: 2015-04-01 00:54 http://photojournal.jpl.nasa.gov/catalog/PIA19468
A Framework for Mathematical Thinking: The Case of Linear Algebra
ERIC Educational Resources Information Center
Stewart, Sepideh; Thomas, Michael O. J.
2009-01-01
Linear algebra is one of the unavoidable advanced courses that many mathematics students encounter at university level. The research reported here was part of the first author's recent PhD study, where she created and applied a theoretical framework combining the strengths of two major mathematics education theories in order to investigate the…
Mass perturbation techniques for tuning and decoupling of a Disk Resonator Gyroscope
NASA Astrophysics Data System (ADS)
Schwartz, David
Axisymmetric microelectromechanical (MEM) vibratory rate gyroscopes are designed so that the two Coriolis-coupled modes exploited for rate sensing possess equal modal frequencies and so that the central post which attaches the resonator to the sensor case is a nodal point of these two modes. The former quality maximizes the signal-to-noise ratio of the sensor, while the latter quality eliminates any coupling of linear acceleration to the modes of interest, which, if present, creates spurious rate signals in response to linear vibration of the sensor case. When the gyro resonators are fabricated, however, small mass and stiffness asymmetries cause the frequencies of the two modes to deviate from each other and couple these modes to linear acceleration. In a resonator post-fabrication step, these effects can be reduced by altering the mass distribution of the resonator. In this dissertation, a scale model of the axisymmetric resonator of the Disk Resonator Gyroscope (DRG) is used to develop and test methods that successfully reduce frequency detuning (Part I) and linear acceleration coupling (Part II) through guided mass perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konor, Celal S.; Randall, David A.
We have used a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the nonhydrostatic anelastic inertia–gravity modes on a midlatitude f plane. The dispersion equations are derived from the linearized anelastic equations that are discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of both horizontal grid spacing and vertical wavenumber are analyzed, and the role of nonhydrostatic effects is discussed. We also compare the results of the normal-mode analyses with numerical solutions obtained by running linearized numerical models based on the various horizontal grids. The sources and behaviors of the computational modes in the numerical simulations are also examined. Our normal-mode analyses with the Z, C, D, A, E and B grids generally confirm the conclusions of previous shallow-water studies for the cyclone-resolving scales (with low horizontal wavenumbers). We conclude that, aided by nonhydrostatic effects, the Z and C grids become overall more accurate for cloud-resolving resolutions (with high horizontal wavenumbers) than for the cyclone-resolving scales. A companion paper, Part 2, discusses the impacts of the discretization on the Rossby modes on a midlatitude β plane.
Konor, Celal S.; Randall, David A.
2018-05-08
We have used a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the nonhydrostatic anelastic inertia–gravity modes on a midlatitude f plane. The dispersion equations are derived from the linearized anelastic equations that are discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of both horizontal grid spacing and vertical wavenumber are analyzed, and the role of nonhydrostatic effects is discussed. We also compare the results of the normal-mode analyses with numerical solutions obtained by running linearized numerical models based on the various horizontal grids. The sources and behaviors of the computational modes in the numerical simulations are also examined. Our normal-mode analyses with the Z, C, D, A, E and B grids generally confirm the conclusions of previous shallow-water studies for the cyclone-resolving scales (with low horizontal wavenumbers). We conclude that, aided by nonhydrostatic effects, the Z and C grids become overall more accurate for cloud-resolving resolutions (with high horizontal wavenumbers) than for the cyclone-resolving scales. A companion paper, Part 2, discusses the impacts of the discretization on the Rossby modes on a midlatitude β plane.
The first ANDES elements: 9-DOF plate bending triangles
NASA Technical Reports Server (NTRS)
Militello, Carmelo; Felippa, Carlos A.
1991-01-01
New elements are derived to validate and assess the assumed natural deviatoric strain (ANDES) formulation. This is a brand new variant of the assumed natural strain (ANS) formulation of finite elements, which has recently attracted attention as an effective method for constructing high-performance elements for linear and nonlinear analysis. The ANDES formulation is based on an extended parametrized variational principle developed in recent publications. The key concept is that only the deviatoric part of the strains is assumed over the element whereas the mean strain part is discarded in favor of a constant stress assumption. Unlike conventional ANS elements, ANDES elements satisfy the individual element test (a stringent form of the patch test) a priori while retaining the favorable distortion-insensitivity properties of ANS elements. The first application of this formulation is the development of several Kirchhoff plate bending triangular elements with the standard nine degrees of freedom. Linear curvature variations are sampled along the three sides with the corners as gage reading points. These sample values are interpolated over the triangle using three schemes. Two schemes merge back to conventional ANS elements, one being identical to the Discrete Kirchhoff Triangle (DKT), whereas the third one produces two new ANDES elements. Numerical experiments indicate that one of the ANDES elements is relatively insensitive to distortion compared to previously derived high-performance plate-bending elements, while retaining accuracy for nondistorted elements.
Annotating Socio-Cultural Structures in Text
2012-10-31
parts of speech (POS) within text, using the Stanford Part of Speech Tagger (Stanford Log-Linear, 2011). The ERDC-CERL taxonomy is then used to ... Annotated NP/VP Pane: shows the sentence parsed using the Parts of Speech tagger. Document View Pane: specifies the document (being annotated) in three ... first parsed using the Stanford Parts of Speech tagger and converted to an XML document, both components of which are done through the Import function.
NASA Astrophysics Data System (ADS)
Zuo, S.; Dai, S.; Ren, Y.; Yu, Z.
2017-12-01
Scientifically revealing the spatial heterogeneity and the relationship between the fragmentation of the urban landscape and direct carbon emissions is of great significance to land management and urban planning. In fact, both linear and nonlinear effects among the various factors shape the spatial pattern of carbon emissions. However, there is a lack of studies on the direct and indirect relations between carbon emissions and changes in the city's functional spatial form, which cannot be captured by land use change alone. The linear strength and direction of a single factor can be calculated through correlation and Geographically Weighted Regression (GWR) analysis, while the nonlinear power of one factor and the interaction power of each pair of factors can be quantified by Geodetector analysis. Therefore, we compared the landscape fragmentation metrics of urban land cover and functional district patches to characterize the landscape form, and then revealed the relations between the landscape fragmentation level and direct carbon emissions based on these three methods. The results showed that fragmentation decreased and the fragmented patches clustered at the coarser resolution. The direct CO2 emission density and the population density increased as the fragmentation level aggregated. The correlation analysis indicated a weak linear relation between them. The spatial variation of the GWR output indicated that the fragmentation indicator (MESH) had a positive influence on carbon emissions in the relatively high emission regions, while regions with negative effects accounted for a small part of the area. The Geodetector analysis, which explores the nonlinear relation, identified DIVISION and MESH as the most powerful direct factors for the land cover patches, and NP and PD for the functional district patches; the interactions between the fragmentation indicator (MESH) and the urban sprawl metrics (PUA and DIS) had greatly increased explanatory power for urban carbon emissions. Overall, this study provides a framework to understand the relation between urban landscape fragmentation and carbon emissions for low-carbon city construction planning in other cities.
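A minimal sketch of the Geodetector factor-detector statistic referred to above, q = 1 - Σ_h N_h σ_h² / (N σ²), which quantifies how much of the spatial variance of an outcome is explained by a stratified factor. The synthetic fragmentation classes and emission densities below are placeholders, not the study's data.

```python
import numpy as np

def geodetector_q(y, strata):
    """Geodetector factor-detector q-statistic:
    q = 1 - sum_h(N_h * var_h) / (N * var_total),
    where h indexes the strata (classes) of the explanatory factor."""
    y = np.asarray(y, dtype=float)
    strata = np.asarray(strata)
    n, total_var = len(y), y.var()
    within = sum(len(y[strata == h]) * y[strata == h].var()
                 for h in np.unique(strata))
    return 1.0 - within / (n * total_var)

# Illustrative data: carbon-emission density stratified by a fragmentation class
rng = np.random.default_rng(1)
frag_class = rng.integers(0, 4, size=500)                    # e.g. binned MESH levels
emission = 10 + 2.5 * frag_class + rng.normal(0, 1.5, 500)   # emission density
print(f"q = {geodetector_q(emission, frag_class):.2f}")      # fraction of variance explained
```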
Multiphysics analysis of liquid metal annular linear induction pumps: A project overview
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maidana, Carlos Omar; Nieminen, Juha E.
Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact, and they can be used in regular electric power production, in naval and space propulsion systems, or in fission surface power systems for planetary exploration. The coupling between the electromagnetic and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance, and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsation due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. Determining the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem, where techniques for global optimization should be used, magnetohydrodynamic instabilities understood, or quantified, and multiphysics models developed and analyzed. Lastly, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multiphysics analysis.
Multiphysics analysis of liquid metal annular linear induction pumps: A project overview
Maidana, Carlos Omar; Nieminen, Juha E.
2016-03-14
Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact, and they can be used in regular electric power production, in naval and space propulsion systems, or in fission surface power systems for planetary exploration. The coupling between the electromagnetic and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance, and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsation due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. Determining the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem, where techniques for global optimization should be used, magnetohydrodynamic instabilities understood, or quantified, and multiphysics models developed and analyzed. Lastly, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multiphysics analysis.
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
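The remark about solving equivalent real linear systems for the real and imaginary parts of x can be illustrated with a short sketch; the block embedding below is the standard one, and the small complex symmetric matrix is a toy example rather than a Helmholtz discretization.

```python
import numpy as np

def real_equivalent(A, b):
    """Embed the complex system A x = b into a real system of twice the size:
    [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]."""
    Ar = np.block([[A.real, -A.imag],
                   [A.imag,  A.real]])
    br = np.concatenate([b.real, b.imag])
    return Ar, br

# Toy complex symmetric matrix (A = A^T, but A is not Hermitian)
A = np.array([[4 + 1j, 1 - 2j],
              [1 - 2j, 3 + 0.5j]])
b = np.array([1.0 + 0j, 2.0 - 1j])

Ar, br = real_equivalent(A, b)
xr = np.linalg.solve(Ar, br)
x = xr[:2] + 1j * xr[2:]
print(np.allclose(A @ x, b))   # True: the embedding reproduces the complex solution
```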
Search for Linear Polarization of the Cosmic Background Radiation
DOE R&D Accomplishments Database
Lubin, P. M.; Smoot, G. F.
1978-10-01
We present preliminary measurements of the linear polarization of the cosmic microwave background (3 deg K blackbody) radiation. These ground-based measurements are made at 9 mm wavelength. We find no evidence for linear polarization, and set an upper limit for a polarized component of 0.8 mdeg K with a 95% confidence level. This implies that the present rate of expansion of the Universe is isotropic to one part in 10^6, assuming no re-ionization of the primordial plasma after recombination.
On bipartite pure-state entanglement structure in terms of disentanglement
NASA Astrophysics Data System (ADS)
Herbut, Fedor
2006-12-01
Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.
Gobrecht, Alexia; Bendoula, Ryad; Roger, Jean-Michel; Bellon-Maurel, Véronique
2015-01-01
Visible and Near Infrared (Vis-NIR) Spectroscopy is a powerful non-destructive analytical method used to analyze major compounds in bulk materials and products, requiring no sample preparation. It is widely used in routine analysis and also in-line in industries, in vivo in biomedical applications, or in-field for agricultural and environmental applications. However, highly scattering samples subvert the Beer-Lambert law's linear relationship between spectral absorbance and concentration. Instead of the spectral pre-processing commonly used by Vis-NIR spectroscopists to mitigate the scattering effect, we put forward an optical method, based on Polarized Light Spectroscopy, to improve the absorbance signal measurement on highly scattering samples. This method selects the part of the signal that is less affected by scattering. The resulting signal is combined in the Absorption/Remission function defined in Dahm's Representative Layer Theory to compute an absorbance signal fulfilling the Beer-Lambert law, i.e., one linearly related to the concentration of the chemicals composing the sample. The underpinning theories have been experimentally evaluated on scattering samples in liquid and powdered form. The method produced more accurate spectra, and Pearson's coefficient assessing the linearity between the absorbance spectra and the concentration of the added dye improved from 0.94 to 0.99 for liquid samples and from 0.84 to 0.97 for powdered samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sakthy Priya, S.; Alexandar, A.; Surendran, P.; Lakshmanan, A.; Rameshkumar, P.; Sagayaraj, P.
2017-04-01
An efficient organic nonlinear optical single crystal of L-arginine maleate dihydrate (LAMD) has been grown by the slow evaporation solution technique (SEST) and the slow cooling technique (SCT). The crystalline perfection of the crystal was examined using high-resolution X-ray diffractometry (HRXRD) analysis. Photoluminescence study confirmed the optical properties and defect levels in the crystal lattice. Electromechanical behaviour was characterized through the piezoelectric coefficient (d33). The photoconductivity analysis confirmed the negative photoconducting nature of the material. The dielectric constant and loss were measured as a function of frequency at varying temperature and vice versa. The laser damage threshold (LDT) measurement was carried out using an Nd:YAG laser with a wavelength of 1064 nm (focal length 35 cm), and the results showed that the LDT value of the crystal is high compared with that of KDP. The high laser damage threshold of the grown crystal makes it a potential candidate for second- and higher-order nonlinear optical device applications. The third-order nonlinear optical parameters of the LAMD crystal are determined by open-aperture and closed-aperture studies using the Z-scan technique. These parameters, such as the nonlinear refractive index (n2), the two-photon absorption coefficient (β), and the real part (Re χ(3)) and imaginary part (Im χ(3)) of the third-order nonlinear optical susceptibility, are calculated.
Study on magnetic circuit of moving magnet linear compressor
NASA Astrophysics Data System (ADS)
Xia, Ming; Chen, Xiaoping; Chen, Jun
2015-05-01
Moving magnet linear compressors are very popular in tactical miniature Stirling cryocoolers. The magnetic circuit of the LFC3600 moving magnet linear compressor, manufactured by the Kunming Institute of Physics, was studied in this work. Three approaches were applied in the analysis: theoretical analysis, numerical calculation, and experimental study. Formulas for the magnetic reluctance and the magnetomotive force were derived in the theoretical model. The magnetic flux density and the magnetic flux lines were analyzed in the numerical model. A testing method was designed to measure the magnetic flux density of the linear compressor. When the piston of the motor was in the equilibrium position, the magnetic flux density reached its maximum value of 0.27 T. The measured results were nearly equal to those from the numerical analysis.
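A minimal sketch of the reluctance and magnetomotive-force relations underlying such a theoretical magnetic-circuit model (Hopkinson's law, Φ = MMF/R); the dimensions, permeability and MMF values are illustrative assumptions, not the LFC3600 parameters.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of free space [H/m]

def reluctance(length, area, mu_r=1.0):
    """Magnetic reluctance R = l / (mu0 * mu_r * A) of one series segment."""
    return length / (MU0 * mu_r * area)

# Illustrative series circuit: iron core path plus an air gap, driven by a
# magnetomotive force (coil ampere-turns or an equivalent permanent magnet).
A_core = 1.0e-4                                   # cross-section [m^2]
R_total = (reluctance(0.05, A_core, mu_r=2000)    # core path
           + reluctance(0.5e-3, A_core, mu_r=1))  # air gap dominates the reluctance
mmf = 200.0                                       # ampere-turns (assumed value)
flux = mmf / R_total                              # Hopkinson's law: Phi = MMF / R
B_gap = flux / A_core                             # flux density in the gap [T]
print(f"B in gap = {B_gap:.2f} T")
```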
Standing wave contributions to the linear interference effect in stratosphere-troposphere coupling
NASA Astrophysics Data System (ADS)
Watt-Meyer, Oliver; Kushner, Paul
2014-05-01
A body of literature by Hayashi and others [Hayashi 1973, 1977, 1979; Pratt, 1976] developed a decomposition of the wavenumber-frequency spectrum into standing and travelling waves. These techniques directly decompose the power spectrum—that is, the amplitudes squared—into standing and travelling parts. This, incorrectly, does not allow for a term representing the covariance between these waves. We propose a simple decomposition based on the 2D Fourier transform which allows one to directly compute the variance of the standing and travelling waves, as well as the covariance between them. Applying this decomposition to geopotential height anomalies in the Northern Hemisphere winter, we show the dominance of standing waves for planetary wavenumbers 1 through 3, especially in the stratosphere, and that wave-1 anomalies have a significant westward travelling component in the high-latitude (60N to 80N) troposphere. Variations in the relative zonal phasing between a wave anomaly and the background climatological wave pattern—the "linear interference" effect—are known to explain a large part of the planetary wave driving of the polar stratosphere in both hemispheres. While the linear interference effect is robust across observations, models of varying degrees of complexity, and in response to various types of perturbations, it is not well understood dynamically. We use the above-described decomposition into standing and travelling waves to investigate the drivers of linear interference. We find that the linear part of the wave activity flux is primarily driven by the standing waves, at all vertical levels. This can be understood by noting that the longitudinal positions of the antinodes of the standing waves are typically close to being aligned with the maximum and minimum of the background climatology. We discuss implications for predictability of wave activity flux, and hence polar vortex strength variability.
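A small numerical check of the identity that underlies the standing/travelling decomposition discussed above: a standing wave is exactly two equal-amplitude counter-propagating waves, and in a 2D Fourier transform over (time, longitude) its power splits evenly between the two propagation quadrants, whereas a travelling wave concentrates in one. This is a simplified illustration, not the authors' full decomposition.

```python
import numpy as np

# Grid in longitude (x) and time (t)
nx, nt = 128, 256
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
t = np.linspace(0, 2 * np.pi, nt, endpoint=False)
X, T = np.meshgrid(x, t)          # arrays of shape (nt, nx)
k, w = 3, 5                       # zonal wavenumber and frequency (cycles per domain)

standing = np.cos(k * X) * np.cos(w * T)
travelling = np.cos(k * X - w * T)

# A standing wave is exactly two equal counter-propagating waves:
recon = 0.5 * (np.cos(k * X - w * T) + np.cos(k * X + w * T))
print(np.allclose(standing, recon))          # True

def quadrant_power(field, k, w):
    """Spectral power at the (+k, +w) and (+k, -w) corners of the 2D FFT."""
    F = np.fft.fft2(field) / (nx * nt)
    return abs(F[w, k]) ** 2, abs(F[-w, k]) ** 2

print(quadrant_power(travelling, k, w))      # power concentrated in one quadrant
print(quadrant_power(standing, k, w))        # power split equally between the two
```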
NASA Technical Reports Server (NTRS)
Ottander, John A.; Hall, Robert A.; Powers, J. F.
2018-01-01
A method is presented that allows for the prediction of the magnitude of limit cycles due to adverse control-slosh interaction in liquid-propelled space vehicles using non-linear slosh damping. Such a method is an alternative to the industry practice of assuming linear damping and relying on: mechanical slosh baffles to achieve desired stability margins; accepting minimal slosh stability margins; or time-domain non-linear analysis to accept time periods of poor stability. Sinusoidal-input describing function analysis is used to develop a relationship between the non-linear slosh damping and an equivalent linear damping at a given slosh amplitude. In addition, a more accurate analytical prediction of the danger zone for slosh mass locations in a vehicle under proportional and derivative attitude control is presented. This method is used in the control-slosh stability analysis of the NASA Space Launch System.
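A sketch of the sinusoidal-input describing-function step: for an assumed quadratic (velocity-squared) damping nonlinearity, the equivalent linear damping at amplitude A and frequency ω is the first in-phase Fourier coefficient of the damping force. The quadratic form and the parameter values are illustrative assumptions, not necessarily the slosh damping model used in the paper.

```python
import numpy as np

def equivalent_linear_damping(c_quad, amp, omega, n=10_000):
    """Describing-function equivalent viscous coefficient for a quadratic
    damping force F = c_quad * |v| * v, at sinusoidal velocity v = A*w*cos(w*t).
    c_eq = (1 / (pi * A * w)) * integral over one cycle of F(theta) * cos(theta)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    v = amp * omega * np.cos(theta)
    force = c_quad * np.abs(v) * v
    return 2.0 * np.mean(force * np.cos(theta)) / (amp * omega)

c_quad, amp, omega = 2.0, 0.05, 10.0                    # illustrative parameters
numeric = equivalent_linear_damping(c_quad, amp, omega)
analytic = 8.0 / (3.0 * np.pi) * c_quad * amp * omega   # classical harmonic-balance result
print(numeric, analytic)                                # the two agree
```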
Decomposition of fluctuating initial conditions and flow harmonics
NASA Astrophysics Data System (ADS)
Qian, Wei-Liang; Mota, Philipe; Andrade, Rone; Gardim, Fernando; Grassi, Frédérique; Hama, Yogiro; Kodama, Takeshi
2014-01-01
Collective flow observed in heavy-ion collisions is largely attributed to initial geometrical fluctuations, and it is the hydrodynamic evolution of the system that transforms those initial spatial irregularities into final state momentum anisotropies. Cumulant analysis provides a mathematical tool to decompose those initial fluctuations in terms of radial and azimuthal components. It is usually thought that a specified order of azimuthal cumulant, for the most part, linearly produces flow harmonics of the same order. In this work, by considering the most central collisions (0%-5%), we carry out a systematic study on the connection between cumulants and flow harmonics using a hydrodynamic code called NeXSPheRIO. We conduct three types of calculation, by explicitly decomposing the initial conditions into components corresponding to a given eccentricity and studying the out-coming flow through hydrodynamic evolution. It is found that for initial conditions deviating significantly from Gaussian, such as those from NeXuS, the linearity between eccentricities and flow harmonics partially breaks down. Combined with the effect of coupling between cumulants of different orders, it causes the production of extra flow harmonics of higher orders. We argue that these results can be seen as a natural consequence of the non-linear nature of hydrodynamics, and they can be understood intuitively in terms of the peripheral-tube model.
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
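A minimal sketch of the isoconversion idea compared in the study: take the time to reach the specification limit at each accelerated temperature, fit ln(t_iso) against 1/T to the linear Arrhenius form, and extrapolate to 25 °C. The kinetic parameters below are invented solely to generate the example data.

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)
SPEC_LIMIT = 0.5   # degradant level (%) that defines the shelf-life

# Invented Arrhenius kinetics used only to generate accelerated-condition data
Ea, lnA = 100e3, 34.0
def rate(T_kelvin):                     # degradant growth rate, %/day (assumed linear growth)
    return np.exp(lnA - Ea / (R * T_kelvin))

temps_K = np.array([50.0, 60.0, 70.0]) + 273.15
t_iso = SPEC_LIMIT / rate(temps_K)      # isoconversion times at the accelerated conditions

# Linear Arrhenius fit ln(t_iso) = a + b/T, then extrapolate to 25 C
b, a = np.polyfit(1.0 / temps_K, np.log(t_iso), 1)
shelf_life = np.exp(a + b / 298.15)
print(f"Projected 25 C shelf-life: {shelf_life:.0f} days")
```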
Linear approximations of nonlinear systems
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Su, R.
1983-01-01
The development of a method for designing an automatic flight controller for short and vertical takeoff aircraft is discussed. This technique involves transformations of nonlinear systems to controllable linear systems and takes into account the nonlinearities of the aircraft. In general, the transformations cannot always be given in closed form. Using partial differential equations, an approximate linear system called the modified tangent model was introduced. A linear transformation of this tangent model to Brunovsky canonical form can be constructed, and from this the linear part (about a state-space point x_0) of an exact transformation for the nonlinear system can be found. It is shown that a canonical expansion in Lie brackets about the point x_0 yields the same modified tangent model.
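A short sketch of building a tangent (linearized) model about a state-space point x_0 by finite differences; the toy pendulum-like dynamics stand in for the aircraft model and are not the modified tangent model of the paper.

```python
import numpy as np

def jacobian(f, x0, u0, eps=1e-6):
    """Finite-difference linearization: x_dot ~ f(x0, u0) + A (x - x0) + B (u - u0)."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

# Toy nonlinear dynamics (illustrative only): pendulum-like state with one input
def f(x, u):
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.1 * omega + u[0]])

A, B = jacobian(f, x0=np.array([0.2, 0.0]), u0=np.array([0.0]))
print(A)   # linear part about x_0; compare with the analytic Jacobian [[0, 1], [-cos(0.2), -0.1]]
print(B)
```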
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
Ramírez-Gualito, Karla; Richter, Monique; Matzapetakis, Manolis; Singer, David; Berger, Stefan
2013-04-26
Rational design of peptide vaccines becomes important for the treatment of some diseases such as Alzheimer's disease (AD) and related disorders. In this study, as part of a larger effort to explore correlations of structure and activity, we attempt to characterize the doubly phosphorylated chimeric peptide vaccine targeting a hyperphosphorylated epitope of the Tau protein. The 28-mer linear chimeric peptide consists of the double phosphorylated B cell epitope Tau₂₂₉₋₂₃₇[pThr231/pSer235] and the immunomodulatory T cell epitope Ag85B₂₄₁₋₂₅₅ originating from the well-known antigen Ag85B of the Mycobacterium tuberculosis, linked by a four amino acid sequence -GPSL-. NMR chemical shift analysis of our construct demonstrated that the synthesized peptide is essentially unfolded with a tendency to form a β-turn due to the linker. In conclusion, the -GPSL- unit presumably connects the two parts of the vaccine without transferring any structural information from one part to the other. Therefore, the double phosphorylated epitope of the Tau peptide is flexible and accessible.
Evaluation of the use of a singularity element in finite element analysis of center-cracked plates
NASA Technical Reports Server (NTRS)
Mendelson, A.; Gross, B.; Srawley, J., E.
1972-01-01
Two different methods are applied to the analyses of finite width linear elastic plates with central cracks. Both methods give displacements as a primary part of the solution. One method makes use of Fourier transforms. The second method employs a coarse mesh of triangular second-order finite elements in conjunction with a single singularity element subjected to appropriate additional constraints. The displacements obtained by these two methods are in very good agreement. The results suggest considerable potential for the use of a cracked element for related crack problems, particularly in connection with the extension to nonlinear material behavior.
On the Hilbert-Huang Transform Theoretical Foundation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Huang, Norden E.
2004-01-01
The Hilbert-Huang Transform [HHT] is a novel empirical method for spectrum analysis of non-linear and non-stationary signals. The HHT is a recent development and much remains to be done to establish the theoretical foundation of the HHT algorithms. This paper develops the theoretical foundation for the convergence of the HHT sifting algorithm and proves that the finest spectrum scale will always be the first generated by the HHT Empirical Mode Decomposition (EMD) algorithm. The theoretical foundation for splitting a set of extrema data points into two parts is also developed. This then allows parallel signal processing for the computationally complex HHT sifting algorithm and its optimization in hardware.
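A much-simplified single sifting pass of the EMD algorithm discussed above, using cubic-spline envelopes through the local extrema; practical implementations add end-effect handling and a stopping criterion, so this is only an illustration of why the finest scale emerges first.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One EMD sifting iteration: subtract the mean of the upper and lower
    cubic-spline envelopes through the local maxima and minima."""
    maxima, _ = find_peaks(x)
    minima, _ = find_peaks(-x)
    if len(maxima) < 2 or len(minima) < 2:
        return x                       # not enough extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - 0.5 * (upper + lower)   # candidate intrinsic mode function

t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = signal.copy()
for _ in range(10):                    # a few sifting passes toward the finest scale
    h = sift_once(t, h)
# h now approximates the highest-frequency component (the 40 Hz tone),
# consistent with the claim that the finest scale is extracted first.
```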
On the accuracy of various large axial displacement formulae for crooked columns
NASA Astrophysics Data System (ADS)
Mallis, J.; Kounadis, A. N.
1988-11-01
The axial displacements of an initially crooked, simply supported column, subjected to an axial compressive force at its end, are determined by using several variants of the axial strain-displacement relationship. Their accuracy and range of applicability are thoroughly discussed by comparing the corresponding results with those of the exact elastica analysis in which the compressibility effect of the bar axis is accounted for. Among other findings, the important conclusion is drawn that the simplified linear kinematic relation leads to a sufficiently accurate evaluation of the initial part of the postbuckling path which is of significant importance for structural design purposes.
New class of control laws for robotic manipulators. I - Nonadaptive case. II - Adaptive case
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1988-01-01
A new class of exponentially stabilizing control laws for joint level control of robot arms is discussed. Closed-loop exponential stability has been demonstrated for both the set point and tracking control problems by a slight modification of the energy Lyapunov function and the use of a lemma which handles third-order terms in the Lyapunov function derivatives. In the second part, these control laws are adapted in a simple fashion to achieve asymptotically stable adaptive control. The analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and uses a parameterization based on physical (time-invariant) quantities.
NASA Astrophysics Data System (ADS)
Guelachvili, G.; Picqué, N.
This document is part of Subvolume C 'Non-linear Triatomic Molecules', Part 1 'H2O (HOH)', Part α'H2 16O (H16OH)' of Volume 20 'Molecular Constants Mostly from Infrared Spectroscopy' of Landolt-Börnstein - Group II 'Molecules and Radicals'.
Li, Feiming; Gimpel, John R; Arenson, Ethan; Song, Hao; Bates, Bruce P; Ludwin, Fredric
2014-04-01
Few studies have investigated how well scores from the Comprehensive Osteopathic Medical Licensing Examination-USA (COMLEX-USA) series predict resident outcomes, such as performance on board certification examinations. To determine how well COMLEX-USA predicts performance on the American Osteopathic Board of Emergency Medicine (AOBEM) Part I certification examination. The target study population was first-time examinees who took AOBEM Part I in 2011 and 2012 with matched performances on COMLEX-USA Level 1, Level 2-Cognitive Evaluation (CE), and Level 3. Pearson correlations were computed between AOBEM Part I first-attempt scores and COMLEX-USA performances to measure the association between these examinations. Stepwise linear regression analysis was conducted to predict AOBEM Part I scores by the 3 COMLEX-USA scores. An independent t test was conducted to compare mean COMLEX-USA performances between candidates who passed and who failed AOBEM Part I, and a stepwise logistic regression analysis was used to predict the log-odds of passing AOBEM Part I on the basis of COMLEX-USA scores. Scores from AOBEM Part I had the highest correlation with COMLEX-USA Level 3 scores (.57) and slightly lower correlation with COMLEX-USA Level 2-CE scores (.53). The lowest correlation was between AOBEM Part I and COMLEX-USA Level 1 scores (.47). According to the stepwise regression model, COMLEX-USA Level 1 and Level 2-CE scores, which residency programs often use as selection criteria, together explained 30% of variance in AOBEM Part I scores. Adding Level 3 scores explained 37% of variance. The independent t test indicated that the 397 examinees passing AOBEM Part I performed significantly better than the 54 examinees failing AOBEM Part I in all 3 COMLEX-USA levels (P<.001 for all 3 levels). The logistic regression model showed that COMLEX-USA Level 1 and Level 3 scores predicted the log-odds of passing AOBEM Part I (P=.03 and P<.001, respectively). The present study empirically supported the predictive and discriminant validities of the COMLEX-USA series in relation to the AOBEM Part I certification examination. Although residency programs may use COMLEX-USA Level 1 and Level 2-CE scores as partial criteria in selecting residents, Level 3 scores, though typically not available at the time of application, are actually the most statistically related to performances on AOBEM Part I.
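A sketch of the core regression step described above (leaving the stepwise selection aside): an ordinary least-squares fit of Part I scores on the three COMLEX-USA levels and the fraction of variance explained. The data are synthetic placeholders with a correlation structure loosely motivated by the reported results, not examinee records.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 451                                      # cohort size reported in the study
# Synthetic standardized COMLEX-USA Level 1, Level 2-CE and Level 3 scores (correlated)
levels = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0, 0.6, 0.5], [0.6, 1.0, 0.6], [0.5, 0.6, 1.0]],
    size=n)
# Synthetic AOBEM Part I score, most strongly tied to Level 3 as reported
part1 = (500 + 20 * levels[:, 2] + 15 * levels[:, 1] + 10 * levels[:, 0]
         + rng.normal(0, 25, n))

X = np.column_stack([np.ones(n), levels])    # intercept plus the three predictors
beta, *_ = np.linalg.lstsq(X, part1, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((part1 - pred) ** 2) / np.sum((part1 - part1.mean()) ** 2)
print(f"R^2 with all three levels: {r2:.2f}")   # analogous to the reported 37% of variance
```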
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
Nonlinear effects in a plain journal bearing. I - Analytical study. II - Results
NASA Technical Reports Server (NTRS)
Choy, F. K.; Braun, M. J.; Hu, Y.
1991-01-01
In the first part of this work, a numerical model is presented which couples the variable-property Reynolds equation with a rotor-dynamics model for the calculation of a plain journal bearing's nonlinear characteristics when working with a cryogenic fluid, LOX. The effects of load on the linear/nonlinear plain journal bearing characteristics are analyzed and presented in a parametric form. The second part of this work presents numerical results obtained for specific parametric-study input variables (lubricant inlet temperature, external load, angular rotational speed, and axial misalignment). Attention is given to the interrelations between pressure profiles and bearing linear and nonlinear characteristics.
Polarization-polarization correlation measurement --- Experimental test of the PPCO methods
NASA Astrophysics Data System (ADS)
Droste, Ch.; Starosta, K.; Wierzchucka, A.; Morek, T.; Rohoziński, S. G.; Srebrny, J.; Wesolowski, E.; Bergstrem, M.; Herskind, B.
1998-04-01
A significant fraction of modern multidetector arrays used for "in-beam" gamma-ray spectroscopy consists of detectors which are sensitive to the linear polarization of gamma quanta. This yields the opportunity to carry out correlation measurements between the gamma rays registered in polarimeters to get information concerning the spins and parities of excited nuclear states. The aim of the present work was to study the capability of the polarization-polarization correlation method (the PPCO method). The correlation between the linear polarization of one gamma quantum and the polarization of the second quantum emitted in a cascade from an oriented nucleus (due to a heavy-ion reaction) was studied in detail. The appropriate formulae and methods of analysis are presented. The experimental test of the method was performed using the EUROGAM II array. The CLOVER detectors are the parts of the array used as polarimeters. The ^164Yb nucleus was produced via the ^138Ba(^30Si, 4n) reaction. It was found that the PPCO method, together with the standard DCO analysis and the polarization-direction correlation method (PDCO), can be helpful for spin, parity and multipolarity assignments. The results suggest that the PPCO method can be applied to modern spectrometers in which a large number of detectors (e.g. CLOVER) are sensitive to the polarization of gamma rays.
Demographic and clinical features related to perceived discrimination in schizophrenia.
Fresán, Ana; Robles-García, Rebeca; Madrigal, Eduardo; Tovilla-Zarate, Carlos-Alfonso; Martínez-López, Nicolás; Arango de Montis, Iván
2018-04-01
Perceived discrimination contributes to the development of internalized stigma among those with schizophrenia. Evidence on demographic and clinical factors related to the perception of discrimination among this population is both contradictory and scarce in low- and middle-income countries. Accordingly, the main purpose of this study is to determine the demographic and clinical factors predicting the perception of discrimination among Mexican patients with schizophrenia. Two hundred and seventeen adults with paranoid schizophrenia completed an interview on their demographic status and clinical characteristics. Symptom severity was assessed using the Positive and Negative Syndrome Scale; and perceived discrimination using 13 items from the King's Internalized Stigma Scale. Bivariate linear associations were determined to identify the variables of interest to be included in a linear regression analysis. Years of education, age of illness onset and length of hospitalization were associated with discrimination. However, only age of illness onset and length of hospitalization emerged as predictors of perceived discrimination in the final regression analysis, with longer length of hospitalization being the independent variable with the greatest contribution. Fortunately, this is a modifiable factor regarding the perception of discrimination and self-stigma. Strategies for achieving this as part of community-based mental health care are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistics of vacuum breakdown in the high-gradient and low-rate regime
NASA Astrophysics Data System (ADS)
Wuensch, Walter; Degiovanni, Alberto; Calatroni, Sergio; Korsbäck, Anders; Djurabekova, Flyura; Rajamäki, Robin; Giner-Navarro, Jorge
2017-01-01
In an increasing number of high-gradient linear accelerator applications, accelerating structures must operate with both high surface electric fields and low breakdown rates. Understanding the statistical properties of breakdown occurrence in such a regime is of practical importance for optimizing accelerator conditioning and operation algorithms, as well as of interest for efforts to understand the physical processes which underlie the breakdown phenomenon. Experimental data of breakdown has been collected in two distinct high-gradient experimental set-ups: A prototype linear accelerating structure operated in the Compact Linear Collider Xbox 12 GHz test stands, and a parallel plate electrode system operated with pulsed DC in the kV range. Collected data is presented, analyzed and compared. The two systems show similar, distinctive, two-part distributions of number of pulses between breakdowns, with each part corresponding to a specific, constant event rate. The correlation between distance and number of pulses between breakdown indicates that the two parts of the distribution, and their corresponding event rates, represent independent primary and induced follow-up breakdowns. The similarity of results from pulsed DC to 12 GHz rf indicates a similar vacuum arc triggering mechanism over the range of conditions covered by the experiments.
NASA Astrophysics Data System (ADS)
Ionita-Scholz, Monica; Tallaksen, Lena M.; Scholz, Patrick
2017-04-01
This study introduces a novel method of estimating the decay time, mean period and forcing statistics of drought conditions over large spatial domains, demonstrated here for the southern part of Europe (10°E - 40°E, 35°N - 50°N). It uses a two-dimensional stochastically forced damped linear oscillator model with the model parameters estimated from a Principal Oscillation Pattern (POP) analysis and the associated observed power spectra. POP analysis is a diagnostic technique that aims to derive the space-time characteristics of a data set objectively. This analysis is performed on an extended observational time series of 114 years (1902 - 2015) of the Standardized Precipitation Evapotranspiration Index for an accumulation period of 12 months (SPEI12), based on the Climate Research Unit (CRU TS v. 3.24) data set. The POP analysis reveals four exceptionally stable modes of variability, which together explain more than 62% of the total explained variance. The most stable POP mode, which explains 16.3% of the total explained variance, is characterized by a period of oscillation of 14 years and a decay time of 31 years. The real part of POP1 is characterized by a monopole-like structure with the highest loadings over Portugal, the western part of Spain and Turkey. The second stable mode, which explains 15.9% of the total explained variance, is characterized by a period of oscillation of 20 years and a decay time of 26.4 years. The spatial structure of the real part of POP2 has a dipole-like structure with the highest positive loadings over France, southern Germany and Romania and negative loadings over the southern part of Spain. The third POP mode, in terms of stability, explains 14.0% of the total variance and is characterized by a period of oscillation of 33 years and a decay time of 43.5 years. The real part of POP3 is characterized by negative loadings over the eastern part of Europe and positive loadings over Turkey. The fourth stable POP mode, explaining 15.5% of the total variance, is characterized by an oscillation period of 65 years and a damping time of 54 years. The spatial structure of POP4 is characterized by positive loadings over France and negative loadings over the southern part of the Iberian Peninsula and the eastern part of Europe. The stable POP modes identified could be related to preferred modes of climate variability that are characterized by similar oscillation periods (e.g. the Atlantic Multidecadal Oscillation, which is defined as a coherent pattern of variability in basin-wide North Atlantic sea surface temperatures with a period of 60-80 years). The decadal components identified by the POP analysis can be used operationally by decision makers as early predictors of drought conditions over the southern part of Europe.
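A minimal sketch of the Principal Oscillation Pattern mechanics referred to above: fit a first-order linear Markov model A = C1 C0^{-1} from the lag-0 and lag-1 covariances, then read the oscillation period and decay (e-folding) time from its complex eigenvalues. The random AR(1) field below only illustrates the computation, not the SPEI12 data.

```python
import numpy as np

def pop_analysis(X, dt=1.0):
    """X: (time, space) anomaly matrix. Returns oscillation periods, e-folding
    (decay) times and patterns of the POP modes, i.e. the eigenmodes of the
    first-order Markov operator A = C1 @ inv(C0)."""
    X = X - X.mean(axis=0)
    C0 = X[:-1].T @ X[:-1] / (len(X) - 1)      # lag-0 covariance
    C1 = X[1:].T @ X[:-1] / (len(X) - 1)       # lag-1 covariance
    A = C1 @ np.linalg.inv(C0)
    eigvals, patterns = np.linalg.eig(A)
    decay_time = -dt / np.log(np.abs(eigvals))             # e-folding time
    with np.errstate(divide="ignore"):
        period = 2.0 * np.pi * dt / np.abs(np.angle(eigvals))   # inf for non-oscillatory modes
    return period, decay_time, patterns

# Illustrative stationary AR(1) field: 200 time steps x 10 grid points
rng = np.random.default_rng(0)
X = np.zeros((200, 10))
for t in range(1, 200):
    X[t] = 0.8 * X[t - 1] + rng.standard_normal(10)

period, decay, patterns = pop_analysis(X, dt=1.0)
print(period[:4])   # oscillation periods (inf = purely damped mode)
print(decay[:4])    # decay times, roughly -1/ln(0.8) ~ 4.5 time steps here
```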
NASA Astrophysics Data System (ADS)
Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li
2017-01-01
In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for systems with pre-specified trajectories of the output and the control input, and additionally with both an input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional-plus-integral state-feedback LQDT design for non-square non-minimum-phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.
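The generalised LQDT builds on standard discrete-time LQ machinery; as an illustration only, the sketch below computes the underlying LQ state-feedback gain from the discrete algebraic Riccati equation for a toy plant, omitting the paper's tracker-specific terms (pre-specified trajectories, direct feedthrough, disturbance inputs, iterative learning).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time plant x[k+1] = A x[k] + B u[k] (illustrative, not the paper's system)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])   # state (tracking-error) weight
R = np.array([[0.1]])      # control weight

# Discrete algebraic Riccati equation and the optimal feedback gain
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("LQ gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```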
Cancer risk assessments for inorganic arsenic have been based on human epidemiological data, assuming a linear dose-response below the range of observation of tumors. Part of the reason for the continued use of the linear approach in arsenic risk assessments is the lack of an ad...
ERIC Educational Resources Information Center
Anderson, Daniel
2012-01-01
This manuscript provides an overview of hierarchical linear modeling (HLM), as part of a series of papers covering topics relevant to consumers of educational research. HLM is tremendously flexible, allowing researchers to specify relations across multiple "levels" of the educational system (e.g., students, classrooms, schools, etc.).…
Xiang, Xiang; Sha, Xiuxiu; Su, Shulan; Zhu, Zhenhua; Guo, Sheng; Yan, Hui; Qian, Dawei; Duan, Jin-Ao
2018-03-01
Salvia miltiorrhiza, a traditional Chinese medicine, is a widely used herbal medicine to treat cardiovascular and cerebrovascular diseases. In this study, ultraviolet (UV)-visible spectrophotometry and ultra-high performance liquid chromatography with triple quadrupole tandem mass spectrometry were used for rapid quantification of polysaccharides and 21 nucleosides and amino acids in 17 samples of different S. miltiorrhiza tissues from different areas. Based on the total contents, hierarchical clustering analysis and principal components analysis were performed to classify these samples. The established methods were validated with good linearity, precision, repeatability, stability, and recovery. Chemical analysis revealed higher contents of total analytes in the inflorescence sample from Nanjing (34.17 mg/g), the root and rhizome sample from Shaanxi (34.13 mg/g) and the stem and leaf sample from Nanjing (31.14 mg/g), indicating that the root and rhizome from Shaanxi and the aerial parts from Nanjing exhibited the highest quality owing to their higher contents. In addition, contents of nucleosides and amino acids in the aerial parts (14.67 mg/g) were much higher than those in roots and rhizomes (9.17 mg/g). This study suggested that UV-visible spectrophotometry and ultra-high performance liquid chromatography with triple quadrupole tandem mass spectrometry are effective techniques to analyze polysaccharides, nucleosides, and amino acids in plants, and provided valuable information for the development and utilization of the aerial parts of S. miltiorrhiza. This analysis would also provide useful information for the quality control of S. miltiorrhiza. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Boskovic, Jovan D.
2008-01-01
This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.
Application of Local Linear Embedding to Nonlinear Exploratory Latent Structure Analysis
ERIC Educational Resources Information Center
Wang, Haonan; Iyer, Hari
2007-01-01
In this paper we discuss the use of a recent dimension reduction technique called Locally Linear Embedding, introduced by Roweis and Saul, for performing an exploratory latent structure analysis. The coordinate variables from the locally linear embedding describing the manifold on which the data reside serve as the latent variable scores. We…
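A minimal sketch of this idea with scikit-learn is shown below; the "item response" data are simulated from two latent dimensions (all variable names and parameter values are illustrative), and the LLE coordinates are taken as latent-variable scores.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Hypothetical item-response-style data: 500 respondents x 20 observed variables
rng = np.random.default_rng(1)
latent = rng.uniform(-1, 1, size=(500, 2))             # two latent dimensions
loadings = rng.standard_normal((2, 20))
X = np.tanh(latent @ loadings) + 0.05 * rng.standard_normal((500, 20))

# Locally Linear Embedding; the embedding coordinates play the role of latent scores
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
scores = lle.fit_transform(X)                          # shape (500, 2)
```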
Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis
ERIC Educational Resources Information Center
Luo, Wen; Azen, Razia
2013-01-01
Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
Common pitfalls in statistical analysis: Linear regression analysis
Aggarwal, Rakesh; Ranganathan, Priya
2017-01-01
In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022
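For readers who want to reproduce the basic workflow, the short sketch below (synthetic data, hypothetical variable names) fits a simple linear regression with scipy and performs one elementary assumption check on the residuals; it is an illustration, not the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.uniform(20, 60, 80)                   # hypothetical predictor (e.g. age)
y = 0.8 * x + rng.normal(0, 5, 80)            # hypothetical continuous outcome

res = stats.linregress(x, y)
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, r^2={res.rvalue**2:.2f}")

# A basic assumption check: residuals should show no trend against fitted values
fitted = res.intercept + res.slope * x
residuals = y - fitted
print("correlation of residuals with fitted values:", np.corrcoef(fitted, residuals)[0, 1])
```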
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gremos, K.; Sendlein, L.V.A.
1993-03-01
Significant areas of the continental US (Kentucky included) are underlain by karstified limestone. In many of these areas agriculture is a leading business and a potential non-point source of pollution to the groundwater. A study is underway to assess the Best Management Practices (BMP) on a farm in north-central Woodford County in Kentucky. As part of the study, various computer-based decision models for integrated farm operation will be assessed. Because surface area and runoff are integral parts of all of these models, diversion of surface runoff through karst features such as sinkholes will modify predictions from these models. This study utilizes aerial photographs to identify all sinkholes on the property and characterize their morphometric parameters such as length, width, depth, area, and distribution. Sinkhole areas represent approximately 10 percent of the area and all but a few discharge within the basin monitored as part of the model. The bedrock geology and fractures of the area have been defined using fracture trace analysis and a rectified drainage linear analysis. Surface drainage patterns, spring distribution, and stream and spring discharge data have been collected. Dye tracing has identified groundwater basins whose catchment area is outside the boundaries of the study site.
Aspects of effective supersymmetric theories
NASA Astrophysics Data System (ADS)
Tziveloglou, Panteleimon
This work consists of two parts. In the first part we construct the complete extension of the Minimal Supersymmetric Standard Model by higher dimensional effective operators and then study its phenomenology. These operators encapsulate the effects on LHC physics of any kind of new degrees of freedom at the multiTeV scale. The effective analysis includes the case where the multiTeV physics is the supersymmetry breaking sector itself. In that case the appropriate framework is nonlinear supersymmetry. We choose to realize the nonlinear symmetry by the method of constrained superfields. Beyond the new effective couplings, the analysis suggests an interpretation of the 'little hierarchy problem' as an indication of new physics at multiTeV scale. In the second part we explore the power of constrained superfields in extended supersymmetry. It is known that in N = 2 supersymmetry the gauge kinetic function cannot depend on hypermultiplet scalars. However, it is also known that the low energy effective action of a D-brane in an N = 2 supersymmetric bulk includes the DBI action, where the gauge kinetic function does depend on the dilaton. We show how the nonlinearization of the second SUSY (imposed by the presence of the D-brane) opens this possibility, by constructing the global N = 1 linear + 1 nonlinear invariant coupling of a hypermultiplet with a gauge multiplet. The constructed theory enjoys interesting features, including a novel super-Higgs mechanism without gravity.
Low-complexity stochastic modeling of wall-bounded shear flows
NASA Astrophysics Data System (ADS)
Zare, Armin
Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.
Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S
2017-06-01
The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing an LET spectrum based dose model. The RBE-LET relationship was investigated by fitting polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd based models for a simulated spread out Bragg peak (SOBP) scenario. The weighted regression analysis favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum based and linear LETd based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
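The weighted polynomial fitting step can be sketched in a few lines of numpy. The LET, RBE and uncertainty values below are synthetic stand-ins for the 85-point database, and the weights follow numpy's convention of 1/sigma rather than 1/sigma².

```python
import numpy as np

# Hypothetical (LET, RBE, sigma) triples standing in for the experimental database
rng = np.random.default_rng(3)
let = np.sort(rng.uniform(1, 20, 85))
rbe = 1.0 + 0.04 * let - 0.0008 * let**2 + rng.normal(0, 0.05, let.size)
sigma = rng.uniform(0.03, 0.10, let.size)      # per-point experimental uncertainty

fits = {}
for deg in range(1, 6):
    # np.polyfit weights are 1/sigma (inverse standard deviation), not 1/sigma^2
    coeffs = np.polyfit(let, rbe, deg, w=1.0 / sigma)
    pred = np.polyval(coeffs, let)
    chi2 = np.sum(((rbe - pred) / sigma) ** 2)
    fits[deg] = (coeffs, chi2)
    print(f"degree {deg}: chi2 = {chi2:.1f}")
```

Whether an extra degree is statistically justified would then be decided with a nested-model test (e.g. an F-test on the chi-squared reduction), as the abstract describes.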
NASA Astrophysics Data System (ADS)
Sanyal, Shankha; Banerjee, Archi; Patranabis, Anirban; Banerjee, Kaushik; Sengupta, Ranjan; Ghosh, Dipak
2016-11-01
MFDFA (the most rigorous technique to assess multifractality) was performed on four Hindustani music samples based on the same 'raga' sung by the same performer. Each music sample was divided into six parts and the 'multifractal spectral width' was determined for each part of the four samples. The results obtained reveal that different parts of all four sound signals possess spectral widths of widely varying values. This gives a cue of the so-called 'musical improvisation' in all music samples, keeping in mind that they belong to the bandish part of the same raga. Formal compositions in Hindustani ragas are juxtaposed with improvised portions, where an artist manoeuvres his/her own creativity to bring out a mood specific to that particular performance, which is known as 'improvisation'. Further, this observation hints at the association of different emotions even within the same bandish of the same raga performed by the same artist; this cannot be revealed unless a rigorous non-linear technique explores the nature of the musical structure. In the second part, we applied the MFDXA technique to explore 'improvisation' and its association with emotion in more depth. This technique is applied to find the degree of cross-correlation (γx) between the different parts of the samples. Pronounced correlation has been observed in the middle parts of all four samples, evident from higher values of γx, whereas the other parts show weak correlation. This gets further support from the spectral width values of the different parts of the samples: the width of the middle parts is significantly different from that of the other parts. This observation is novel both with respect to the musical structure of so-called improvisation and to the associated emotion. The importance of this study in the application area of cognitive music therapy is immense.
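For orientation, a condensed MFDFA-style sketch is given below; it computes generalised Hurst exponents h(q) by first-order detrending and uses the spread of h(q) as a rough proxy for the multifractal spectral width. The audio excerpt is replaced by synthetic noise, and the implementation is simplified relative to a full MFDFA/MFDXA analysis.

```python
import numpy as np

def mfdfa_width(x, scales, q_values, order=1):
    """Simplified MFDFA sketch: generalised Hurst exponents h(q) from
    first-order detrended fluctuations; their spread is used here as a
    rough proxy for the multifractal spectral width."""
    profile = np.cumsum(x - np.mean(x))
    hq = []
    for q in q_values:
        fq = []
        for s in scales:
            n_seg = len(profile) // s
            segments = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            f2 = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                           for seg in segments])
            if q == 0:
                fq.append(np.exp(0.5 * np.mean(np.log(f2))))
            else:
                fq.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
        hq.append(np.polyfit(np.log(scales), np.log(fq), 1)[0])   # slope = h(q)
    hq = np.asarray(hq)
    return hq, hq.max() - hq.min()

# Synthetic noise standing in for one part of a music sample
rng = np.random.default_rng(4)
signal = rng.standard_normal(20000)
scales = np.unique(np.logspace(4, 10, 12, base=2).astype(int))
hq, width = mfdfa_width(signal, scales, q_values=np.arange(-5, 6))
print("h(q) spread (spectral-width proxy):", round(width, 3))
```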
NASA Technical Reports Server (NTRS)
Slater, G. L.; Shelley, Stuart; Jacobson, Mark
1993-01-01
In this paper, the design, analysis, and test of a low cost, linear proof mass actuator for vibration control is presented. The actuator is based on a linear induction coil from a large computer disk drive. Such disk drives are readily available and provide the linear actuator, current feedback amplifier, and power supply for a highly effective, yet inexpensive, experimental laboratory actuator. The device is implemented as a force command input system, and the performance is virtually the same as other, more sophisticated, linear proof mass systems.
Wideband Fully-Programmable Dual-Mode CMOS Analogue Front-End for Electrical Impedance Spectroscopy
Valente, Virgilio; Demosthenous, Andreas
2016-01-01
This paper presents a multi-channel dual-mode CMOS analogue front-end (AFE) for electrochemical and bioimpedance analysis. Current-mode and voltage-mode readouts, integrated on the same chip, can provide an adaptable platform to correlate single-cell biosensor studies with large-scale tissue or organ analysis for real-time cancer detection, imaging and characterization. The chip, implemented in a 180-nm CMOS technology, combines two current-readout (CR) channels and four voltage-readout (VR) channels suitable for both bipolar and tetrapolar electrical impedance spectroscopy (EIS) analysis. Each VR channel occupies an area of 0.48 mm2, is capable of an operational bandwidth of 8 MHz and a linear gain in the range between −6 dB and 42 dB. The gain of the CR channel can be set to 10 kΩ, 50 kΩ or 100 kΩ and is capable of 80-dB dynamic range, with a very linear response for input currents between 10 nA and 100 μA. Each CR channel occupies an area of 0.21 mm2. The chip consumes between 530 μA and 690 μA per channel and operates from a 1.8-V supply. The chip was used to measure the impedance of capacitive interdigitated electrodes in saline solution. Measurements show close matching with results obtained using a commercial impedance analyser. The chip will be part of a fully flexible and configurable fully-integrated dual-mode EIS system for impedance sensors and bioimpedance analysis. PMID:27463721
High-throughput analysis of peptide binding modules
Liu, Bernard A.; Engelmann, Brett; Nash, Piers D.
2014-01-01
Modular protein interaction domains that recognize linear peptide motifs are found in hundreds of proteins within the human genome. Some protein interaction domains such as SH2, 14-3-3, Chromo and Bromo domains serve to recognize post-translational modification of amino acids (such as phosphorylation, acetylation, methylation etc.) and translate these into discrete cellular responses. Other modules such as SH3 and PDZ domains recognize linear peptide epitopes and serve to organize protein complexes based on localization and regions of elevated concentration. In both cases, the ability to nucleate specific signaling complexes is in large part dependent on the selectivity of a given protein module for its cognate peptide ligand. High throughput analysis of peptide-binding domains by peptide or protein arrays, phage display, mass spectrometry or other HTP techniques provides new insight into the potential protein-protein interactions prescribed by individual or even whole families of modules. Systems level analyses have also promoted a deeper understanding of the underlying principles that govern selective protein-protein interactions and how selectivity evolves. Lastly, there is a growing appreciation for the limitations and potential pitfalls of high-throughput analysis of protein-peptide interactomes. This review will examine some of the common approaches utilized for large-scale studies of protein interaction domains and suggest a set of standards for the analysis and validation of datasets from large-scale studies of peptide-binding modules. We will also highlight how data from large-scale studies of modular interaction domain families can provide insight into systems level properties such as the linguistics of selective interactions. PMID:22610655
Analysis and comparison of end effects in linear switched reluctance and hybrid motors
NASA Astrophysics Data System (ADS)
Barhoumi, El Manaa; Abo-Khalil, Ahmed Galal; Berrouche, Youcef; Wurtz, Frederic
2017-03-01
This paper presents and discusses the longitudinal and transversal end effects which affect the propulsive force of linear motors. Generally, the modeling of linear machines considers the force distortion due to the specific geometry of linear actuators. The insertion of permanent magnets in the stator improves the propulsive force produced by switched reluctance linear motors. The permanent magnets inserted in the hybrid structure also considerably reduce the end effects observed in linear motors. The analysis was conducted using 2D and 3D finite element methods. The permanent magnet reinforces the flux produced by the winding and reorients it, which modifies the impact of the end effects. The presented simulations and discussion show the importance of this study for characterizing the end effects in two different linear motors.
Kolar, Radim; Tornow, Ralf P; Laemmer, Robert; Odstrcilik, Jan; Mayer, Markus A; Gazarek, Jiri; Jan, Jiri; Kubena, Tomas; Cernosek, Pavel
2013-01-01
The retinal ganglion axons are an important part of the visual system, which can be directly observed by fundus camera. The layer they form together inside the retina is the retinal nerve fiber layer (RNFL). This paper describes results of a texture RNFL analysis in color fundus photographs and compares these results with quantitative measurement of RNFL thickness obtained from optical coherence tomography on normal subjects. It is shown that local mean value, standard deviation, and Shannon entropy extracted from the green and blue channel of fundus images are correlated with corresponding RNFL thickness. The linear correlation coefficients achieved values 0.694, 0.547, and 0.512 for respective features measured on 439 retinal positions in the peripapillary area from 23 eyes of 15 different normal subjects.
Toward a dynamical theory of body movement in musical performance
Demos, Alexander P.; Chaffin, Roger; Kant, Vivek
2014-01-01
Musicians sway expressively as they play in ways that seem clearly related to the music, but quantifying the relationship has been difficult. We suggest that a complex systems framework and its accompanying tools for analyzing non-linear dynamical systems can help identify the motor synergies involved. Synergies are temporary assemblies of parts that come together to accomplish specific goals. We assume that the goal of the performer is to convey musical structure and expression to the audience and to other performers. We provide examples of how dynamical systems tools, such as recurrence quantification analysis (RQA), can be used to examine performers' movements and relate them to the musical structure and to the musician's expressive intentions. We show how detrended fluctuation analysis (DFA) can be used to identify synergies and discover how they are affected by the performer's expressive intentions. PMID:24904490
Remote sensing investigations of fugitive soil arsenic and its effects on vegetation reflectance
NASA Astrophysics Data System (ADS)
Slonecker, E. Terrence
2007-12-01
Three different remote sensing technologies were evaluated in support of the remediation of fugitive arsenic and other hazardous waste-related risks to human and ecological health at the Spring Valley Formerly Used Defense Site in northwest Washington D.C., an area of widespread soil arsenic contamination as a result of World War I research and development of chemical weapons. The first evaluation involved the value of information derived from the interpretation of historical aerial photographs. Historical aerial photographs dating back as far as 1918 provided a wealth of information about chemical weapons testing, storage, handling and disposal of these hazardous materials. When analyzed by a trained photo-analyst, the 1918 aerial photographs yielded 42 features of potential interest. When compared with current remedial activities and known areas of contamination, 33 of 42 (78.5%) of the features were spatially correlated with current areas of contamination or remedial activity. The second investigation involved the phytoremediation of arsenic through the use of Pteris ferns and the evaluation of the spectral properties of these ferns. Three hundred ferns were grown in controlled laboratory conditions in soils amended with five levels (0, 20, 50, 100 and 200 parts per million) of sodium arsenate. After 20 weeks, the Pteris ferns were shown to have an average uptake concentration of over 4,000 parts per million. Additionally, statistical analysis of the spectral signature from each fern showed that the frond arsenic concentration could be reasonably predicted with a linear model when the concentration was equal to or greater than 500 parts per million. Third, hyperspectral imagery of Spring Valley was obtained and analyzed with a suite of spectral analysis software tools. Results showed that the grasses growing in areas of known high soil arsenic could be identified and mapped at an approximate 85% level of accuracy when the hyperspectral image was processed with a linear spectral unmixing algorithm and mapped with a maximum likelihood classifier. The information provided by these various remote sensing technologies presents a non-contact and potentially important alternative for the information needs of the hazardous waste remediation process, and is an important area for future environmental research.
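The hyperspectral step relies on linear spectral unmixing, which can be sketched with a non-negative least-squares solve. The endmember spectra and mixed pixel below are synthetic, and the sum-to-one constraint is applied a posteriori, so this is an illustration rather than the processing chain used in the study.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (columns) for 3 cover types over 50 bands
rng = np.random.default_rng(8)
endmembers = np.abs(rng.normal(0.3, 0.1, size=(50, 3)))

# Synthetic mixed pixel: 60% / 30% / 10% of the three endmembers, plus noise
true_abund = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_abund + rng.normal(0, 0.005, 50)

# Constrained (non-negative) linear unmixing of the pixel
abundances, residual = nnls(endmembers, pixel)
abundances /= abundances.sum()                 # enforce sum-to-one a posteriori
print(abundances)
```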
Davidsson, Richard; Genin, Frédéric; Bengtsson, Martin; Laurell, Thomas; Emnéus, Jenny
2004-10-01
Chemiluminescent (CL) enzyme-based flow-through microchip biosensors (micro-biosensors) for detection of glucose and ethanol were developed for the purpose of monitoring real-time production and release of glucose and ethanol from microchip-immobilised yeast cells. Part I of this study focuses on the development and optimisation of the micro-biosensors in a microfluidic sequential injection analysis (microSIA) system. Glucose oxidase (GOX) or alcohol oxidase (AOX) was co-immobilised with horseradish peroxidase (HRP) on porous silicon flow-through microchips. The hydrogen peroxide produced from oxidation of the corresponding analyte (glucose or ethanol) took part in the chemiluminescent (CL) oxidation of luminol catalysed by HRP, enhanced by addition of p-iodophenol (PIP). All steps in the microSIA system, including control of the syringe pump, multiposition valve (MPV) and data readout, were computer controlled. The influence of flow rate and of luminol and PIP concentrations was investigated in a 2³ factorial experiment using the GOX-HRP sensor. It was found that all estimated single factors and the highest order of interaction were significant. The optimum was found at 250 microM luminol and 150 microM PIP at a flow rate of 18 microl min(-1), the latter as a compromise between signal intensity and analysis time. Using the optimised system settings one sample was processed within 5 min. Two different immobilisation chemistries were investigated for both micro-biosensors, based on 3-aminopropyltriethoxysilane (APTS) or polyethylenimine (PEI) functionalisation followed by glutaraldehyde (GA) activation. GOX-HRP micro-biosensors responded linearly (log-log) within the range 10-1000 microM glucose. Both had an operational stability of at least 8 days, but the PEI-GOX-HRP sensor was more sensitive. The AOX-HRP micro-biosensors responded linearly (log-log) in the range between 1 and 10 mM ethanol, but the PEI-AOX-HRP sensor was in general more sensitive. Both sensors had an operational stability of at least 8 h, but with a half-life of 2-3 days.
Introducing Nonlinear Pricing into Consumer Choice Theory.
ERIC Educational Resources Information Center
DeSalvo, Joseph S.; Huq, Mobinul
2002-01-01
Describes and contrasts nonlinear and linear pricing in consumer choice theory. Discusses the types of nonlinear pricing: block-declining tariff, two-part tariff, three-part tariff, and quality discounts or premia. States that understanding nonlinear pricing enhances student comprehension of consumer choice theory. Suggests teaching the concept in…
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting since an online algorithm has to make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for their analysis. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization problems on graphs.
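To give a flavour of the primal-dual machinery for online covering, here is a textbook-style sketch of online fractional set cover with multiplicative updates (in the spirit of Buchbinder and Naor); the instance is made up and the code is not taken from the thesis.

```python
def online_fractional_set_cover(sets, costs, element_stream):
    """Online primal-dual fractional set cover sketch: when an uncovered
    element arrives, multiplicatively raise the sets containing it.
    `sets` maps a set name to the collection of elements it covers."""
    x = {s: 0.0 for s in sets}                       # fractional primal solution
    for e in element_stream:
        hitting = [s for s in sets if e in sets[s]]
        if not hitting:                              # uncoverable element
            continue
        d = len(hitting)
        while sum(x[s] for s in hitting) < 1.0:      # raise until e is fractionally covered
            for s in hitting:
                x[s] = x[s] * (1.0 + 1.0 / costs[s]) + 1.0 / (d * costs[s])
    return x

sets = {"A": {1, 2}, "B": {2, 3}, "C": {1, 3, 4}}
costs = {"A": 1.0, "B": 2.0, "C": 1.0}
print(online_fractional_set_cover(sets, costs, element_stream=[1, 3, 4]))
```

The accompanying dual variables (one per arriving element) grow alongside these updates and certify the competitive ratio; they are omitted here for brevity.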
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate among the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
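A stripped-down sketch of a random-intercept linear mixed model for a single candidate SNP is shown below using statsmodels. A random intercept per family stands in for the full kinship variance-covariance structure, the phenotype data are simulated, and the SNP effect is injected by hand, so this only illustrates the modelling pattern rather than the GAW18 analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one blood-pressure measurement per subject, subjects
# grouped into families, one candidate SNP coded additively as 0/1/2.
rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "dbp": rng.normal(80, 8, n),
    "snp": rng.integers(0, 3, n),
    "age": rng.uniform(30, 70, n),
    "family": rng.integers(0, 40, n),
})
df["dbp"] += 1.5 * df["snp"]                  # inject a small additive SNP effect

# Random intercept per family; a simplification of the full kinship covariance
model = smf.mixedlm("dbp ~ snp + age", df, groups=df["family"])
result = model.fit()
print(result.params["snp"], result.pvalues["snp"])
```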
Doorn, J; Storteboom, T T R; Mulder, A M; de Jong, W H A; Rottier, B L; Kema, I P
2015-07-01
Measurement of chloride in sweat is an essential part of the diagnostic algorithm for cystic fibrosis. The lack of sensitivity and reproducibility of current methods led us to develop an ion chromatography/high-performance liquid chromatography (IC/HPLC) method suitable for the analysis of both chloride and sodium in small volumes of sweat. Precision, linearity and limit of detection of the in-house developed IC/HPLC method were established. Method comparison between the newly developed IC/HPLC method and the traditional Chlorocounter was performed, and trueness was determined using Passing-Bablok method comparison with external quality assurance material (Royal College of Pathologists of Australasia). Precision and linearity fulfill the criteria established by UK guidelines and are comparable with inductively coupled plasma-mass spectrometry methods. Passing-Bablok analysis demonstrated excellent correlation between IC/HPLC measurements and external quality assessment target values, for both chloride and sodium. With a limit of quantitation of 0.95 mmol/L, our method is suitable for the analysis of small amounts of sweat and can thus be used in combination with the Macroduct collection system. Although a chromatographic application results in a somewhat more expensive test compared to a Chlorocounter test, more accurate measurements are achieved. In addition, simultaneous measurement of sodium concentrations will result in better detection of false positives, less test repetition and thus faster, more accurate and more effective diagnosis. The described IC/HPLC method, therefore, provides a precise, relatively cheap and easy-to-handle application for the analysis of both chloride and sodium in sweat. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot
NASA Astrophysics Data System (ADS)
Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim
2018-04-01
A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, determine linear controllers for each linearized model, and finally implement a gain-scheduling technique. The validation of such a controller is often based on linear system analysis of the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system, especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear one. The method is based on sum-of-squares (SOS) optimization, which can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
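For comparison purposes, the Cox proportional hazards baseline mentioned above can be run in a few lines with the lifelines package; the age-at-onset times, censoring indicators and genotypes below are simulated, and this does not implement the grouped linear regression method itself.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated age-at-onset data: genotype 0/1/2 shifts the hazard; some censoring
rng = np.random.default_rng(9)
n = 400
genotype = rng.integers(0, 3, n)
onset = rng.exponential(scale=10.0 / (1.0 + 0.3 * genotype))
censor = rng.exponential(scale=15.0, size=n)
df = pd.DataFrame({
    "duration": np.minimum(onset, censor),
    "event": (onset <= censor).astype(int),     # 1 = onset observed, 0 = censored
    "genotype": genotype,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["coef", "p"]])
```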
Sarpeshkar, R
2014-03-28
We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.
Dynamic evolution of interface roughness during friction and wear processes.
Kubiak, K J; Bigerelle, M; Mathia, T G; Dubois, A; Dubar, L
2014-01-01
The dynamic evolution of surface roughness and the influence of initial roughness (Sa = 0.282-6.73 µm) during friction and wear processes have been analyzed experimentally. Mirror-polished and rough surfaces (28 samples in total) were prepared by surface polishing of Ti-6Al-4V and AISI 1045 samples. Friction and wear were tested in the classical sphere/plane configuration using a linear reciprocating tribometer with a very small displacement of 130 to 200 µm. After an initial period of rapid degradation, the dynamic evolution of surface roughness converges to a certain level specific to a given tribosystem. However, roughness at such a dynamic interface is still increasing, and analysis of the influence of initial roughness revealed that, to a certain extent, a rheology effect of the interface can be observed and the dynamic evolution of roughness will depend on the initial condition and the history of interface roughness evolution. Multiscale analysis shows that the morphology created in the wear process is composed of nano-, micro-, and macro-scale roughness. Therefore, mechanical parts working under very severe contact conditions, like rotor/blade contacts, screws, clutches, etc., with poor initial surface finishing are likely to have a much shorter lifetime than quality-finished parts. © Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran
2018-03-01
This paper creates a mathematical model for iron skarn exploration in the Sarvian area, central Iran, using multivariate regression for mineral prospectivity mapping (MPM). The main target of this paper is to apply multivariate regression analysis (as an MPM method) to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as dependent variables for MPM. According to the results of the probability value (p value), the coefficient of determination (R²) and the adjusted coefficient of determination (R²adj), the second regression model (which consists of multiple UIVs) fitted better than the other models. The accuracy of the model was confirmed by the iron outcrop map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).
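A schematic version of the regression step with scikit-learn is given below; the band values and outcrop labels are entirely synthetic and the variable names are hypothetical. The fitted linear equation is then applied to every pixel to produce a prospectivity score.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stack of 14 ASTER bands (rows = pixels) and a binary layer marking
# known iron outcrops in the training part of the scene
rng = np.random.default_rng(6)
n_pixels, n_bands = 5000, 14
bands = rng.uniform(0, 1, size=(n_pixels, n_bands))           # UIVs
outcrop = (bands[:, 2] - 0.5 * bands[:, 7]
           + rng.normal(0, 0.1, n_pixels) > 0.3).astype(float)

model = LinearRegression().fit(bands, outcrop)                # multivariate regression
print("R^2 on training pixels:", model.score(bands, outcrop))

# Applying the fitted equation to the whole scene yields a prospectivity score per pixel
prospectivity = model.predict(bands)
```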
Weng, Zebin; Zeng, Fei; Zhu, Zhenhua; Qian, Dawei; Guo, Sheng; Wang, Hanqing; Duan, Jin-Ao
2018-07-15
The root of Sophora flavescens Ait. has been used as a crude drug in China and other Asian countries for thousands of years. Quinolizidine alkaloids and flavonoids are considered the main bioactive components of this plant. To determine the distribution and content of flavonoids in different organs of this plant, a rapid, sensitive and reproducible method was established using ultra-high-performance liquid chromatography coupled with triple quadrupole electrospray tandem mass spectrometry. A total of sixteen flavonoids of five different types (isoflavones, pterocarpans, flavones, flavonols and prenylflavonoids) were simultaneously determined in 10 min. The established method was fully validated in terms of linearity, sensitivity, precision, repeatability and recovery, and successfully applied to the methanolic extracts of S. flavescens parts (root, stem, leaf, pod and seed). The analysis results indicated that the distribution and contents of the different types of flavonoids showed remarkable differences among the five organs of S. flavescens. This study might be useful for the rational utilization of the S. flavescens resource. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hou, W. Z.; Li, Z. Q.; Zheng, F. X.; Qie, L. L.
2018-04-01
This paper evaluates the information content for the retrieval of key aerosol microphysical and surface properties from multispectral single-viewing satellite polarimetric measurements centred at 410, 443, 555, 670, 865, 1610 and 2250 nm over bright land. To conduct the information content analysis, synthetic data are simulated by the Unified Linearized Vector Radiative Transfer Model (UNLVTM), including both intensity and polarization, over a bare soil surface for various scenarios. Following optimal estimation theory, a principal component analysis method is employed to reconstruct the multispectral surface reflectance from 410 nm to 2250 nm, which is then integrated with a linear one-parametric BPDF model to represent the contribution of polarized surface reflectance, thus decoupling the surface and atmosphere contributions in the TOA measurements. Focusing on two different aerosol models with the aerosol optical depth equal to 0.8 at 550 nm, the total degrees of freedom for signal (DFS) and the DFS component of each retrieved aerosol and surface parameter are analysed. The DFS results show that the key aerosol microphysical properties, such as the fine- and coarse-mode columnar volume concentration, the effective radius and the real part of the complex refractive index at 550 nm, could be well retrieved simultaneously with the surface parameters over the bare soil surface type. The findings of this study can guide inversion algorithm development over bright land surfaces by making full use of single-viewing satellite polarimetric measurements.
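The DFS bookkeeping follows the standard optimal estimation expression DFS = trace(A) with averaging kernel A = (Kᵀ Sε⁻¹ K + Sa⁻¹)⁻¹ Kᵀ Sε⁻¹ K. The numpy sketch below uses a random Jacobian and diagonal covariances purely for illustration; it is not the UNLVTM output or the authors' code.

```python
import numpy as np

def dfs(jacobian, s_a, s_e):
    """Degrees of freedom for signal from the averaging kernel
    A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K (optimal estimation)."""
    k = jacobian
    se_inv = np.linalg.inv(s_e)
    sa_inv = np.linalg.inv(s_a)
    gain = np.linalg.solve(k.T @ se_inv @ k + sa_inv, k.T @ se_inv)
    a_kernel = gain @ k
    return np.trace(a_kernel), np.diag(a_kernel)

# Hypothetical Jacobian: 14 measurements (I and Q at 7 bands) x 8 retrieved parameters
rng = np.random.default_rng(7)
K = rng.normal(0, 1, size=(14, 8))
Sa = np.diag(np.full(8, 1.0))          # a priori variances
Se = np.diag(np.full(14, 0.05 ** 2))   # measurement-noise variances
total_dfs, per_parameter_dfs = dfs(K, Sa, Se)
print(total_dfs, per_parameter_dfs)
```

The per-parameter diagonal of the averaging kernel is what allows the DFS to be attributed to individual aerosol and surface parameters, as in the abstract.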